That’s not right. You could easily spend a billion dollars just on better evals and better interpretability.
For the real alignment problem, the fact that $0.1 billion a year hasn’t yielded returns doesn’t mean $100 billion won’t. It’s one problem, and no one has gotten much traction on it, so you’d expect returns to look like a step function, not a smooth curve.
I completely agree!
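As a toy illustration of the step-function point (every number and curve shape here is made up): in a diminishing-returns world, $0.1B/yr with nothing to show for it is real evidence against scaling up, but in a step-function world it tells you almost nothing about what $100B/yr would buy.

```python
import math

# Toy model only: the threshold and curve shapes are invented for illustration.

def smooth_returns(spend_billions):
    # Diminishing-returns world: every extra dollar helps a little.
    return math.log1p(spend_billions)

def step_returns(spend_billions, threshold=50.0):
    # Step-function world: nothing visible until spending crosses a
    # (hypothetical) critical mass, then the problem cracks open.
    return 1.0 if spend_billions >= threshold else 0.0

for spend in [0.1, 1.0, 10.0, 100.0]:  # $B/yr, hypothetical funding levels
    print(f"${spend:6.1f}B/yr  smooth={smooth_returns(spend):.2f}  "
          f"step={step_returns(spend):.0f}")
```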
The Superalignment team at OpenAI kept complaining that they did not get the 20% of compute they were promised, and this was a major cause of the OpenAI drama. That alone shows how important resources are for alignment.
A lot of alignment researchers stayed at OpenAI despite the drama, but still quit some time later, citing poor productivity. Maybe they consider it more important to work somewhere with better resources than to have access to OpenAI’s newest models and so on.
Alignment research costs money and resources just like capabilities research does. Better-funded AI labs like OpenAI and DeepMind are racing ahead of the poorly funded labs in poorer countries that you never hear about. Likewise, if alignment research were better funded, it would have a better chance of winning the race.
Note: after I agreed with your comment, the score dropped back to 0 because someone else disagreed. Maybe they disagree that you could easily spend a fraction of a billion on evals?
I know very little about AI evals. Are these like IQ tests for AIs? Why would a good eval cost millions of dollars?
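For concreteness, an eval is roughly a fixed battery of test items with known answers, run against a model and scored automatically, so loosely yes, like an IQ test. Here is a minimal sketch of the shape; `query_model` is a hypothetical stand-in for a real model API call, and the millions mostly go into writing and validating thousands of good items plus the compute to run models over them, not into the scoring loop itself.

```python
# Minimal sketch of an "eval": a fixed question set with known answers,
# run against a model and scored automatically.

def query_model(prompt: str) -> str:
    # Hypothetical placeholder so the sketch runs end to end; a real
    # harness would call a model API here.
    return "I don't know."

EVAL_ITEMS = [
    {"prompt": "What is 17 * 24?", "answer": "408"},
    {"prompt": "What is the capital of Australia?", "answer": "Canberra"},
    # A serious eval has thousands of carefully written, validated,
    # contamination-checked items -- that curation is the expensive part.
]

def run_eval(items) -> float:
    correct = sum(
        item["answer"].lower() in query_model(item["prompt"]).lower()
        for item in items
    )
    return correct / len(items)

print(f"accuracy: {run_eval(EVAL_ITEMS):.0%}")  # 0% with the placeholder model
```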