Unfortunately, we can’t count on compute remaining a bottleneck that protects us from danger. It’s functioning that way for now, yes, but algorithmic advances could cause the compute threshold for a ‘dangerously capable model’ to drop rapidly.
Should we be grateful for it for now? Sure.
Should we count on it keeping us safe in the future? Definitely not.
I think compute governance as a way to prevent dangerously capable models is doomed as a long-term tactic. It likely buys us a couple of years, maybe four or five at best. Those years could be critical, so let’s not fail to secure them! But let’s not fool ourselves into thinking that 20 years from now, a worldwide compute governance agency will be keeping the world safe from powerful AGI. It is a stopgap measure that cannot hold.
Unfortunately, we can’t count on compute remaining a bottleneck that protects us from danger. It’s functioning that way for now, yes, but algorithmic advances could cause the compute threshold for a ‘dangerously capable model’ to drop rapidly.
Should we be grateful for it for now? Sure.
Should we count on it keeping us safe in the future? Definitely not.
I couldn’t agree more! I think this is well-said.
I mainly linkposted this article because I thought it was a valuable look into the public perspective on this: human civilization, in its current state, seems broadly interested in accelerating AI capabilities and sees nothing wrong with that.
I think compute governance as a way to prevent dangerously capable models is doomed as a long-term tactic. It likely buys us a couple of years, maybe four or five at best. Those years could be critical, so let’s not fail to secure them! But let’s not fool ourselves into thinking that 20 years from now, a worldwide compute governance agency will be keeping the world safe from powerful AGI. It is a stopgap measure that cannot hold.
My thinking on the AGI macrostrategy here is that there’s already more than enough interest among government officials in the US and China to limit the AI disruption introduced by massive compute production, although for completely different reasons than those of the AI safety community. It’s just that currently, the rewards seem to outweigh the risks in their minds.