I expect the main thing most people can do is apply pressure on their governments to take policy action. Making that happen is no small feat; it's mostly a matter of building enough awareness that everyone knows that everyone knows it's real, and that there are options to stop it until we finish more safety work. Coordination at this scale isn't just a few people acting: it requires people organically coming to the ideas and applying pressure themselves, and that takes sufficiently credible signals that it's real, people saying it's real, and everyone believing it.
Nod. But, I think trying to warn “AGI is literally here” feels kinda like the wrong move to me anyways.
The move I would make is “AI keeps improving in ways that are on the path to generalization and strategic awareness. Here is where it was 3 years ago. Here’s where it was last year. Here’s where it was last month”. I think that’s consistently alarming whether or not people agree on what counts as AGI. (and, every few months there are more alarming things to point at).
I think it’s currently at the point where people paying attention should notice “this sure doesn’t seem to obviously NOT be AGI”, but it’s still at a point where crying “AGI” might leave people underwhelmed and trigger Boy Who Cried Wolf syndrome. (And meanwhile, just focusing on its object-level capabilities seems more robustly good.)