I only have anecdata, but I’ve talked to quite a few people, and most say it’s a good idea to use the myriad of other concerns about AI as a force multiplier on shared policy goals.
Speaking only for myself here: there’s room for many different approaches, and I generally want people to shoot the shots they see on their own inside view, even when I think they’re wrong. But I wouldn’t endorse this strategy in general, at least not without regard for the details of how the coalition is structured and what it’s doing.
I think our main problem is a communication problem: getting people to understand the situation with AI, namely
that model capabilities are steadily increasing;
that the labs are aiming at literal superintelligence, no really, something more capable than any human alive, and then even better than that, and that the labs are explicitly aiming at recursive self-improvement (RSI), which looks increasingly likely to succeed;
that there is no known science of reliably controlling or shaping the motivations of superhuman AIs;
that there are competitive pressures for all of the labs and all of the countries to beat their competitors, so slowing down or pausing requires international coordination.
These are slippery points to get across, specifically because audiences tend to slip into visualizing something other than “actual strategic superintelligence”: something that automates science and technological progress and is capable of strategically outmaneuvering adversaries. Even when I talk with people from the labs, they often gravitate to a fuzzier vision that has the form factor of current AI chatbots and agents, but is much more competent.
Most of the time, I’m trying to land these points despite the slipperiness, and talking about present-day harms that don’t have a through-line to the core alignment problems seems like more of a distraction than a help.
If we already had developed policies that would substantially improve the situation and were politically feasible, and we just needed to get a big enough coalition to get them implemented, I would feel differently.
But insofar as we have policies that would substantially help, they’re rather radical (on the order of “don’t allow private individuals to own more than 8 GPUs” and “negotiate with China for an international pause in frontier AI development”), and they’re only politically realistic if the stakeholders have a close-to-accurate picture of the situation.