The problem is: public advocacy is way too centered on LLMs, from my perspective.[9] Thus, those researchers I mentioned, who are messing around with new paradigms on arXiv, are in a great position to twist “Pause AI” type public advocacy into support for what they’re doing!
I am a long-time volunteer with the organization bearing the name PauseAI. Our message is that increasing AI capabilities is the problem—not which paradigm is used to get there. The current paradigm is dangerous in some fairly legible ways, but that doesn’t at all imply that other paradigms are any better. Any effort to create increasingly capable and increasingly general AI systems ought to be very illegal unless paired with a robust safety case, and we mostly don’t tie this to the specifics of LLMs.
Well, government regulators hardly matter anyway, since regulating the activity of “playing with toy models, and publishing stuff on arXiv and GitHub” is a hell of an ask. I think it’s so unlikely to happen that it’s a waste of time even to talk about it, even if it were a good idea all things considered.
Yeah, restricting the creation and dissemination of most AGI-related research is definitely a much harder ask. I can imagine a world with an appetite for that kind of invasive regulation (if it proves necessary), but it would probably require intervening steps to get there, including first regulating only the biggest players in the AGI race (an idea that is very popular across the political spectrum in the Western world).
1.6.2 I’m broadly pessimistic about existing efforts towards regulating AGI
My overall p(doom from AI by 2040) is about 70%, which shows pessimism on my part as well. But of course, that’s why I’m trying so hard. My ranking of “ways we survive” from most to least likely goes: Robust Governance Solutions > Sheer Dumb Luck > Robust Technical Solutions. So advocacy is where I spend my time.
In any case, a world that is more aware of the problem is one that is more likely to solve it by some means or another. I’m working to buy us some luck, so to speak.