If I believe AGI is imminent, that it is an extinction risk, and that we should therefore stop building AI, how do I get people to take the second half of that seriously? Policymakers and funders seem to respond to the argument as though everything after “therefore” is empty and meaningless, a shibboleth used only to calm the public. To put this another way: even if I imagine myself in a world where AGI predictions mainly function as hype, so that systematically dishonest people have an incentive to make them and to manipulate anyone who believes them into acting counterproductively, that still doesn’t change my own prediction that AGI is imminent.
If you imagine yourself in a world where AGI is imminent, does that change your prediction about whether any of the people predicting it are honest about their reasons? Does it change any policy proposals? If, for example, you’d say “well, they’re honest, and might be right, but those are bad policies for that world too,” then I’m much more enthusiastic about policies that focus on immediately measurable harms. But if your policies are chosen for worlds where you actually think AGI is extremely unlikely, rather than merely worlds where hype serves Altman’s interests, then I think you’re making a serious mistake.
I’m in favor of whatever policies actually reduce damage from AI. If we live in a world where AGI is not near, I only want policies for the AI we have now. If we live in a world where AGI is near, then I want the policies that intervene most effectively to prevent damage from AI, including AGI, and those might well be policies that focus on current harms first. In my view, most current harms arise from mechanisms similar in kind, if not in intensity, to the ways AGI would be harmful, so policies designed to handle them may go a long way toward preventing doom. But of course Samuel Alternate Boy is likely to disagree.
If you believe AGI is imminent, then of course you want to develop policies that address the related problems. I do not think we should summarily dismiss the potential risks of AGI, and I say explicitly that I am not arguing about AGI timelines or probabilities here. What I do argue is that we should not base our belief in imminent AGI, and therefore our policy choices, solely on the messaging from AI industry players. And they, whether we like it or not, are now shaping the public discourse to a great extent.
Which of these two approaches:
1. Sam Altman says AGI is coming > let’s focus all policy effort and resources on his scenarios of AGI, or
2. AGI is potentially coming > let’s review arguments and research from across the field to weigh the probabilities > let’s distribute policy effort and resources across short-, mid-, and long-term risks accordingly
do you think will produce better policy choices? Approach #2 may well conclude that AGI is extremely likely, but that conclusion will rest on a sounder and broader base.