This is a nice explanation of why we should not expect much useful policy now or in the very near future.
This is not a reason to give up on policy. These concerns sound completely crazy now, but that might change quickly. It took about two weeks for the idea that we should shut down and stay at home to avoid COVID to go from sounding crazy to being the default consensus. It’s looking like we might have incompetent and semi-competent agents in the public eye long enough before takeover-capable AGI to make a difference.
My preferred current public policy action is to figure out how to capitalize on that shift of public opinion if and when it occurs. Having those conversations with policymakers is one way to do that, even if you do initially sound crazy. There are probably a bunch of other good strategies I haven’t seen discussed.
But yes, you must be aware that you sound crazy in order to even do useful work in making the ideas seem less crazy on the fourth repetition.
IMO, the big catalyst will probably come when serious job losses happen due to AI, and once that happens I do expect a real response from society.
What happens next will, I think, be determined by how fast AIs go from automating away white-collar jobs to automating the rest of the economy.
If it’s a couple of months to a year, or even faster, this could provoke a very strong response from governments, though polarization could alter that.
If it takes 5-10 years, I worry much more about polarization derailing AI regulation and AI safety, because the initial wave of permanent job losses will be concentrated in Democratic constituencies, while the Republican base will initially benefit from automation (they will be automated away eventually too, but in the short run they get better-paying jobs). This is where I predict a serious political split on AI safety, with Democrats pro-regulation of AI and Republicans anti-regulation.
I think the big takeaway is that brainstorming AI safety policy is more useful for AI governance than trying to convince the public: it’s very hard to shift the conversation until AI automation actually happens, but once the crisis hits we want to be seen as having a credible policy ready, and policy is usually passed most urgently during crises.
Agreed. Particularly on polarization being the big risk. Strategizing on how to avoid polarization when public concern hits seems like the highest priority. I don’t have good ideas, even though I think about polarization a lot.
My nomination for a policy is something vague but intuitive, like “no AI that can take a whole job,” since that’s also the type of AI that can solve new problems and ultimately take over.
Of course, by then we’ll be close enough that someone in a jurisdiction that doesn’t outlaw it (or in secret, or in a government program) will be able to build human-level AI with self-directed learning fairly quickly; but laws would reduce proliferation, which seems useful for shifting the odds somewhat.