IMO, the big catalyst will probably come when serious job losses happen due to AI, and once that happens I do expect a real response from society.
What happens next will, I think, be determined by how fast AIs go from automating white-collar jobs to automating the rest of the economy.
If it's a couple of months to a year, or even faster, this could provoke a very strong response from governments, though polarization could blunt that.
If it's going to take 5-10 years, then I worry a lot more about polarization derailing AI regulation/AI safety. The initial wave of permanent job losses will be concentrated among Democratic constituencies, whereas the Republican base will initially benefit from automation (their jobs will be automated away eventually too, but in the short run they get better-paying ones). This is where I predict a serious political split on AI safety, with Democrats pro-regulation of AI and Republicans anti-regulation.
I think the big takeaway is that brainstorming policy for AI safety is more useful for AI governance than trying to convince the public: it's very hard to shift the conversation until AI automation actually happens, but once the crisis does hit we want to be seen as having a credible policy ready, and policy gets passed most urgently during crises.
Agreed. Particularly on polarization being the big risk. Strategizing on how to avoid polarization when public concern hits seems like the highest priority. I don’t have good ideas, even though I think about polarization a lot.
My nomination for a policy is something vague but intuitive like "no AI that can take a whole job," since that's also the type of AI that can solve new problems and ultimately take over.
Of course we’ll be close enough by then that someone will be able to make human-level AI with self-directed learning pretty quickly in a jurisdiction that doesn’t outlaw it (or in secret, or in a government program); but laws would reduce proliferation, which seems useful to shift the odds somewhat.