Too Early does not preclude Too Late
Thoughts on efforts to shift public (or elite, or political) opinion on AI doom.
Currently, it seems like we’re in a state of being Too Early. AI is not yet scary enough to overcome people’s biases against AI doom being real. The arguments are too abstract and the conclusions too unpleasant.
Currently, it seems like we’re in a state of being Too Late. The incumbent players are already massively powerful and capable of driving opinion through power, politics, and money. Their products are already too useful and ubiquitous to be hated.
Unfortunately, these can both be true at the same time! This means that there will be no “good” time to play our cards. Superintelligence (2014) was Too Early but not Too Late. There may be opportunities which are Too Late but not Too Early, but (tautologically) these have not yet arrived. As it is, current efforts must fight on both fronts.
I like this framing; we’re both too early and too late. But it might transition quite rapidly from too early to right on time.
One idea is to prepare strategies and arguments, and perhaps prepare the soil of public discourse, for the time when it is no longer Too Early. Job loss and actually harmful AI shenanigans are very likely before takeover-capable AGI. Preparing for the likely AI scares and negative press might help public opinion shift very rapidly, as it sometimes does (e.g., COVID opinion went from no concern to shutting down half the economy very quickly).
The average American, and probably the average global citizen, already dislikes AI. It’s just the people benefiting from it who currently like it, and that’s a minority.
Whether that’s enough is questionable, but it makes sense to try, and to hope that the likely backlash is at least useful in slowing progress or proliferation somewhat.