What evidence do we have so far that public opinion turning even further against AI would meaningfully slow down capabilities progress in the time period here?
You mention public concern should tilt the AI 2027 scenario towards success, but in August 2027 in the scenario the public is already extremely against AI (and OpenBrain specifically is at negative 40% approval).
Good question. I probably should have emphasized that the difference between this and the AI 2027 scenario is that the route to AGI takes a little longer, so there is much more public exposure to agentic LLMs.
I did emphasize that this may all come too late to make much difference from a regulatory standpoint. Even if that happens, it will change the environment in which people make crucial decisions about deploying the first takeover-capable AIs. That cuts both ways; whether polarization or belief diffusion dominates seems complex, but it might be possible to get a better guess than I have now, which is almost none.
The other change I think we're pretty much guaranteed to get is dramatically improved funding for alignment and safety. That too might come so late as to be barely useful, or early enough to make a huge difference.