If people start losing jobs to automation, that could finally build political momentum for serious regulation.
Suggested in Zvi’s comments the other month (22 likes; source: https://thezvi.substack.com/p/the-paris-ai-anti-safety-summit/comment/92963364):
The real problem here is that AI safety feels completely theoretical right now. Climate folks can at least point to hurricanes and wildfires (even if connecting those dots requires some fancy statistical footwork). But AI safety advocates are stuck making arguments about hypothetical future scenarios that sound like sci-fi to most people. It’s hard to build political momentum around “trust us, this could be really bad, look at this scenario I wrote that will remind you of a James Cameron movie”.
Here’s the thing though—the e/acc crowd might accidentally end up doing AI safety advocates a huge favor. They want to race ahead with AI development, no guardrails, full speed ahead. That could actually force the issue. Once AI starts really replacing human workers—not just a few translators here and there, but entire professions getting automated away—suddenly everyone’s going to start paying attention. Nothing gets politicians moving like angry constituents who just lost their jobs.
Here’s a wild thought: instead of focusing on theoretical safety frameworks that nobody seems to care about, maybe we should be working on dramatically accelerating workplace automation. Build the systems that will make it crystal clear just how transformative AI can be. It feels counterintuitive—like we’re playing into the e/acc playbook. But like extreme weather events create space to talk about carbon emissions, widespread job displacement could finally get people to take AI governance seriously. The trick is making sure this wake-up call happens before it’s too late to do anything about the bigger risks lurking around the corner.
Just skimming the thread, I didn’t see anyone offer a serious attempt at counterargument, either.
Rather than make things worse as a means of compelling others to make things better, I would rather just make things better.
Brinksmanship and accelerationism (in the Marxist sense) are high-variance strategies ill-suited to the stakes of this particular game.
[One way this makes things worse is by stimulating additional investment on the frontier; another is by attracting public attention to the wrong problem, which will mostly just generate action on that problem, not on the problem we care most about. Importantly, the contingent of people-mostly-worried-about-jobs are not yet our allies, and their regulatory priorities would likely not address our concerns, even though I share some of those concerns.]
As best I can tell, the main effect of efforts to accelerate workplace automation would be to accelerate the automation of AI R&D itself.