More generally, trying to ban or restrict AI (especially via the government) seems highly counterproductive as a strategy if you think AI risk looks a lot like Human Risk, because we have extensive evidence from the human world showing that highly centralized systems that concentrate a lot of power in a few hands are very, very bad.
You want to decentralize, open source, and strongly limit government power.
Current AI Safety discourse is the exact opposite of this because people think that AI society will be “totally different” from how human society works. But I think that since the problems of human society are all emergent effects not strongly tied to human biology in particular, real AI Safety will just look like Human Safety, i.e. openness, freedom, good institutions, decentralization, etc.
I think that the position you’re describing should be part of your hypothesis space when you’re just starting out thinking about this question. And I think that people in the AI safety community often underrate the intuitions you’re describing.
But overall, after thinking about the details, I end up disagreeing. The differences between risks from human concentration of power and risks from AI takeover lead me to think you should handle these situations differently (which shouldn’t be that surprising, because the situations are very different).
Well, it depends on the details of how the AI market and AI capabilities evolve over time: whether there’s a fast, localized takeoff or a slower period of widely distributed economic growth.
This in turn depends to some extent on how seriously you take the idea of a single powerful AI undergoing recursive self-improvement, versus AI companies mostly just selling any innovations to the broader market, and whether returns to further intelligence diminish quickly or not.
In a world with slow takeoff, no recursive self-improvement, and diminishing returns, AI looks a lot like any other technology, and trying to artificially centralize it just enables tyranny and likely massively reduces the upside, potentially locking us permanently into an AI-driven police state run by some 21st-century Stalin who promised to keep us safe from the bad AIs.