The development of AI today looks a lot like the early days of computing: centralized, expensive, and tightly controlled.
We’re in the mainframe era — big models behind APIs, optimized for scale, not for user agency.
There was nothing inevitable about the rise of personal computing.
It happened because people demanded access. They wanted systems they could understand, modify, and use on their own terms — and they got them. That shift unlocked an explosion of creativity, capability, and control.
We could see the same thing happen with AI.
Not through artificial minds or sentient machines, but through practical tools people run themselves, tuned to their needs, shaped by real-world use.
The kinds of fears people project onto AI today — takeover, sentience, irreversible control — aren’t just unlikely on local machines.
They’re incompatible with the very idea of tools people can inspect, adapt, and shut off.
That gives me a better idea of where you’re coming from.
I think the crux here is your skepticism that we will get sapient AI soon after we get useful tool AI. This is a common opinion, or an unstated assumption (as it was in your original piece).
(I think “sapience” is the more relevant term here, based roughly on “understanding”, versus “sentience”, based roughly on “feeling”. But sentience is used where sapience should be more often than not, so if that’s not what you mean, you should clarify. Similarly, “safety” is sometimes used to overlap with x-risk and sometimes not. So if you meant that it doesn’t matter whether AI feels, or whether it produces minor harms, I agree, but I don’t think that’s what you meant, and I’d expect a majority of readers to misinterpret it if it was.)
Now, I actually agree with you that training alone won’t produce sapient AGI, or what I’ve termed “Real AGI”. Or at least not obviously or quickly.
But developers will probably pursue a variety of creative means to get to competent, and therefore useful and dangerous, AGI. And I think a fair assessment is that some routes could work very rapidly; nobody knows for sure. I think highly capable tool AI sets the stage for sapient and agentic AGI very directly: with a capable enough tool, you merely prompt it repeatedly with “continue working to accomplish goal X” and it will reflect and plan as it considers appropriate, and be very, very dangerous to the extent it is competent, since your definition of goal X could easily be interpreted differently than you intended it. And even if yours isn’t, someone else in your wide web of democratized AI usage will give their proto-AGI a humanity-threatening goal, either on purpose or by accident (probably both, repeated hundreds to millions of times, to various degrees).
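To make the mechanism concrete, here is a minimal sketch of that wrapper in Python. Everything in it is hypothetical: `query_model` stands in for whatever model API you might use, and the “GOAL COMPLETE” sentinel is an assumed convention, not any real system’s behavior.

```python
def query_model(history: list[str]) -> str:
    """Hypothetical call to a capable tool model: takes the conversation
    so far, returns the model's next reply. Stand-in for any real API."""
    raise NotImplementedError("swap in a real model call here")

def agent_loop(goal: str, max_steps: int = 100) -> list[str]:
    """Turn a tool model into an 'agent' by re-prompting it in a loop."""
    history = [f"Your goal: {goal}"]
    for _ in range(max_steps):
        # All of the "agency" lives in this outer loop: the model reflects,
        # plans, and acts only because the wrapper keeps telling it to.
        reply = query_model(history + ["Continue working to accomplish goal X."])
        history.append(reply)
        # Assumed convention: the model self-reports when it thinks it's done.
        if "GOAL COMPLETE" in reply:
            break
    return history
```

The point of the sketch is that nothing separates a tool from an agent except a dozen lines of driver code, which is exactly why capable tool AI and agentic AGI are hard to keep apart.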
More in “LLM AGI will have memory, and memory changes alignment” and “If we solve alignment, do we die anyway?”.
Democratizing AI is a common intuition, and I think it’s motivated by valid concerns. Yours are less common. See “Fear of centralized power vs. fear of misaligned AGI: Vitalik Buterin” on 80,000 Hours for both sides of the argument.