And we’re not going to slow this down with abstract worries. Chip makers are going to keep making chips. There’s money in compute — whether it powers centralized server farms or locally run AI models. That hardware momentum won’t stop because we have philosophical doubts or ethical concerns. It will scale because it can.
But if that growth is concentrated in systems we don’t fully understand, we’re not scaling intelligence — we’re scaling misunderstanding. The best chance we have to stay aligned is to get AI into the hands of real people, running locally, where assumptions get tested and feedback actually matters.
The development of AI today looks a lot like the early days of computing: centralized, expensive, and tightly controlled. We’re in the mainframe era — big models behind APIs, optimized for scale, not for user agency.
There was nothing inevitable about the rise of personal computing. It happened because people demanded access. They wanted systems they could understand, modify, and use on their own terms — and they got them. That shift unlocked an explosion of creativity, capability, and control.
We could see the same thing happen with AI. Not through artificial minds or sentient machines, but through practical tools people run themselves, tuned to their needs, shaped by real-world use.
The kinds of fears people project onto AI today — takeover, sentience, irreversible control — aren’t just unlikely on local machines. They’re incompatible with the very idea of tools people can inspect, adapt, and shut off.