The Altman Technocracy

Imagine that the city of New York were inherited by newly sentient human beings a couple hundred years from now. These descendants of ours have no civil engineers or architects among them. They cannot even guess how these magnificent, glassy structures were made. Yet every day they walk into these buildings. They climb “stairs” and use “elevators” to get to work. To this particular generation of human beings, “buildings” have always been there and always will be. There is no need to understand the structure or the technical nature behind them.

One day, several of these buildings collapse; the death toll is in the thousands. The NY News simply reports: ‘God struck again!’ or something to that effect. The ignorance of this generation is so deep and entrenched that the collapse of buildings is attributed to God.


Now, compare this analogy to the modern-day understanding of OpenAI, algorithms, and the potential atrophy of critical thinking. I know few people these days who aren’t using ChatGPT or Midjourney in some small way. The more conservative ones use them only for menial, automated tasks. But most, I suspect, are using them for almost everything, ousting their brains in favor of a Machine.

What will the long-term effects on critical thinking be? My analogy suggests that AI-assisted existence (AAIE?) will eventually give technical minds like Sam Altman’s a monopoly on knowledge: a technocracy on a scale we’ve never seen in human history. Mindless acceptance of spammed prompts built on hallucinations and a Jenga tower of assumptions (apparently made for practical purposes?) is becoming an increasingly likely future.


I claim that very few people actually understand what they are using and what effects it has on their minds. When our children inherit far more advanced iterations of ChatGPT, the ‘buildings collapsing’ for reasons they don’t understand will be mechanisms of control for technocrats.

Perhaps I’m being paranoid. But I stand firmly by the idea that there will be ‘invisible enemies’ in the future that manifest in strange ways, so strange that at first we won’t be able to identify that they came from AI. In the worst case, we’ll be so pacified and so neglectful of our intellectual faculties that we won’t identify the source at all.


We need to simplify how we explain artificial intelligence. We need more intellectual initiatives that the public finds appealing. We need to take the absolute proposals more seriously. I think visions of what the future might look like, and of how to respond to it (not just models of AI alignment), are important.

Inspired by “Have epistemic conditions always been this bad?” on LessWrong.