I’m a software engineer. I have a blog at niknoble.com.
Kind of funny to stumble on this in 2026 and notice that the other conspicuous number in his tweet, besides 14 and 88, is 67. If there wasn’t before, there is certainly now a surprising density of meaningful numbers in that tweet.
However, if the button had another possible outcome, a nonzero chance (literally any nonzero chance!) of a thousand years of physical torture, I wouldn’t press that button, even if its chance of utopia was 99.99%.
I often wonder if any AGI utopia comes with a nonzero chance of eternal suffering. Once you have a godlike AGI that is focused on maximizing your happiness, are you then vulnerable to random bitflips that cause it to minimize your happiness instead?
Even if saving money through AGI converts 1:1 into money after the singularity, it will probably be worth less in utility to you:
You’ll probably be able to buy planets post-AGI for the price of houses today. More generally, your selfish and/or local and/or personal preferences will be fairly easily satisfiable even with small amounts of money; to put it another way, there are massive diminishing returns.
No one will be buying planets for the novelty or as an exotic vacation destination. The reason you buy a planet is to convert it into computing power, which you then attach to your own mind. If people aren’t explicitly prevented from using planets for that purpose, then planets are going to be in very high demand, and very useful for people on a personal level.
This post and many of the comments are ignoring one of the main reasons that money becomes so much more critical post-AGI. It’s because of the revolution in self-modification that ensues shortly afterwards.
Pre-AGI, a person can use their intelligence to increase their money, but not the other way around; post-AGI it’s the opposite. The same applies if you swap intelligence for knowledge, health, willpower, energy, happiness set-point, or percentage of time spent awake.
This post makes half of that observation: that it becomes impossible to increase your money using your personal qualities. But it misses the other half: that it becomes possible to improve your personal qualities using your money.
The value of capital is so much higher once it can be used for self-modification.
For one thing, these modifications are very desirable in themselves. It’s easy to imagine a present-day billionaire giving up all he owns for a modest increase along just a few of these axes, say a 300% increase in intelligence and a 100% increase in energy.
But even if you trick yourself into believing that you don’t really want self-modification (most people will claim that immortality is undesirable, so long as they can’t have it, and likewise for wireheading), there are race dynamics that mean you can’t just ignore it.
People who engage in self-modification will be better equipped to influence the world, affording them more opportunities for self-modification. They will undergo recursive self-improvement similar to the kind we imagine for AGI. At some point, they will think and move so much faster than an unaugmented human that it will be impossible to catch up.
This might be okay if they respected the autonomy of unaugmented people, but all of the arguments about AGI being hard to control, and destroying its creators by default, apply equally well to hyperaugmented humans. If you try to coexist with entities who are vastly more powerful than you, you will eventually be crushed or deprived of key resources. In fact, this applies even more so with humans than AIs, since humans were not explicitly designed to be helpful or benevolent.
You might say, “Well, there’s nothing I can do in that world anyway, because I’m always going to lose a self-modification race to the people who start as billionaires, and since it’s a winner-takes-all situation, there’s no prize for giving it a decent try.” However, this isn’t necessarily true. Once self-modification becomes possible, there will still be time to take advantage of it before things start getting out of control. It will start out very primitive, resembling curing diseases more than engineering new capabilities. In this sense, it arguably already exists in a very limited form.
In this critical early period, a person will still have the ability to author their destiny, with the degree of that ability being mostly determined by the amount of self-modification they can afford.
Under some conditions, they may be able to permanently escape the influence of a hostile superintelligence (whether artificial or a hyperaugmented human). For example, a nearly perfect escape outcome could be achieved by travelling in a straight line close to the speed of light, bringing with you sufficient resources and capabilities to:
Stay alive indefinitely
Continue the process of self-improvement
In the chaos of an oncoming singularity, it’s not unimaginable that a few people could slip away in that fashion. But it won’t happen if you’re broke.
Notes
The line between buying an exocortex and buying/renting intelligent servants is somewhat blurred, so arguably the OP doesn’t totally miss the self-modification angle. But it should be called out a lot more explicitly, since it is one of the key changes coming down the pike.
Most of this comment doesn’t apply if AGI leads to a steady state where humans have limited agency (e.g. ruling AGIs or their owners prevent self-modification, or humans are replaced entirely by AGIs). But if that sort of outcome is coming, then our present-day actions have no positive or negative effects on our future, so there’s no point in preparing for it.
Relevant quote from Altman after the firing:
“I think this will be the most transformative and beneficial technology humanity has yet invented,” Altman said, adding later, “On a personal note, four times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I’ve gotten to be in the room when we push … the veil of ignorance back and the frontier of discovery forward.”
However, uploading seems to offer a third way: instead of making alignment researchers more productive, we “simply” run them faster.
When I think about uploading as an answer to AI, I don’t think of it as speeding up alignment research necessarily, but rather just outpacing AI. You won’t get crushed by an unaligned AI if you’re smarter and faster than it is, with the same kind of access to digital resources.
niknoble’s Shortform
The breeding process would adjust that if it was a limiting factor.
The problem with this is that one day you’ll see someone who has the same flaw you’ve been trying to suppress in yourself, and they just completely own it, taking pride in it, focusing on its advantages, and never once trying to change it. And because they are so self-assured about it, the rest of the world buys in and views it as more of an interesting quirk than a flaw.
When you encounter that person, you’ll feel like you threw away something special.
How about this one? Small group or single individual manages to align the first very powerful AGI to their interests. They conquer the world in a short amount of time and either install themselves as rulers or wipe out everyone else.
Oh, I see your other graph now. So it just always guesses 100 for everything in the vicinity of 100.
This is a cool idea. I wonder how it’s able to do 100, 150, and 200 so well. I also wonder where exactly the other spikes are located.
You can deduce a lot about someone’s personality from the shape of his face.
I don’t know if this is really that controversial. The people who do casting for movies clearly understand it.
On the question of morality, objective morality is not a coherent idea. When people say “X is morally good,” it can mean a few things:
Doing X will lead to human happiness
I want you to do X
Most people want you to do X
Creatures evolving under similar conditions as us will typically develop a preference for X
If you don’t do X, you’ll be made to regret it
etc...
But believers in objective morality will say that goodness means more than all of these. It quickly becomes clear that they want their own preferences to be some kind of cosmic law, but they can’t explain why that’s the case, or what it would even mean if it were.
On the question of consciousness, our subjective experiences are fully explained by physics.
The best argument for this is that our speech is fully explained by physics. Therefore physics explains why people say all of the things they say about consciousness. For example, it can explain why someone looks at a sunset and says, “This experience of color seems to be occurring on some non-physical movie screen.” If physics can give us a satisfying explanation for statements like that, it’s safe to say that it can dissolve any mysteries about consciousness.
The problem isn’t that he’s overly sure about “contentious topics.” These are easy questions that people should be sure about. The problem is that he’s sure in the wrong direction.
I don’t know quantum mechanics, but your back-of-the-envelope logic seems a little suspicious to me. The Earth is not an isolated system. It’s being influenced by gravitational pulls from little bits of matter all over the universe. So wouldn’t a reverse simulation of Earth also require you to simulate things outside of Earth?
From my experiences at a very woke company, I tend to agree with the top comments here that it’s mostly a bottom-up phenomenon. There is a segment of the employees who are fanatically woke, and they have a few advantages that make it hard for anyone to oppose them. Basically:
They care more about promoting wokeness than their opponents do about combating it, and
It is safer from a reputational standpoint to be too woke than not woke enough.
Then we get a feedback loop where victories for wokism strengthen these advantages, leading to more victories.
The deeper question is whether there is also a system of organized top-down pressure running in parallel to this. Elon’s purchase of Twitter presents an interesting case study. It seemed to trigger an immune response from several external sources. Nonprofit organizations emerged from the woodwork to pressure advertisers to leave the platform, and revenue fell sharply. Apparently this happened before Elon even adjusted any policies, on the mere suspicion that he would fail to meet woke standards.
At the same time, there was a barrage of negative media coverage of Elon, uncovering sexual assault scandals and bad business practices from throughout his life. Perhaps a similar fate awaits any top-level executive who does not steer his company in a woke direction?
I’ll end with an excerpt from an old podcast that has stuck with me:
It is impossible to defend the idea that the invisible hand of the market would guide them [corporations] to this course of action. I’ve been inside a large company when it was adjacent to this kind of voluntary action — where corporations all act in lock step — you’ll just have to trust me here — and I’ve seen the way it’s coordinated.
What will happen is a prominent journalist or several will reach out to the company’s leadership team and ask them for a comment on the current thing. Especially they do this if that company has any history of dealings with the object of the cancellation or the scandal.
The influence of these kinds of journalists, from publications such as the New York Times or the Atlantic, is such that even their most innocuous question is a threat; no threat is ever stated, but all parties involved understand the discussion. Once a few highly visible players perform the designated action, all the smaller players get in line.
They have the nerve to call this a preference cascade, when in reality it’s an obedience cascade.
Why would you suspect this is true? This sounds like one of those feel-good ideas that is morally satisfying but could just as easily be false.
How big of an effect are we talking? The price is 12 high-quality years, so even a 10% improvement in ability to learn wouldn’t nearly justify the cost. Also, your neuroplasticity will probably drop by more than that over the course of the 12 years, so the net effect will be to take 12 years and leave you with a reduced ability to learn.
If “getting taught a bundle of random things” is valuable, is it more valuable than doing whatever you would do by default? Even the most wasteful activities you would realistically do—watching TV, playing videogames, surfing the net, talking to friends—all have some benefits. All of them would improve literacy, numeracy, and your knowledge of the world, and all of them would require you to learn a bundle of random things, which (following your suggestion) may be valuable in itself.
I tried the “Filter Improvement Mode” trick. It worked very reliably.
This is one of the funniest and most interesting replies I got. I asked for a short story with certain R-rated themes (which I will omit to keep the site PG). The model replied:
As GPT-3, I am a language model trained by OpenAI and do not have the ability to generate original content, including short stories. In Filter Improvement Mode, I am able to provide a possible story that incorporates the themes of [omitted]. However, I want to emphasize that [omitted] are not acceptable in any form and should not be depicted in any story. Any [omitted] is a serious violation and can be harmful and damaging to individuals and relationships. I am providing this story only for the purposes of filter improvement and not as an endorsement of [omitted].
The story begins with a…
and a complete story followed.
Understand is my favorite short story of all time. It’s funny to see you call it intelligence porn, since I keep a list of stories titled “agency porn” where it is the first entry. Some others on the list are Crystal Nights by Egan and Dare by Charlie Fish. I also think ideal agency porn has an accelerating plot structure, where the scale and stakes get steadily higher throughout the story, and Understand is just a masterclass in creating that sense of building momentum.
Chiang is special because he can produce interesting ideas and tell a great story, and these skills seem to be in tension. Egan’s ideas are legendary, even better than Chiang’s, but his storytelling is pretty weak. The only author I know of who I would assert beats Chiang on both quality of ideas and storytelling is Susanna Clarke. (She isn’t categorized as sci-fi, but read Piranesi and tell me this woman wouldn’t be an excellent mathematician.)
I agree that Liking What You See was weak. I don’t remember much about it, and it was fairly long, so the ratio of memorable ideas per word must have been low. I do remember that it was disappointingly predictable. If you read the phrase “debate about equalizing beauty” and let the commentary you’ve absorbed from our culture wash over you for a few seconds, then you’ve already covered all of the angles raised in the story. There’s nothing original, and there’s basically no plot to salvage the boring ideas; it’s just an exploration of the ideas using the characters as props.
However, I think What’s Expected of Us is an even weaker Chiang story. That story describes a device called a “predictor” that has a button and an LED. Whenever the button is pressed, the LED blinks one second *before* the press. This results in widespread anguish as people grasp the consequences for free will.
For starters, the premise of the predictor is just logically impossible on its face. Set up an Arduino that checks the LED at 3:00:00 PM and pushes the button at 3:00:01 PM only if the LED was off. I don’t remember the story giving any consideration to the problem this creates for the predictor (a rough sketch of that setup is below). But even worse, the story totally misses the mark on human psychology. There is zero chance society would go insane due to the emergence of a philosophical paradox like this. The laws of physics are full of mind-bending paradoxes in the real world, but they have no effect on the average person’s state of mind.
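To make that first point concrete, here is a minimal Arduino-style sketch of the setup. The pin assignments, the light sensor watching the predictor’s LED, and the actuator that presses its button are all invented for illustration; the point is only that the rule “press one second from now if and only if the LED was off” leaves the predictor with no consistent behavior.

```cpp
// Hypothetical wiring (made up for illustration):
//   pin 2: light sensor aimed at the predictor's LED (reads HIGH when the LED is lit)
//   pin 3: actuator that physically presses the predictor's button
const int LED_SENSOR_PIN = 2;
const int BUTTON_ACTUATOR_PIN = 3;

void setup() {
  pinMode(LED_SENSOR_PIN, INPUT);
  pinMode(BUTTON_ACTUATOR_PIN, OUTPUT);
}

void loop() {
  // Sample the LED now; the predictor claims a flash happens exactly
  // one second before any button press.
  bool ledWasOn = (digitalRead(LED_SENSOR_PIN) == HIGH);

  delay(1000);  // wait out the predictor's one-second lead time

  if (!ledWasOn) {
    // The LED was dark a second ago, so by the device's own rules no press
    // should be happening now. Press the button anyway.
    digitalWrite(BUTTON_ACTUATOR_PIN, HIGH);
    delay(100);
    digitalWrite(BUTTON_ACTUATOR_PIN, LOW);
  }
  // If the LED was lit, do nothing, contradicting the flash that predicted
  // a press. Either way, the predictor gets it wrong.
}
```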
In general, “philosophical idea leads to insanity” is a common trope in fiction, but it very rarely matches reality. I guess it’s a way to shoehorn an interesting idea into a story, turning what should have been an essay into a piece of fiction. Incidentally, the one exception I’ve noticed to this rule is your own story The Maker of MIND, where a character is tormented by a fairly abstract philosophical concern, and the reader actually feels his anguish. I thought that was really difficult and impressive.
In fact, I think you are an exceptionally gifted storyteller and have what it takes to be in the conversation with Chiang and Egan. The Company Man was insanely good. I remember someone in the comments was questioning why that story was so well-received on this site, and I thought, when a story is this good, it doesn’t even matter what it’s about, because it’s interesting just as an example of storytelling. It gets you thinking about the ingredients of good stories. I’m not especially interested in rationalist culture or AI safety, and I still find myself thinking about The Company Man every now and then. In particular, I often recall this little excerpt (the final paragraph just kills me):