I’m a software engineer. I have a blog at niknoble.com.
He gave me a simple heuristic: if you’re spending time wondering if a specific thought is OCD related, it probably is. I have found that to be true every time.
Analogues:
If you’re spending time wondering whether you’re dreaming, you probably are
If you’re spending time wondering whether a psychedelic drug is altering your thought process, it probably is
I don’t think “directionally correct” is a standalone concept. It’s just normal usage of the words “directionally” and “correct,” so you understand it automatically if you understand those words.
I suppose I can’t point to anything clearly false in your original post, especially with these clarifications, but I’m still left with a feeling that you would not have written it if you fully appreciated the extent to which a smart twelve-year-old is the same kind of thing as us.
The article is objectively easy to read, give or take an awkward sentence or obscure word. I’m fairly confident that if we had 12-year-old you read this article, then told him that future you had declared the article “isn’t written for children,” he would be amused at your impression of him.
I think there is a huge gap between AI being better than humans at theorem proving and AI being able to “do all the things that humans can reliably and measurably do.” Theorem proving is a lot like go or chess; it’s intuition-guided search over a fairly simple search space. It’s the kind of thing that we would expect computers to surpass humans at soon, even in a world where human-level AI is a long ways away.
Ah, okay. I interpreted “written for adults who already have some exposure to our culture” as “exposure to [our society’s] culture,” which would include the median adult, but I see now that you actually meant “exposure to [rationalist] culture,” which would not include the median adult.
I still disagree with the sentiment though. A smart 12-year-old, even with no exposure to rationalist culture, should be able to understand the sentence in context. To a first approximation, he is just like you and me but has never seen the word “agency” used in this way, so we can put ourselves in his shoes by imagining the passage said this:
Respect yourself in the past, present, and future. Don’t make excuses for being young. Even if you, the reader, are currently 4 years old, don’t let adults make excuses on your behalf. “Age is just a number” is not true, but is directionally correct compared to the societal status quo that rids children of dealership. You can start setting the foundations for the life you want today, no matter how young you are. Childhood doesn’t have to be all fun and games (fun and games are good, but they can also continue your entire life)! Start planning the life you want by thinking freely in your own head. You can beat others by starting earlier because you respect yourself and haven’t fallen for the “children aren’t people”-style propaganda.
On reading this, we are struck by the word “dealership” which is clearly being used in some unusual way, but we can still understand the passage more or less completely. The “dealership” sentence in particular is conceding that being young comes with real limitations, but it asserts that these limitations are weaker than is commonly believed. We realize that we don’t really need to know what “dealership” means in this context, and it’s kind of obvious anyway from the surrounding text, but we’re feeling motivated so we paste it into Google. Under the main definition, there is a secondary one: “the capacity, condition, or state of acting or of exerting power.” Ah, that must be it. “rids children of dealership” = “imagines children lack the capacity to exert power.” Ok, that was kind of a waste; we already got that from the rest of the passage. But whatever, it was worth the 30 seconds to learn some new vocabulary. Maybe it will help us in the future.
Was that really so bad?
Now imagine a 40-year-old comes along and tells us, “I feel like this article isn’t written for 27-year-olds like yourselves. It’s written for 40-year-olds who already have some exposure to 4chan (the ‘dealership’ jargon is popular on the /biz/ board). I was pretty smart at 27, but I don’t imagine being able to understand that sentence back then. I think it would be more suitable for you guys if we rewrote it like this: <insert simplified version with lots of examples>.”
I’m not claiming your changes aren’t an improvement. But if they are, it’s because they make the passage clearer in general, not because they reduce it from adult-level complexity to something even a lowly 12-year-old can understand.
Funding your own existence doesn’t lead to Malthusian issues, if it’s not at the expense of those who didn’t consent to this externality.
Had to think about this for a while, but I’m assuming this means that by funding your own existence through consensual economic transactions, you’re necessarily generating as much value as you consume, so you’re not reducing the amount of pie available for everyone else.
That seems vaguely reasonable. I guess an important requirement, then, is that new people aren’t able to extract any resources through purely political means, which goes to your point about having hard limits on who provides for the new people.
Somehow it still feels suspicious. Like, if today we introduce a trillion initially-broke self-funding humans into the world, who all need a place to live, surely that makes it harder for me to rent an apartment, right? I’ll admit that I know very little about economics.
Good points. It’s still pretty fraught though:
Your proposals assume that a new person is given some predetermined amount of resources (as a lump sum or regular payments from their creator) and nothing else. But what if that person competes (politically, economically, etc.) to get more resources on top of their initial allotment? Then they’re still going to eat into the broader pie. You could say, “we’ll have to prevent them from owning more resources than their allotment,” but it’s unclear that this could be enforced.
Your proposals assume that people are only being created through official channels, but it might be very easy to illegally spawn off unregistered people who fund their own existence. Plus, an enormous number of unregistered people may be created before we have time to set up any rules at all.
Your ideas would be a lot easier to implement if we all lived in a digital world and could only add people to our world via some specific API. But it’s not so easy in the base reality.
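For concreteness, the digital-world version might look something like the sketch below. Every name in it is made up; it is only meant to show why a single creation chokepoint makes the rules enforceable.

```cpp
#include <iostream>
#include <stdexcept>
#include <string>
#include <unordered_map>

// Toy model of a simulated world where the ONLY way to create a person
// is one API call that debits the creator and registers the child with
// a capped allotment. Enforcement is trivial: there are no side doors.
class World {
    std::unordered_map<std::string, long> balances;  // resource units
public:
    World(const std::string& founder, long endowment) {
        balances[founder] = endowment;
    }

    void spawnPerson(const std::string& creator, const std::string& child,
                     long allotment) {
        auto it = balances.find(creator);
        if (it == balances.end() || it->second < allotment)
            throw std::runtime_error("creator cannot fund this person");
        if (balances.count(child))
            throw std::runtime_error("person already registered");
        it->second -= allotment;      // creator funds the allotment up front
        balances[child] = allotment;  // child is registered at creation
    }

    long balanceOf(const std::string& who) const {
        auto it = balances.find(who);
        return it == balances.end() ? 0 : it->second;
    }
};

int main() {
    World w("alice", 1000);
    w.spawnPerson("alice", "bob", 400);         // fine: alice can fund bob
    std::cout << w.balanceOf("alice") << "\n";  // 600
    // w.spawnPerson("bob", "carol", 500);      // would throw: unfunded
}
```

Base reality has no such chokepoint, which is exactly the problem.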
I’ve thought about what would happen if various AI players got their hands on godlike powers via ASI. My subjective impressions, from least to most troubling:
Demis Hassabis—Best possible outcome. Utopia for all of us. Something like The Metamorphosis of Prime Intellect. Way better than a random unaligned superintelligence, and way better than aligned ASI controlled by a committee of world governments.
Elon Musk—Concerning but okay. We’ll probably get utopia, and it’s probably still better than a random unaligned ASI or one controlled by governments, but Musk will have some greatly elevated position and you won’t want to piss him off. His enemies from the pre-AGI days will be in danger. I would enjoy his world but do my best to avoid ever attracting his attention.
Dario Amodei—I don’t want this. We’ll probably get a situation vastly better than the present world, but that is a low bar and I would rather take my chances with an ASI controlled by a broad committee, or maybe even a random unaligned one. Amodei seems to have strong principles, but ones that are fairly alien to my own, and which take a great interest in me and how I live. That’s a troubling combination, especially with immortality on the cards.
Sam Altman—We’re dead within days. A coldly rational actor with ASI kills everyone else as quickly as possible, knowing that they can always be recreated once said actor has put safeguards in place to ensure no one else can build ASI. (This is the same sort of behavior that reliably emerges with governments and nuclear weapons. Individuals have exactly the same incentives as governments, but we usually don’t see this because individuals are much less powerful and therefore have totally different circumstances.) I have zero doubt that Sam understands this simple game theory, and have never seen evidence from him of any deeper principles or desires that would cause him to act against his incentives in this case.
All that being said, I think the idea of one man controlling humanity’s future is looking pretty implausible at the present, and if it does happen it’s probably coming late enough that the man in question hasn’t been born yet. This stuff feels pretty firmly in the realm of sci-fi speculations.
I often wonder if there is a No-Alignment Theorem that says you can’t always control the actions of an intelligent entity. Maybe something with the flavor of the undecidability of the halting problem or Gödel’s Incompleteness Theorems, where the issue stems from the fact that an intelligent entity can model itself and reflect from a distance on the goals you’ve given it.

I doubt such a thing exists, but it’s fun to think about. It would also require a mathematical formulation of an intelligent entity, which seems to be quite a ways off. And even if such a theorem does exist, it would almost certainly be irrelevant for doing alignment in practice, the same way Gödel’s Incompleteness Theorems do not affect the day-to-day work of mathematicians.
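For flavor, here is the standard halting-problem diagonal argument transplanted to agents (ordinary computability theory, nothing new, and all the notation is my own): suppose $\mathrm{Aligned}$ were a total procedure with $\mathrm{Aligned}(A) = \mathrm{true}$ exactly when agent $A$ never takes a forbidden action, and define the agent

$$D := \text{“run } \mathrm{Aligned}(D)\text{; if it returns true, take a forbidden action; otherwise do nothing.”}$$

Either answer makes $\mathrm{Aligned}$ wrong about $D$, so no such total verifier can exist. This is essentially Rice’s theorem in an agent costume; whether anything like it survives a realistic formalization of “intelligent entity” is exactly the open question.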
The synthetic population bomb:
In a world of abundance, it will be easy to create digital people, and those people will have as much of a claim to resources as anyone else. If we create people faster than we extract new resources, then the pie grows but each person’s slice shrinks. In the end, we live in a technological wonderland but own fewer and fewer atoms.
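To put toy numbers on that (my own arithmetic, not from the original): if total resources grow at rate $g$ while people are created at rate $p > g$, each person’s share is

$$\frac{R(t)}{N(t)} = \frac{R_0 e^{gt}}{N_0 e^{pt}} = \frac{R_0}{N_0}\, e^{(g-p)t} \to 0,$$

so the pie $R(t)$ grows without bound while every individual slice shrinks toward zero.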
Ironically, this shows that you are still deep in the “societal status quo that rids children of agency.” Any English sentence that is parsable by a median adult is parsable by a smart twelve-year-old.
If this guessing game ever takes place, I will be identified by the string `]@`. Having laid claim to this string first, I’ll obviously take precedence over anyone who tries to claim it later.
That’s 2 characters for me, which should put me ahead of everyone else on this site at the time of this post.
Even with a Schelling codebook, the 5-ASCII-char approach is dangerous because there are some 5-character strings that would suggest some other scheme is being used. For example, the guesser might have to assume the string `weiyi` is referring to the Chinese chess player (the most prominent person with that name by far) and not a codebook entry.

On the other hand, maybe being “logically omniscient,” the guesser would inevitably conclude that a codebook is the only reasonable scheme, and that would outweigh the enormous coincidence of the 5-character string so clearly identifying a person in English.
A while back I created a multiple choice test about a fake alien planet to play with some of these ideas.
I am a great fan of Ted Chiang. Many see Understand as his weakest story. I love it, as it is the finest work of intelligence porn ever written.
Understand is my favorite short story of all time. It’s funny to see you call it intelligence porn, since I keep a list of stories titled “agency porn” where it is the first entry. Some others on the list are Crystal Nights by Egan and Dare by Charlie Fish. I also think ideal agency porn has an accelerating plot structure, where the scale and stakes get steadily higher throughout the story, and Understand is just a masterclass in creating that sense of building momentum.
Chiang is special because he can produce interesting ideas and tell a great story, and these skills seem to be in tension. Egan’s ideas are legendary, even better than Chiang’s, but his storytelling is pretty weak. The only author I know of who I would assert beats Chiang on both quality of ideas and storytelling is Susanna Clarke. (She isn’t categorized as sci-fi, but read Piranesi and tell me this woman wouldn’t be an excellent mathematician.)
Given that I love Understand, what is my least favorite Ted Chiang story? That would be Liking What You See: A Documentary.
I agree that Liking What You See was weak. I don’t remember much about it, and it was fairly long, so the density of memorable ideas per word must have been low. I do remember that it was disappointingly predictable. If you read the phrase “debate about equalizing beauty” and let the commentary you’ve absorbed from our culture wash over you for a few seconds, then you’ve already covered all of the angles raised in the story. There’s nothing original, and there’s basically no plot to salvage the boring ideas; it’s just an exploration of the ideas using the characters as props.
However, I think What’s Expected of Us is an even weaker Chiang story. That story describes a device called a “predictor” that has a button and an LED. Whenever the button is pressed, the LED blinks one second *before* the press. This results in widespread anguish as people grasp the consequences for free will.
For starters, the premise of the predictor is just logically impossible on its face. Set up an Arduino that checks the LED at 3:00:00 PM and pushes the button at 3:00:01 PM only if the LED was off. I don’t remember there being any consideration of the problem this creates for the predictor. But even worse, the story totally misses the mark on human psychology. There is zero chance society would go insane due to the emergence of a philosophical paradox like this. The laws of physics are full of mind-bending paradoxes in the real world, but they have no effect on the average person’s state of mind.
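Concretely, the Arduino counterexample from above might look like this (the pin choices and the button-pressing mechanism are my own assumptions, not anything from the story):

```cpp
// Hypothetical predictor-breaker. Assumes a photosensor watching the
// predictor's LED is wired to LED_PIN, and a solenoid that physically
// presses the button is wired to BUTTON_PIN.
const int LED_PIN = 2;     // input: is the LED blinking?
const int BUTTON_PIN = 3;  // output: actuate the button press

void setup() {
  pinMode(LED_PIN, INPUT);
  pinMode(BUTTON_PIN, OUTPUT);

  // Sample the LED at the agreed moment...
  bool ledWasOn = (digitalRead(LED_PIN) == HIGH);
  delay(1000);  // ...wait the advertised one second...

  // ...and press the button only if the LED stayed dark.
  // LED off => we press, so it should have blinked (contradiction).
  // LED on  => we never press, so the blink predicted nothing (contradiction).
  if (!ledWasOn) {
    digitalWrite(BUTTON_PIN, HIGH);
    delay(100);
    digitalWrite(BUTTON_PIN, LOW);
  }
}

void loop() {}  // the experiment runs once, in setup()
```

Either branch falsifies the device, so a predictor that is right every time cannot exist as described.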
In general, “philosophical idea leads to insanity” is a common trope in fiction, but it very rarely matches reality. I guess it’s a way to shoehorn an interesting idea into a story, turning what should have been an essay into a piece of fiction. Incidentally, the one exception I’ve noticed to this rule is your own story The Maker of MIND, where a character is tormented by a fairly abstract philosophical concern, and the reader actually feels his anguish. I thought that was really difficult and impressive.
In fact, I think you are an exceptionally gifted storyteller and have what it takes to be in the conversation with Chiang and Egan. The Company Man was insanely good. I remember someone in the comments was questioning why that story was so well-received on this site, and I thought, when a story is this good, it doesn’t even matter what it’s about, because it’s interesting just as an example of storytelling. It gets you thinking about the ingredients of good stories. I’m not especially interested in rationalist culture or AI safety, and I still find myself thinking about The Company Man every now and then. In particular, I often recall this little excerpt (the final paragraph just kills me):
“So why are you here? Why are you working on The Project?” he asks.
I explain my theory about the near-certain world destruction mitigated by the slight possibility of incomprehensibly large material wealth.
“Oh, like, the Bostrom stuff. I used to be super into the Bostrom stuff. I was so worried. That’s why I started The Project, you know. It started as like a safety thing. All triggered by that silly book.”
“And what changed your mind?”
He takes a giant hit from the hookah, the type of hit you only take if you have a spare pair of lungs on hand. “I went on a spiritual journey in Peru,” he says.
Kind of funny to stumble on this in 2026 and notice that the other conspicuous number in his tweet, besides 14 and 88, is 67. If there wasn’t before, there is certainly now a surprising density of meaningful numbers in that tweet.
However, if the button had another possible outcome, a nonzero chance (literally any nonzero chance!) of a thousand years of physical torture, I wouldn’t press that button, even if its chance of utopia was 99.99%.
I often wonder if any AGI utopia comes with a nonzero chance of eternal suffering. Once you have a godlike AGI that is focused on maximizing your happiness, are you then vulnerable to random bitflips that cause it to minimize your happiness instead?
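As a toy illustration of how little it would take (my own example; no claim that a real system would store its objective this way): in IEEE-754 doubles the sign occupies a single bit, so one flipped bit turns a reward into its exact negation.

```cpp
#include <cstdint>
#include <cstring>
#include <iostream>

int main() {
    double reward = 1000.0;  // "maximize my happiness"

    // Reinterpret the double's bytes as an integer and flip the sign bit.
    std::uint64_t bits;
    std::memcpy(&bits, &reward, sizeof bits);
    bits ^= std::uint64_t{1} << 63;  // a single-bit "cosmic ray" strike
    std::memcpy(&reward, &bits, sizeof reward);

    std::cout << reward << "\n";  // prints -1000: maximize became minimize
}
```

Real systems would presumably use error correction, but the example at least shows why sign-flip failures are a natural thing to worry about.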
Even if saving money through AGI converts 1:1 into money after the singularity, it will probably be worth less in utility to you:
You’ll probably be able to buy planets post-AGI for the price of houses today. More generally your selfish and/or local and/or personal preferences will be fairly easily satisfiable even with small amounts of money, or to put it in other words, there are massive diminishing returns.
No one will be buying planets for the novelty or as an exotic vacation destination. The reason you buy a planet is to convert it into computing power, which you then attach to your own mind. If people aren’t explicitly prevented from using planets for that purpose, then planets are going to be in very high demand, and very useful for people on a personal level.
This post and many of the comments are ignoring one of the main reasons that money becomes so much more critical post-AGI. It’s because of the revolution in self-modification that ensues shortly afterwards.
Pre-AGI, a person can use their intelligence to increase their money, but not the other way around; post-AGI it’s the opposite. The same applies if you swap intelligence for knowledge, health, willpower, energy, happiness set-point, or percentage of time spent awake.
This post makes half of that observation: that it becomes impossible to increase your money using your personal qualities. But it misses the other half: that it becomes possible to improve your personal qualities using your money.
The value of capital is so much higher once it can be used for self-modification.
For one thing, these modifications are very desirable in themselves. It’s easy to imagine a present-day billionaire giving up all he owns for a modest increase along just a few of these axes, say a 300% increase in intelligence and a 100% increase in energy.
But even if you trick yourself into believing that you don’t really want self-modification (most people will claim that immortality is undesirable, so long as they can’t have it, and likewise for wireheading), there are race dynamics that mean you can’t just ignore it.
People who engage in self-modification will be better equipped to influence the world, affording them more opportunities for self-modification. They will undergo recursive self-improvement similar to the kind we imagine for AGI. At some point, they will think and move so much faster than an unaugmented human that it will be impossible to catch up.
This might be okay if they respected the autonomy of unaugmented people, but all of the arguments about AGI being hard to control, and destroying its creators by default, apply equally well to hyperaugmented humans. If you try to coexist with entities who are vastly more powerful than you, you will eventually be crushed or deprived of key resources. In fact, this applies even more so with humans than AIs, since humans were not explicitly designed to be helpful or benevolent.
You might say, “Well, there’s nothing I can do in that world anyway, because I’m always going to lose a self-modification race to the people who start as billionaires, and since it’s a winner-takes-all situation, there’s no prize for giving it a decent try.” However, this isn’t necessarily true. Once self-modification becomes possible, there will still be time to take advantage of it before things start getting out of control. It will start out very primitive, resembling curing diseases more than engineering new capabilities. In this sense, it arguably already exists in a very limited form.
In this critical early period, a person will still have the ability to author their destiny, with the degree of that ability being mostly determined by the amount of self-modification they can afford.
Under some conditions, they may be able to permanently escape the influence of a hostile superintelligence (whether artificial or a hyperaugmented human). For example, a nearly perfect escape outcome could be achieved by travelling in a straight line close to the speed of light, bringing with you sufficient resources and capabilities to:
Stay alive indefinitely
Continue the process of self-improvement
In the chaos of an oncoming singularity, it’s not unimaginable that a few people could slip away in that fashion. But it won’t happen if you’re broke.
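Back-of-the-envelope on the near-lightspeed escape (my own arithmetic, not from the original): with a head start $d$ and cruising speed $v$, even a pursuer moving at the speed of light $c$ needs

$$t_{\text{catch}} = \frac{d}{c - v}$$

to close the gap, and that diverges as $v \to c$. “Nearly perfect” here means buying an arbitrarily long head start rather than a guaranteed permanent one.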
Notes
The line between buying an exocortex and buying/renting intelligent servants is somewhat blurred, so arguably the OP doesn’t totally miss the self-modification angle. But it should be called out a lot more explicitly, since it is one of the key changes coming down the pike.
Most of this comment doesn’t apply if AGI leads to a steady state where humans have limited agency (e.g. ruling AGIs or their owners prevent self-modification, or humans are replaced entirely by AGIs). But if that sort of outcome is coming, then our present-day actions have no positive or negative effects on our future, so there’s no point in preparing for it.
Relevant quote from Altman after the firing:
“I think this will be the most transformative and beneficial technology humanity has yet invented,” Altman said, adding later, “On a personal note, four times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I’ve gotten to be in the room when we push … the veil of ignorance back and the frontier of discovery forward.”
I always feel pressure to lie in the opposite way during job interviews. In software engineering, interviewers want to see relatable hobbies and strong social connections, with parenting being the holy grail, and they are as leery as you are of glorifying work. Literally thousands of versions of this post have gone viral in tech circles over the last 20 years, and as a result your view has percolated into the vast majority of corporate cultures, such that saying “at this company, we are like one big family” has acquired the same ring as “I can’t be racist because I have black friends.”
I also find that it’s more spiritually unpleasant to face “what do you do outside of work” or “what did you do over the weekend” when the true answer is socially unacceptable than it is to exaggerate when asked “why do you want to work at this company” or “you don’t mind doing a little overtime, do you?” Parents should be grateful that they have a permanent gold-standard answer to the first two. And people aren’t really expected to be honest on the second two anyway.
I understand there are pockets within tech that this culture hasn’t reached, and that it’s different in other industries like finance. I also agree that the “waging gives life meaning” argument is mostly ridiculous cope from people who have no choice, and they will drop the act the moment it becomes optional, similar to what will happen with aging and wireheading.