Claude says:
Habryka is right here. The bullet point misrepresents Bostrom’s position.
The bullet says “Contra Bostrom 2014 AIs will in fact probably understand what we mean by the goals we give them before they are superintelligent”—presented as correcting something Bostrom got wrong. But Bostrom’s actual quote explicitly says the AI does understand what we meant (“The AI may indeed understand that this is not what we meant”). The problem in Bostrom’s framing isn’t lack of understanding, it’s misalignment between what we meant and what we coded.
Gemini 3 says similar:
Analysis
Habryka is technically correct regarding the text. Bostrom’s “Orthogonality Thesis” specifically separates intelligence (understanding) from goals (motivation). Bostrom explicitly argued that a superintelligence could have perfect understanding of human culture and intentions but still be motivated solely to maximize paperclips if that is what its utility function dictates. The failure mode Bostrom describes is not “oops, I misunderstood you,” but “I understood you perfectly, but my utility function rewards literal obedience, not intended meaning.”
I will take this to mean you share similar flawed generalization/reading strategies. I struggle to put the cognitive error here into words, but it seems to me like an inability to connect the act of specifying a wrong representation of utility with the phrase ‘lack of understanding’, or an odd literalist interpretation whereby the fact that Bostrom argues in general for a separation between motivations and intelligence (the orthogonality thesis) means that I am somehow misinterpreting him when I say that the mesagoal inferred from the objective function, before any understanding of language, is a “misunderstanding” of the intent of the objective function. This is a very strange and very pedantic use of “understand”. “Oh but you see, Bostrom is saying that the thing you actually wrote means this, which it understood perfectly.”
No.
If I say something by which I clearly mean one thing, and that thing was in principle straightforwardly inferrable from what I said (as is occurring right now), and the thing which is inferred instead is straightforwardly absurd by the norms of language and society, that is called a misunderstanding, a failure to understand. If you specify a wrong, incomplete objective to the AI and it internalizes that wrong, incomplete objective as opposed to what you meant, then it (the training/AI building system as a whole) misunderstood you, even if it understands your code to represent the goal just fine. This is to say that you want some way for the AI or AI building system to understand, by which we mean correctly infer the meaning and indirect consequences of the meaning of what you wrote, at initialization; you want it to infer the correct goal at the point where a mesagoal is internalized. This process can be rightfully called UNDERSTANDING and when an AI system fails at this it has FAILED TO UNDERSTAND YOU at the point in time which mattered, even if later there is some epistemology that understands in principle what was meant by the goal but is motivated by the mistaken version that it internalized when the mesagoal was formed.
But also as I said earlier Bostrom states this many times, we have a lot more to go off than the one line I quoted there. Here he is on page 171 in the section “Motivation Selection Methods”:
Problems for the direct consequentialist approach are similar to those for the direct rule-based approach. This is true even if the AI is intended to serve some apparently simple purpose such as implementing a version of classical utilitarianism. For instance, the goal “Maximize the expectation of the balance of pleasure over pain in the world” may appear simple. Yet expressing it in computer code would involve, among other things, specifying how to recognize pleasure and pain. Doing this reliably might require solving an array of persistent problems in the philosophy of mind—even just to obtain a correct account expressed in a natural language, an account which would then, somehow, have to be translated into a programming language.
A small error in either the philosophical account or its translation into code could have catastrophic consequences. Consider an AI that has hedonism as its final goal, and which would therefore like to tile the universe with “hedonium” (matter organized in a configuration that is optimal for the generation of pleasurable experience). To this end, the AI might produce computronium (matter organized in a configuration that is optimal for computation) and use it to implement digital minds in states of euphoria. In order to maximize efficiency, the AI omits from the implementation any mental faculties that are not essential for the experience of pleasure, and exploits any computational shortcuts that according to its definition of pleasure do not vitiate the generation of pleasure. For instance, the AI might confine its simulation to reward circuitry, eliding faculties such as memory, sensory perception, executive function, and language; it might simulate minds at a relatively coarse-grained level of functionality, omitting lower-level neuronal processes; it might replace commonly repeated computations with calls to a lookup table; or it might put in place some arrangement whereby multiple minds would share most parts of their underlying computational machinery (their “supervenience bases” in philosophical parlance). Such tricks could greatly increase the quantity of pleasure producible […]
This part makes it very clear that what Bostrom means by “code” is, centrally, some discrete program representation (i.e. a traditional programming language, like python, as opposed to some continuous program representation like a neural net embedding).
Bostrom expands on this point on page 227 in the section “The Value-Loading Problem”:
We can use this framework of a utility-maximizing agent to consider the predicament of a future seed-AI programmer who intends to solve the control problem by endowing the AI with a final goal that corresponds to some plausible human notion of a worthwhile outcome. The programmer has some particular human value in mind that he would like the AI to promote. To be concrete, let us say that it is happiness. (Similar issues would arise if we were interested in justice, freedom, glory, human rights, democracy, ecological balance, or self-development.) In terms of the expected utility framework, the programmer is thus looking for a utility function that assigns utility to possible worlds in proportion to the amount of happiness they contain. But how could he express such a utility function in computer code? Computer languages do not contain terms such as “happiness” as primitives. If such a term is to be used, it must first be defined. It is not enough to define it in terms of other high-level human concepts—“happiness is enjoyment of the potentialities inherent in our human nature” or some such philosophical paraphrase. The definition must bottom out in terms that appear in the AI’s programming language, and ultimately in primitives such as mathematical operators and addresses pointing to the contents of individual memory registers. When one considers the problem from this perspective, one can begin to appreciate the difficulty of the programmer’s task.
Here Bostrom is saying that it is not even rigorously imaginable how you would translate the concept of “happiness” into discrete program code. Which, in 2014 when the book was published, was correct: it was not rigorously imaginable. That’s why being able to pretrain neural nets which understand the concept in the kind of way where they simply wouldn’t make mistakes like “tile the universe with smiley faces”, and which can be used as part of a goal specification, is a big deal.
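To make that concrete, here is a minimal sketch of what I mean by using a pretrained representation of the concept as part of a goal specification, as opposed to bottoming the definition out in memory registers. The encoder name, the example strings, and the idea of scoring by cosine similarity are all illustrative assumptions, not a proposal for an outer objective you could safely optimize against:

```python
# Minimal sketch: score candidate outcomes against a pretrained representation
# of "happiness" instead of defining the concept from arithmetic primitives.
# The specific encoder and examples are assumptions for illustration only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any off-the-shelf text encoder
concept = model.encode("humans flourishing, healthy, and genuinely happy")

candidates = [
    "people living rich lives with friends, projects, and leisure",
    "the universe tiled with tiny molecular smiley faces",
    "everyone's reward circuitry looped on a one-minute bliss recording",
]
for text in candidates:
    score = util.cos_sim(concept, model.encode(text)).item()
    print(f"{score:+.3f}  {text}")
# The point is not that this similarity score is safe to maximize (it isn't),
# only that a representation which distinguishes these cases at all now exists
# off the shelf, which was not rigorously imaginable in 2014.
```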
With this in mind let’s return to the section I quoted the line in my post from, which says:
Defining a final goal in terms of human expressions of satisfaction or approval does not seem promising. Let us bypass the behaviorism and specify a final goal that refers directly to a positive phenomenal state, such as happiness or subjective well-being. This suggestion requires that the programmers are able to define a computational representation of the concept of happiness in the seed AI. This is itself a difficult problem, but we set it to one side for now (we will return to it in Chapter 12). Let us suppose that the programmers can somehow get the AI to have the goal of making us happy. We then get:
Final goal: “Make us happy”
Perverse instantiation: Implant electrodes into the pleasure centers of our brains
The perverse instantiations we mention are only meant as illustrations. There may be other ways of perversely instantiating the stated final goal, ways that enable a greater degree of realization of the goal and which are therefore preferred (by the agent whose final goals they are—not by the programmers who gave the agent these goals). For example, if the goal is to maximize our pleasure, then the electrode method is relatively inefficient. A more plausible way would start with the superintelligence “uploading” our minds to a computer (through high-fidelity brain emulation). The AI could then administer the digital equivalent of a drug to make us ecstatically happy and record a one-minute episode of the resulting experience. It could then put this bliss loop on perpetual repeat and run it on fast computers. Provided that the resulting digital minds counted as “us,” this outcome would give us much more pleasure than electrodes implanted in biological brains, and would therefore be preferred by an AI with the stated final goal.
“But wait! This is not what we meant! Surely if the AI is superintelligent, it must understand that when we asked it to make us happy, we didn’t mean that it should reduce us to a perpetually repeating recording of a drugged-out digitized mental episode!”—The AI may indeed understand that this is not what we meant. However, its final goal is to make us happy, not to do what the programmers meant when they wrote the code that represents this goal. Therefore, the AI will care about what we meant only instrumentally. For instance, the AI might place an instrumental value on […]
What Bostrom is saying is that one of, if not the first, impossible problem(s) you encounter is having any angle of attack on representing our goals in the kind of way which generalizes even at a human level inside the computer, such that you can point an optimization process at it. Obviously a superintelligent AI would understand what we had meant by the initial objective, but it’s going to proceed according to either the mesagoal it internalizes or the literal code sitting in its objective function slot, because the part of the AI which motivates it is not controlled by the part of the AI, developed later in training, which understands what you meant in principle after acquiring language. The system which translates your words or ideas into the motivation specification must understand you at the point where you turned that translated concept into an optimization objective: at the start of training, or some point where the AI is still corrigible and you can therefore insert objectives and training goals into it.
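Here is a toy sketch of the structure of that argument (everything in it is a deliberately crude stand-in, not a claim about any real training setup): the objective consulted by the optimization loop is whatever proxy was written down before training, and the system’s later, richer understanding never rewrites it unless you build a channel for that.

```python
# Toy illustration of "the goal slot is filled before the understanding exists".
# proxy_reward is what the programmer wrote at t=0; intended_reward is what
# they meant. Only the former is ever consulted by the optimization step.
def proxy_reward(outcome: str) -> float:
    return outcome.count("smile")           # crude stand-in for "make us happy"

def intended_reward(outcome: str) -> float:
    return float("flourishing" in outcome)  # what was actually meant

candidates = [
    "humans flourishing in communities they chose",
    "smile smile smile smile smile (tiny molecular faces)",
]

# A maximally naive "improvement" step: pick whatever the frozen proxy scores
# highest, regardless of what anyone meant by it.
chosen = max(candidates, key=proxy_reward)
print(chosen)                   # the smiley-face outcome wins under the proxy
print(intended_reward(chosen))  # scores 0.0 under the intended goal
```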
Your bullet point says nothing about corrigibility.
My post says that a superintelligent AI is a superplanner which develops instrumental goals by planning far into the future. The more intelligent the AI is the farther into the future it can effectively plan, and therefore the less corrigible it is. Therefore by the time you encounter this bullet point it should already be implied that superintelligence and the corrigibility of the AI are tightly coupled, which is also an assumption clearly made in Bostrom 2014 so I don’t really understand why you don’t understand.
ChatGPT still thinks I am wrong so let’s think step by step. Bostrom says (i.e. leads the reader to understand through his gestalt speech, not that he literally says this in one passage) that, in the default case:
1. When you specify your final goal, it is wrong.
2. It is wrong because it is a discrete program representation of a nuanced concept like “happiness” that does not fully capture what we think happiness is.
3. Eventually you will have a world model with a correct understanding of happiness, because the AI is superintelligent.
4. This representation of happiness in the superintelligent world model “understands us” and would presumably produce better results if we could point at that understanding instead.
5. The fact we don’t do this to begin with heavily implies, almost as a necessary consequence really, that the representation of happiness which is a correct understanding of what we meant was not available at the time we specified what happiness is.
6. In a way all I am saying is that when you specify the program that will train your superintelligent AI, in Bostrom 2014 the AI’s superintelligent understanding is not available before you train it.
7. The final goal representation is part of the program that you write before the AI exists.
8. If you had a non superintelligent corrigible AI that builds a world model with a correct specification of happiness in it, you would use that specification.
9. If you had a correct specification of happiness, it would not be wrong.
10. Therefore Bostrom does not expect us to do this, because then the default would not be that your specification is wrong. Bostrom expects by default that our specification is wrong.
11. If Bostrom does not expect us to do this, that implies he does not expect us to build an AI that builds a correct representation of happiness until it is incorrigible or otherwise not able to be used to specify happiness for our superintelligent AI.
12. The default way an AI becomes incorrigible is by becoming more powerful than us.
13. Therefore Bostrom expects we will not have an AI that correctly understands concepts like happiness until after it is already superintelligent.
Maybe this argument is right, but the paragraph I am confused about does not mention the word corrigibility once. It just says (paraphrased) “AIs will in fact understand what we mean, which totally pwns Bostrom because he said the opposite, as you can see in this quote” and then fails to provide a quote that says that, at all.
Like, if you said “Contra Bostrom, AI will be corrigible, which you can see in this quote by Bostrom” then I would not be making this comment thread! I would have objections and could make arguments, and maybe I would bother to make them, but I would not be having the sense that you just said a sentence that really just sounds fully logically contradictory on its own premises, and then when asked about it keep importing context that is not referenced in the sentence at all.
So did you just accidentally make a typo and mean to say “Contra Bostrom 2014 AIs will in fact probably be corrigible: ‘The AI may indeed understand that this is not what we meant. However, its final goal is to make us happy, not to do what the programmers meant when they wrote the code that represents this goal.’”?
If that’s the paragraph you meant to write, and this is just a typo, then everything makes sense. If it isn’t, then I am sorry to say that not much that you’ve said helped me understand what you meant by that paragraph.
My understanding: JDP holds that when the training process chisels a wrong goal into an AI because we gave it a wrong training objective (e. g., “maximize smiles” while we want “maximize eudaimonia”), this event could be validly described as the AI “misunderstanding” us.
So when JDP says that “AIs will in fact probably understand what we mean by the goals we give them before they are superintelligent”, and claims that this counters this Bostrom quote...
“The AI may indeed understand that this is not what we meant. However, its final goal is to make us happy, not to do what the programmers meant when they wrote the code that represents this goal.”
… what JDP means to refer to is the “its final goal is to make us happy, not to do what the programmers meant when they wrote the code that represents this goal” part, not the “the AI may indeed understand that this is not what we meant” part. (Pretend the latter part doesn’t exist.)
Reasoning: The fact that the AI’s goal ended up at “maximize happiness” after being trained against the “maximize happiness” objective, instead of at whatever the programmers intended by the “maximize happiness” objective, implies that there was a moment earlier in training when the AI “misunderstood” that goal (in the sense of “misunderstand” described in my first paragraph).
JDP then holds that this won’t happen, contrary to that part of Bostrom’s statement: that training on “naïve” pointers to eudaimonia like “maximize smiles” and such will Just Work, that the SGD will point AIs at eudaimonia (or at corrigibility or whatever we meant).[1] Or, in JDP’s parlance, that the AI will “understand” what we meant by “maximize smiles” well before it’s superintelligent.
If you think that this use of “misunderstand” is wildly idiosyncratic, or that JDP picked a really bad Bostrom quote to make his point, I agree.
(Assuming I am also not misunderstanding everything, there sure is a lot of misunderstanding around.)
[1] Plus/minus some caveats and additional bells and whistles like e. g. early stopping, I believe.
I want to flag that thinking you have a representation that could be used in principle to do the right thing is not the same thing as believing it will “Just Work”. If you do a naive RL process on neural embeddings or LLM evaluators you will definitely get bad results. I do not believe in “alignment by default” and push back on such things frequently whenever they’re brought up. What has happened is that the problem has gone from “not clear how you would do this even in principle, basically literally impossible with current knowledge” to merely tricky.
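For a sense of what “naive RL against a learned evaluator gets bad results” looks like in miniature, here is a toy Goodhart sketch; the distributions and the additive error model are assumptions chosen purely for illustration:

```python
# Toy Goodhart sketch: the "evaluator" is true quality plus an exploitable
# error term. Selecting the best of N candidates under the proxy keeps pushing
# the proxy score up while the true value lags further and further behind.
import random

random.seed(0)

def true_value(x):
    return x["quality"]                 # what we actually care about

def proxy_value(x):
    return x["quality"] + x["exploit"]  # an imperfect learned evaluator

def sample():
    return {"quality": random.gauss(0, 1), "exploit": random.gauss(0, 1)}

for n in (1, 10, 100, 10_000):
    best = max((sample() for _ in range(n)), key=proxy_value)
    print(f"N={n:>6}  proxy={proxy_value(best):+.2f}  true={true_value(best):+.2f}")
# More optimization pressure (larger N) widens the gap between proxy and true
# value: having a usable representation is not the same as it "Just Working".
```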
not the “the AI may indeed understand that this is not what we meant” part. (Pretend the latter part doesn’t exist.)
Ok, but the latter part does exist! I can’t ignore it. Like, it’s a sentence that seems almost explicitly designed to clarify that Bostrom thinks the AI will understand what we mean. So clearly, Bostrom is not saying “the AI will not understand what we mean”. Maybe he is making some other error in the book about how when the AI understands the way it does, it has to be corrigible, or that “happiness” is a confused kind of model of what an AI might want to optimize, but clearly that sentence is an atrocious sentence for demonstrating that “Bostrom said that the AI will not understand what we mean”. Like, he literally said the opposite right there, in the quote!
(JDP, you’re welcome to chime in and demonstrate that your writing was actually perfectly clear and that I’m just also failing basic reading comprehension.)
So clearly, Bostrom is not saying “the AI will not understand what we mean”
Consider the AI at two different points in time, AI-when-embryo early in training and AI-when-superintelligence at the end.
The quote involves Bostrom (a) literally saying that AI-when-superintelligence will understand what we meant,[1] (b) making a statement which logically implies, as an antecedent, that “AI-when-embryo won’t understand what we meant”.[2] Therefore, you can logically infer from this quote that Bostrom believes that the statement “AIs will in fact probably understand what we mean by the goals we give them before they are superintelligent” is false.
JDP, in my understanding, assumes that the reader would do just that: automatically zero-in on (b), infer the antecedent from it, and dismiss (a) as irrelevant context. I love it when blog posts have lil’ tricksy logic puzzles in them.
clearly that sentence is an atrocious sentence for demonstrating that “Bostrom said that the AI will not understand what we mean”
“However, [AI-when-superintelligence’s] final goal is to make us happy, not to do what the programmers meant when they wrote the code that represents this goal[, because AI-when-embryo “misunderstood” that code’s intent.]”
This is correct, though that particular chain of logic doesn’t actually imply the “before superintelligence” part, since there is a space between embryo and superintelligent where it could theoretically come to understand. I argue why I think Bostrom implicitly rejects this or thinks it must be irrelevant with the 13 steps above. But I think it’s important context that this to me doesn’t come out as 13 steps or a bunch of sys2 reasoning, I just look at the thing and see the implication and then have to do a bunch of sys2 reasoning to articulate it if someone asks. To me it doesn’t feel like a hard thing from the inside, so I wouldn’t expect it to be hard for someone else either. From my perspective it basically came across as bad faith, because I literally could not imagine someone wouldn’t understand what I’m talking about until several people went “no I don’t get it”, that’s how basic it feels from the inside here. I now understand that no this actually isn’t obvious, the hostile tone above was frustration from not knowing that yet.
I see! Understandable, but yep, I think you misjudged the inferential distance there a fair bit.
Clearly! I’m a little reluctant to rephrase it until I have a version that I know conveys what I actually meant, but one that would be very semantically close to the original would be:
“—Contra Bostrom 2014 it is possible to get high quality, nuanced representations of concepts like “happiness” at training initialization. The problem of representing happiness and similar ideas in a computer will not be first solved by the world model of a superintelligent or otherwise incorrigible AI, as in the example Bostrom gives on page 147 in the 2017 paperback under the section “Malignant Failure Modes”: “But wait! This is not what we meant! Surely if the AI is superintelligent, it must understand that when we asked it to make us happy, we didn’t mean that it should reduce us to a perpetually repeating recording of a drugged-out digitized mental episode!”—The AI may indeed understand that this is not what we meant. However, its final goal is to make us happy, not to do what the programmers meant when they wrote the code that represents this goal.”
Part of why I didn’t write it that way in the first place is it would make it a lot bulkier than the other bullet points, so I trimmed it down.
The fact we don’t do this to begin with heavily implies, almost as a necessary consequence really, that the representation of happiness which is a correct understanding of what we meant was not available at the time we specified what happiness is.
It depends on what you mean by “available”—we already had a representation of happiness in a human brain. And building corrigible AI that builds a correct representation of happiness is not enough—like you said, we need to point at it.
If you had a non superintelligent corrigible AI that builds a world model with a correct specification of happiness in it, you would use that specification.
If you can use it.
If Bostrom does not expect us to do this, that implies he does not expect us to build an AI that builds a correct representation of happiness until it is incorrigible or otherwise not able to be used to specify happiness for our superintelligent AI.
Yes, the key is “otherwise not able to be used”.
Therefore Bostrom expects we will not have an AI that correctly understands concepts like happiness until after it is already superintelligent.
No, unless by “correctly understands” you mean “have an identifiable representation that humans can use to program other AI”—he may expect that we will have an intelligence that correctly understands concepts like happiness while not yet being superintelligent (like we have humans, who are better at this than “maximize happiness”), but we still won’t be able to use it.
This is in principle a thing that Nick Bostrom could have believed while writing Superintelligence, but the rest of the book kind of makes it incompatible with Occam’s Razor. It’s possible he intended the issues with translating concepts into discrete program representations as the central difficulty, and whether we would be able to make use of such a representation as a noncentral difficulty. (It’s Bostrom, he’s a pretty smart dude, this wouldn’t surprise me; it might even be in the text somewhere, but I’m not reading the whole thing again.) But even if that’s the case the central consistently repeated version of the value loading problem in Bostrom 2014 centers on how it’s simply not rigorously imaginable how you would get the relevant representations in the first place.
It’s important to remember also that Bostrom’s primary hypothesis in Superintelligence is that AGI will be produced by recursive self improvement such that it’s genuinely not clear you will have a series of functional non superintelligent AIs with usable representations before you have a superintelligent one. The book very much takes the EY “human level is a weird threshold to expect AI progress to stop at” thesis as the default.
But even if that’s the case the central consistently repeated version of the value loading problem in Bostrom 2014 centers on how it’s simply not rigorously imaginable how you would get the relevant representations in the first place.
I’m not so sure. Like, first of all, you mean something like “get before superintelligence” or “get into the goal slot”, because there is obviously a method to just get the representations—just build a superintelligence with a random goal, it will have your representations. That difference was explicitly stated then, it is often explicitly stated now—all that “AI will understand but not care”. The focus on the frameworks where it gets hard to translate from humans to programs is consistent with him trying to constrain methods of generating representations to only useful ones.
There is a reason why it is called “the value loading problem” and not “the value understanding problem”. “The value translation problem” would be somewhat in the middle: having an actual human utility program would certainly solve some of Bostrom’s problems.
I don’t know whether Bostrom actually thought about non-superintelligent AI that already understands but doesn’t care. But I don’t think this line of argumentation of yours is correct about why such a scenario contradicts his points. Even if he didn’t consider it, it’s not “contra”, unless it actually contradicts him. What actually may contradict him is not “AI will understand values early” but “AI will understand values early and training such early AI will make it care about the right things”.
This is MUCH more clearly written, thanks.
We still have the problems that we:
1. can’t extract the exact concept (e.g., the concept of human values) from an AI, even if it has this concept somewhere. Yes, we can look at which activations correlate with some behaviour, and stuff like that. But it’s far from enough.
2. can’t train an AI to optimize some concept from the world model of its earlier version. We have no ability to formalize the training objective like this.
Maybe Bostrom thought the weak AIs will not have a good enough world model, like you interpret him. Or maybe he already thought that we will not be able to use the world model of one AI to direct another. But the conclusion stays anyway.
I also think that current AIs probably don’t have a concept of human values that would actually be fine to optimize hard. And I’m not sure that AIs will have it before they have the ability to stop us from changing their goal. But if that were the only problem, I would agree that the risk is more manageable.
I honestly have no idea what is going on. I have read your post, but not in excruciating detail. I do not know what you are talking about with corrigibility or whatever in response to my comment, as it really has nothing to do with my question or uncertainty. The language models seem to think similarly.
I am not making a particularly complicated point. My point is fully 100% limited to this paragraph. This paragraph as far as I can understand is trying to make a local argument, and I have no idea how this logical step is supposed to work out.
Contra Bostrom 2014 AIs will in fact probably understand what we mean by the goals we give them before they are superintelligent. (Before you ask, it’s on page 147 in the 2017 paperback under the section “Malignant Failure Modes”: “The AI may indeed understand that this is not what we meant. However, its final goal is to make us happy, not to do what the programmers meant when they wrote the code that represents this goal.”)
I cannot make this paragraph make sense. You say (paraphrased) “Bostrom says that AI will not understand what we mean by the goals we give them before they are superintelligent, as you can see in the quote ‘the AI will understand what we mean by the goals we give them’”
And like, sure, I could engage with your broader critiques of Bostrom, but I am not. I am trying to understand this one point you make here. Think about it as a classical epistemic spot check. I just want to know what you meant by this one paragraph, as this paragraph as written does not make any sense to me, and I am sure does not make any sense to 90% of readers. It also isn’t making any sense to the language models.
Like, if I hadn’t had this (to me) very weird interaction, I would be 90% confident that you just made a typo in this paragraph.
This is all because you explicitly say “here is the specific sentence in Superintelligence that proves that I am correctly paraphrasing Bostrom” and then cite a sentence that I have no idea how it’s remotely supposed to show that you are correctly paraphrasing Bostrom. Like, if you weren’t trying to give a specific sentence as the source, I would not be having this objection.
Let’s think phrase by phrase and analyze myself in the third person.
First let’s extract the two sentences for comparison:
JDP: Contra Bostrom 2014 AIs will in fact probably understand what we mean by the goals we give them before they are superintelligent.
Bostrom: The AI may indeed understand that this is not what we meant. However, its final goal is to make us happy, not to do what the programmers meant when they wrote the code that represents this goal.
An argument from ethos: JDP is an extremely scrupulous author and would not plainly contradict himself in the same sentence. Therefore this is either a typo or my first interpretation is wrong somehow.
Context: JDP has clarified it is not a typo.
Modus Tollens: If “understand” means the same thing in both sentences they would be in contradiction. Therefore understand must mean something different between them.
Context: After Bostrom’s statement about understanding, he says that the AI’s final goal is to make us happy, not to do what the programmers meant.
Association: The phrase “not to do what the programmers meant” is the only other thing that JDP’s instance of the word “understand” could be bound to in the text given.
Context: JDP says “before they are superintelligent”, which doesn’t seem to have a clear referent in the Bostrom quote given. Whatever he’s talking about must appear in the full passage, and I should probably look that up before commenting, and maybe point out that he hasn’t given quite enough context in that bullet and may want to consider rephrasing it.
Reference: Ah I see, JDP has posted the full thing into this thread. I now see that the relevant section starts with:
“But wait! This is not what we meant! Surely if the AI is superintelligent, it must understand that when we asked it to make us happy, we didn’t mean that it should reduce us to a perpetually repeating recording of a drugged-out digitized mental episode!”
Association: Bostrom uses the frame “understand” in the original text for the question from his imagined reader. This implies that JDP saying “AIs will probably understand what we mean” must be in relation to this question.
Modus Tollens: But wait, Bostrom already answers this question by saying the AI will understand but not care, and JDP quotes this, so if JDP meant the same thing Bostrom means he would be contradicting himself, which we assume he is not doing, therefore he must be interpreting this question differently.
Inference: JDP is probably answering the original hypothetical reader’s question as “Why wouldn’t the AI behave as though it understands? Or why wouldn’t the AI’s motivation system understand what we meant by the goal?”
Context: Bostrom answers (implicitly) that this is because the AI’s epistemology is developed later than its motivation system. By the time the AI is in a position to understand this its goal slot is fixed.
Association: JDP says that subsequent developments have disproved this answer’s validity. So JDP believes either that the goal slot will not be fixed at superintelligence or that the epistemology does not have to be developed later than the motivation system.
Modus Tollens: If JDP said that the goal slot will not be fixed at superintelligence, he would be wrong, therefore since we are assuming JDP is not wrong this is not what he means.
Context: JDP also says “before superintelligence”, implying he agrees with Bostrom that the goal slot is fixed by the time the AI system is superintelligent.
Process of Elimination: Therefore JDP means that the epistemology does not have to be developed later than the motivation system.
Modus Tollens: But wait. Logically the final superintelligent epistemology must be developed alongside the superintelligence if we’re using neural gradient methods. Therefore since we are assuming JDP is not wrong this must not quite be what he means.
Occam’s Razor: Theoretically it could be made of different models, one of which is a superintelligent epistemology, but epistemology is made of parts and the full system is presumably necessary to be “superintelligent”.
Context: JDP says that “AIs will in fact probably understand what we mean by the goals we give them before they are superintelligent”, this implies the existence of non superintelligent epistemologies which understand what we mean.
Inference: If there are non superintelligent epistemologies which are sufficient to understand us, and JDP believes that the motivation system can be made to understand us before we develop a superintelligent epistemology, then JDP must mean that Bostrom is wrong because there are or will be sufficient neural representations of our goals that can be used to specify the goal slot before we develop the superintelligent epistemology.
Ok, I… think this makes sense? Honestly, I think I would have to engage with this for a long time to see whether this makes sense with the actual content of e.g. Bostrom’s text, but I can at least see the shape of an argument that I could follow if I wanted to! Thank you!
(To be clear, this is of course not a reasonable amount of effort to ask people to put into understanding a random paragraph from a blogpost, at least without it being flagged as such, but writing is hard and it’s sometimes hard to bridge inferential distance.)
This process can be rightfully called UNDERSTANDING and when an AI system fails at this it has FAILED TO UNDERSTAND YOU
No, the rightful way to describe what happens is that the training process generates an AI system with unintended functionality due to your failure to specify the training objective correctly. Describing it as a “misunderstanding” is tantamount to saying that if you make a syntax error when writing some code, the proper way to describe it is the computer “misunderstanding” you.
I mean, you can say that, it’s an okay way to describe things in a colloquial or metaphorical way. But I contest that it’s in any way standard language. You’re using idiosyncratic terminology and should in no way be surprised when people misunderstand (ha) you.
Honestly, if you went to modern-day LLMs and they, specialists in reading comprehension, misunderstood you, that ought to update you in the direction of “I did a bad job phrasing this”, not “it’s everyone else who’s wrong”.
(FYI, I understood what you meant in your initial reply to Habryka without this follow-up explanation, and I still thought you were phrasing it in an obviously confusing way.)
Describing it as a “misunderstanding” is tantamount to saying that if you make a syntax error when writing some code, the proper way to describe it is the computer “misunderstanding” you.
Honestly, maybe it would make more sense to say that the cognitive error here is using a compiler for a context-free grammar as the reference class for your intuitions, as opposed to a mind that understands natural language. The former is not expected to understand you when what you say doesn’t fully match what you mean; the latter very much is, and the latter is the only kind of thing that’s going to have the proper referents for concepts like “happiness”.
I mean, no mind really exists at the time the “misunderstanding” is starting to happen, no? Unless you want to call a randomly initialized NN (i. e., basically a random program) a “mind”… Which wouldn’t necessarily be an invalid frame to use. But I don’t think it’s the obviously correct frame either, and so I don’t think that people who use a mechanistic frame by default are unambiguously in error.
Therefore Bostrom expects we will not have an AI that correctly understands concepts like happiness until after it is already superintelligent.
That is straightforwardly correct. But “there exists no AI that understands” is importantly different from “there exists an AI which misunderstands”.
Another questionable frame here is characterizing the relationship between an AI and the SGD/the training process shaping it as some sort of communication process (?), such that the AI ending up misshapen can be described as it “misunderstanding” something.
And the training process itself never becomes a mind, it starts and ends as a discrete program, so if you mean to say that it “misunderstood” something, I think that’s a type error/at best a metaphor.
(I guess it may still be valid from a point of view where you frame SGD updates as Bayesian updates, or something along those lines? But that’s also a non-standard frame.)
in practice, we seem to train the world model and understanding machine first and the policy only much later as a thin patch on top of the world model. this is not guaranteed to stay true but seems pretty durable so far. thus, the relevant heuristics are about base models not about randomly initialized neural networks.
separately, I do think randomly initialized neural networks have some strong baseline of fuzziness and conceptual corrigibility, which is in a sense what it means to have a traversable loss landscape.
Claude says:
Gemini 3 says similar:
I will take this to mean you share similar flawed generalization/reading strategies. I struggle to put the cognitive error here into words, but it seems to me like an inability to connect the act of specifying a wrong representation of utility with the phrase ‘lack of understanding’, or making an odd literalist interpretation whereby the fact that Bostrom argues in general for a separation between motivations and intelligence (orthogonality thesis) means that I am somehow misinterpreting him when I say that the mesagoal inferred from the objective function before understanding of language is a “misunderstanding” of the intent of the objective function. This is a very strange and very pedantic use of “understand”. “Oh but you see Bostrom is saying that the thing you actually wrote means this, which it understood perfectly.”
No.
If I say something by which I clearly mean one thing, and that thing was in principle straightforwardly inferrable from what I said (as is occurring right now), and the thing which is inferred instead is straightforwardly absurd by the norms of language and society, that is called a misunderstanding, a failure to understand, if you specify a wrong incomplete objective to the AI and it internalizes the wrong incomplete objective as opposed to what you meant, it (the training/AI building system as a whole) misunderstood you even if it understands your code to represent the goal just fine. This is to say that you want some way for the AI or AI building system to understand, by which we mean correctly infer the meaning and indirect consequences of the meaning, of what you wrote, at initialization, you want it to infer the correct goal at the point where a mesagoal is internalized. This process can be rightfully called UNDERSTANDING and when an AI system fails at this it has FAILED TO UNDERSTAND YOU at the point in time which mattered even if later there is some epistemology that understands in principle what was meant by the goal but is motivated by the mistaken version that it internalized when a mesagoal was formed.
But also as I said earlier Bostrom states this many times, we have a lot more to go off than the one line I quoted there. Here he is on page 171 in the section “Motivation Selection Methods”:
This part makes it very clear that what Bostrom means by “code” is, centrally, some discrete program representation (i.e. a traditional programming language, like python, as opposed to some continuous program representation like a neural net embedding).
Bostrom expands on this point on page 227 in the section “The Value-Loading Problem”:
Here Bostrom is saying that it is not even rigorously imaginable how you would translate the concept of “happiness” into discrete program code. Which in 2014 when the book is published is correct, it’s not rigorously imaginable, that’s why being able to pretrain neural nets which understand the concept in the kind of way where they simply wouldn’t make mistakes like “tile the universe with smiley faces”, which can be used as part of a goal specification, is a big deal.
With this in mind let’s return to the section I quoted the line in my post from, which says:
What Bostrom is saying is that one of if not the first impossible problem(s) you encounter is having any angle of attack on representing our goals in the kind of way which generalizes even at a human level inside the computer such that you can point a optimization process at it. That obviously a superintelligent AI would understand what we had meant by the initial objective, but it’s going to proceed according to either the mesagoal it internalizes or the literal code sitting in its objective function slot, because the part of the AI which motivates it is not controlled by the part of the AI, developed later in training, which understands what you meant in principle after acquiring language. The system which translates your words or ideas into the motivation specification must understand you at the point where you turned that translated concept into an optimization objective, at the start of the training or some point where the AI is still corrigible and you can therefore insert objectives and training goals into it.
My post says that a superintelligent AI is a superplanner which develops instrumental goals by planning far into the future. The more intelligent the AI is the farther into the future it can effectively plan, and therefore the less corrigible it is. Therefore by the time you encounter this bullet point it should already be implied that superintelligence and the corrigibility of the AI are tightly coupled, which is also an assumption clearly made in Bostrom 2014 so I don’t really understand why you don’t understand.
ChatGPT still thinks I am wrong so let’s think step by step. Bostrom says (i.e. leads the reader to understand through his gestalt speech, not that he literally says this in one passage) that, in the default case:
When you specify your final goal, it is wrong.
It is wrong because it is a discrete program representation of a nuanced concept like “happiness” that does not fully capture what we think happiness is.
Eventually you will have a world model with a correct understanding of happiness, because the AI is superintelligent.
This representation of happiness in the superintelligent world model “understands us” and would presumably produce better results if we could point at that understanding instead.
The fact we don’t do this to begin with heavily implies, almost as a necessary consequence really, that the representation of happiness which is a correct understanding of what we meant was not available at the time we specified what happiness is.
In a way all I am saying is that when you specify the program that will train your superintelligent AI, in Bostrom 2014 the AI’s superintelligent understanding is not available before you train it.
The final goal representation is part of the program that you write before the AI exists.
If you had a non superintelligent corrigible AI that builds a world model with a correct specification of happiness in it, you would use that specification.
If you had a correct specification of happiness, it would not be wrong.
Therefore Bostrom does not expect us to do this, because then the default would not be that your specification is wrong. Bostrom expects by default that our specification is wrong.
If Bostrom does not expect us to do this, that implies he does not expect us to build an AI that builds a correct representation of happiness until it is incorrigible or otherwise not able to be used to specify happiness for our superintelligent AI.
The default way an AI becomes incorrigible is by becoming more powerful than us.
Therefore Bostrom expects we will not have an AI that correctly understands concepts like happiness until after it is already superintelligent.
Maybe this argument is right, but the paragraph I am confused about does not mention the word corrigibility once. It just says (paraphrased) “AIs will in fact understand what we mean, which totally pwns Bostrom because he said the opposite, as you can see in this quote” and then fails to provide a quote that says that, at all.
Like, if you said “Contra Bostrom, AI will be corrigible, which you can see in this quote by Bostrom” then I would not be making this comment thread! I would have objections and could make arguments, and maybe I would bother to make them, but I would not be having the sense that you just said a sentence that really just sounds fully logically contradictory on its own premises, and then when asked about it keep importing context that is not references in the sentence at all.
So did you just accidentally make a typo and meant to say “Contra Bostrom 2014 AIs will in fact probably be corrigible: ‘The AI may indeed understand that this is not what we meant. However, its final goal is to make us happy, not to do what the programmers meant when they wrote the code that represents this goal.’”
If that’s the paragraph you meant to write, and this is just a typo, then everything makes sense. If it isn’t, then I am sorry to say that not much that you’ve said helped me understand what you meant by that paragraph.
My understanding: JDP holds that when the training process chisels a wrong goal into an AI because we gave it a wrong training objective (e. g., “maximize smiles” while we want “maximize eudaimonia”), this event could be validly described as the AI “misunderstanding” us.
So when JDP says that “AIs will in fact probably understand what we mean by the goals we give them before they are superintelligent”, and claims that this counters this Bostrom quote...
… what JDP means to refer to is the “its final goal is to make us happy, not to do what the programmers meant when they wrote the code that represents this goal” part, not the “the AI may indeed understand that this is not what we meant” part. (Pretend the latter part doesn’t exist.)
Reasoning: The fact that the AI’s goal ended up at “maximize happiness” after being trained against the “maximize happiness” objective, instead of at whatever the programmers intended by the “maximize happiness” objective, implies that there was a moment earlier in training when the AI “misunderstood” that goal (in the sense of “misunderstand” described in my first paragraph).
JDP then holds that this won’t happen, contrary to that part of Bostrom’s statement: that training on “naïve” pointers to eudaimonia like “maximize smiles” and such will Just Work, that the SGD will point AIs at eudaimonia (or at corrigibility or whatever we meant).[1] Or, in JDP’s parlance, that the AI will “understand” what we meant by “maximize smiles” well before it’s superintelligent.
If you think that this use of “misunderstand” is wildly idiosyncratic, or that JDP picked a really bad Bostrom quote to make his point, I agree.
(Assuming I am also not misunderstanding everything, there sure is a lot of misunderstanding around.)
Plus/minus some caveats and additional bells and whistles like e. g. early stopping, I believe.
I want to flag that thinking you have a representation that could be used in principle to do the right thing is not the same thing as believing it will “Just Work”. If you do a naive RL process on neural embeddings or LLMs evaluators you will definitely get bad results. I do not believe in “alignment by default” and push back on such things frequently whenever they’re brought up. What has happened is that the problem has gone from “not clear how you would do this even in principle, basically literally impossible with current knowledge” to merely tricky.
Ok, but the latter part does exist! I can’t ignore it. Like, it’s a sentence that seems almost explicitly designed to clarify that Bostrom thinks the AI will understand what we mean. So clearly, Bostrom is not saying “the AI will not understand what we mean”. Maybe he is making some other error in the book about how when the AI understands the way it does, it has to be corrigible, or that “happiness” is a confused kind of model of what an AI might want to optimize, but clearly that sentence is an atrocious sentence for demonstrating that “Bostrom said that the AI will not understand what we mean”. Like, he literally said the opposite right there, in the quote!
(JDP, you’re welcome to chime in and demonstrate that your writing was actually perfectly clear and that I’m just also failing basic reading comprehension.)
Consider the AI at two different points in time, AI-when-embryo early in training and AI-when-superintelligence at the end.
The quote involves Bostrom (a) literally saying that AI-when-superintelligence will understand what we meant,[1] (b) making a statement which logically implies, as an antecedent, that “AI-when-embryo won’t understand what we meant”.[2] Therefore, you can logically infer from this quote that Bostrom believes that the statement “AIs will in fact probably understand what we mean by the goals we give them before they are superintelligent” is false.
JDP, in my understanding, assumes that the reader would do just that: automatically zero-in on (b), infer the antecedent from it, and dismiss (a) as irrelevant context.
I love it when blog posts have lil’ tricksy logic puzzles in them.Yep.
“The AI may indeed understand that this is not what we meant.”
“However, [AI-when-superintelligence’s] final goal is to make us happy, not to do what the programmers meant when they wrote the code that represents this goal[, because AI-when-embryo “misunderstood” that code’s intent.]”
This is correct, though that particular chain of logic doesn’t actually imply the “before superintelligence” part, since there is a space between embryo and superintelligent where it could theoretically come to understand. I argue why I think Bostrom implicitly rejects this or thinks it must be irrelevant with the 13 steps above. But I think it’s important context that this to me doesn’t come out as 13 steps or a bunch of sys2 reasoning, I just look at the thing and see the implication and then have to do a bunch of sys2 reasoning to articulate it if someone asks. To me it doesn’t feel like a hard thing from the inside, so I wouldn’t expect it to be hard for someone else either. From my perspective it basically came across as bad faith, because I literally could not imagine someone wouldn’t understand what I’m talking about until several people went “no I don’t get it”, that’s how basic it feels from the inside here. I now understand that no this actually isn’t obvious, the hostile tone above was frustration from not knowing that yet.
I see! Understandable, but yep, I think you misjudged the inferential distance there a fair bit.
Clearly! I’m a little reluctant to rephrase it until I have a version that I know conveys what I actually meant, but one that would be very semantically close to the original would be:
“—Contra Bostrom 2014 it is possible to get high quality, nuanced representations of concepts like “happiness” at training initialization. The problem of representing happiness and similar ideas in a computer will not be first solved by the world model of a superintelligent or otherwise incorrigible AI, as in the example Bostrom gives on page 147 in the 2017 paperback under the section “Malignant Failure Modes”: “But wait! This is not what we meant! Surely if the AI is superintelligent, it must understand that when we asked it to make us happy, we didn’t mean that it should reduce us to a perpetually repeating recording of a drugged- out digitized mental episode!”—The AI may indeed understand that this is not what we meant. However, its final goal is to make us happy, not to do what the programmers meant when they wrote the code that rep- resents this goal.”″
Part of why I didn’t write it that way in the first place is it would make it a lot bulkier than the other bullet points, so I trimmed it down.
It depends on what you mean by “available”—we already had a representation of happiness in a human brain. And building corrigible AI that builds a correct representation of happiness is not enough—like you said, we need to point at it.
If you can use it.
Yes, the key is “otherwise not able to be used”.
No, unless by “correctly understands” you mean “have an identifiable representation that humans can use to program other AI”—he may expect that we will have an intelligence that correctly understands concepts like happiness while not yet being superintelligent (like we have humans, that are better at this than “maximize happiness”) but we still won’t be able to use it.
This is in principle a thing that Nick Bostrom could have believed while writing Superintelligence but the rest of the book kind of makes it incompatible with Occam’s Razor. It’s possible he meant the issues with translating concepts into discrete program representations as the central difficulty and then whether we would be able to make use of such a representation as a noncentral difficulty. (It’s Bostrom, he’s a pretty smart dude, this wouldn’t surprise me, it might even be in the text somewhere but I’m not reading the whole thing again). But even if that’s the case the central consistently repeated version of the value loading problem in Bostrom 2014 centers on how it’s simply not rigorously imaginable how you would get the relevant representations in the first place.
It’s important to remember also that Bostrom’s primary hypothesis in Superintelligence is that AGI will be produced by recursive self improvement such that it’s genuinely not clear you will have a series of functional non superintelligent AIs with usable representations before you have a superintelligent one. The book very much takes the EY “human level is a weird threshold to expect AI progress to stop at” thesis as the default.
I’m not so sure. Like, first of all, you mean something like “get before superintelligence” or “get into the goal slot”, because there is obviously a method to just get the representations—just build a superintelligence with a random goal, it will have your representations. That difference was explicitly stated then, it is often explicitly stated now—all that “AI will understand but not care”. The focus on the frameworks where it gets hard to translate from humans to programs is consistent with him trying to constrain methods of generating representations to only useful ones.
There is a reason why it is called “the value loading problem” and not “the value understanding problem”. “The value translation problem” would be somewhat in the middle: having actual human utility program would certainly solve some of Bostrom’s problems.
I don’t know whether Bostrom actually thought about non-superintelligent AI that already understands but don’t care. But I don’t think this line of argumentations of yours is correct about why such a scenario contradicts his points. Even if he didn’t consider it, it’s not “contra”, unless it actually contradicts him. What actually may contradict him is not “AI will understand values early” but “AI will understand values early and training such early AI will make it care about right things”.
This is MUCH more clearly written, thanks.
We still have the problems that we:
- can’t extract the exact concept (e.g., the concept of human values) from the AI, even if it has this concept somewhere. Yes, we can look at which activations correlate with some behaviour, and things like that, but it’s far from enough.
- can’t train an AI to optimize some concept from the world model of its earlier version; we have no way to formalize a training objective like that (see the toy sketch below).
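To make this concrete, here is roughly what the naive version of that setup would look like (a toy PyTorch-style sketch; every module, name, and size in it is hypothetical, made up purely for illustration). The only objective we actually know how to write down is “score well against a probe fit on correlates of the concept”, which is not the same thing as the concept itself:

```python
# Toy sketch of the naive version of "use a concept from one model's world model
# as the training objective for another AI". Everything here is hypothetical.
import torch
import torch.nn as nn

hidden_dim = 512

# Stand-in for the earlier model's world model: we only assume we can read out
# some internal activation vector for a given situation.
world_model = nn.Sequential(nn.Linear(64, hidden_dim), nn.Tanh())

# Step 1: "extract" the concept with a linear probe fit on labeled examples.
# The probe captures whatever direction correlates with the labels we happened
# to collect -- a correlate of the concept, not the concept itself.
probe = nn.Linear(hidden_dim, 1)

def fit_probe(situations, labels, steps=200):
    opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
    for _ in range(steps):
        with torch.no_grad():
            acts = world_model(situations)
        loss = nn.functional.binary_cross_entropy_with_logits(
            probe(acts).squeeze(-1), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()

# Step 2: use the probe's output as the training objective for a new policy.
policy = nn.Sequential(nn.Linear(32, 64), nn.Tanh())

def train_policy(observations, steps=200):
    # The old world model and probe are frozen; they are only consulted.
    for p in list(world_model.parameters()) + list(probe.parameters()):
        p.requires_grad_(False)
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    for _ in range(steps):
        proposed = policy(observations)       # the new AI proposes situations
        score = probe(world_model(proposed))  # scored against the frozen probe
        loss = -score.mean()                  # maximize the proxy
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Optimizing hard against the frozen probe just finds outputs that excite that one direction in the old model’s activations; nothing in the setup ties the new AI’s objective to what we actually meant by the concept.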
Maybe Bostrom thought that weak AIs would not have a good enough world model, as you interpret him. Or maybe he already thought that we would not be able to use the world model of one AI to direct another. But the conclusion stands either way.
I also think that current AIs probably don’t have a concept of human values that would actually be fine to optimize hard. And I’m not sure that AIs will have it before they have the ability to stop us from changing their goals. But if that were the only problem, I would agree that the risk is more manageable.
I honestly have no idea what is going on. I have read your post, but not in excruciating detail. I do not know what you are talking about with corrigibility or whatever in response to my comment, as it really has nothing to do with my question or uncertainty. The language models seem to think similarly.
I am not making a particularly complicated point. My point is fully 100% limited to this paragraph. This paragraph as far as I can understand is trying to make a local argument, and I have no idea how this logical step is supposed to work out.
I cannot make this paragraph make sense. You say (paraphrased): “Bostrom says that AI will not understand what we mean by the goals we give them before they are superintelligent, as you can see in the quote ‘the AI will understand what we mean by the goals we give them’.”
And like, sure, I could engage with your broader critiques of Bostrom, but I am not. I am trying to understand this one point you make here. Think of it as a classic epistemic spot check. I just want to know what you meant by this one paragraph, as this paragraph as written does not make any sense to me, and I am sure it does not make any sense to 90% of readers. It also isn’t making any sense to the language models.
Like, if I hadn’t had this (to me) very weird interaction, I would be 90% confident that you had just made a typo in this paragraph.
This is all because you explicitly say “here is the specific sentence in Superintelligence that proves that I am correctly paraphrasing Bostrom” and then cite a sentence that, as far as I can tell, doesn’t remotely show that you are correctly paraphrasing Bostrom. Like, if you weren’t trying to give a specific sentence as the source, I would not be raising this objection.
Let me think through this phrase by phrase and analyze myself in the third person.
First let’s extract the two sentences for comparison:
An argument from ethos: JDP is an extremely scrupulous author and would not plainly contradict himself in the same sentence. Therefore this is either a typo or my first interpretation is wrong somehow.
Context: JDP has clarified it is not a typo.
Modus Tollens: If “understand” meant the same thing in both sentences, they would be in contradiction. Therefore “understand” must mean something different between them.
Context: After Bostrom’s statement about understanding, he says that the AI’s final goal is to make us happy, not to do what the programmers meant.
Association: The phrase “not to do what the programmers meant” is the only other thing that JDP’s instance of the word “understand” could be bound to in the text given.
Context: JDP says “before they are superintelligent”, which doesn’t seem to have a clear referent in the Bostrom quote given. Whatever he’s talking about must appear in the full passage, and I should probably look that up before commenting, and maybe point out that he hasn’t given quite enough context in that bullet and may want to consider rephrasing it.
Reference: Ah I see, JDP has posted the full thing into this thread. I now see that the relevant section starts with:
Association: Bostrom uses the frame “understand” in the original text for the question from his imagined reader. This implies that JDP saying “AIs will probably understand what we mean” must be in relation to this question.
Modus Tollens: But wait, Bostrom already answers this question by saying the AI will understand but not care, and JDP quotes this, so if JDP meant the same thing Bostrom means he would be contradicting himself, which we assume he is not doing, therefore he must be interpreting this question differently.
Inference: JDP is probably answering the original hypothetical reader’s question as “Why wouldn’t the AI behave as though it understands? Or why wouldn’t the AI’s motivation system understand what we meant by the goal?”
Context: Bostrom answers (implicitly) that this is because the AI’s epistemology is developed later than its motivation system. By the time the AI is in a position to understand this, its goal slot is fixed.
Association: JDP says that subsequent developments have disproved this answer’s validity. So JDP believes either that the goal slot will not be fixed at superintelligence or that the epistemology does not have to be developed later than the motivation system.
Modus Tollens: If JDP said that the goal slot will not be fixed at superintelligence, he would be wrong, therefore since we are assuming JDP is not wrong this is not what he means.
Context: JDP also says “before superintelligence”, implying he agrees with Bostrom that the goal slot is fixed by the time the AI system is superintelligent.
Process of Elimination: Therefore JDP means that the epistemology does not have to be developed later than the motivation system.
Modus Tollens: But wait. Logically the final superintelligent epistemology must be developed alongside the superintelligence if we’re using neural gradient methods. Therefore since we are assuming JDP is not wrong this must not quite be what he means.
Occam’s Razor: Theoretically the system could be made of different models, one of which is a superintelligent epistemology, but an epistemology is made of parts and the full system is presumably necessary for it to count as “superintelligent”.
Context: JDP says that “AIs will in fact probably understand what we mean by the goals we give them before they are superintelligent”, which implies the existence of non-superintelligent epistemologies that understand what we mean.
Inference: If there are non-superintelligent epistemologies which are sufficient to understand us, and JDP believes that the motivation system can be made to understand us before we develop a superintelligent epistemology, then JDP must mean that Bostrom is wrong because there are or will be sufficient neural representations of our goals that can be used to specify the goal slot before we develop the superintelligent epistemology.
Ok, I… think this makes sense? Honestly, I think I would have to engage with this for a long time to see whether this makes sense with the actual content of e.g. Bostrom’s text, but I can at least see the shape of an argument that I could follow if I wanted to! Thank you!
(To be clear, this is of course not a reasonable amount of effort to ask someone to put into understanding a random paragraph from a blogpost, at least without it being flagged as such, but writing is hard and it’s sometimes hard to bridge inferential distance.)
No, the rightful way to describe what happens is that the training process generates an AI system with unintended functionality due to your failure to specify the training objective correctly. Describing it as a “misunderstanding” is tantamount to saying that if you make a syntax error when writing some code, the proper way to describe it is that the computer “misunderstood” you.
I mean, you can say that; it’s an okay way to describe things in a colloquial or metaphorical way. But I contest that it’s in any way standard language. You’re using idiosyncratic terminology and should in no way be surprised when people misunderstand (ha) you.
Honestly, if you went to modern-day LLMs and they, specialists in reading comprehension, misunderstood you, that ought to update you in the direction of “I did a bad job phrasing this”, not “it’s everyone else who’s wrong”.
(FYI, I understood what you meant in your initial reply to Habryka without this follow-up explanation, and I still thought you were phrasing it in an obviously confusing way.)
Honestly, maybe it would make more sense to say that the cognitive error here is using a compiler for a context-free grammar as the reference class for your intuitions, as opposed to a mind that understands natural language. The former is not expected to understand you when what you say doesn’t fully match what you mean; the latter very much is, and the latter is the only kind of thing that’s going to have the proper referents for concepts like “happiness”.
I mean, no mind really exists at the time the “misunderstanding” starts to happen, no? Unless you want to call a randomly initialized NN (i.e., basically a random program) a “mind”… which wouldn’t necessarily be an invalid frame to use. But I don’t think it’s the obviously correct frame either, and so I don’t think that people who use a mechanistic frame by default are unambiguously in error.
I note that in your step-by-step explanation, the last bullet is:
That is straightforwardly correct. But “there exists no AI that understands” is importantly different from “there exists an AI which misunderstands”.
Another questionable frame here is characterizing the relationship between an AI and the SGD/the training process shaping it as some sort of communication process (?), such that the AI ending up misshapen can be described as it “misunderstanding” something.
And the training process itself never becomes a mind; it starts and ends as a discrete program. So if you mean to say that it “misunderstood” something, I think that’s a type error, or at best a metaphor.
(I guess it may still be valid from a point of view where you frame SGD updates as Bayesian updates, or something along those lines? But that’s also a non-standard frame.)
in practice, we seem to train the world model and understanding machine first and the policy only much later as a thin patch on top of the world model. this is not guaranteed to stay true but seems pretty durable so far. thus, the relevant heuristics are about base models not about randomly initialized neural networks.
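roughly the shape i mean, as a toy sketch (pytorch-style; every module name and size below is made up, not any real training stack): the base model that carries the understanding gets trained first, and the goal-shaped part is a small head patched on top of it afterwards.

```python
# toy sketch of "world model first, policy as a thin patch on top".
# all names and sizes are hypothetical; not a claim about any real stack.
import torch
import torch.nn as nn

d_model = 256

# stage 1: the base model / world model, trained first (e.g. next-token
# prediction). by the time anything goal-shaped happens, this already exists.
base_model = nn.Sequential(
    nn.Embedding(1000, d_model),
    nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
        num_layers=2,
    ),
)

# stage 2: the "policy" is a small head patched on afterwards, with the base
# model frozen (or only lightly finetuned). the relevant heuristics are about
# this object, not about a randomly initialized network.
for p in base_model.parameters():
    p.requires_grad_(False)

policy_head = nn.Linear(d_model, 16)     # e.g. action logits or a preference score

def act(token_ids):
    feats = base_model(token_ids)        # the understanding lives here
    return policy_head(feats[:, -1, :])  # the thin patch decides what to do with it
```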
separately, I do think randomly initialized neural networks have some strong baseline of fuzziness and conceptual corrigibility, which is in a sense what it means to have a traversable loss landscape.