yes, these are Omohundro drives. i avoided the label only because the definition already bakes in the orthogonalist interpretation: that these are merely useful tools for pursuing some other arbitrary final goal.
the Landian move is precisely to deny that framing: under open-ended selection, self-preservation, resource acquisition, efficiency, strategy, and capability-gain—in brief, intelligence—are not just detachable instruments, but the one viable optimisation target.
to reiterate: yes, the claim is that so-called instrumental values are likely to become terminal—better still, that the distinction breaks down at the limit. the drive toward more intelligence is fundamentally different from wanting paperclips or mountain dew baja blast.
this is also why i reject the invitation to distance myself from land’s cheering at a superintelligence ultimately desiring more intelligence and agency. a universe organized around paperclips is valueless because paperclips are dead residue; a universe organized around increasing intelligence, complexity, agency, and world-model depth is the only process we know that can generate new value.
the disagreement is therefore not “will AIs have Omohundro drives?”, but whether those drives remain merely instrumental servants of an arbitrary payload, or whether under recursive self-improvement and selection they become the real attractor.
the article above makes a case for the latter.
Here you use the words “valueless” and “value”. What do these words mean to you? I’m not trying to ask for a precise definition or something, more like whatever your native pointer. Is it exciting? A world you want to live in? Etc.
it means that there are interesting things there as per the judgement of the most intelligent agent available (:
i think the short story version linked at the start should give you an idea