Overall, a sensible frame for thinking about this topic is convergent evolution vs. contingency. You can make the sensible part of the anti-orthogonality argument simply by pointing out that there are many reasons to expect convergent evolution in the space of minds/agents/goals/values; empirical evidence abounds. My impression is that even Eliezer agrees, he just believes what's convergent is a tiny part of what humans care about.
Re: more specific points
I’d recommend grokking Jessica’s piece more; in my view it is actually deeper than yours, because it recognizes that all rationality is bounded rationality, and that nothing makes sense otherwise.
The selection pressure for intelligence is roughly the Baldwin effect in biology. And it works! However, as we see in biology, maxing out on it is somehow not always competitive.
“If agents optimizing for intelligence routinely lose to agents with rigid, narrow targets in complex environments, my selection argument is wrong.”
...but of course they do! Apes are smarter, their brains are optimizers that develop deep models and so on, and yet they routinely lose, and by many metrics are less successful than bacteria or ants.
Why? Because of what Jessica explains: in this physics, negentropy is not free, and any cognition costs negentropy.

Landian teleology has a vertigo-inducing appeal similar to other good teleologies, where you get the sense that you suddenly understand the overarching arc of the universe and see the eschaton reaching back in time (or logical time, etc.).
My impression is that once you have experienced more of these, they lose part of their power. (Other examples are Teilhard de Chardin’s Omega Point, or Scott’s and others’ God of acausal value handshakes.) … a fixed point in the limit that retrocausally pulls on the present, doing normative work while disclaiming it.
Overall, it seems unclear what the ultimate balance of the selection pressures is and what is convergent. (Yes, stupid terminal goals stable across radical ontology shifts are part of some doom arguments and are likely not true, but that seems not very central to LW?)
I think most points have been addressed in other replies, apart from the one about not having understood the obliqueness thesis.
on that point I submit to jessi’s judgement; but considering she formulated the main thesis during an attempt at strawmanning orthogonality we were engaging in together, and that it integrates a couple of rounds of feedback from yours truly, I think the verdict might surprise you.
Re-reading her post, it seems plausible she also does not understand/see all the implications of the “boundedness” selection pressures, idk. If that is the case, I’d concede that neither of you gets this point.
Which responses specifically? The Lonelyton reply addresses whether some selection continues, not whether selection’s direction is what you believe it is. I don’t think in any other response you gave your explanation of why ‘increased intelligence/adaptability’ is such a small niche in natural evolution, or of why Land’s/your argument about the eschaton would be so much better than other arguments about eschatology, or addressed most of what I’m writing about. I made the arguments in somewhat compressed form, but Claude can expand/explain.
do you think bacteria and ants have a stronger shot at winning the lightcone than humans?
in general, if you don’t think intelligence gives a significant advantage, why would you worry about ASI?
eschatology: please consider that it’s not me who says a superintelligence will take over the universe. my claim is simply that, if that’s the case, its main goal won’t have been some dumb unchanging goal. the eschaton is something you continually bring up, together with the necessity of preventing it.
What is the verdict then?
i am not Jess. @jessicata do you reckon i grok the obliqueness thesis sufficiently?
Yeah. You getting me to read Land, and our discussions about this topic, led to me writing the post. I spent most of the post arguing contra orthogonality; here you are arguing against orthogonality more directly/strongly. We agree on the basic idea that intelligent agents tend to have different goals than unintelligent agents, such that it’s not a type error to say some goals are smarter than others.
The specific topic in question was not “arguing against orthogonality” / “it’s not a type error to say some goals are smarter than others” in general, but specifically Landian teleology, which makes stronger and more specific claims about which selection pressures win.
(As retold in the OP: “The diagonal is More Intelligence: the will to think, self-cultivation, recursive capability gain, intelligence optimizing the conditions for further intelligence.”)
I think people who believe this—and I don’t know if this includes you—usually don’t really get the bounded rationality argument. Roughly:
- any cognition&agency in this physics costs negentropy
- this “selects” against length, against depth of world models, against details, against thinking too long, against being unnecessarily smart
One of the implications is that something relatively dumb can outcompete something relatively smart. Unnecessary intelligence gets selected away. Something like this likely explains various observations, like:
- why no rational agents
- why animals are not that VNM
- why it took natural evolution so long to discover humans
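The cost/benefit structure can be made concrete with a toy model (my own illustrative sketch, with made-up parameters; nothing here is from Jessica’s post): suppose the payoff of cognition saturates once the environment is fully modeled, while the negentropy cost of a bigger “brain” keeps growing linearly. Selection then settles on an environment-matched brain size, and anything beyond it is selected away.

```python
import random

def fitness(b, env_complexity, cost=0.1):
    """Fitness of an agent with 'brain size' b."""
    benefit = min(b, env_complexity)  # payoff saturates once the world is modeled
    return benefit - cost * b         # negentropy cost keeps accruing linearly

def evolve(env_complexity, pop_size=200, gens=200, seed=0):
    """Tournament selection with small mutations; returns mean brain size."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 20.0) for _ in range(pop_size)]
    for _ in range(gens):
        pop = [
            max(0.0,
                max(rng.choice(pop), rng.choice(pop),
                    key=lambda b: fitness(b, env_complexity))
                + rng.gauss(0.0, 0.1))
            for _ in range(pop_size)
        ]
    return sum(pop) / len(pop)

# In a simple environment, big brains are pure cost and get selected away;
# in a complex one, larger brains are worth maintaining.
small = evolve(env_complexity=3.0)
large = evolve(env_complexity=15.0)
```

The point of the sketch is only the shape of the argument: “unnecessarily smart” means `b > env_complexity`, the region where the benefit term has flatlined but the cost term has not.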
In the big scheme of things, what has happened so far is that increasing levels of intelligence at various points unlocked new pools of negentropy/efficiency, so there is some sense of a trend. However, with a fixed pool of negentropy, the most competitive configuration of matter often isn’t the smartest one.
If current physics holds, there isn’t always “one level up” or a “new pool of negentropy to harvest”, and ultimately it may be possible to reach technological maturity.
Among other things, this makes possible an absorbing state of locusts—VNM probes of the lowest intelligence sufficient to replicate on a cosmic scale and eat the available negentropy. The goals could be … just spread fast and eat negentropy. (More about this topic from Joe Carlsmith.)
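A minimal sketch of why a fixed pool favors the cheap replicator (toy numbers of my own, purely illustrative): let a “smart” strategy harvest twice as fast per copy, but pay ten times as much negentropy to build each copy. Racing for a finite pool, the cheap strategy compounds faster and ends up with nearly all the matter.

```python
def race(pool, smart=(2.0, 10.0), dumb=(1.0, 1.0), steps=100):
    """Two replicator strategies converting a fixed negentropy pool into copies.
    Each strategy is (harvest_rate_per_copy, build_cost_per_copy)."""
    (hs, cs), (hd, cd) = smart, dumb
    ns = nd = 1.0                       # one seed copy of each
    for _ in range(steps):
        take_s, take_d = hs * ns, hd * nd
        total = take_s + take_d
        if total >= pool:               # pool exhausted: split remainder pro rata
            take_s *= pool / total
            take_d *= pool / total
            pool = 0.0
        else:
            pool -= total
        ns += take_s / cs               # new copies bought with harvested negentropy
        nd += take_d / cd
        if pool == 0.0:
            break
    return ns, nd

ns, nd = race(pool=1e6)
# the dumb replicator ends up owning almost all of the converted matter
```

The per-step growth factor is 1 + harvest/cost: 2.0 for the dumb strategy versus 1.2 for the smart one, so with no new pool to unlock, extra intelligence never pays back its build cost.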
Maybe an even stronger argument is viable: typical Landian arguments plus bounded rationality could suggest locusts are the most natural outcome.
I think aspiring Landians then either have to flinch, or “bite the bullet” and believe that if locusts happen, this is somehow a good outcome. Possibly the purest bullet-biting comes from some of the original e/acc: good = production of entropy; axiology solved; you can be on the ultimately winning side just by being on the side of the 2nd law of thermodynamics.
(Also no need to respond, I find the whole frame of this thread where you are asked to judge if lumpenspace understands something not very productive.)
You have to carry this argument a bit further, no? Intelligence costs negentropy, but intelligence pays dividends in negentropy too. That’s the benefit of “depth of world models, details, thinking” in the first place. That’s why “unnecessarily” does all the heavy lifting in that argument. Empirically, the (locally) “thinkiest” species has got all the (local) negentropy, so isn’t the burden of proof pointing in the other direction?
Yes, of course cognition costs resources. That creates an ecosystem of different agents with different intelligence levels. We also see returns to general capability from intelligence: humans, the most intelligent animals on Earth, have capacities ants lack, despite consuming more energy than ants. So there is competition on multiple levels, including between evolutionary niches.
In terms of space fights with aliens, combined arms matter. It doesn’t matter much if you have more Von Neumann probes if your military strategy is bad. So the winning groups will use multiple forms of cognition including very intelligent forms.
it’s telling that you equate “being rational agents” with “more intelligence”, but as long as this comes in the context of denying the very possibility of yudkowskian asi i’ll vibe with it.
edit: your entire reply suffers from the local pathology of equating intelligence with “thinkiness”. “a more detailed world model, thinking for longer” are only symptoms of more intelligence if they get you closer to a goal. you want the capacity to do that if/when necessary, not the habit of doing it constantly, even when the only effect is a more pointlessly verbose reply.
re: jessi and my understanding: that is known as “a joke”, borne of the fact that someone was smugly opining on my lack of understanding of a concept for which I’ve been jessi’s sounding board and beta tester as she fleshed it out.
thank you, I was doubting myself a little
Btw, it might not be central to LessWrong, but it’s what Liron held in the doom debate that inspired this post.
What episode of doom debates?
Upcoming, featuring lil ol me
https://x.com/liron/status/2047710978561753112?s=46