Yeah. You getting me to read Land and discussions about this topic led to me writing the post. I spent most of the post arguing contra orthogonality; here you are arguing against orthogonality more directly and strongly. We agree on the basic idea that intelligent agents tend to have different goals than unintelligent agents, such that it’s not a type error to say some goals are smarter than others.
The specific topic in question was not “arguing against orthogonality” in general / “it’s not a type error to say some goals are smarter than others”, but the more specific Landian teleology, which makes stronger and more specific claims about which selection pressures win
(as retold in the OP: The diagonal is More Intelligence: the will to think, self-cultivation, recursive capability gain, intelligence optimizing the conditions for further intelligence.)
I think people who believe this—and I don’t know if this includes you—usually don’t really get the bounded rationality argument. Roughly:
- any cognition & agency in this physics costs negentropy
- this “selects” against length, against depth of world models, against details, against thinking too long, against being unnecessarily smart
One of the implications is that something relatively dumb can outcompete something relatively smart. Unnecessary intelligence gets selected away. Something like this likely explains various observations like:
- why there are no rational agents
- why animals are not that VNM
- why it took natural evolution so long to discover humans
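To make the selection pressure concrete, here is a minimal toy sketch (all numbers and functional forms are made-up assumptions, not anything from the thread): two replicator strains split a fixed negentropy budget each generation, and the “smarter” strain pays a higher per-copy thinking cost in exchange for a modest harvesting edge.

```python
# Toy replicator competition under a fixed negentropy budget.
# Illustrative assumptions only: each copy costs a fixed amount of
# negentropy per generation, and the smarter strain costs more per copy
# while harvesting somewhat better.

BUDGET = 1000.0  # negentropy available per generation (arbitrary units)

strains = {
    "cheap_locust": {"cost_per_copy": 1.0, "harvest": 1.0, "pop": 10.0},
    "deep_thinker": {"cost_per_copy": 5.0, "harvest": 1.2, "pop": 10.0},
}

def step(strains):
    """Split the budget by (population x harvest ability), then convert
    each strain's share into offspring at its per-copy cost."""
    total_claim = sum(s["pop"] * s["harvest"] for s in strains.values())
    for s in strains.values():
        share = BUDGET * s["pop"] * s["harvest"] / total_claim
        s["pop"] = share / s["cost_per_copy"]

for _ in range(30):
    step(strains)

for name, s in strains.items():
    print(f"{name}: population {s['pop']:.1f}")
# cheap_locust -> ~1000, deep_thinker -> ~0: a 1.2x harvest edge
# doesn't cover a 5x per-copy cost, so the smarter strain dies out.
```

Unless the thinker’s harvest advantage more than covers its extra per-copy cost, the cheap strain fixates; that gap is exactly what “unnecessarily” is doing in the argument above.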
In the big scheme of things, what has happened so far is that increasing levels of intelligence at various points unlocked new pools of negentropy/efficiency, so there is some sense of a trend. However, with a fixed pool of negentropy, the most competitive configuration of matter often isn’t the smartest one.
If current physics holds, there isn’t always “one level up” or a “new pool of negentropy to harvest”, and ultimately it may be possible to reach technological maturity.
Among other things, this makes possible an absorbing state of locusts—VNM probes of the lowest intelligence needed to replicate on a cosmic scale and eat the available negentropy. The goals could be … just spread fast and eat negentropy. (more about this topic by Joe Carlsmith)
Maybe an even stronger argument is viable: typical Landian arguments + bounded rationality could suggest locusts are the most natural outcome.
I think aspiring Landians then either have to flinch, or “bite the bullet” and believe that if locusts happen, this is somehow a good outcome. Possibly the purest bullet-biting comes from some of the original e/acc: good = production of entropy; axiology solved; you can be on the ultimately winning side just by being on the side of the 2nd law of thermodynamics.
(Also, no need to respond; I find the whole frame of this thread, where you are asked to judge whether lumpenspace understands something, not very productive.)
I think people who believe this—and I don’t know if this includes you—usually don’t really get the bounded rationality argument. Roughly:
- any cognition & agency in this physics costs negentropy
- this “selects” against length, against depth of world models, against details, against thinking too long, against being unnecessarily smart
You have to carry this argument a bit further, no? Intelligence costs negentropy, but intelligence pays dividends in negentropy too. That’s the benefit of “depth of world models, details, thinking” in the first place. That’s why “unnecessarily” does all the heavy lifting in that argument. Empirically, the (locally) “thinkiest” species has got all the (local) negentropy, so isn’t the burden of proof pointing in the other direction?
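One way to make “unnecessarily” precise (my framing, assuming smooth curves, not either poster’s model): write $b(i)$ for the negentropy dividend of intelligence level $i$ and $c(i)$ for its cost. Selection tunes $i$ toward the marginal condition

$$b'(i^*) = c'(i^*)$$

and “unnecessary” intelligence is exactly the region where $c'(i) > b'(i)$. The bounded-rationality side bets that $b$ flattens out (diminishing returns), pinning $i^*$ low; the “thinkiest species got the negentropy” observation is evidence that, at least locally so far, $b$ kept steepening.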
Yes, of course cognition costs resources. That creates an ecosystem of different agents with different intelligence levels. We also see returns to general capability from intelligence: humans, being the most intelligent animals on Earth, have capacities ants lack, despite consuming more energy than ants do. So there is competition at multiple levels, including between evolutionary niches.
In terms of space fights with aliens, combined arms matter. It doesn’t matter much if you have more von Neumann probes if your military strategy is bad. So the winning groups will use multiple forms of cognition, including very intelligent forms.
it’s telling that you equate “being rational agents” with “more intelligence”, but as long as this comes in the context of denying the very possibility of yudkowskian asi, i’ll vibe with it.
edit: your entire reply suffers from the local pathology of equating intelligence with “thinkiness”. “a more detailed world model, thinking for longer” are only symptoms of more intelligence if they get you closer to a goal. you want to have the capacity to do that if/when necessary, not the habit of doing it constantly, even when the only effect is a more pointlessly verbose reply.
re: jessi and my understanding: that is known as “a joke”, born of the fact that someone was smugly opining on my lack of understanding of a concept for which I’ve been Jessi’s sounding board and beta tester as she fleshed it out.
thank you, I was doubting myself a little