This made me curious enough to read Land's posts on the orthogonality thesis. Unfortunately I got a pretty negative impression from them. From what I've read, Land tends to be overconfident in his claims and fails to notice obvious flaws in his arguments. Links for people who want to judge for themselves (I had to dig up archive.org links as the original site has disappeared):
http://web.archive.org/web/20131028060133/http://www.xenosystems.net/against-orthogonality/
http://web.archive.org/web/20141013052107/http://www.xenosystems.net/stupid-monsters/
http://web.archive.org/web/20200809114022/http://www.xenosystems.net/more-thought/
http://web.archive.org/web/20140917022211/http://www.xenosystems.net/will-to-think/
From Will-to-Think (“Probably Land’s best anti-orthogonalist essay”):
Imagine, instead, that Gandhi is offered a pill that will vastly enhance his cognitive capabilities, with the rider that it might lead him to revise his volitional orientation — even radically — in directions that cannot be anticipated, since the ability to think through the process of revision is accessible only with the pill. This is the real problem FAI (and Super-humanism) confronts. The desire to take the pill is the will-to-think. The refusal to take it, based on concern that it will lead to the subversion of presently supreme values, is the alternative. It’s a Boolean dilemma, grounded in the predicament: Is there anything we trust above intelligence (as a guide to doing ‘the right thing’)? The postulate of the will-to-think is that anything other than a negative answer to this question is self-destructively contradictory, and actually (historically) unsustainable.
When reading this, it immediately jumps out at me that the dilemma is not actually Boolean. There are many other options Gandhi could take besides taking the pill or not. He could look for other ways to increase his intelligence and pick the one least likely to subvert his values. Perhaps try to solve metaethics first, so that he has a better idea of what "preserving values" or "subversion of values" means. Or try to solve metaphilosophy, to better understand which methods of thinking are more likely to lead to correct philosophical conclusions, before trying to reflect on one's values. Somehow none of these options occurs to Land, and he concludes that the only reasonable choice is to take the pill with unknown effects on one's values.
I think the relevant implication from the thought experiment is that thinking a bunch about metaethics and so on will in practice change your values; the pill itself is not very realistic, but thinking can make people smarter and will cause value changes. I would agree Land is overconfident (I think orthogonal and diagonal are both wrong models).
I think the relevant implication from the thought experiment is that thinking a bunch about metaethics and so on will in practice change your values
I don't think that's necessarily true. For example, some people think about metaethics and decide that anti-realism is correct and that they should just keep their current values. I think that's overconfident, but it does show that we don't know whether correct thinking about metaethics necessarily leads one to change one's values. (Under some other metaethical possibilities the same is also true.)
Also, even if it is possible to steelman Land in a way that eliminates the flaws in his argument, I'd rather spend my time reading philosophers who are more careful and do more thinking (or are better at it) before confidently declaring a conclusion. I do appreciate you giving an overview of his ideas, as it's good to be familiar with that part of the current philosophical landscape (apparently Land is a fairly prominent philosopher with an extensive Wikipedia page).
I'm trying to understand where the source of disagreement lies, since I don't really see much "overconfidence"; i.e., I don't see much of a probabilistic claim at all. Let me know if one of these suggestions points somewhere close to the right direction:
The texts cited were mostly a response to the putative inevitability of orthogonalism. Once that was (I think effectively) dispatched, one might consider that part of the argument closed. After that, one could excuse him for being less rigorous and having more fun with the rest; the goal there was not to debate but to allow the reader to experience what something akin to the will-to-think would be like (I'm aware this is frowned upon in some circles).
The crux of the matter, in my opinion, is not that thinking a lot about meta-ethics changes your values. Rather, it is that an increase in intelligence does; namely, it changes them in the direction of greater appreciation for complexity and desire for thinking, and this change takes forms unintelligible to those one rung below. Of course, here the argument is either inductive/empirical or kinda neoplatonic. I will spare you the latter version, but the former would look something like:
- Imagine a fairly uncontroversial intelligence-sorted line-up, going: thermostat → mosquito → rat(🐭) → chimp → median human → rat(Ω)
- Notice how intelligence grows together with the desire for more complexity, with curiosity, and ultimately with the drive towards increasing intelligence per se; and notice also how morality evolves to accommodate those drives (one really wouldn't want those on the left of wherever one stands to impose their moral code on those on the right).
While I agree these sorts of arguments don't cut it for a typical post-analytical, lesswrong-type debate, I still think that, at the very least, Occam's razor should strongly slash their way, unless there's some implicit counterargument I missed.
(As for the opportunity cost of deepening your familiarity with the subject matter, you might be right. The style of philosophy Land adopts is very, very different from the one appreciated around here; it is indeed often a target for snark. And while I think there's much of interest on that side of the continental split, the effort required for overcoming the aesthetic shift, weighted by the chance of such a shift completing, might still not make it worth it.)
I'm not sure I agree: in the original thought experiment, it was a given that increasing intelligence would lead to changes in values in ways that the agent, at t=0, would not understand or share.
At this point, one could decide whether to go for it or hold back; and we should all consider ourselves lucky that our early sapiens predecessors didn't take the second option.
(btw, I'm very curious to know what you make of this other Land text: https://etscrivner.github.io/cryptocurrent/)