My problem with Will’s outlook is that if we are indeed being “watched over by a superintelligence”, it doesn’t appear to care about us in any very helpful way. Our relationship to it is therefore more about survival than it is about morality. According to the scenario, there is some thing out there which is all-powerful, whose actions depend partly on our actions, and which doesn’t care about {long list of evolutionary and historical holocausts}, in any way that we would recognize as caring. Clearly, if we had any idea of the relationship between our actions and its actions, it would be in your interest, first of all, to act so that it would not allow various awful things to happen to you and anyone you care about, and second, to act so that you might gain some advantage from its powers.
It appears that the only distinctive reason Will has for entertaining such a scenario is the usual malarkey about timeless game-theoretic equilibria… A while back, I was contemplating a post, to be called “Towards a critique of acausal reason”, which was going to mention three fallacies of timeless decision theory: acausal democracy, acausal trade, acausal blackmail. The last two arise from a fallacy of selective attention: to believe them possible, you must pay attention only to possible worlds which care about you in a highly specific way. But for any possible world where there is an intelligence simulating your response and which will do X if you do Y, there is another possible world where there is an intelligence which will do X if you don’t do Y. And the actual multiplicity of worlds in which intelligences make decisions on the basis of decisions made by agents in other possible worlds that they are simulating is vanishingly small, in the set of all possible worlds. Why the hell would you base your decision, regarding what to do in your own reality, on the opinions or actions of a possible entity in another world? You may as well just flip a coin. The whole idea that intelligences in causally disjoint worlds are in a position to trade, bargain, or arrive at game-theoretic equilibria is deeply flawed; it’s only a highly eccentric agent which “cares” strongly about events which are influenced by only an extremely small fraction of its subjective duplicates (its other selves in the space of possible worlds). So some of these “eccentric agents” may genuinely “do deals”, but there is no reason to think that they are anything more than a vanishingly small minority among the total population of the multiverse. (Obviously it would be desirable for people trying to work rigorously in TDT to make this argument in a rigorous form, but I don’t see anything that’s going to change the basic conclusion.)
So that leaves us in the more familiar situation, of possibly being in a simulation, or possibly facing the rise of a superintelligence in the near future, or possibly being somewhere in the guts of a cosmic superintelligence which either just tolerates our existence because we haven’t crossed thresholds-of-caring yet, or which has a purpose for us which extends to tolerating the holocausts I mentioned earlier. All of this suggests that our survival and well-being are on the line, but it doesn’t suggest that we are embedded in an order that is moral in any conventional sense.
We are now advanced enough to tackle this issue formally, by trying to construct an equilibrium in a combinatorially exhaustive population of acausal trading programs. Is there an acausal version of the “no-trade theorem”?
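A toy version of this can be sketched in a few lines. Everything below is invented for illustration (the payoff encoding, the uniform prior) and is not a worked-out piece of TDT; it just makes the symmetry argument concrete: for every program that pays you for doing Y, a combinatorially exhaustive population also contains the mirror program that pays you for not doing Y, so no action is acausally favored.

```python
import itertools

# A toy "simulator" is a pair (pay_if_comply, pay_if_defy): it grants the
# first payoff when the agent takes action Y, the second otherwise.  A
# combinatorially exhaustive population contains, for every simulator, its
# mirror image with the payoffs swapped (the "intelligence which will do X
# if you don't do Y").
PAYOFFS = [-1, 0, 1]

population = [(a, b) for a, b in itertools.product(PAYOFFS, repeat=2)]

def expected_payoff(action_is_y, pop):
    """Average payoff of an action under a uniform prior over simulators."""
    return sum((a if action_is_y else b) for a, b in pop) / len(pop)

ev_comply = expected_payoff(True, population)
ev_defy = expected_payoff(False, population)

# Because the population is closed under swapping (a, b) -> (b, a), the two
# expected values coincide: complying with any demand buys you nothing.
print(ev_comply, ev_defy)  # both 0.0
```

Of course, everything interesting is assumed away here: the prior over simulators, what counts as a duplicate of the agent, and whether a realistic population really closes under mirroring. A genuine acausal no-trade theorem would have to earn those assumptions rather than stipulate them.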
I brought up a similar objection to acausal trade, and found Nesov’s reply somewhat convincing. What do you think?
His reply doesn’t address the potentially prohibitive difficulty of acausal trade; it merely appeals to its theoretical possibility. Essentially, the argument is that “there is still a chance”, but that’s not enough.
“between zero chance of becoming wealthy, and epsilon chance, there is an order-of-epsilon difference”
What does that even mean? Does it mean something like this: hypothetical lunar farmers in a hypothetical lunar utopia should send down some ore to Earth, because actual people hundreds of years earlier in a representative body voted 456-450 not to fund a lunar expedition, even with a rider to the bill requiring future farmers to send down ore, but the farmers’ votes from the future, added to the 450, outnumber the 456? So the farmers “promised” to send ore?
acausal blackmail
It seems more like a real self-inflicted wound than a fallacy or fake blackmail to me; perhaps we don’t disagree. It’s something that is real if one has certain patterns of mind that one could self-modify away from, I think.
By “acausal democracy”, I mean the attempt to justify the practice of democracy—specifically, the act of voting—with timeless decision theory. No-one until you has attempted to depict a genuinely acausal democracy :-) This doesn’t involve the “fallacy of selective attention”, it’s another sort of error, or combination of errors, in which TDT reasoning is supposed to apply to agents with only a bare similarity to yourself. See discussion here for a related example.
I also think we agree regarding acausal blackmail, that for a human being it can only be a mistake. Only one of those “eccentric agents” with a very peculiar utility function or decision architecture could rationally be susceptible to acausal blackmail—its decision procedure would have to insist that “selective attention” (to just those possible worlds where the specific blackmail threat is being made) is important, rather than attending to other worlds where contrary threats are being made, or to worlds where the action under consideration will be rewarded rather than punished, or to worlds where the agent is simply a free agent not being threatened or enticed by a captor who cares about acausal dealmaking (and those worlds should be in the vast majority).
My problem with Will’s outlook is that if we are indeed being “watched over by a superintelligence”, it doesn’t appear to care about us in any very helpful way.
The only “plausible” (heh) scenario I can come up with is that a future civilization developed backward time travel, but to avoid paradox it required full non-interaction, so it developed a means of close observation without changing that which is observed, and used it to upload everyone upon their information-theoretic death.
I don’t think I really have an outlook, I just notice that I am very confused about a lot of things that other people are ignoring. And my social role is different from my betting odds. (I notice I am confused about whether or not this is justified, about what meta-level policy I should have for situations like this.)
((((I feel compelled to stir up drama for people because they are too complacent to stir up drama for themselves. Unfortunately it is hard to stir up drama by going meta.))))
You’re talking about theodicy; have you read Leibniz on the subject? The most existent of all possible worlds, the world that takes the least bits to specify, because existence is good… Anyway I find it plausible that the universe is weird and that miracles do happen, but once luck reveals clearly how its decision policy works you get Goodhart’s law problems, so it lies low. Bow chicka bow wow, God of the gaps FTW.
In A History of Western Philosophy, Bertrand Russell wrote of Leibniz that
His best thought was not such as would win him popularity, and he left his records of it unpublished in his desk. What he published was designed to win the approbation of princes and princesses. The consequence is that there are two systems of philosophy which may be regarded as representing Leibniz: one, which he proclaimed, was optimistic, orthodox, fantastic, and shallow; the other, which has been slowly unearthed from his manuscripts by fairly recent editors, was profound, coherent, largely Spinozistic, and amazingly logical. It was the popular Leibniz who invented the doctrine that this is the best of all possible worlds (to which F. H. Bradley added the sardonic comment “and everything in it is a necessary evil”); it was this Leibniz whom Voltaire caricatured as Doctor Pangloss. It would be unhistorical to ignore this Leibniz, but the other is of far greater philosophical importance.
and Russell seems to think that “best of all possible worlds” is the shallow public theodicy, and “most existent” is the private theodicy, and they are not the same thing—since privately (according to Russell’s account), Leibniz speculated that the world which gets to exist is the one which has the most entities in it (maximum number of entities logically capable of coexisting). But then Russell also writes that Leibniz may have considered this a sign of God’s goodness—it’s good to exist, and God makes the world with the most possible things… I am much more sympathetic to Nietzsche’s metaphysics, as described in the posthumous notes collected in The Will to Power, and his skeptical analysis of the psychology behind philosophies which set forth identities such as Reason = Virtue = Happiness. Nietzsche to my knowledge did not speculate as to why there is something rather than nothing (one reason why Heidegger could see Nietzsche’s ontology as the final stage in the forgetting of Being), but his will-to-power analysis is plausible as an explanation of why beings-who-happen-to-exist end up constructing metaphysical systems which say that to be is good, and to be is inevitable, so goodness is inevitable.
So Nietzsche wrote a bunch of stuff in notebooks and even started writing a book called “The Will to Power”. He abandoned it but used a lot of the ideas in his last few works. Upon his death his anti-semitic sister arranged the notebooks and abandoned text into “The Will to Power”. Much of it is in line with stuff he published, and that stuff, it is fair to say, is representative of his views. But where TWTP says things Nietzsche didn’t include in his later works (which were written after the notes used to create TWTP)… it’s likely that he didn’t publish those ideas because he ended up not liking them for whatever reason. Plus, the editorial decisions made by his sister were made by his sister… for example, Nietzsche made lots of organizational outlines, only one of which had “Discipline and Breeding” as a book title; that that outline was chosen in lieu of others is a result of his sister’s ideology (which Nietzsche opposed).
I doubt there is anything in there that is so far away from Nietzsche’s actual views that you aren’t equipped to talk about Nietzsche (the stuff you talk about above is certainly something he’d be down with). I can’t tell you what specifically is in TWTP that isn’t in his other books because I haven’t read it; it’s usually just something read by Nietzsche scholars.
(Looking at this comment, it kind of sounds like I’m playing status games: “You read the wrong book,” etc. I don’t mean that; you probably have at least as good an understanding of Nietzsche’s views as I do. Mainly I’m just recommending that you be careful about ascribing all of TWTP to Nietzsche, and pointing this out so that people don’t read your comment and then go out and buy TWTP in order to understand Nietzsche. And of course, just because Nietzsche didn’t agree with everything in the book doesn’t mean the ideas in there aren’t good.)
I agree with much of what you say, except:

But where TWTP says things Nietzsche didn’t include in his later works (which were written after the notes used to create TWTP)… it’s likely that he didn’t publish those ideas because he ended up not liking them for whatever reason.
There are sections of TWTP—e.g. “The Mechanical Interpretation of the World”—which cover topics simply not addressed in any of Nietzsche’s finished works. (By the way, the version of TWTP that I’m familiar with is Walter Kaufmann’s.) So all we can say is that they lack the final imprimatur of appearing in a book “author”ized by Nietzsche himself. There’s no evidence here of a change of opinion. It is at least possible that he would subsequently have disagreed with some of the thoughts anthologized in TWTP—though presumably he agreed with them at the time he wrote them.
On at least one subject—the meaning of the “eternal recurrence”—I believe TWTP shows that a lot of Nietzsche scholarship has been on the wrong track. Many interpreters have said that the eternal recurrence is a state of mind, or a metaphor, anything but a literal recurrence. But in these notes, Nietzsche shows himself to be interested in eternal recurrence as a physical hypothesis. He reasons: the universe is finite, it has a finite number of possible states, if any state was an end state it would already have ended, therefore it recurs eternally. He thinks this is the world-picture that 20th-century science will produce and endorse. And then—this is the part I think is hilarious—he thinks that lots of people will kill themselves because they can’t bear the thought of their lives being repeated infinitely often in the future cycles of time. The “superman” is supposed to be someone who finds the eternal recurrence a joyous thing, because they love their life and the whole of existence, and the eternal recurrence provides their existence with a sort of eternity that is otherwise not available in a universe of relentless flux. In this regard Nietzsche’s futurology was doubly wrong—first, that isn’t the world-picture that science produces; second, it’s only a very rare individual who would take this claim—the alleged fact of existing again in a distant future aeon—seriously enough to make it the basis for choosing life or death. But I have the same appreciation for the imagination behind this piece of Nietzschean cultural futurology, as I do for the uniquely weird worldviews that are sometimes exhibited on LW. :-)
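The first two steps of Nietzsche’s argument hold up as far as they go: any deterministic map on a finite state space must eventually enter a cycle, by the pigeonhole principle. What does not follow is that the cycle passes through the present state. A minimal sketch (the particular update rule below is an arbitrary stand-in, not anything from Nietzsche):

```python
def trajectory(state, step):
    """Iterate a deterministic map on a finite state space until some state
    repeats; by pigeonhole this must happen within |states| + 1 steps.
    Returns (time the cycle starts, time the first repeat is detected)."""
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state)
        t += 1
    return seen[state], t

# Arbitrary deterministic map on the 100-element state space {0, ..., 99}.
step = lambda s: (s * s + 1) % 100

cycle_start, t = trajectory(0, step)
cycle_length = t - cycle_start

# Every trajectory eventually cycles -- but the cycle need not include the
# starting state (here, state 0 is never revisited), which is exactly the
# gap between "the universe recurs" and "this moment recurs".
print(cycle_start, cycle_length)  # prints: 1 6
```

So even granting finitude and determinism, “it recurs eternally” only yields recurrence of *some* states, not of the state you happen to be in now; that extra step needs an argument of its own (for instance, that the dynamics are reversible, which rules out transient states).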
Well, they were personal notebooks, so who knows how speculative he was being. The key thing is, this wasn’t what he was working on when he died. Published works intervened between TWTP and his death. That, combined with the sheer implausibility of the metaphysics you’ve described, might suggest he wasn’t that committed to the whole thing ;-). It sounds fascinating though.
He reasons: the universe is finite, it has a finite number of possible states, if any state was an end state it would already have ended, therefore it recurs eternally.
Are there any arguments for these claims? I’m fascinated by the (often very compelling!) arguments past generations had for how the physical world had to be. Aristotle is the best at this.
Right, humans can’t even do straightforward causal reasoning, let alone weird superrational reasoning.
The Will to Power is universally regarded as not representative of Nietzsche’s views.
So what parts would he have disagreed with?
Weird, I’m pretty sure that was in the original.
And I thought it was Voltaire’s satire of Leibniz.
Here: http://www.class.uidaho.edu/mickelsen/texts/Leibniz%20-%20Theodicy.htm
Oh. Yes, the idea was in Leibniz, but the specific quote is Voltaire’s, I believe.
Speaking of Voltaire, his theism is a really good example of meta-contrarianism.
Ah, got it.