Surely the net value of having the option depends on the magnitude of the chance that I will, given the option, choose suicide in situations where the expected value of my remaining span of life is positive? For example, if I have the option of suicide that has only a one-in-a-billion chance every minute of being triggered in such a situation (due to transient depression or accidentally pressing the wrong button or whatever), that will kill me on average in two thousand years. Is that better or worse than my odds of falling into an AIMS scenario?
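(Checking my own arithmetic, modeling the trigger as a geometric process with per-minute probability $p = 10^{-9}$:

$$E[T] = \frac{1}{p} = 10^{9}\ \text{minutes} = \frac{10^{9}}{60 \times 24 \times 365.25}\ \text{years} \approx 1.9 \times 10^{3}\ \text{years},$$

so "two thousand years" is right to within rounding.)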
That aside… I think AIMS scenarios are a distraction here. It is certainly true that the longer I live, the more suffering I will experience (including, but not limited to, a vanishingly small chance of extended periods of suffering, as you say). What I conclude from that has a lot to do with my relative valuations of suffering and nonexistence.
To put this a different way: what is the value of an additional observer-moment given an N% chance that it will involve suffering? I would certainly agree that the value increases as N decreases, all else being equal, but I think N has to get awfully large before that value even approximates zero, let alone goes negative (which presumably it would have to before death is the more valuable option).
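To make that concrete, here is one crude formalization (the notation and per-moment values are mine, purely for illustration): value a non-suffering observer-moment at $v_+ > 0$ and a suffering one at $v_- < 0$. Then the expected value of the next moment is

$$E[V] = \left(1 - \frac{N}{100}\right) v_+ + \frac{N}{100}\, v_-,$$

which goes negative only when $N$ exceeds $N^* = \frac{100\, v_+}{v_+ + |v_-|}$. If a suffering moment is exactly as bad as a good moment is good, $N^* = 50$; if, as in my case, good moments are valued much more highly than suffering moments are disvalued (say $v_+ = 10\,|v_-|$), then $N^* \approx 91$. So "N has to get awfully large" is really a claim about the ratio $v_+ / |v_-|$.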
> Surely the net value of having the option depends on the magnitude of the chance that I will, given the option, choose suicide in situations where the expected value of my remaining span of life is positive?
Well, here’s the thing: these sorts of scenarios are relatively easy to safeguard against. For example (addressing the ‘wrong button’ as well as the transient emotional state): require the suicide mechanism to take two thousand years from initiation to irreversibility. Another example, tailored the other way: since we’re already talking about drastically altering or substituting physiology, and since psychological states depend on physiological states, it stands to reason that any being capable of engineering immortality could also engineer cognitive states.
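A toy sketch of that first safeguard, just to show the shape of the idea (all names here are made up; the two-thousand-year figure is the one from above):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical "long fuse" safeguard: initiating the mechanism starts a
# clock, nothing becomes irreversible until the full delay has elapsed,
# and the process can be cancelled at any point before then.
DELAY = timedelta(days=365.25 * 2000)  # two thousand years

class SuicideSwitch:
    def __init__(self):
        self.initiated_at = None

    def initiate(self):
        # Starting the process does nothing irreversible by itself.
        self.initiated_at = datetime.now(timezone.utc)

    def cancel(self):
        # Fully reversible at any time before the delay elapses, which is
        # what screens off transient depression and accidental presses.
        self.initiated_at = None

    def is_irreversible(self):
        if self.initiated_at is None:
            return False
        return datetime.now(timezone.utc) - self.initiated_at >= DELAY
```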
I find it hard to see how anyone could press the wrong button accidentally and then leave it pressed for two thousand years; on human timeframes that seems barely possible at all. Especially if the suicide mechanism is tied to intentionality. Could you even imagine actively desiring to die for that long a period, short of something akin to an AIMS scenario?
Also, please note that I later described “AIMS” as “total anti-utility that endures for a duration outlasting the remaining lifespan of the universe”. So what I’m talking about is a situation where you have a justified true belief that the remaining span of your existence will not merely lack positive value but will specifically consist solely of negative value.
(For me, the concept of “absolute suffering”, that is, a suffering greater than which I cannot conceive (this has echoes of the Ontological Argument, but it isn’t fallacious here, since I’m using the formulation only as a definition, not as an argument for its instantiation), is not sufficient to induce zero utility, let alone negative utility. Suffering that “serves a purpose” is acceptable to me; so even an ‘eternity’ of ‘absolute suffering’ wouldn’t necessarily be AIMS for me, under these terms.)
The point is: such a scenario, total anti-utility from a given point forward, has a negligible but non-zero chance of occurring, and over a period of 10^100 years even a negligible per-instance probability accumulates into a significant one. IF we humans manage to manipulate M-Theory to bypass the closed-system state of the universe, and thereby stave off the heat death of the universe indefinitely, then… well, I know that my mind cannot properly grok the sort of scenario we’d then be discussing.
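(To spell out that accumulation step, assuming independent per-year chances, which is my simplification rather than anything established here: if the per-year probability of AIMS onset is $p$, then

$$P(\text{at least one onset in } n \text{ years}) = 1 - (1 - p)^{n} \approx 1 - e^{-np},$$

which approaches 1 once $n \gg 1/p$. Over $n = 10^{100}$ years, even $p = 10^{-90}$ per year gives $np = 10^{10}$, i.e. near-certainty.)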
Still: given the scenario-options of possibly fucking it up and dying early, or being unable to escape a scenario of unending anti-utility… I choose the former. Then again, if I were given the choice of increasing my g factor and IQ score (or, rather, whatever actual cognitive functioning those scores are meant to model) twofold for ten years, at the price of dying after those ten… I think, right now, I would take it.
So I guess that says something about my underlying beliefs in this discussion, as well.
For me, it’s the irreversibility that’s the real issue. For any situation that would warrant pressing the suicide switch, would it not be preferable to press the analgesia switch?
Not necessarily. If I believed that my continued survival would cause the destruction of everything I valued, suicide would be a value-preserving option and analgesia would not be. More generally: if my values include anything beyond avoiding pain, analgesia isn’t necessarily my best value-preserving option.
Agreed, though: irreversibility of the sort we’re discussing here is highly implausible. But then, we’re discussing low-probability scenarios to begin with.
> my continued survival would cause the destruction of everything I valued
This is a situation I hadn’t thought of, and I agree that in this case, suicide would be preferable. But I hadn’t got the impression that’s what was being discussed—for one thing, if this were a real worry it would also argue against a two-thousand-year safety interval. I feel like the “Omega threatening to torture your loved ones to compel your suicide” scenario should be separated from the “I have no mouth and I must scream” scenario.
> More generally: if my values include anything beyond avoiding pain, analgesia isn’t necessarily my best value-preserving option.
True, but the problem with pain is that its importance in your hierarchy of values tends to increase with intensity. Now I’m thinking of a sort of dead-man’s-switch where outside sensory information requires voluntary opting-in, and the suicide switch can only be accessed from the baseline mental state of total sensory deprivation, or an imaginary field of flowers, or whatever.
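Something like this toy gating rule, sketched only to make the idea concrete (the names and states are hypothetical):

```python
# Hypothetical dead-man's-switch gating: outside sensory input is opt-in,
# and the suicide switch is reachable only from the baseline state, which
# is itself reachable only by first shedding all outside input.

class Mind:
    def __init__(self):
        self.sensory_input = True  # outside information is opt-in

    def enter_baseline(self):
        # Total sensory deprivation (or the imaginary field of flowers):
        # no ongoing pain is present at the moment of decision.
        self.sensory_input = False

    def leave_baseline(self):
        self.sensory_input = True

    def suicide_switch_available(self):
        return not self.sensory_input
```

The point of the gate is that intense pain can dominate the value hierarchy only while it is being experienced, and it cannot be experienced from the baseline state.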
> Agreed, though: irreversibility of the sort we’re discussing here is highly implausible. But then, we’re discussing low-probability scenarios to begin with.
I was mostly talking about the irreversibility of suicide, actually. In an AIMS scenario, where I have every reason to expect my whole future to consist of total, mind-crushing suffering, I would still prefer “spend the remaining lifetime of the universe building castles in my head, checking back in occasionally to make sure the suffering hasn’t stopped” to “cease to exist, permanently”.
Of course, this is all rather ignoring the unlikelihood of the existence of an entity that can impose effectively infinite, total suffering on you but can’t hack your mind and remove the suicide switch.
For convenience, I refer hereafter to a the-future-consists-solely-of-negative-value scenario as an “AIMS” scenario, and to an I-trigger-my-suicide-switch-when-the-future-has-net-positive-expected-value scenario as an “OOPS” scenario.
Things I more-or-less agree with:
I don’t really see why positing “solely of negative value” is necessary for AIMS scenarios. If I’m confident that my future unavoidably contains net negative value, that should be enough to conclude that opting out of that future is a more valuable choice than signing up for it. But since it seems to be an important part of your definition, I accept it for the sake of discussion.
I agree that suffering is not the same thing as negative value, and therefore that we aren’t necessarily talking about suffering here.
I agree that an AIMS scenario has a negligible but non-zero chance of occurring. I personally can’t imagine one, but that’s just a limit of my imagination and not terribly relevant.
I agree that it’s possible to put safeguards on a suicide switch such that an OOPS scenario has a negligible chance of occurring.
Things I disagree with:
You seem to be suggesting either that it’s possible to make the OOPS scenario likelihood not just negligible, but zero. I see no reason to believe that’s true. (Perhaps you aren’t actually saying this.)
Alternatively, you might be suggesting that the OOPS scenario is negligible, non-zero, and not worthy of attention, whereas the AIMS scenario is negligible, non-zero, and worthy of attention. I certainly agree that IF that’s true, then it trivially follows that giving people a suicide switch creates more expected value than not doing so in all scenarios worthy of attention. But I see no reason to believe that’s true either.
> You seem to be suggesting either that it’s possible to make the OOPS scenario likelihood not just negligible, but zero.
Specific versions of “OOPS”. I don’t intend to categorize all of them that way.
> Alternatively, you might be suggesting that the OOPS scenario is negligible, non-zero, and not worthy of attention, whereas the AIMS scenario is negligible, non-zero, and worthy of attention.
Well, no. It has more to do with the expected cost of failing to account, at least in principle, for either variety. An OOPS scenario being realized means the end of potential and the loss of all the utility that would otherwise have been gained; an AIMS scenario being provided for means the ability to avert constant negative utility. (We can drop the “solely” so long as the “net” is kept.)
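One way to formalize that asymmetry (my notation, and only a sketch): with scenario probabilities $p_O$ and $p_A$,

$$E[\text{cost of the switch}] \approx p_O \cdot U_{\text{foregone}}, \qquad E[\text{cost of no switch}] \approx p_A \cdot |U_{\text{AIMS}}|,$$

where $U_{\text{foregone}}$ is the positive utility lost to a wrongful trigger and $|U_{\text{AIMS}}|$ is the magnitude of the net negative utility endured. Having the switch wins exactly when $p_A\,|U_{\text{AIMS}}| > p_O\,U_{\text{foregone}}$, so both probabilities and both magnitudes matter, not merely which probability is smaller.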