Well… I am firmly in the pro-immortality camp. However, I have given thought to a quandary: for any given immortal being, the probability of becoming eternally trapped, at any given moment, in an “And I Must Scream” scenario is non-zero (though admittedly vanishingly small). An infinitesimally likely result will occur at least once in an infinite number of trials, however, so… that’s a troublesome one.
Better to be immortal with the option of suicide, than to be literally incapable of dying.
Better to be immortal with the option of suicide, than to be literally incapable of dying.
Sure, but Linster asked what is more deadly, not what is better. Being immortal with the option of suicide is clearly more deadly than being literally incapable of dying.
You’re thinking about this the wrong way. If you took away a Terminator’s ability to die, would it become less deadly? I argue that it would, in fact, become more deadly.
Sure, but Linster asked what is more deadly, not what is better. Being immortal with the option of suicide is clearly more deadly than being literally incapable of dying.
Uncontested. I wasn’t writing a rebuttal to his words, but rather elaborating on the thoughts that came to my mind upon reading his words. An extension of the dialogue rather than an argument.
Surely the net value of having the option depends on the magnitude of the chance that I will, given the option, choose suicide in situations where the expected value of my remaining span of life is positive? For example, if I have the option of suicide that has only a one-in-a-billion chance every minute of being triggered in such a situation (due to transient depression or accidentally pressing the wrong button or whatever), that will kill me on average in two thousand years. Is that better or worse than my odds of falling into an AIMS scenario?
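The “two thousand years” figure can be sanity-checked in a few lines of Python (my choice of language; the sketch assumes independent per-minute trials with a fixed trigger probability, i.e. a geometric distribution):

```python
# Expected waiting time for a one-in-a-billion-per-minute event.
# The mean of a geometric distribution with per-trial probability p is 1/p trials.
p = 1e-9                              # chance of an accidental trigger per minute
expected_minutes = 1 / p
minutes_per_year = 60 * 24 * 365.25
expected_years = expected_minutes / minutes_per_year
print(f"expected time to accidental trigger: {expected_years:.0f} years")  # ~1901
```

So “on average in two thousand years” is the right order of magnitude.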
That aside… I think AIMS scenarios are a distraction here. It is certainly true that the longer I live, the more suffering I will experience (including, but not limited to, a vanishingly small chance of extended periods of suffering, as you say). What I conclude from that has a lot to do with my relative valuations of suffering and nonexistence.
To put this a different way: what is the value of an additional observer-moment given an N% chance that it will involve suffering? I would certainly agree that the value increases as N decreases, all else being equal, but I think N has to get awfully large before that value even approximates zero, let alone goes negative (which presumably it would have to before death is the more valuable option).
Surely the net value of having the option depends on the magnitude of the chance that I will, given the option, choose suicide in situations where the expected value of my remaining span of life is positive?
Well—here’s the thing. These sorts of scenarios are relatively easy to safeguard against. For example (addressing the ‘wrong button’ but also the transient emotional state): require the suicide mechanism to take two thousand years to implement from initiation to irreversibility. Another example—tailored the other way—since we’re already talking about altering/substituting physiology drastically, given that psychological states depend on physiological states, it would stand to reason that any being capable of engineering immortality could also engineer cognitive states.
I would find the notion of pressing the wrong button accidentally, and leaving it pressed for two thousand years, to be scarcely possible at all, given human timeframes. Especially if the suicide mechanism is tied to intentionality. Could you even imagine actively desiring to die for that long a period, short of something akin to an AIMS scenario?
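The waiting-period safeguard could be sketched as a trivial state machine. This is entirely hypothetical — the class, method names, and confirmation period are invented for illustration, taken only from the mechanism described above:

```python
# A hypothetical sketch of the two-stage suicide-switch safeguard: the switch
# must stay armed, uncancelled, for a long confirmation period before anything
# becomes irreversible. Any moment of reconsideration resets the clock, which
# is what screens out transient depression or an accidental button-press.
CONFIRMATION_YEARS = 2000

class SuicideSwitch:
    def __init__(self):
        self.armed_at = None                  # year of arming, or None

    def arm(self, current_year):
        if self.armed_at is None:             # re-arming doesn't restart an existing wait
            self.armed_at = current_year

    def cancel(self):
        self.armed_at = None                  # full reset; the wait starts over

    def is_irreversible(self, current_year):
        return (self.armed_at is not None
                and current_year - self.armed_at >= CONFIRMATION_YEARS)
```

On this sketch, an accidental press costs nothing unless it goes unreconsidered for two thousand consecutive years, which is the point of the safeguard.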
Also, please note that I later described “AIMS” as “total anti-utility that endures for a duration outlasting the remaining lifespan of the universe”. So what I’m talking about is a situation where you have a justified true belief that the remaining span of your existence will not merely not have positive value but rather specifically will consist solely of negative value.
(For me, the concept of “absolute suffering”—that is, a suffering greater than which I cannot conceive—is not sufficient to induce zero utility, let alone negative utility. (That has echoes of the Ontological Argument, but in this case it isn’t fallacious, since I’m using the concept as a definition rather than as an argument for its instantiation.) Suffering that “serves a purpose” is acceptable to me; so even an ‘eternity’ of ‘absolute suffering’ wouldn’t necessarily be AIMS for me, under these terms.)
The point is: such a scenario—total anti-utility from a given point forward—has a negligible but non-zero chance of occurring. And over a period of 10^100 years, that raises the negligible per-instance probability to a cumulatively significant one. IF we humans manage to manipulate M-theory to bypass the closed-system state of the universe and thereby stave off the heat death of the universe indefinitely, then… well, I know that my mind cannot properly grok the sort of scenario we’d then be discussing.
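As a toy illustration of how a negligible per-instance chance accumulates over such spans (the per-year probability here is invented purely for illustration):

```python
import math

# P(at least one occurrence in n independent trials) = 1 - (1 - p)^n,
# which for tiny p is well-approximated by 1 - exp(-n*p).
p = 1e-30                    # hypothetical per-year chance of an inescapable trap
for years in (1e2, 1e10, 1e30, 1e100):
    prob = -math.expm1(-years * p)   # expm1 keeps precision for tiny arguments
    print(f"over {years:.0e} years: P = {prob:.3g}")
# Utterly negligible on human timescales, ~0.63 once years*p reaches 1,
# and indistinguishable from certainty by 10^100 years.
```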
Still—given the scenario-options of possibly fucking it up and dying early, or being unable to escape a scenario of unending anti-utility… I choose the former. Then again, if I were given the choice of doubling my g and IQ scores (or, rather, whatever actual cognitive function those scores are meant to model) for ten years at the price of dying after those ten… I think, right now, I would take it.
So I guess that says something about my underlying beliefs in this discussion, as well.
For me, it’s the irreversibility that’s the real issue. For any situation that would warrant pressing the suicide switch, would it not be preferable to press the analgesia switch?
Not necessarily. If I believed that my continued survival would cause the destruction of everything I valued, suicide would be a value-preserving option and analgesia would not be. More generally: if my values include anything beyond avoiding pain, analgesia isn’t necessarily my best value-preserving option.
But, agreed, irreversibility of the sort we’re discussing here is highly implausible. But we’re discussing low-probability scenarios to begin with.
my continued survival would cause the destruction of everything I valued
This is a situation I hadn’t thought of, and I agree that in this case, suicide would be preferable. But I hadn’t got the impression that’s what was being discussed—for one thing, if this were a real worry it would also argue against a two-thousand-year safety interval. I feel like the “Omega threatening to torture your loved ones to compel your suicide” scenario should be separated from the “I have no mouth and I must scream” scenario.
More generally: if my values include anything beyond avoiding pain, analgesia isn’t necessarily my best value-preserving option.
True, but the problem with pain is that its importance in your hierarchy of values tends to increase with intensity. Now I’m thinking of a sort of dead-man’s-switch where outside sensory information requires voluntary opting-in, and the suicide switch can only be accessed from the baseline mental state of total sensory deprivation, or an imaginary field of flowers, or whatever.
But, agreed, irreversibility of the sort we’re discussing here is highly implausible. But we’re discussing low-probability scenarios to begin with.
I was mostly talking about the irreversibility of suicide, actually. In an AIMS scenario, where I have every reason to expect my whole future to consist of total, mind-crushing suffering, I would still prefer “spend the remaining lifetime of the universe building castles in my head, checking back in occasionally to make sure the suffering hasn’t stopped” to “cease to exist, permanently”.
Of course, this is all rather ignoring the unlikelihood of the existence of an entity that can impose effectively infinite, total suffering on you but can’t hack your mind and remove the suicide switch.
For convenience I refer hereafter to a the-future-is-composed-solely-of-negative-value scenario as an “AIMS” scenario, and to an I-trigger-my-suicide-switch-when-the-future-has-net-positive-expected-value scenario as an “OOPS” scenario.
Things I more-or-less agree with:
I don’t really see why positing “solely of negative value” is necessary for AIMS scenarios. If I’m confident that my future unavoidably contains net negative value, that should be enough to conclude that opting out of that future is a more valuable choice than signing up for it. But since it seems to be an important part of your definition, I accept it for the sake of discussion.
I agree that suffering is not the same thing as negative value, and therefore that we aren’t necessarily talking about suffering here.
I agree that an AIMS scenario has a negligible but non-zero chance of occurring. I personally can’t imagine one, but that’s just a limit of my imagination and not terribly relevant.
I agree that it’s possible to put safeguards on a suicide switch such that an OOPS scenario has a negligible chance of occurring.
Things I disagree with:
You seem to be suggesting that it’s possible to make the OOPS-scenario likelihood not just negligible, but zero. I see no reason to believe that’s true. (Perhaps you aren’t actually saying this.)
Alternatively, you might be suggesting that the OOPS scenario is negligible, non-zero, and not worthy of attention, whereas the AIMS scenario is negligible, non-zero, and worthy of attention. I certainly agree that IF that’s true, then it trivially follows that giving people a suicide switch creates more expected value than not doing so in all scenarios worthy of attention. But I see no reason to believe that’s true either.
You seem to be suggesting that it’s possible to make the OOPS-scenario likelihood not just negligible, but zero.
Specific versions of “OOPS”. I don’t intend to categorize all of them that way.
Alternatively, you might be suggesting that the OOPS scenario is negligible, non-zero, and not worthy of attention, whereas the AIMS scenario is negligible, non-zero, and worthy of attention.
Well, no. It has more to do with the expected cost of failing to account, at least in principle, for either variety. An OOPS scenario being fulfilled means the end of potential and the cessation of gained utility. An AIMS scenario being fulfilled means the accrual of constant negative utility. (We can drop the “solely” so long as the ‘net’ is kept.)
And also, given what we know of the universe, I don’t think there is a method of becoming trapped with zero chance of escape. Trapped for a very long period, maybe, but not eternally.
What if you get trapped in a loop of mind-states, each horrible state leading to the next until you are back where you started?
You probably could/would subjectively end up in a non-looping state. After all, you had to have multiple possible entries into the loop to begin with. Besides, it’s meaningless to say that you go through the loop more than once (remember, your mind can’t distinguish which loop it is in, because it has to loop back around to an initial state).
Whether you have multiple possible entries into the loop is irrelevant; what is important is whether you have possible exits.
As to your second point, does that mean it is ethical to run a simulation of someone being tortured as long as that simulation has already been run sometime in the past?
Possible exits could emerge from whatever the loop gets embedded in. (see 3 below)
Assuming a Tegmarkian multiverse, if it is mathematically possible to describe an environment with someone being tortured, in a sense it “has happened”. Whether or not a simulation which happens to contain someone being tortured is ethical to compute is hard to judge. I’m currently basing my hypothetical utility function on the following guidelines:
1. If your universe is causally necessary to describe theirs, you are probably responsible for the moral consequences in their universe.
2. If your universe is not causally necessary to describe theirs, you are essentially observing events which are independent of anything you could do. Merely creating an instance of their universe is ethically neutral.
3. One could take information from a causally independent universe and put it towards good ends; e.g. someone could run a simulation of our universe and “upload” conscious entities before they become information-theoretically dead.
Of course, these guidelines depend on a rigorous definition of causal necessity that I currently don’t have, but I don’t plan to run any non-trivial simulations until I do.
How do you figure? I can see the relationship—the discussion of vanishingly small probabilities. The difference, however, is that Pascal’s Mugging attempts to apply those small probabilities to deciding a specific action.
There is a categorical difference, I feel, between stating that a thing could occur and stating that a thing is occurring. After all, if there were an infinite number of Muggings, at least one of them conceivably could be telling the truth.
And also, given what we know of the universe, I don’t think there is a method of becoming trapped with zero chance of escape. Trapped for a very long period, maybe, but not eternally.
s/eternally/”the remaining history of the universe”/, then. The problem remains equivalent as a thought-experiment. The point being—as middle-aged suicides themselves demonstrate: there is, at some level, in every conscious agent’s decision-making processes, a continuously ongoing decision as to whether it would be better to continue to exist, or to cease existing.
Being denied that capacity for choice, it seems highly plausible that over a long enough timeline, nearly anyone should eventually have a problem with this state of affairs.
An infinitesimally likely result will occur at least once in an infinite number of trials
Actually, it’s not guaranteed that an unlikely thing will happen at all, if the instantaneous probability of that thing diminishes quickly enough over time.
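This can be made concrete (a sketch; the decay schedule is arbitrary): if the per-trial probability shrinks fast enough that its sum converges, the chance of the event ever happening stays below 1 even across infinitely many trials, whereas any fixed per-trial probability becomes certain in the limit.

```python
# Decaying per-trial probability: p_n = 2^-(n+2), so sum(p_n) = 1/4 converges.
# P(never) is the product of (1 - p_n), which converges to a positive number.
never = 1.0
for n in range(200):                  # factors beyond n ~ 60 are numerically 1
    never *= 1 - 2.0 ** -(n + 2)
print(f"P(ever occurs, decaying p): {1 - never:.3f}")   # stays well below 1

# Fixed per-trial probability, however small: approaches certainty.
p_fixed = 1e-6
print(f"P(ever occurs, fixed p, 1e8 trials): {1 - (1 - p_fixed) ** 10**8:.4f}")
```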
Doesn’t associating your identity with (anything similar enough to) a set of physical parts rather than (pure magic or) a pattern of parts going through time imply a belief in non-local physics?
I think a suspended version of my body (not just frozen, and not just vitrified either; any stasis will do, and I mean my argument to hold up against more physically neutral forms of stasis than those) that actually will not be revived is as much me as a grapefruit is. If physical parts do not vary in their relationship to each other, how is there supposed to be experience of any kind?
Alternatively, if one is made magically invincible, one could be entombed in concrete for thousands of years—forever, even, after the heat death of the (rest of the) universe, if one is sent off into deep space in a casing of concrete. Is that what you say has a non-zero probability? Or are you talking about a multi-galaxy civilization devoted to keeping one individual under torture for as long as physically possible, until the heat death of the universe?
I will add that I am strongly in the anti-”immortality” camp, as that word should not be used. I am in the anti-mortality camp, that’s how I’d put it.
Doesn’t associating your identity with (anything similar enough to) a set of physical parts rather than (pure magic or) a pattern of parts going through time imply a belief in non-local physics?
As your individual identity can only be associated with a specific set of physical parts at any given time, I’m pretty sure I don’t follow your meaning. I also find myself confused by the concept of “non-local physics”. Elaborate?
If the time it normally takes for a signal to go from your toe to your brain is t, and we consider your experience over one half t, your lower leg is irrelevant. Your experiences during that time slice of feeling something in your foot are due to signals propagated before one half t ago. Similarly, if we consider your experience over t, but double the amount of time every function of your body takes, your lower (entire? I’m not a biologist) leg would be irrelevant. That is, your lower leg could be replaced with a grapefruit tree and you wouldn’t feel the difference (assume you’re not looking at it).
The limit of that is stopping signals from traveling entirely, at which point your entire body is irrelevant. I think someone frozen in time would not have any experience rather than be eternally trapped in a moment. If someone is at a point where their experiences would be the same were they replaced by a tree, they’re not having any.
There’s no reason it would be logically impossible to harness the resources of galaxies towards keeping you alive and in pain, but eventually the second law of thermodynamics saves you.
If the time it normally takes for a signal to go from your toe to your brain is t, and we consider your experience over one half t, your lower leg is irrelevant. [...]
I don’t follow the meaning of what it is you are trying to convey here. Furthermore: how does any of that lead to “non-local physics”? I sincerely am not following whatever it is you are trying to say.
There’s no reason it would be logically impossible to harness the resources of galaxies towards keeping you alive and in pain, but eventually the second law of thermodynamics saves you.
There is a fine art to the linguistic tool called the segue. This is a poor example of it.
That being said—the second law of thermodynamics is only applicable to closed systems. We assume the universe is a closed system because we have no evidence to the contrary as yet. It remains conceivable however that future societies might devise a means of bypassing this particular problem.
There is a fine art to the linguistic tool called the segue. This is a poor example of it.
I can see how “There’s no reason it would be logically impossible to harness the resources of galaxies towards keeping you alive and in pain, but eventually the second law of thermodynamics saves you” looks random. I was contrasting the logically possible worst-case scenario of eternal unable-to-scream horror with what I think is the physically possible worst-case scenario, where you might think the logically and physically possible are the same here.
If there is a source of infinite energy, I agree one could be tortured forever—but even still it couldn’t be a frozen single moment of torture. The torturers would have to cycle one through states.
how does any of that lead to “non-local physics”?
It applies because of the speed-limit issue. It’s just saying the nerve speed-limit analogy can’t be circumvented by doing things infinitely fast, rather than at nerve speed (or light speed). But the analogy is the central thing.
I will try again. What would it be like if all of your brain’s operations took twice the time they normally do? It would look like everything was happening quickly; one would experience a year as six months. The limit of that is at infinite slowness: one would experience infinitely little.
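The limit being gestured at can be put in plain numbers (a trivial sketch; the slowdown factors are arbitrary):

```python
# If every brain operation takes k times longer, a span of T objective years
# contains T/k years of subjective experience; as k grows without bound,
# subjective experience shrinks toward zero. Being frozen in time is the
# k -> infinity limit, in which there is no experience at all.
T = 1.0                                  # one objective year
for k in (1, 2, 1000, 10**9):
    print(f"slowdown x{k}: {T / k:.2e} subjective years")
```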
If there is a source of infinite energy, I agree one could be tortured forever—but even still it couldn’t be a frozen single moment of torture. The torturers would have to cycle one through states.
I fail to recognize any reason why this would be relevant or interesting to this conversation. It’s trivially true, and addresses no point of mine, that’s for certain.
I will try again. What would it be like if all of your brain’s operations took twice the time they normally do? It would look like everything was happening quickly; one would experience a year as six months. The limit of that is at infinite slowness: one would experience infinitely little.
… what in the blazes is this supposed to be communicating? It’s a trivially true statement, and informs absolutely nothing that I can detect towards the cause of explaining where or how “non-local physics” comes into the picture.
Would you care to take a stab at actually expounding on your claim of the necessity of believing in this “non-local physics” you speak of, and furthermore explaining what this “non-local physics” even is? So far, you keep going in many different directions, none of which even remotely address that issue.
Sure. It means that one can’t construct a brain by having signals go infinitely fast; the “local” means that to get from somewhere to somewhere else, one has to go through the intermediary space. It was a caveat I introduced because I thought it might be needed, but it really wasn’t. My main point is that I don’t think a person could be infinitely tortured by being frozen in torture, which leads to the interesting point that people shouldn’t be identified with objects in single moments of time, such as bodies, but with their bodies/experiences going through time.
I don’t care to constrain the particulars of how “And I Must Scream” is defined, save that it be total anti-utility and that it be without escape. Whatever particulars you care to imagine in order to aid in understanding this notion are sufficient.
I will add that I am strongly in the anti-”immortality” camp, as that word should not be used. I am in the anti-mortality camp, that’s how I’d put it.
Please elaborate on your feelings regarding the problems of the word “immortality”. I am agnostic as to your perceptions and have no internal clues to fill in that ignorance.
An extension of the dialogue rather than an argument.
Fair enough. My preference ordering is immortal + suicide > literally incapable of dying > mortal.
An infinitesimally likely result will occur at least once in an infinite number of trials
This is a version of Pascal’s mugging.
And also, given what we know of the universe, I don’t think there is a method of becoming trapped with zero chance of escape. Trapped for a very long period, maybe, but not eternally.
What if you get trapped in a loop of mind-states, each horrible state leading to the next until you are back where you started?
You probably could/would subjectively end up in a non-looping state. After all, you had to have multiple possible entries into the loop to begin with. Besides, it’s meaningless to say that you go through the loop more than once (remember, your mind can’t distinguish which loop it is in, because it has to loop back around to an initial state).
Whether you have multiple possible entries into the loop is irrelevant, what is important is whether you have possible exits.
As to your second point, does that mean it is ethical to run a simulation of someone being tortured as long as that simulation has already been run sometime in the past?
Possible exits could emerge from whatever the loop gets embedded in. (see 3 below)
Assuming a Tegmarkian multiverse, if it is mathematically possible to describe an environment with someone being tortured, in a sense it “is happened”. Whether or not a simulation which happens to have someone being tortured is ethical to compute is hard to judge. I’m currently basing my hypothetical utility function on the following guidelines:
1. If your universe is causally necessary to describe theirs, you are probably responsible for the moral consequences in their universe.
2. If your universe is not causally necessary to describe theirs, you are essentially observing events which are independent of anything you could do. Merely creating an instance of their universe is ethically neutral.
3. One could take information from a causally independent universe and put it towards good ends; e.g. someone could run a simulation of our universe and “upload” conscious entities before they become information-theoretically dead.
Of course, these guidelines depend on a rigorous definition of causal necessity that I currently don’t have, but I don’t plan to run any non-trivial simulations until I do.
How do you figure? I can see the relationship—the discussion of vanishingly small probabilities. The difference, however, is that Pascal’s Mugging attempts to apply those small probabilities to deciding a specific action.
There is a categorical difference, I feel, between stating that a thing could occur and stating that a thing is occurring. After all: if there were an infinite number of Muggings, at least one of them could conceivably be telling the truth.
s/eternally/”the remaining history of the universe”/, then. The problem remains equivalent as a thought-experiment. The point being—as middle-aged suicides themselves demonstrate: there is, at some level, in every conscious agent’s decision-making processes, a continuously ongoing decision as to whether it would be better to continue to exist, or to cease existing.
Being denied that capacity for choice, it seems highly plausible that over a long enough timeline, nearly anyone should eventually have a problem with this state of affairs.
Actually, it’s not guaranteed that an unlikely thing will happen at all, if the instantaneous probability of that thing diminishes quickly enough over time.
I’m assuming a fixed probability of occurrence per trial, with an infinite number of trials.
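The difference between these two assumptions can be made concrete with a short sketch. The per-trial probability and trial count below are purely illustrative placeholders, not anything either of us has actually estimated:

```python
# Probability that a rare event *never* occurs across n trials,
# comparing a fixed per-trial probability with one that diminishes.

def prob_never(per_trial_probs):
    """Product of (1 - p_i): the chance the event misses every trial."""
    result = 1.0
    for p in per_trial_probs:
        result *= 1.0 - p
    return result

p = 1e-4   # illustrative per-trial probability (an assumption for the sketch)
n = 10**6  # illustrative number of trials

# Fixed probability per trial: (1 - p)^n shrinks toward 0 as n grows,
# so with infinitely many trials the event eventually happens.
fixed = prob_never([p] * n)

# Probability diminishing like p / k^2: the product converges to a
# positive limit, so even infinitely many trials need never produce the event.
decaying = prob_never([p / k**2 for k in range(1, n + 1)])

print(fixed)     # vanishingly small
print(decaying)  # stays close to 1
```

This is the distinction being made: a fixed non-zero probability repeated forever guarantees the event, while a sufficiently fast-diminishing one does not, even over infinite trials.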
Doesn’t associating your identity with (anything similar enough to) a set of physical parts rather than (pure magic or) a pattern of parts going through time imply a belief in non-local physics?
I think a suspended version of my body (not merely frozen or vitrified; any form of stasis will do, and I mean the argument to hold against even more physically neutral forms of stasis than those) that actually will not be revived is as much me as a grapefruit is. If physical parts do not vary in their relationship to each other, how is there supposed to be experience of any kind?
Alternatively, if one is made magically invincible, one could be entombed in concrete for thousands of years—forever, even, after the heat death of the (rest of the) universe if one is sent off into deep space in a casing of concrete. Is that what you say has a non-zero probability? Or are you talking about a multi-galaxy civilization devoted to keeping one individual under torture for as long as physically possible until the heat death of the universe?
I will add that I am strongly in the anti-”immortality” camp, as that word should not be used. I am in the anti-mortality camp, that’s how I’d put it.
As your individual identity can only be associated with a specific set of physical parts at any given time, I’m pretty sure I don’t follow your meaning. I also find myself confused by the concept of “non-local physics”. Elaborate?
If the time it normally takes for a signal to go from your toe to your brain is t, and we consider your experience over one half t, your lower leg is irrelevant. Your experiences during that time slice of feeling something in your foot are due to signals propagated before one half t ago. Similarly, if we consider your experience over t, but double the amount of time every function of your body takes, your lower (entire? I’m not a biologist) leg would be irrelevant. That is, your lower leg could be replaced with a grapefruit tree and you wouldn’t feel the difference (assume you’re not looking at it).
The limit of that is stopping signals from traveling entirely, at which point your entire body is irrelevant. I think someone frozen in time would not have any experience rather than be eternally trapped in a moment. If someone is at a point where their experiences would be the same were they replaced by a tree, they’re not having any.
There’s no reason it would be logically impossible to harness the resources of galaxies towards keeping you alive and in pain, but eventually the second law of thermodynamics saves you.
I don’t follow the meaning of what it is you are trying to convey here. Furthermore; how does any of that lead to “non-local physics”? I sincerely am not following whatever it is you are trying to say.
There is a fine art to the linguistic tool called the segue. This is a poor example of it.
That being said—the second law of thermodynamics is only applicable to closed systems. We assume the universe is a closed system because we have no evidence to the contrary as yet. It remains conceivable however that future societies might devise a means of bypassing this particular problem.
I can see how “There’s no reason it would be logically impossible to harness the resources of galaxies towards keeping you alive and in pain, but eventually the second law of thermodynamics saves you,” looks random. I was contrasting the logically possible worst case scenario of “eternal unable to scream horror” with what I think is the physically possible worst case scenario, where you might think the logically and physically possible are the same here.
If there is a source of infinite energy, I agree one could be tortured forever—but even still it couldn’t be a frozen single moment of torture. The torturers would have to cycle one through states.
It applies because of the speed limit issue. It’s just saying the nerve speed-limit analogy can’t be circumvented by doing things infinitely fast, rather than at nerve speed (or light speed). But the analogy is the central thing.
I will try again. What would it be like if all of your brain’s operations took twice the time they normally do? It would look like everything was happening quickly; one would experience a year as six months. The limit of that is at infinite slowness, where one would experience infinitely little.
I fail to recognize any reason why this would be relevant or interesting to this conversation. It’s trivially true, and addresses no point of mine, that’s for certain.
… what in the blazes is this supposed to be communicating? It’s a trivially true statement, and informs absolutely nothing that I can detect towards the cause of explaining where or how “non-local physics” comes into the picture.
Would you care to take a stab at actually expounding on your claim of the necessity of believing in this “non-local physics” you speak of, and furthermore explaining what this “non-local physics” even is? So far, you keep going in many different directions, none of which even remotely address that issue.
Sure, it means that one can’t construct a brain by having signals go infinitely fast; the “local” means that to get from somewhere to somewhere else one has to go through intermediary space. It was a caveat I introduced because I thought it might be needed, but it really wasn’t. My main point is that I don’t think a person could be infinitely tortured by being frozen in torture, which leads to the interesting point that people shouldn’t be identified with objects in single moments of time, such as bodies, but with their bodies/experience going through time.
I guess I’ve gotten too used to the notion of human identity and consciousness being an emergent pattern rather than specific physical objects.
I don’t care to constrain the particulars of how “And I Must Scream” is defined, save that it be total anti-utility and without escape. Whatever particulars you care to imagine in order to aid in understanding this notion are sufficient.
Please elaborate on your feelings regarding the problems of the word “immortality”. I am agnostic as to your perceptions and have no internal clues to fill in that ignorance.
Just what Robin Hanson said.
Took me a couple of readings to get the gist of that article. Frankly, it’s rather… well, I find myself reacting poorly to it.
After all—is not “giving as many years as we can” the same, quantitatively, as saying that the goal is clinical immortality?