The flaws in both of these dilemmas seem rather obvious to me, but maybe I’m overlooking something.
The Repugnant Conclusion
First of all, I balk at the idea that adding something barely tolerable to a collection of much more wonderful examples is a net gain. If you had a bowl of cherries (and life has been said to be a bowl of cherries, so this seems appropriate) that were absolutely the most wonderful, fresh cherries you had ever tasted, and someone offered to add a recently-thawed frozen non-organic cherry which had been sitting in the back of the fridge for a week but nonetheless looked edible, would you take it?
“But how can you equate HYOOMAN LIIIIIVES with mere INANIMATE CHERRIES, you heartless rationalist you?” I hear someone cry (probably not one of us, but they’re out there, and the argument needs to be answered).
Look, we’re not talking about whether someone’s life should be saved; we’re talking about whether to create an additional life, starting from scratch. To suppose anything else is to make an assumption about facts not mentioned in the scenario. Why would anyone, under these circumstances, add even one life that was barely worth living, if everyone else is much better off?
I think what happens in most people’s minds, when presented with conundrums like this, is that they subconsciously impose a context to give the question more meaning. In this case, the fact that we know (somehow) the quality of life of this one additional person implies that they already exist, somewhere—and therefore that we are perhaps rescuing them. Who could turn that down? Indeed, who could turn down a billion refugees, rather than let them die, if we knew that we could then sustain everyone at a just-barely-positive level? Surely we would be able to put them to work and improve everyone’s lot soon enough.
I could go on with the inquiries, but the point is this: the devil is in the details, and scenarios such as these leave us without the necessary context to make a rational decision.
I propose that this is a type of fallacy—call it Reasoning Without Context.
Which brings me to today’s main dish...
The Lifespan Dilemma
The essential fallacy here is the same: we lack sufficient context to make a rational decision. We have absolutely no experience with human lifespans exceeding even 1000 years, so how can we possibly gauge the value of extending life by an almost incomprehensible multiple of that? And what are the side-effects and consequences of the technique being used?
Some further context which I would want to know before making this decision:
1. How do I know that you can extend my life by this much? How do you know it? (Just when did you test your technique? How reliable is it? How do I know you’re not a Brooklyn Bridge salesman, or a Republican?)
2. If you can extend it, why can’t I do it without your help?
3. How do I know there isn’t someone else who can do it better, i.e. without the 20% chance of dying today?
4. What are you getting out of this deal? You are essentially giving away immortality in exchange for whatever it is you are doing that gives me an approximately 20% chance of dying today; perhaps whatever that thing is, I should be selling it to you at quite a high price.
5. How many others have already accepted this offer? Can I talk with them (the ones who haven’t died yet) before deciding? Can you prove that your fatality rate really is only 20%?
Math and logic deal in absolute certainties and facts; real life, which is the realm of rational decision-making, depends on context. You can’t logically or mathematically analyze a problem with no real-world context and expect the answer to make rational sense.
I didn’t vote your comment down, but I can guess why someone else did. Contradicting the premises is a common failure mode for humans attacking difficult problems. In some cases it is necessary (for example, if the premises are somehow self-contradictory), but even so, people fall into that conclusion more often than they should.
Consider someone answering the Fox-Goose-Grain puzzle with “I would swim across” or “I would look for a second boat”.
http://en.wikipedia.org/wiki/Fox,_goose_and_bag_of_beans_puzzle
Where did I contradict the premises?
Points 1 through 5. In general, you can take any thought experiment someone proposes as something to be “trued”: the doubting listener adds whatever additional hypotheses were not mentioned about Omega’s powers, trustworthiness, et cetera, until (according to their best insight into the original poster’s thought process) the puzzle is as hard as the original poster apparently thought it was.
I just re-read it more carefully, and I don’t see where it says that I can assume that Omega is telling the truth...
...but even if it did, my questions still stand, starting with how do I know that Omega is telling the truth? I cannot at present conceive* of any circumstances under which I would believe someone making the claims that Omega makes.
As I understand it, the point of the exercise is to show how our intuitive moral judgment leads us into inconsistencies or contradictions when dealing with complex mathematical situations (which is certainly true), so my point about context being important is still relevant. Give me sufficient moral context, and I’ll give you a moral determination that is consistent—but without that context, intuition is essentially dividing by zero to fill in the gaps.
*Without using my imagination to fill in some very large blanks, anyway, which means I could end up with a substantially different scenario from the one intended.
It’s a convention about Omega that Omega’s reliability is altogether beyond reproach. This is, of course, completely implausible, but it serves as a useful device to make sure that the only issues at hand are the offers Omega makes, not whether they can be expected to pan out.
Okay… this does render moot any conclusions one might draw from this exercise about the fallibility of human moral intuition.
Or was that not the point?
If the question is supposed to be considered in pure mathematical terms, then I don’t understand why I should care one way or the other; it’s like asking me if I like the number 3 better than the number 7.
The point is that Omega’s statements (about Omega itself, about the universe, etc.) are all to be taken at face value as premises in the thought experiments that feature Omega. From these premises, you attempt to derive conclusions. Entertaining variations on the thought experiment where any of the premises are in doubt is cheating (unless you can prove that they contradict one another, thereby invalidating the entire experiment).
Omega is a tool to find your true rejection, if you in fact reject something.
So what I’m supposed to do is make whatever assumptions are necessary to render the question free of any side-effects, and then consider it...
So, let me take a stab at answering the question, given my revised understanding.
“If you pay me just one penny, I’ll replace your 80% chance of living for 10^(10^10) years, with a 79.99992% chance of living 10^(10^(10^10)) years.” …with further shaving-off of survival odds in exchange for life-extension by truly Vast orders of magnitude.
First off, I can’t bring myself to care about the difference; both are incomprehensibly long amounts of time.
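To make concrete just how incomparable these quantities are, here is a minimal sketch (my own illustration; the only numbers taken from the offer are the two probabilities and the two power towers). Neither lifespan fits in any ordinary numeric type, so the comparison has to happen in log-of-log space:

```python
import math

# The quoted offer: 80% chance of 10^(10^10) years versus a 79.99992%
# chance of 10^(10^(10^10)) years. Neither lifespan fits in a float
# (floats top out near 1.8e308), so we compare logarithms of logarithms.

log10_keep = 10**10                    # log10 of 10^(10^10): a 1 followed by 10^10 zeros
loglog_keep = math.log10(log10_keep)   # log10(log10(keep)) = 10.0
loglog_pay = 10**10                    # log10(log10(10^(10^(10^10)))) = 10^10

print(f"log10(log10(keep)) = {loglog_keep}")     # 10.0
print(f"log10(log10(pay))  = {loglog_pay:.2e}")  # 1.00e+10

# The ratio of expected years, (0.7999992 * 10^(10^(10^10))) / (0.8 * 10^(10^10)),
# is about 10^(10^(10^10) - 10^10): the probability haircut is invisible next
# to the lifespan gain, which is why naive expectation says take the deal.
```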
Also, my natural tendency is to avoid “deal sweeteners”, presumably because in the real world this would be the “switch” part of the “bait-and-switch”—but Omega is 100% trustworthy, so I don’t need to worry—which means I need to specifically override my natural “decision hysteresis” and consider this as an initial choice to be made.
Is it cheating to let the “real world” intrude in the form of the following thought?:
If, by the time 10^^3 years have elapsed, I or my civilization have not developed some more controllable means of might-as-well-be-immortality, then I’m probably not going to care too much how long I live past the end of my civilization, much less the end of the universe.
...or am I simply supposed to think of “years of life” as a commodity, like money? (The ensuing monetary analogies would seem to imply this...) Too much of anything, though—money or time—becomes meaningless when multiplied further:
Time: Do I assume my friends get to come with me, and that together we will find some way to survive the inevitable maximization of entropy?
Money: After I’ve bought the earth, and the rights to the rest of the solar system and any other planets we’re able to find with the infinite improbability drive developed by the laboratories I paid for, what do we do with the other $0.99999 x 10^^whatever? (And how do I spend the first part of that money without causing a global economic crisis that will make this one look like a slow day at the taco stand? Oh, wait, though, I’m probably supposed to assume I earned it legitimately by contributing that much value to the global economy… how??? Mind boggles, scenario fails.)
In other words… Omega can have the penny, because it’s totally not about the penny, but I don’t see any point in starting down the road of shaving off probability-points in exchange for orders of magnitude, no matter how large.
In fact, I’d be more inclined to go the other way, if that were an option—reducing the likelihood of death in exchange for a shorter life. (I’m not quite clear on whether this could be reverse-extrapolated from the examples given.) I suspect a thousand years would be enough; give me that, and I can get the rest for myself. (Or am I supposed to assume that I will never be able to extend my life beyond the years Omega gives me? If so, we’re getting way too mystical and into premises that seem like they would force me to revise my understanding of reality in some significant way.)
So I guess my primary answer to Eliezer’s question is that I don’t even start down the garden path because I’m more inclined to walk the other way.
Am I still missing anything?
Please stop allowing your practical considerations to get in the way of the pure, beautiful counterfactual!
Seriously though, either you allow yourself to suspend practicalities and consider pure decision theory, or you don’t. This is a pure maths problem; you can’t equate it to ‘John has 4 apples.’ Here, John has 3^^^3 apples, causing your mind to break. Forget the apples and years; consider utility!
As I said somewhere earlier (points vaguely upward), my impression was that this was not actually intended as a pure mathematical problem but rather as an example of how our innate decision-making abilities (morality? intuition?) don’t do well with big numbers.
If this is not the case, then why phrase the question as a word problem with a moral decision to be made? Why not simply ask it in pure mathematical terms?
This was my initial reaction as well: ask if I can go the other way until we’re at, say, 1000 years. But if you truly take the problem at face value (we’re negotiating with Omega, and the whole point of Omega is that he neatly lops off alternatives for the purposes of the thought experiment) and are negotiating for your total lifespan, no more and no less, then yes, I think you’d be forced to come up with a rule.
I think my “true rejection”, then, if I’m understanding the term correctly, is the idea that we live in a universe where such absolute certainties could exist—or at least where for-all-practical-purposes certainties can exist without any further context.
This problem seems to have an obvious “shut up and multiply” answer (take the deal), but our normal intuitions scream out against it. We can easily imagine some negligible chance of living through the next hour, but we just can’t imagine trusting some dude enough to take that chance, nor can we properly imagine a period longer than some epochal span of time.
Since our inability to properly grok these elements of the problem is the fulcrum on which our difficulty balances, it seems more reasonable than usual to question Omega and her claims.
(This problem seems as easy to me as specks vs torture: in both cases you need to shut up and multiply, and in both cases you need to quiet your screaming intuitions—they were trained against different patterns.)
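For what it’s worth, the multiplication really is one-sided at every individual step. A sketch, extrapolating a hypothetical schedule from the one quoted trade (each step costs 8e-7 of survival probability and buys one more level on the power tower; the actual dialogue’s schedule may differ):

```python
# Hypothetical garden path: each trade subtracts 8e-7 from survival
# probability (the 0.80 -> 0.7999992 step quoted above) and adds one
# exponentiation level to the lifespan's power tower.

survival = 0.80
tower = 3   # lifespan = 10^10^10 years, written as a tower of height 3

for step in range(1, 6):
    new_survival = survival - 8e-7
    # Each step multiplies the payoff by ~10^(previous lifespan), while the
    # probability only shrinks by a factor just under 1:
    print(f"step {step}: survival {new_survival:.7f}, tower height {tower + 1}, "
          f"probability ratio {new_survival / survival:.9f}")
    survival, tower = new_survival, tower + 1

# Under this schedule it takes about 0.8 / 8e-7 = one million steps to burn
# through all the survival probability. Every single step is an enormous
# expected-value win, yet the composition of them is near-certain death
# today -- which is exactly the garden path the dilemma sets up.
```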
I think this is one of the biggest problems with these examples. It is theoretically impossible (assuming your current life history has finite Kolmogorov complexity) to gather enough evidence to trust someone completely.
To me it seems like a fundamental (and mathematically quantifiable!) problem with these hypothetical situations: if a rational agent (one that uses Occam’s razor to model reality) encounters a really complicated god-like being that does all kinds of impossible-looking things, then the agent would rather conclude that his brain is not working properly (or maybe that he is a Boltzmann brain), which would still be a simpler explanation than assuming the reality of Omega.
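As a toy version of that quantification (the bit counts below are invented purely for illustration; Kolmogorov complexity is uncomputable, so real values could at best be bounded):

```python
# Under a Solomonoff-style prior, P(hypothesis) is roughly 2^(-K), where K
# is the hypothesis's description length in bits. Hypothetical numbers:

K_glitch = 1_000       # "my brain is malfunctioning / I am hallucinating"
K_omega = 1_000_000    # "a physically real being with Omega's stated powers"

# Prior odds of (glitch : real Omega):
log2_odds = K_omega - K_glitch
print(f"prior odds ~ 2^{log2_odds} : 1 in favour of the glitch")

# Evidence shifts these odds by its likelihood ratio, but any demonstration
# Omega performs is also predicted by the malfunction hypothesis, so each
# observation buys few discriminating bits -- and reaching p = 1 exactly
# would require infinitely many, which is the "theoretically impossible"
# point above.
```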
“Contradicting the premises is a common failure mode for humans attacking difficult problems.”

Failing to question them is another. In the political world, the power to define the problem trumps the power to solve it.
Within the terms of this problem, one is supposed to take Omega’s claims as axiomatically true. p=1, not 1-epsilon for even an unimaginably small epsilon. This is unlike Newcomb’s problem, where an ordinary, imaginable sort of confidence is all that is required.
Thinking outside that box, however, there’s a genuine issue around the question of what it would take to rationally accept Omega’s propositions involving such ginormous numbers. I notice that Christian Szegedy has been voted up for saying that in more technical language.
“Consider someone answering the Fox-Goose-Grain puzzle with ‘I would swim across’ or ‘I would look for a second boat’.”

These are answers worth giving, especially by someone who can also solve the problem on its own terms.
Side note: ya know, it would be really nice if there were some way for a negative vote to be accompanied by some explanation of what the voter didn’t like. My comment here got one negative vote, and I have no idea at all why—so I am unable to take any corrective action, either with regard to this comment or to any future comments I may make.
(I suppose the voter could have replied to the comment to explain what the problem was, but then they would have surrendered their anonymity.)
That assumes the people downvoting are doing so with some well-thought-out intention.