As everyone knows, this is the most rational and non-obnoxious way to think about incentives and disincentives.
In all seriousness, coming up with extreme, contrived examples is a very good way to test the limits of moral criteria, methods of reasoning, etc. Often, a problem that shows up most obviously at the extreme fringes is also liable, less obviously, to affect reasoning in more plausible real-world scenarios, so knowing where a system obviously fails is a good starting point.
Of course, we’re generally relying on intuition to determine what a “failure” is (many people would hear that utilitarianism favours TORTURE over SPECKS and deem that a failure of utilitarianism rather than a failure of intuition), so this method is also good for probing what people really believe, rather than what they claim to believe, or believe they believe. That’s a good general principle of reverse engineering: if you can figure out where a system does something weird or surprising, or merely what it does in weird or surprising cases, you can often get a better sense of the underlying algorithms. A person unfamiliar with the terminology of moral philosophy might not know whether they are a deontologist or a consequentialist or something else. Ask them whether it is right to kill a random person for no reason, and they will probably say no, whatever they are. Ask them whether it’s right to kill someone who is threatening other people, and there’s some wiggle room on both sides. But tell them to imagine that an evil wizard appears before them with a button, that pushing the button will cause a hundred people to die horrible deaths, and that if and only if they don’t push it, he will personally kill a hundred thousand people. Their answer to that ridiculous scenario will tell you far more about their moral thinking than any of the previous, more realistic examples.
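(For anyone who hasn’t seen the SPECKS argument, the arithmetic it rests on is just naive aggregation. A minimal sketch, with disutility numbers invented purely for illustration and a “mere” 10^30 people standing in for 3^^^3:)

```python
# Naive utilitarian aggregation behind TORTURE vs SPECKS.
# All numbers here are invented for illustration; 3^^^3 itself is far too
# large to write down, so 10**30 people stands in for it.

SPECK_DISUTILITY = 1e-9      # one barely-noticed dust speck, per person
TORTURE_DISUTILITY = 1e12    # fifty years of torture, one person
NUM_SPECKED = 10 ** 30       # tiny compared to 3^^^3, but already enough

total_speck_disutility = SPECK_DISUTILITY * NUM_SPECKED
print(total_speck_disutility > TORTURE_DISUTILITY)   # True
# Simple summing says the specks are worse in aggregate, so the summing
# utilitarian picks TORTURE -- the verdict many people's intuitions reject.
```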
If you ask me, the prevalence of torture scenarios on this site has very little to do with clarity and a great deal to do with a certain kind of autism-y obsession with things that might happen but probably won’t.
It’s the same mental machinery that makes people avoid sidewalk cracks or worry their parents have poisoned their food.
A lot of times it seems the “rationality” around here simply consists of an environment that enables certain neuroses and personality problems while suppressing more typical ones.
I don’t think that being fascinated by extremely low-probability but dramatic possibilities has anything to do with autism. As you imply, people in general tend to do it, though being terrified about airplane crashes might be a better example.
I’d come up with an evolutionary explanation, but a meteor would probably fall on my head if I did that.
I really don’t see how you could have drawn that conclusion. It’s not like anyone here is actually worried about being forced to choose between torture and dust specks, or being accosted by Omega and required to choose one box or two, or being counterfactually mugged. (And, if you were wondering, we don’t actually think Clippy is a real paperclip maximizer, either.) “Torture” is a convenient, agreeable stand-in for “something very strongly negatively valued” or “something you want to avoid more than almost anything else that could happen to you” in decision problems. I think it works pretty well for that purpose.
Yes, a recent now-deleted post proposed a torture scenario as something that might actually happen, but it was not a typical case and not well-received. You need to provide more evidence that more than a few people here actually worry about that sort of thing, and that it’s more than just an Omega-like abstraction used to simplify decision problems by removing loopholes that let people avoid thinking about the real question.
How about this guy a couple comments down?
Actually, I’m not sure he’s even serious, but I’ve certainly seen that argument advanced before. The parent post’s “1% chance” thing is, I’m pretty sure, a parody of the idea that you have to give anything at least a 1% chance, because it’s all so messy and how can you ever be sure?! That idea has certainly shown up on this site on several occasions recently, particularly in relation to the very extreme fringe scenarios you say help people think more clearly.
Torture scenarios have LONG been advanced in this community as more than a trolley problem with added poor-taste hyperbole. Even if you go back to the SL4 mailing list, it’s full of discussions where someone says something about AI, and someone else replies “what, so an AI is like god in this respect? What if it goes wrong? What if religious people make one? What if my mean neighbor gets uploaded? What if what if what if? WE’LL ALL BE TORTURED!”
I was serious that the probability of an explicitly described situation is orders of magnitude greater than the probability of a not-yet-chosen random scenario. In the same way, any particular hundred-digit number, once someone posts it online, becomes orders of magnitude more likely to appear elsewhere.
But I was joking in the “complaint” about the posting, because the probability both before and after the post is small enough that no reasonable person could worry about the thing happening.
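(To put rough numbers on the hundred-digit-number analogy, with every probability made up for illustration: compare the chance that an independently generated number matches a specific one against the chance that the number simply gets copied once it is sitting on a public page.)

```python
import math

# Rough illustration of "describing a scenario makes it vastly more likely",
# via the hundred-digit-number analogy. Both probabilities are made up.

P_RANDOM_MATCH = 10.0 ** -100     # an independently chosen 100-digit number
                                  # happens to equal a specific one
P_COPIED_AFTER_POST = 1e-3        # someone copies/reposts it once it has
                                  # been published somewhere (invented)

print(math.log10(P_COPIED_AFTER_POST / P_RANDOM_MATCH))
# ~97: posting the number raises the odds of seeing it again by dozens of
# orders of magnitude, while the absolute probability can stay tiny.
```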
This isn’t consistent with Roko’s post here, which took seriously the notion of a post-Singularity FAI precommitting to torturing some people for eternity. ETA: Although it does seem that part of the reason that post was downvoted heavily was that most people considered the situation ridiculous.
Pondering blue tentacle scenarios can be a displacement activity, to avoid dealing with things that really might happen, or be made to happen, here and now.
Could you give an example of how extreme examples inform realistic examples? Is there evidence that people who advocate deontology or consequentialism in one place do so in the other?
I think paradoxes/extreme examples work mainly by provoking lateral thinking, forcing us to reconsider assumptions, etc. It has nothing at all to do with the logical system under consideration. Sometimes we get lucky and hit upon an idea that goes further and with fewer exceptions; other times we don’t. In short, it’s all in the map, not in the territory.
I don’t believe in absolute consistency (whether in morality or even in, say, physics). A theory is an algorithm that works. We should be thankful that it works at all. In something like morality, I don’t expect there to be a possible systematization of it. We will only know what is moral in the far future in the only-slightly-less-far future. Self-modification has no well-defined trajectory.
Theories of the known, which are described by different physical ideas, may be equivalent in all their predictions and are hence scientifically indistinguishable. However, they are not psychologically identical when trying to move from that base into the unknown. For different views suggest different kinds of modifications which might be made and hence are not equivalent in the hypotheses one generates from them in one’s attempt to understand what is not yet understood. --Feynman