All else equal, do you disagree with: “A googolplex people dust specked x times during their lifetime without further ill effect is worse than one person dust specked 2x times during their lifetime without further ill effect” for the range concerned?
I agree with that. My point is that agreeing that “A googolplex people being dust specked every second of their life without further ill effect is worse than one person being horribly tortured for the shortest period experienceable” doesn’t oblige me to agree that “A few billion* googolplexes of people being dust specked once without further ill effect is worse than one person being horribly tortured for the shortest period experienceable”. (Unless “without further ill effect” is meant to exclude not only car accidents but superlinear personal emotional effects, but that would be stupid.)
* 1 billion seconds = 31.7 years
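The footnote’s conversion is easy to sanity-check; here’s a minimal sketch, assuming a Julian year of 365.25 days:

```python
# Convert 1 billion seconds to years (assumes a 365.25-day Julian year).
SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60  # 31,557,600 seconds
years = 1_000_000_000 / SECONDS_PER_YEAR
print(round(years, 1))  # → 31.7
```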
I think that what we’re dealing with here is more like the irrationality of trying to impose and rationalize comfortable moral absolutes in defiance of expected utility.
Since real problems never possess the degree of certainty that this dilemma does, holding certain heuristics as absolutes may be the utility-maximizing thing to do. In a realistic version of this problem, you would have to consider the results of empowering whatever agent is doing this to torture people with supposedly good but nonverifiable results. If it’s a human or group of humans, not such a good idea; if it’s a Friendly AI, maybe you can trust it, but can’t it figure out a better way to achieve the result? (There is a Pascal’s Mugging problem here.)
One more thing for TORTURErs to think about: if every one of those 3^^^3 people is willing to individually suffer a dust speck in order to prevent someone from suffering torture, is TORTURE still the right answer? I lean towards SPECK on considering this, although I’m less sure about the case of torturing 3^^^3 people for a minute each vs. 1 person for 50 years.