On rereviewing torture vs. specks/shampoo, I think I see something noteworthy: there are multiple separate problems here, and I haven't been considering all of them.
Problem 1: Given a large enough stake on the other end, a seemingly sacred value (not torturing people) isn't sacred.
Example: the last time I did the math on this, I calculated that the trade point was somewhere in the quintillions. That was roughly the point where it seemed better to just torture the one person than to have quintillions of people suffer the inconvenience, because the inconvenience, multiplied by 1 quintillion (10^18), came out approximately as bad as the torture when I tried to measure both. (The specific number isn't critical, just the rough order of magnitude, and note that this assumed near-100% certainty that the threat was real.)
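The trade-point arithmetic above can be sketched in a few lines. All the disutility weights here are invented placeholders for illustration; the post's point is only about the shape of the comparison, not these specific values.

```python
# Illustrative only: these disutility weights are made-up assumptions,
# chosen so the trade point lands around 10^18 as described above.
TORTURE_50_YEARS = 1e18   # assumed disutility of torturing one person for 50 years
SHAMPOO_IN_EYES = 1.0     # assumed disutility of one trivial inconvenience

def prefer_torture(n_people: float, certainty: float = 1.0) -> bool:
    """True if n_people trivial inconveniences, weighted by how certain
    we are the threat is real, outweigh the torture."""
    return n_people * SHAMPOO_IN_EYES * certainty > TORTURE_50_YEARS

print(prefer_torture(1e17))  # below the trade point: don't torture
print(prefer_torture(1e19))  # quintillions-range threats flip the choice
```

The `certainty` factor captures the parenthetical caveat: at lower confidence that the threat is real, the trade point moves proportionally higher.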
I think Problem 1 is the standard way to evaluate this. But there is also this:
Problem 2: Given a large enough stake on the other end, you need to reevaluate what is going on because you can’t handle threats of that caliber as threats.
Ergo: if you try to actually establish what 3^^^3 trivial inconveniences even means, you'll generally fail miserably. You might end up saying: “You could turn all possible histories of all possible universal branches into shampoo from the beginning of time until the projected end of time, and you STILL wouldn't have enough shampoo to actually do that, so what does that threat even mean?”
So to evaluate that, you need to temporarily make a variety of changes to how you process things, just to resolve how a threat of that level even makes sense, before you can determine whether to comply.
Problem 2 comes up sometimes, but there is also:
Problem 3: Given a sufficiently large threat, the threat itself is actually an attack, not just a threat.
For instance, someone could run the following code to print a threat to a terminal:
  print “If you don’t press this button to torture this person for 50 years, I’m going to give the following number of people a trivial inconvenience:”
  while the person is still able to press the button and hasn’t pressed it:
      print “3 [large number of Knuth’s up-arrows] 3, where the large number of Knuth’s up-arrows can be defined as:”
      wait 1 second
  do stuff
And some particularly simple threat evaluation code will see that threat and just hang, waiting for the threat to finish printing before deciding whether or not to press the button.
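One way out of that hang, sketched below under assumed names and an assumed resource-cap policy (nothing here comes from the post itself): cap how much of a threat description you are willing to read, and classify anything that exceeds the cap as malicious rather than waiting for it to finish.

```python
from itertools import islice

MAX_THREAT_TOKENS = 1000  # assumed policy: refuse to read past this many tokens

def evaluate_threat(token_stream):
    """Read at most MAX_THREAT_TOKENS of a threat description.

    A naive evaluator would consume the whole stream before deciding,
    and so would hang forever on a non-terminating printout. Capping the
    read turns that hang into a 'malicious' verdict instead (Problem 3)."""
    tokens = list(islice(token_stream, MAX_THREAT_TOKENS + 1))
    if len(tokens) > MAX_THREAT_TOKENS:
        return "malicious"  # the threat itself is the attack
    return "evaluate"       # finite description: safe to weigh normally

def endless_up_arrows():
    """The attacker's loop: up-arrows forever, the threat never resolves."""
    while True:
        yield "^"

print(evaluate_threat(iter(["3", "^", "^", "^", "3"])))  # evaluate
print(evaluate_threat(endless_up_arrows()))              # malicious
```

`islice` is what makes this safe to call on an infinite generator: it pulls at most `MAX_THREAT_TOKENS + 1` items, so the evaluator terminates even when the threat doesn't.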
So we have:
A: Threats too small to cause you to act. (Example: one person gets shampoo in their eyes.)
B: Threats large enough to cause you to act. (Example: a plausible chance that 10^20 people get shampoo in their eyes.)
C: Threats so large they do not appear to be possible based on how you understand reality, so you potentially have to reevaluate everything just to process the threat. (Example: 3^^^3 people get shampoo in their eyes.)
D: Threats so large that the threat itself should actually be treated as malicious/broken, because you will never finish resolving the threat’s size without just setting it to infinity.
So in addition to considering whether a threat is A or B (Problem 1), it seems I would also need to consider whether it is C or D (Problems 2 and 3).
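The A-through-D taxonomy can be written as a single classification function. The thresholds below are illustrative stand-ins for the rough categories above (the 10^80 figure is just the approximate atom count of the observable universe, used as a "can't physically be real" line), not values the post commits to.

```python
import math

# Illustrative thresholds for the post's rough categories, not real constants.
ACT_THRESHOLD = 1e18    # Problem 1 trade point: the quintillions range
PHYSICAL_LIMIT = 1e80   # ~atoms in the observable universe: "can't be real"

def classify_threat(n_people: float, finite_description: bool = True) -> str:
    """Map a threat's size (and whether its description even terminates)
    onto the A/B/C/D categories."""
    if not finite_description or math.isinf(n_people):
        return "D"  # the threat itself is an attack (Problem 3)
    if n_people > PHYSICAL_LIMIT:
        return "C"  # forces reevaluating your model of reality (Problem 2)
    if n_people >= ACT_THRESHOLD:
        return "B"  # large enough to act on (Problem 1)
    return "A"      # too small to act on

print(classify_threat(1))       # A
print(classify_threat(1e20))    # B
print(classify_threat(1e100))   # C
print(classify_threat(float("inf"), finite_description=False))  # D
```

Note that 3^^^3 can't be represented as a number at all here, which is the point: anything whose description never resolves to a finite value falls through to D by the `finite_description` flag, not by comparison.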
Is that accurate?