1) Isomorphic to my “what if you know you’ll do something stupid if you learn that your girlfriend has cheated on you” example. To reiterate, any negative effects of learning are caused by false beliefs. Prioritize which way you’d rather be wrong until you become strong enough to just not be predictably wrong, sure. But become stronger so that you can handle the truths you may encounter.
2) This clearly isn’t a conflict between epistemic and instrumental rationality. This is a question about arming your enemies vs not doing so, and the answer there is obvious. To reiterate what I said last time, this stuff all falls apart once you realize that these are two entirely separate systems both with their own beliefs and values and you posit that the subsystem in control is not the subsystem that is correct and shares your values. Epistemic rationality doesn’t mean giving your stalker your new address.
3) “Unfortunately studies have shown that in this case the deception is necessary, and the placebo effect won’t take hold without it”. This is assuming your conclusion. It’s like saying “Unfortunately, in my made up hypothetical that doesn’t actually exist, studies have shown that some bachelors are married, so now what do you say when you meet a married bachelor!”. I say you’re making stuff up and that no such thing exists. Show me the studies, and I’ll show you where they went wrong.
You can’t just throw a blanket over a box and say “now that you can no longer see the gears, imagine that there’s a perpetual motion machine in there!” and expect it to have any real world significance. If someone showed me a black box that put out more energy than went into it and persisted longer than known energy storage/conversion mechanisms could do, I would first look under the box for any shenanigans that a magician might try to pull. Next I would measure the electromagnetic energy in the room and check for wireless power transfer. Even if I found none of those, I would first expect that this guy is a better magician than I am anti-magician, and would not begin to doubt the physics. Even if I became assured that it wasn’t magician trickery and it really wasn’t sneaking energy in somehow, I would then start to suspect that he managed to build a nuclear reactor smaller than I thought possible, or otherwise discovered new physics that makes this possible. I would then proceed to tear the box apart and find out what assumptions I’m missing. At the point where it became likely that it wasn’t new physics but rather incorrect old physics, I would continually reference the underlying justifications of the laws of thermodynamics and see if I could start to see how one of the founding assumptions could be failing to hold.
Not until I had done all that would I even start to believe that it is genuinely what it claims to be. The reasons to believe in the laws of thermodynamics are simply so much stronger than the reasons to believe people claiming to have perpetual motion machines that if your first response isn’t to challenge the hypothetical hard, then you’re making a mistake.
“Knowing more true things without knowing more false things leads to worse results by the values of the system that is making the decision even when the system is working properly” is a similarly extraordinary claim that calls for extraordinary evidence. The first thing to look for, besides a complete failure to even meet the description, is for false beliefs being smuggled in. In every case you’ve given, it’s been one or the other of these, and that’s not likely to change.
If you want to challenge one of the fundamental laws of rationality, you have to produce a working prototype, and it has to be able to show where the founding assumptions went wrong. You can’t simply cast a blanket over the box and declare that it is now “possible” since you “can’t see” that it’s not impossible. Endeavor to open black boxes and see the gears, not close your eyes to them and deliberately reason out of ignorance. Because when you do, you’ll start to see the path towards making both your epistemic and your instrumental rationality work better.
4) Throw it away like all spam. Your attention is precious, and you should spend it learning the things that you expect to help you the most, not about seagulls. If you want though, you can use this as an exercise in becoming more resilient and/or learning about the nature of human psychological frailty.
It’s worth noticing, though, that you didn’t use a real-world example, and that there might be reasons for this.
5) This is just 2 again.
6) Maybe? As stated, probably not. There are a few different possibilities here though, and I think it makes more sense to address them individually.
a) The torture is physically damaging, like peeling one’s skin back or slowly breaking every bone in one’s body.
In this case, obviously not. I’m also curious what it feels like to be shot in the leg, but the price of that information is more than I’m willing to spend. If I learn what that feels like, then I don’t get to learn what I would have been able to accomplish if I could still walk well. There’s no conflict between epistemic and instrumental rationality here.
b) The “torture” is guaranteed to be both safe and non-physically-damaging, and won’t keep me prisoner too long when I could be doing other things.
When I learned about tarantula hawks and that their sting was supposedly both debilitatingly painful and also perfectly non-damaging and safe, I went pretty far out of my way to acquire them and provoke them to sting me. Fear of non-damaging things is a failing to be stamped out. When you accept that the scary thing truly is sufficiently non-dangerous, fear just becomes excitement anyway.
If these mysterious white room people think they can bring me a challenge while keeping things sufficiently safe and non-physically-damaging, I’d probably call their bluff and push that button to see what they’ve got.
c) This “torture” really is enough to push me sufficiently past my limits of composure that there will be lasting psychological damage.
I think this is actually harder than you think unless you also cross the lines on physical damage, risk, or get to spend a lot of time at it. However, it is conceivable and so in this case we’re back to being another example of number one. If I’m pretty sure it won’t be any worse than this, I’d go for it.
This whole “epistemic vs instrumental rationality” thing really is just a failure to do epistemic rationality right, and when you peek into the black box instead of intentionally keeping it covered you can start to see why.
I think your comparison to spam in #4 works well. Reading spam has negative expected utility and small possible positive utility. Negative-sum advertising in general, and spam in particular, is a real-world example, at least in principle.