Actually, no it isn’t. What is bad for you is for the blackmailer to learn that you are aware of the blackmail.
Acquiring information is never bad, in and of itself. Allowing others to gain information can be bad for you. Speaking as an egoist, that is.
ETA: I now notice that gjm already made this point.
This seems incorrect. It doesn't really matter to the blackmailer whether you're actually aware of the blackmail or not; what matters is his estimate of the chance that you know.
Blackmail is profitable if (gain from successful blackmail) × (chance you'll know about it) × (chance you'll give in) > (cost of blackmail).
Unless you can guarantee a 100% solid precommitment to not giving in to blackmail (and let's face it, friendly AI is easier than that), the more you increase the chance of knowing about it, the more blackmail you'll face.
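To make that inequality concrete, here is a minimal sketch in Python. The numbers are made up purely for illustration; the gain, cost, and probabilities are all hypothetical and not taken from the comment:

```python
# Minimal sketch of the expected-value condition above, with made-up
# illustrative numbers (every parameter here is hypothetical).

def blackmail_expected_profit(gain, p_know, p_give_in, cost):
    """Blackmailer's expected profit from one attempt:
    gain * P(victim knows about it) * P(victim gives in) - cost."""
    return gain * p_know * p_give_in - cost

# An attempt is worth making whenever the expected profit is positive.
for p_know in (0.1, 0.5, 0.9):
    profit = blackmail_expected_profit(gain=1000, p_know=p_know,
                                       p_give_in=0.3, cost=50)
    print(f"P(victim knows) = {p_know}: expected profit = {profit:+.0f}")
```

With these particular numbers, raising the chance that the victim knows from 0.1 to 0.5 flips the attempt from unprofitable to profitable, which is exactly the worry about advertising your own awareness.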
That idea ("acquiring information is never bad, in and of itself") is usually regarded as being incorrect around here; see, e.g., here.
For instance, the document states that one example is “to measure the placebo effect”. In that case, if you find out which treatment you actually got, you break the blinding, which messes up the trial, and you have to start all over again.
There is a more defensible idea: that acquiring accurate information is never bad, provided you are a super-rational uber-agent who is able to lie flawlessly, erase information perfectly, etc.
However, that is counterfactual. If you are a human, then in practice acquiring accurate information can harm you, and of course acquiring deceptive or inaccurate information can cause real problems.
Unless there’s a placebo-effect placebo effect! Seriously, I think I’ve experienced that. (I’ll take a pill and immediately feel better because I believe the placebo effect will make me feel better.) But maybe it’s too hard to disentangle.
I continue to think that I am blatantly crazy for still not having found out how strong placebo effects tend to be and which big factors affect that.