I think it’s worth dividing blackmail into two distinct types:
1. Blackmail based on information that is harmful to society.
2. Blackmail based on information that is not harmful to society, but which the victim considers private.
Your arguments hold reasonably well for the first type. For example, if someone is stealing money from the cash register where he works on a weekly basis, then we would not want such behavior to persist. But for the latter type—for example, if someone is secretly a homosexual and is afraid of what his family would say or do if they knew—I don’t think we’d want to force him ‘out of the closet’.
A possibly more serious problem is how the extortionist can escalate the stakes (similar to Zvi’s argument, if I understood it correctly): one may start by blackmailing the victim over being a homosexual, and then force him to steal money from the cash register in order to gain even more leverage over him. In other words, an intelligent blackmailer could start from type 2 but cause type 1 actions to be performed.
Lastly—blackmailers do not reveal said information to society, making everything better. They would actually prefer never to reveal that information (since doing so would destroy their ability to blackmail the victim). Instead, they profit personally from it, which may also allow the victim to continue his harmful / illicit behavior. In other words, the amount the victim pays is not a simple function of how harmful his behavior is to society; it depends on how good the blackmailer is and how much he knows. In this regard, it may be worthwhile to simply tell the authorities (assuming some ideal authorities—yeah, I know, not very realistic). They would then have the means to investigate the matter in depth and enforce the socially accepted punishment for such an offense. Note that this also means the victim would not be punished in type 2 cases.
So my bottom line is—perhaps giving people an incentive to tell the authorities about someone else’s illicit behavior is a better way of doing things, assuming the authorities aren’t too awful.
Sounds really cool—too bad the ‘more details’ document is all in Russian. It’s not as if I would go to Russia just for an RPG, but it sounds like fun and I would love to hear more details about it.
I think the title is a little misleading, and perhaps he didn’t put much emphasis on this, but it seems he isn’t claiming correct models are generally bad—just that holding correct models also has possible downsides, and it’s probably a good idea to be aware of these flaws when applying such models to reality.
Also, it seems to me that he is defining a ‘correct model’ as one whose reasoning is sound and which can be used for some applications, but which does not necessarily describe every aspect of the problem.
What word do you mean? Friendly AI? It’s a term (I’m hardly an expert, but I guess Wikipedia should be okay for this: https://en.wikipedia.org/wiki/Friendly_artificial_intelligence )