I think we do disagree on if it’s a good idea to widely spread as a message “HEY SUICIDAL PEOPLE HAVE YOU REALIZED THAT IF YOU KILL YOURSELF EVERYONE WILL SAY NICE THINGS ABOUT YOU AND WORK ON SOLVING PROBLEMS YOU CARE ABOUT LET’S MAKE SURE TO HIGHLIGHT THIS EXTENSIVELY”.
Paperclip Minimizer
I think we agree on this and we only miscommunicated with each other. Aumann points for both of us, I guess.
This isn’t what “conflict theory” means. Conflict theory is a specific theory about the nature of conflict, one that says conflict is inevitable. Conflict theory doesn’t simply mean that conflict exists.
But if the blackmail information is a good thing to publish, then blackmailing is still immoral, because the information should be published and people should be incentivized to publish it, not to withhold it. We, as a society, should ensure that if, say, someone routinely engages in kidnapping children to harvest their organs, and someone else knows this, then she is incentivized to send this information to the relevant authorities rather than keep it to herself, for reasons that are, I hope, obvious.
Excellent article! You might want to add some trigger warnings, though.
edit: why so many downvotes in so little time?
More generally, commenting isn’t a good way to train oneself as a rationalist, but blogging is.
See my answer in Ozy’s subthread.
Yes, this is the whole point of the first part of the article.
But don’t you need a gears-level model of how blackmail is bad in order to think about how dystopian a hypothetical legal-blackmail society is?
Note the framing. Not “should blackmail be legal?” but rather “why should blackmail be illegal?” Thinking for five seconds (or minutes) about a hypothetical legal-blackmail society should point to obviously dystopian results. This is not subtle. One could write the young adult novel, but what would even be the point.
Of course, that is not an argument. Not evidence.
What? From a consequentialist point of view, of course it is. If a policy (and “make blackmail legal” is a policy) probably has bad consequences, then it is a bad policy.
Never heard of a prank like this; it sounds weird.
Yes, you’re right, some people raised this in the /r/ControlProblem subreddit. I fixed this.
This one is not a central example, since I’ve not seen any VNM-proponent put it in quite these terms. A citation for this would be nice. In any case, the sort of thing you cite is not really my primary objection to VNM (insofar as I even have “objections” to the theorem itself rather than to the irresponsible way in which it’s often used), so we can let this pass.
VNM is used to show why you need to have a utility function if you don’t want to get Dutch-booked. It’s not something the OP invented; it’s the whole point of VNM. One wonders what you thought VNM was about.
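To make the Dutch-booking point concrete, here is a minimal money-pump sketch of my own (not from the thread): an agent whose preferences are cyclic, and so cannot be represented by any utility function, will pay a small fee for each “upgrade” and can be cycled forever, losing money without bound.

```python
# Money-pump sketch (illustrative): an agent with cyclic preferences
# A > B > C > A pays a small fee for each trade up to something it
# prefers, so a trader can cycle it back to where it started, poorer.

prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means x is preferred to y

holding, wealth, fee = "A", 10.0, 0.5
offers = ["C", "B", "A", "C", "B", "A"]  # two full cycles of offers
for offer in offers:
    if (offer, holding) in prefers:  # agent strictly prefers the offer...
        holding = offer              # ...so it trades and pays the fee
        wealth -= fee

print(holding, wealth)  # back at "A", but 3.0 poorer after two cycles
```

The point of the sketch is only that cyclic preferences admit no consistent “price”; any VNM-representable agent would refuse at least one of these trades.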
Yes, this is exactly the claim under dispute. This is the one you need to be defending, seriously and in detail.
That we face trade-offs in the real world is a claim under dispute?
Ditto.
Another way of phrasing it is that we can model “ignore” as a choice, and derive the VNM theorem just as usual.
Ditto again. I have asked for a demonstration of this claim many times, when I’ve seen Dutch Books brought up on Less Wrong and in related contexts. I’ve never gotten so much as a serious attempt at a response. I ask you the same: demonstrate, please, and with (real-world!) examples.
Ditto.
Once again, please provide some real-world examples of when this applies.
OP said it: every time we make a decision under uncertainty. Every decision under uncertainty can be modeled as a bet, and Dutch book theorems are derived as usual.
Fixed ;)
My position on anthropics is that anthropics is grounded in updateless decision theory, which AFAIK leads in practice to full non-indexical conditioning.
I’m surprised that you’re mentioning only non-negative utilitarianism and deontology, rather than the capability utilitarianism you recently signal-boosted, which I think is a more psychologically realistic explanation of people’s reactions to the idea of wireheading.
It depends on the rationalist space in question. LW isn’t in the IDW, given that it’s politics-free, while the /r/slatestarcodex subreddit is inside the IDW, and many of its members describe it as such.
(It is incidentally 1. more prone to comments showing a lack of familiarity with basic rationalist/Sequences concepts, and 2. more prone to uncharitable culture-warring and casual bigotry (e.g. misgendering). I think the three may be correlated.)
The world being turned into computronium in order to solve the AI alignment problem would certainly be an ironic end to it.
It seems like you are the one doing some kind of motte-and-bailey, given that you made a post called “Wirehead your Chickens” arguing for wireheading chickens, with a rather dismissive tone toward the opposing side, and now you’re saying the real point was that negative utilitarian rhetoric is overemphasized compared to the moral systems EAs actually use. (By the way, the prominence of negative utilitarian rhetoric is one of My Issues With EA Let Me Show You Them.)