What is the difference between fun theory and political theory?
ETA: Did you edit your comment? I didn’t see some of the stuff at first.
...his ethical theory doesn’t really fit neatly into the deontological/consequentialist dichotomy anyway. Arguably, his ethics/political theory amounts to consequentialism with “side-constraints” (that can even be violated in extreme circumstances). It doesn’t seem to be any less consequentialist than, say, rule-utilitarianism.
but it’s still not consequentialist, whereas consequentialism is correct.
I think so, but I also think the Less Wrong ethical doctrine is wrong. At this point I think non-cognitivism is more probable than consequentialism (ask me next week and I might not; I go back and forth on the subject).
I still believe in consequentialism, as do most (presumably?) people on Less Wrong.
Okay, that and your belief that rule-utilitarianism isn’t consequentialism leads me to think that your version of consequentialism is roughly “if you’re attempting to be an FAI and you’re not doing lots of multiplication then you’re doing it wrong”. Too far off?
Instrumental vs terminal goals. Consequentialism is the ideal, but we can’t implement it, so we have to approximate it with deontological rules due to the limitations of our brains. The rules don’t get their moral authority from nowhere; they depend on being useful for reaching the actual goal. Or: the only reason we follow the rules is that we know we’ll get a worse outcome if we don’t.
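The point about rules borrowing their authority from outcomes can be put in concrete terms with a toy sketch (everything here — the actions, the probabilities, the utilities — is a hypothetical illustration, not anything from the thread): an ideal consequentialist computes expected utility over outcomes, while a bounded agent just follows a fixed rule; the rule is only worth keeping insofar as it tends to agree with the ideal calculation.

```python
def expected_utility(action, outcomes):
    # Ideal consequentialist evaluation: probability-weighted sum of utilities.
    return sum(p * u for p, u in outcomes[action])

def rule_based_choice():
    # Cheap deontological stand-in: no multiplication, just follow the rule.
    return "keep_promise"

# Hypothetical decision problem: (probability, utility) pairs per action.
outcomes = {
    "keep_promise":  [(0.9, 10), (0.1, -5)],   # EU = 9.0 - 0.5 = 8.5
    "break_promise": [(0.5, 12), (0.5, -20)],  # EU = 6.0 - 10.0 = -4.0
}

ideal = max(outcomes, key=lambda a: expected_utility(a, outcomes))
cheap = rule_based_choice()

# The rule earns its keep only because it matches the ideal choice here.
print(ideal, cheap, ideal == cheap)
```

In this made-up setup the rule and the full calculation agree; if the numbers were changed so that they diverged badly and systematically, the instrumental argument above says the rule would lose its authority.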
Is fun theory not relevant to lesswrongers?
I still believe in consequentialism, as do most (presumably?) people on Less Wrong.
What do you mean by this? I know it doesn’t mean that humans should generally use consequentialist reasoning, for example.
It means that the right way to come up with deontological rules for humans is by thinking of them in the framework discussed in that post.
It’s the difference between a priori rules and a posteriori rules, I guess?
I’m all for a posteriori rules, but not a priori rules.