How about ruler-of-the-universe deathism? Wouldn’t it be great if I were sole undisputed ruler of the universe? And yet, thinking that rather unlikely, I don’t even try to achieve it. I even think trying to achieve it would be counter-productive. How freakin’ defeatist is that?
That you won’t try incorporates feasibility (and can well be a correct decision, just as expecting defeat may well be correct), but value judgment doesn’t, and shouldn’t be updated on lack of said feasibility. It’s not OK to not take over the world.
There is no value in trying.
I even think trying to achieve it would be counter-productive.
I think that if I took over the world it might cause me to go Unfriendly; that is, there’s a nontrivial chance that the values of a DSimon that rules the world would diverge from my current values sharply and somewhat quickly.
Basically, I just don’t think I’m immune to corruption, so I don’t personally want to rule the world. However, I do wish that the world had an effective ruler that shared my current values.
See this comment. The intended meaning is managing to get your values to successfully optimize the world, not for your fallible human mind to issue orders.
Your actions are pretty “Unfriendly” even now, to the extent they don’t further your values because of poor knowledge of what you actually want and poor ability to form efficient plans.
I don’t think you know what “OK” means.
Yes, that was some rhetorical applause-lighting on my part, with little care about whether you meant what my post seemed to assume you meant. I think the point is worth making (with the deathist interpretation of “OK”), even if it doesn’t actually apply to your position or Ben’s.
Unless you know you’re kind of a git or, more generally, your value system itself doesn’t rate ‘you taking over the world’ highly. I agree with your position though.
It is interesting to note that Robin’s comment is all valid when considered independently. The error he makes is that he presents it as a reply to your argument. “Should” is not determined by “probably will”.
Unless you know you’re kind of a git or, more generally, your value system itself doesn’t rate ‘you taking over the world’ highly.
It’s an instrumental goal, it doesn’t have to be valuable in itself. If you don’t want your “personal attitude” to apply to the world as a whole, that reflects the fact that your values disagree with your personal attitude, and you prefer for the world to be controlled by your values rather than your personal attitude.
Taking over the world as a human ruler is certainly not what I meant, and I expect it is a bad idea with bad expected consequences (apart from independent reasons in its favor, like being in a position to better manage existential risks).
It’s an instrumental goal, it doesn’t have to be valuable in itself.
The point being that it can be a terminal anti-goal. People could (and some of them probably do) value not-taking-over-the-world very highly. Similarly, there are people who actually do want to die after the normal allotted years, completely independently of sour-grapes updating. I think they are silly, but it is their values that matter to them, not my evaluation thereof.
People could (and some of them probably do) value not-taking-over-the-world very highly.
This is a statement about valuation of states of the world, a valuation that is best satisfied by some form of taking over the world (probably much more subtle than what gets classified so by the valuation itself).
I think they are silly, but it is their values that matter to them, not my evaluation thereof.
It’s still your evaluation of their situation that says whether you should consider their opinion on the matter of their values, or whether you know what they value better than they do. What is the epistemic content of your thinking they are silly?
I do not agree.