downvoted because you actually said “I would like to do whichever of these two alternatives leads to more utility.”
A) no one or almost no one thinks this way, and advice based on this sort of thinking is useless to almost everyone.
B) The entire point of the original post was that, if you try to do this, then you immediately get completely taken over by consideration of any gods you can imagine. When you say that thinking about unlikely gods is not “worth” the computational resources, you are sidestepping the very issue we are discussing. You have already decided it’s not worth thinking about tiny probabilities of huge returns.
I think he actually IS making the argument you assign a low probability to, but instead of dismissing it, I think it is extremely important to decide whether to pursue certain courses of action based on how practical they are. The entire original purpose of this community is research into AI, and while you can't choose your own utility function, you can choose an AI's. If this problem is practically insoluble, then we should design AIs with only bounded utility functions.
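To make the bounded-utility point concrete, here is a toy sketch (all numbers are made up for illustration) of why an unbounded utility function lets a tiny-probability, huge-payoff "unlikely god" dominate expected utility, while a capped utility function does not:

```python
def expected_utility(p, payoff, utility):
    # Expected utility of a gamble: receive `payoff` with probability p, else 0.
    return p * utility(payoff)

unbounded = lambda x: x                # utility grows without limit
bounded = lambda x: min(x, 1_000_000)  # utility capped at 10^6 (arbitrary cap)

p_unlikely_god = 1e-9   # tiny probability of the imagined god being real
huge_payoff = 1e30      # imagined astronomically large reward
mundane = expected_utility(0.5, 100, unbounded)  # a likely, modest bet

# Unbounded utility: the tiny-probability gamble swamps the mundane option.
print(expected_utility(p_unlikely_god, huge_payoff, unbounded) > mundane)  # True

# Bounded utility: the cap keeps the tiny probability from dominating.
print(expected_utility(p_unlikely_god, huge_payoff, bounded) < mundane)   # True
```

The design choice here is just the one the comment describes: an agent whose utility is bounded cannot be "taken over" by arbitrarily large imagined payoffs, because no payoff can outweigh more than the cap times its probability.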
downvoted because you actually said “I would like to do whichever of these two alternatives leads to more utility.”
Tim seemed to be implying that it would be absurd for unlikely gods to be the most important motive in deciding how to act, but I did not see how anything he said showed that doing so is actually a bad idea.
When you say that thinking about unlikely gods is not “worth” the computational resources, you are sidestepping the very issue we are discussing.
What? I did not say that; I said that thinking about unlikely gods might just be one's actual preference. I also pointed out that Tim did not prove that unlikely gods are more important than likely gods, so someone who accepts most of his argument might still not be motivated by "a million unlikely gods".