As far as I can tell, LW was created explicitly with the goal of producing rationalists, one desirable side effect of which was the creation of friendly AI researchers. Decision theory plays a prominent role in Eliezer’s conception of friendly AI, since a decision theory is how the AI is supposed to figure out the right thing to do. The obvious guesses don’t work in the presence of things like other agents that can read the AI’s source code, and since that’s something that could actually happen, we need to find some non-obvious guesses.
Hey, I think your tone here comes across as condescending, which goes against the spirit of a ‘stupid questions’ thread, by causing people to believe they will lose status by posting in here.
Fair point. My apologies. Getting rid of the first sentence.
Thanks!
Data point: I didn’t parse it as condescending at all.
Did you read it before it was rephrased?
Ah, I see there was a race condition. I’ll retract my comment.