How did Eliezer Yudkowsky compute that it would maximize his expected utility to visit New York?
Why would anyone do that? (In the sense that your footnotes suggest this should be taken: quantifying over all possible worlds, trying to explicitly ground utility, and so on.)
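Taken literally, the picture the footnotes paint would be something like the standard decision-theoretic schema (a sketch only; $A$, $W$, $P$, and $U$ are just the usual action-set, possible-worlds, credence, and utility symbols, not anything from the post):

$$a^* = \arg\max_{a \in A} \sum_{w \in W} P(w \mid a)\, U(w)$$

That is: enumerate every possible world, assign each a grounded utility, and pick the action maximizing the expectation. Nobody actually runs that computation to decide whether to fly to New York.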
We were human beings long before we started reading about rationality. I imagine EY looked at his schedule, his bank account, and the cost of a round-trip flight to New York, and said, “This might be cool, let’s do it.”
At the end of the day, everyone is still a human being. Everything adds up to normal, whether normality’s perfectly optimized or not.
Yes, my model agrees with that. But then it would be fairer to speak about things as they really are. To say, “I thought about it for two minutes, and it seemed cool and without obvious problems, so I decided to do it.” You know, like an average mortal would.
Speaking in a manner that suggests decisions are made otherwise seems to me just as dishonest as when a theist says “I heard Jesus speaking to me” when in reality it was something like “I got this idea, it had no obvious problems, and it seemed like it could raise my status in my religious community.”
Not pretending to be something that I am not—isn’t this a part of the rationalist creed?
If people are optimizing their expected utility functions, I want to believe they are optimizing their expected utility functions. If people are choosing on a heuristic and rationalizing later, I want to believe they are choosing on a heuristic and rationalizing later. Let me not become attached to status in a rationalist community.
But then it would be fairer to speak about things as they really are.
I don’t understand. Who is not speaking about things as they really are? EY doesn’t even mention expected utility in his post. That was all a figment of someone’s imagination.
If people are optimizing their expected utility functions, I want to believe they are optimizing their expected utility functions. If people are choosing on a heuristic and rationalizing later, I want to believe they are choosing on a heuristic and rationalizing later. Let me not become attached to status in a rationalist community.
No need to Gendlin. People aren’t optimizing their utility functions, because they don’t have conscious access to their utility functions.