I have no problem with how much he earns.
If he needs to spend contributed money on his personal fun, that is completely justifiable.
If he needed a big car to impress donors, that would be justifiable.
I’ve called him the most intelligent person I know a few times; it is all on record.
I’ve said a few times that I’m less worried about dying permanently because there is someone like Yudkowsky who embodies all my values and much more.
That you people persistently accuse me of base motives only reinforces my perception that there is not enough doubt and criticism here. All I’m trying to argue is that if you take low-probability, high-risk possibilities seriously, then I’m surprised nobody ever talks about the possibility that Yudkowsky or the SIAI might be a risk themselves, and that one could take simple measures to reduce that risk. Given your set of beliefs, those people are going to code and implement the goal system of a fooming AI, yet everyone only talks about the friendliness of the AI and not about the humans who are paid, with your money, to create it.
I’m not the person you should worry about, although I have no particular problem with musing about the possibility that I work for some Cthulhu institute. That doesn’t change much about what I am arguing, though.
By the way, I was out of line with my last sentence in the grandparent. Sorry about that.