This has been pointed out a couple of times, but Eliezer and the other leaders of SIAI don’t have nearly as high a standard of living as they could easily have working for a big tech company, which doesn’t require any extraordinary level of skill. Given that established programmers have been impressed with EY’s general intelligence and intuition, I find it highly likely that he could have gone this route if he’d wanted.
Now, you could allege that he instead became the poorly paid head of an x-risk charity in order to feel more self-important. But suggesting the motive of greed is nonsensical.
(edited to delete snark)
I have no problem with how much he earns.
If he needs to spend contributed money on his personal fun, that is completely justifiable.
If he needed a big car to impress donors, that would be justifiable.
I have called him the most intelligent person I know a few times; it is all on record.
I have said a few times that I’m less worried about dying permanently because there is someone like Yudkowsky who contains all my values and much more.
That you people constantly try to accuse me of base motives only reinforces my perception that there is not enough doubt and criticism here. All I’m trying to argue is this: if you take low-probability, high-risk possibilities seriously, then I’m surprised nobody ever talks about the possibility that Yudkowsky or the SIAI might be a risk themselves, and that one could take simple measures to reduce that possibility. Given your set of beliefs, these people are going to code and implement the goal system of a fooming AI, yet everyone talks only about the friendliness of the AI and not about the humans who are paid to create it with your money.
I’m not the person you should worry about, although I have no particular problem with musing about the possibility that I work for some Cthulhu institute. That doesn’t change much about what I am arguing, though.
By the way, I was out of line with my last sentence in the grandparent. Sorry about that.