A poll at the 2008 Global Catastrophic Risks conference put the existential risk from machine intelligence at 5%.
Compare this with a Yudkowsky quote from 2005:
And if Novamente should ever cross the finish line, we all die.
This looks like a rather different probability estimate; it seems to me to be a highly overconfident one.
They’re probabilities for two different things. The 5% estimate is for P(AI is created & AI is unfriendly), while Yudkowsky’s estimate is for P(AI is unfriendly | AI is created & Novamente finishes first).
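To make the distinction concrete, here is a minimal sketch with made-up numbers (every figure below is an illustrative assumption, not anyone's actual estimate): a small joint probability is perfectly compatible with a near-certain conditional one, because the conditional assumes facts that the joint does not.

```python
# Illustrative sketch only: all probabilities here are assumed for the example.

p_ai_created = 0.10                     # assumed: P(AI is created)
p_unfriendly_given_created = 0.50       # assumed: P(unfriendly | created)

# The survey-style number: P(AI is created AND AI is unfriendly)
p_joint = p_ai_created * p_unfriendly_given_created
print(f"P(created & unfriendly) = {p_joint:.2%}")            # 5.00%

# The Yudkowsky-style number: P(unfriendly | created & Novamente finishes first).
# Nothing forces this to match the joint figure, since it conditions on a
# specific (and possibly unlikely) project winning the race.
p_unfriendly_given_novamente_first = 1.0                      # assumed for illustration
print(f"P(unfriendly | created & Novamente first) = "
      f"{p_unfriendly_given_novamente_first:.0%}")            # 100%
```

The point of the arithmetic is only that the two quantities can diverge arbitrarily: driving the conditional toward 1 says nothing about the joint estimate, and vice versa.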