I was recently reminded of the 2023 conversation between Aryeh Englander and Eliezer Yudkowsky quoted at the end of this post about model uncertainty. I re-read it today, along with all of the other comments on Aryeh’s Facebook post, and I still think that Aryeh’s perspective seems reasonable while Eliezer-and-Rob’s perspective seems to lack justification. That is, despite the conversation, it doesn’t seem like Eliezer’s comments about not being able to milk uncertainty into expecting good outcomes are actually an adequate answer to Aryeh’s question about why Eliezer is so confident that his model is correct and that everyone else’s models (of those with much lower p(doom from AI)) are wrong.
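To make the “milking uncertainty” point concrete, here is a toy model-averaging sketch (the weights and numbers are mine and purely illustrative, not anyone’s actual credences). If you split your weight across competing models $M_i$, your overall credence is

$$P(\text{doom}) = \sum_i P(M_i)\, P(\text{doom} \mid M_i),$$

so putting, say, 50% weight on a model with $P(\text{doom} \mid M_1) = 0.99$ and 50% on a more optimistic model with $P(\text{doom} \mid M_2) = 0.2$ yields roughly $0.6$. As I read the exchange, Eliezer’s reply is that this kind of averaging only helps if the optimistic models deserve their weight, which is exactly the confidence claim Aryeh was asking him to justify.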
When I first read the quoted conversation a few years ago I didn’t think it was a major crux, but now I’m leaning toward thinking that this epistemological point is probably a major factor in why Eliezer’s credence that building ASI anytime soon leads to doom is ~99% while mine is much lower. (My p(doom from AI) is ~65%, my p(extinction from AI by 2100) is ~20%, and my p(doom from AI by 2100) is ~35%.) Just wanted to note that I’ve updated on this point being a major crux.