Thanks for the replies.
99% is not very high confidence, in log-odds—I am much more than 99% confident in many claims.
I am too. But for how many of the beliefs you’re 99+% sure of can you name several people like Paul Christiano who think you’re on the wrong side of maybe? For me, not a single example comes to mind.
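(To make the log-odds point above concrete, here’s the standard conversion, with base-2 logs chosen just for illustration:)

$$\operatorname{log\text{-}odds}(p) = \log_2\frac{p}{1-p}, \qquad \operatorname{log\text{-}odds}(0.99) = \log_2 99 \approx 6.6 \text{ bits}, \qquad \operatorname{log\text{-}odds}(0.9999) \approx 13.3 \text{ bits}$$

So in this framing, 99% sits only about half as far from 50/50 as 99.99% does.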
However, “It would be lethally dangerous to build ASIs that have the wrong goals” is not circular. You might say it lacks justification.
I agree that’s not circular. I meant that the full claim “building ASIs with the wrong goals would lead to human extinction because ‘It would be lethally dangerous to build ASIs that have the wrong goals’” is circular. “Lacks justification” would have been clearer.
For example, if they believe both that Drexlerian nanotechnology is possible and that the ASI in question would be able to build it.
I hold this background belief but don’t think it means the original claim requires little additional justification. But getting into such details is beyond the scope of this discussion thread. (Brief gesture at an explanation: Even though humans could exterminate all the ants in a backyard when they build a house, they don’t. It similarly seems plausible to me that an ASI could start building its factories on Earth, enabling it to build von Neumann probes and begin colonizing the universe, all without killing all humans on Earth. Maybe it’d drive humanity extinct by boiling the oceans, as mentioned in IABIED, but I have enough doubt in these sorts of predictions to remain <<99% confident in the claim that ‘It would be lethally dangerous [i.e., it’d lead to extinction] to build ASIs that have the wrong goals.’)
I was recently reminded of the 2023 conversation between Aryeh Englander and Eliezer Yudkowsky about model uncertainty quoted at the end of this post. I re-read it today, along with all of the other comments on Aryeh’s Facebook post, and still think that Aryeh’s perspective seems reasonable while Eliezer-and-Rob’s perspective seems to be lacking justification. That is, despite the conversation, it doesn’t seem like Eliezer’s comments about milking uncertainty into expecting good outcomes are actually an adequate answer to Aryeh’s question about why Eliezer is so confident that his model is correct and that the models of everyone with much lower p(doom from AI) are wrong.
When I first read the quoted conversation a few years ago I didn’t think it was a major crux, but now I’m leaning toward thinking that this epistemological point is probably a major factor in why Eliezer’s credence that everyone dies if anyone builds ASI anytime soon is ~99% while my credence is much lower. (My p(doom from AI) is ~65%, my p(extinction from AI by 2100) is ~20%, and my p(doom from AI by 2100) is ~35%.) Just wanted to note that I’ve updated on this point being a major crux.