I don’t literally think we are doomed. I’m just rather pessimistic about our chances of aligning AI if it is happening in the next 5 years or so.
My confidence in prosaic AGI is 30% to Ethan’s 25%, and my confidence in “more than 2100” is 15% to Ethan’s… Oh wait he has 15% too, huh. I thought he had less.
I’m somewhat confused as to how being slightly more confident on one estimate and slightly less confident on another equates to doom, which is a pretty strong claim imo.
I updated to 15% (from 5%) after some feedback so you’re right that I had less originally :)
Relatedly, I’m also less confident that shorter timelines imply a high, irreducible probability of failure.

EDIT: If doom instead means simply the advent of prosaic AGI, then I still disagree, even more so, actually.