So do we call it in favor of Porby, or wait a bit longer for the ambiguity over whether we've truly crossed the AGI threshold to resolve?

The author explicitly states that their probability of the entire human race going extinct, or some equivalent disaster, is 80% if AGI is developed by 2025. They also put the probability of developing AGI before 2025 at less than 5% or so. Since AGI was, according to you and no one else, developed right here in 2023, this would make Porby's estimate of the extinction chance even higher than 80%, and they would be very wrong about when AGI would be developed. So tell me, do we give it to Porby even though the human race has not gone extinct and they were obviously way off on other estimates? No, of course we don't, because, like I said in post one, Porby has clearly defined AGI in their own way, and whatever ambiguous model existing today that you think of as AGI is not a match for their definition of strong AGI.
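Spelling out the arithmetic with the rough figures above (a sketch using this comment's paraphrase of Porby's numbers, not exact quotes from the post):

$$
P(\text{doom by 2025}) = P(\text{doom} \mid \text{AGI by 2025})\,P(\text{AGI by 2025}) + P(\text{doom} \mid \text{no AGI})\,P(\text{no AGI by 2025})
$$

$$
\lesssim 0.8 \times 0.05 + P(\text{doom} \mid \text{no AGI}) \approx 4\% + \varepsilon.
$$

In other words, the 80% is conditional: unconditionally, doom by 2025 was roughly a 4% claim, and the 80% only becomes operative (or worse, if an earlier-than-expected arrival is itself bad news) if AGI has actually been developed.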
Porby has clearly defined AGI in their own way, and whatever ambiguous model existing today that you think of as AGI is not a match with their definition of strong AGI.
Yup.
For what it's worth, I haven't noticeably updated my timelines since the post; GPT-4 and whatnot are pretty much what I expected, and I'd be pretty surprised if GPT-5 eats us.
(Edit: my P(doom by date) has actually gone down a bit for technical reasons as I’ve continued research! I’ll probably add a more complete update to this post as a comment when I get around to editing it for the openphil version of the contest.)