Good interview. Perhaps a bit too agreeable.
Many equally intelligent people who have carefully examined his arguments hold less radical views on the subject. For most of them, uncertainty remains high; for Yudkowsky, it's simple: you jump off the cliff, you die.
This aspect is not ignored, but I would have appreciated an interview that pushed back more against his deeply entrenched position. The fact that current AIs are far more easily aligned than anyone could have dreamed of back in the 2000s is evidence that should have compelled Yudkowsky to update. It is not proof that alignment is achievable for AGI, let alone ASI, but it is more an argument for that possibility than against it.
Notably, when we read the chains of thought (CoTs), we often see reasoning LLMs already doing something that looks, to me, indistinguishable from a tentative coherent extrapolated volition (imperfect, but still).
However, while he should be happy about that, I see no significant update in Yudkowsky's position. To him it doesn't even count as evidence; it is negligible. I hope I'm wrong, but it seems to me he has succumbed to the bottom-line syndrome: he made up his mind 20 years ago, and I don't expect him to update unless someone actually builds it and everyone dies, or doesn't.
PS: I will still buy the book and share it.
“Perhaps a bit too agreeable.”
Yeah, horrible!! They should have pretended to disagree with each other in order to balance out all the agreement they have. They must be biased!!
50% is pure uncertainty. 95% (Yudkowsky's figure) is close to certainty. So “all the agreement they have” seems like something of an overstatement.
Besides, even if they were in total agreement on the p(doom), it would still be a good thing to avoid an echo-chamber effect and for the interviewer to play advocate for the other major figures in the debate, challenging Yudkowsky's position. Not throughout the whole interview, but more than we see here. It seems all the more necessary since Yudkowsky is, if not isolated, at least at one edge of the spectrum. My feeling is that an intellectually honest rationalist cannot ignore these considerations.
Fair point, I’ve downvoted my comment. Apologies.
(Although in my defense, you didn't make that argument in the comment I responded to. Also, Liron assigning 50% doesn't mean he actually disagrees with Yudkowsky; it might be that he's just not sure but doesn't have any counterarguments per se.)
I agree, I should have pointed that out in the initial comment!