Counting some AI safety blessings:
Today on break I thought more about slow takeoff, and yesterday I read some of the Paul Christiano/Yudkowsky dialogues on LessWrong. Slow takeoff seems more plausible to me than it did before.
I have more hope, though highly conditional hope, that the alignment community can produce the needed work. So my P(doom) is optimistic-ish on average, but it swings wildly when I condition on different “do we rise to the occasion?” outcomes.
The leading AI labs, despite their funding situations, are still run day-to-day by Sam Altman (who reads SSC) and Demis Hassabis (who is vocally cautious about AGI).
China has released regulations on generative AI, and its biggest tech companies may be more focused on commercial applications than the US AGI labs are.
Congresspeople at the hearing, from what little I heard, seemed appropriately both scared and cautious.
Researchers in obscure corners are making progress, from Orthogonal to Conjecture to John Wentworth to mechanistic interpretability work.
Then again:
Elon Musk is starting his own AI company.
Yann LeCun, who leads AI research at Facebook/Meta, says he disagrees with most of the core AI safety concerns.
That highly-conditional hope I mentioned earlier is specifically conditional on something like 1-20 people (not anyone specific, just around that number) in the alignment community “getting their act together”. That is, being at, or reaching, fairly high levels of the relevant skills/attributes (intelligence AND conscientiousness AND security mindset AND systematization AND...), all at once. This is uncommon in general, especially among people pursuing weird things in academia, especially among the highly-neurodivergent rationality community. (A toy numerical sketch of how this conditioning moves my average P(doom) is at the end of this post.)
I hold several timelines, some short and some long. I don’t think any of them really got shorter lately.
Any group with smart-enough researchers and enough computing power could still fuck up by accident. (Depending on what the SOTA is for open-source AI frameworks, and who releases what, that group might not even need the “smart-enough researchers”.)
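To make the conditionality concrete, here is a toy sketch of the averaging I have in mind. All the numbers below are made up for illustration only; nothing in this post commits to them.

```python
# Toy law-of-total-probability sketch with made-up numbers, showing how a
# conditional P(doom) can be "optimistic-ish on average" while swinging
# wildly depending on whether the key people get their act together.
p_act_together = 0.6   # assumed chance the ~1-20 people reach the needed skill levels
p_doom_if_act = 0.10   # assumed P(doom) conditional on that happening
p_doom_if_not = 0.70   # assumed P(doom) conditional on it not happening

p_doom_average = (p_doom_if_act * p_act_together
                  + p_doom_if_not * (1 - p_act_together))
print(p_doom_average)  # 0.34 on average, but 0.10 vs 0.70 under the two conditions
```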