At a recent family reunion my cousin asked me what I did, and since I always try to be straight with everyone, I told her I try to make sure AI doesn’t kill everyone. She asked me why I thought that would happen, and I told her.
Then she asked for my probability[1], and I told her “Probably 35% or so in the next 30 years”.
She looked confused. “In the next 30 years? When in the next 30 years?”
So I told her I didn’t know, but that some researchers, with a strong track record of predicting AI progress, had their median estimate around 2027, and she gasped. In protest she said, “But that’s when I graduate high school”[2].
I think about this sometimes.
Sometimes I see myself in my family.
She took the idea seriously; this wasn’t a real protest, that was just her tone. She is not a stranger to normality burning up in smoke.
In the “Race” ending of “AI 2027”, the actual destruction of humanity only occurs in 2030, though?
I don’t think that makes that much of a difference with regards to regular people trying to plan out their lives.
It still seems a good clarification to make.
Yes, agree.
IIRC, the AI 2027 scenario is those researchers’ median outcome in the sense that it’s a slightly pessimistic view of what they think could plausibly happen if nothing disruptive happens in the next two years; they expect disruptive things will probably happen and move the timeline back; 2027 might be their modal guess, but it’s not their median as most people use the term.
(Also, what Rana Dexsin said.)