There’s some sound logic to what folk are saying. I think there’s a real concern. But the desperate tone strikes me as… something else. Like folk are excited and transfixed by the horror.
For what it’s worth, I think pre-2030 AGI Doom is pretty plausible[1], and this doesn’t describe my internal experience at all. My internal experience is of wanting to chill out and naturally drifting in that direction, only to be periodically interrupted by OpenAI doing something attention-grabbing or another bogus “innovator LLMs!” paper being released, forcing me to go check whether I was wrong to be optimistic after all. That can be stressful when it happens, but I’m not really feeling particularly doom-y or scared by default. (Indeed, I think fear is a bad type of motivation anyway. Hard agree with your “shared positive vision” section.)
I’m not sure whether that’s supposed to be compatible with your model of preverbal trauma...?
I would also like to figure out some observable we could use to distinguish between “doom incoming” and “no doom yet” as soon as possible. Unfortunately, most observables that look like they’d fit would also be bogus. Vladimir Nesov’s analysis is the best we have, I think.
So I pretty much expect that yeah, the situation is going to look consistently and legitimately dire for the foreseeable future, even in the worlds where we’re not imminently doomed, with no obvious way to update out of it. One of the reasons is that there’s an additional set of powerful vested interests now, which would try to keep up the AGI hype even if research efforts start failing.
This state of affairs is immensely irritating, but it does seem to be the state of our affairs.
[1] Though I do have most of my probability mass on “LLMs ain’t it” and AGI doom only happening (if it happens at all) at some unpredictable point in 2030-2040.
I’m not sure whether that’s supposed to be compatible with your model of preverbal trauma...?
Sure! It sounds like maybe it’s not what’s going on for you personally. And I still wonder whether it’s what’s going on for a large enough subset (maybe a majority?) of folk who are reacting to AI risk. Someone without a trauma slot for it might experience that as something like:
“Gosh, they really seem kind of freaked out. I mean, sure, it’s bad, but what’s all this extra for?”
“Why are they fleshing out these scary details over here so much? How does that help anything?”
“Well sure, I’m having kids. Yeah, I don’t know what’s going to happen. That’s okay.”
“I’m taking up a new hobby. Why? Oh, because it’s interesting. What? Uh, no, I’m not concerned about how it relates to AI.”
I’m making that up off the top of my head based on some loose models. I’m more trying to convey a feel than give a ton of logical details. Not that they can’t be fleshed out; I just haven’t done so yet.
I do feel like this sometimes, yeah. Particularly when OpenAI does something attention-grabbing and there’s a Twitter freakout about it.