I’m not sure whether that’s supposed to be compatible with your model of preverbal trauma...?
Sure! It sounds like maybe it’s not what’s going on for you personally. And I still wonder whether it’s what’s going on for a large enough subset (maybe a majority?) of folk who are reacting to AI risk. Which someone without a trauma slot for it might experience as something like:
“Gosh, they really seem kind of freaked out. I mean, sure, it’s bad, but what’s all this extra for?”
“Why are they fleshing out these scary details over here so much? How does that help anything?”
“Well sure, I’m having kids. Yeah, I don’t know what’s going to happen. That’s okay.”
“I’m taking up a new hobby. Why? Oh, because it’s interesting. What? Uh, no, I’m not concerned about how it relates to AI.”
I’m making that up off the top of my head based on some loose models. I’m more trying to convey a feel than give a ton of logical details. Not that they can’t be fleshed out; I just haven’t done so yet.
I do feel like this sometimes, yeah. Particularly when OpenAI does something attention-grabbing and there’s a Twitter freakout about it.