“And there’s a world not so far from this one where I, too, get behind a pause. For example, one actual major human tragedy caused by a generative AI model might suffice to push me over the edge.”—Scott Aaronson in https://scottaaronson.blog/?p=7174
My take: I think there’s a big chunk of the world, a lot of smart powerful people, who are in this camp right now. People waiting to see a real-world catastrophe before they update their worldviews. In the meantime, they are waiting and watching, feeling skeptical of implausible-sounding stories of potential risks.
This stood out to me when reading his take as well. I wonder if this has something to do with a security-mindedness spectrum that people are on. Less security-minded people going “Sure, if it happens we’ll do something. (But it will probably never happen.)” and the more security-minded people going “Let’s try to prevent it from happening. (Because it totally could happen.)”
I guess it gets hard in cases like these, where the stakes either way seem super high to both sides. I think that’s why you get less security-minded people saying things like that: because they also rate the upside very highly, they don’t want to sacrifice any of it if they don’t have to.
Just my take (as a probably overly-security-minded person).