People don’t believe in a flat earth because of evidence. They believe in it because it makes them special to be able to see beyond the veil where others cannot, to acquire hidden knowledge. This is a very natural and human thing to do. Along with hidden knowledge come some social and community effects that might be desirable. As far as I can see, for flat earth specifically, there’s really not much more to it than this.
I’d make the case that instead of using this frame to model how non-X-risk people see AI, it’s better applied as a way to model what the rationalist community looks like to them: a small group of people with an extreme, fringe belief that spends most of its time generating evidence for that belief. People aren’t going off the epistemic validity of that evidence; they’re going off their gut feeling that it’s weird for a small group of people to be so concerned in the absence of much social proof.
With that in mind, I don’t think the public messaging strategy should focus heavily on proving AI is dangerous. The rationalist community spends a lot of time arguing that AI powerful → bad. But the public mostly already agrees with that conclusion; they just got to it from a different premise (job loss, environment, stifling of the human spirit, fear of change, etc.). I think it’s important to strategically validate some of those fears (you really are going to lose your job!) and draw the connection between losing your job and AI being very capable. While they might not end up with a perfect policy picture, they don’t need one. They just need to be worried enough to put pressure on their representatives, who can be targeted more deliberately.