I think the AI critic community has a MASSIVE PR problem.
There has been a deluge of media coming from the AI world in the last month or so. It seems the head of every major lab has been on a full-scale PR campaign, touting the benefits and massively playing down the risks of the future of AI development. (If I were a paranoid person, I would say this feels like a very intentional move to lay the groundwork for some major announcement coming in the foreseeable future…)
This has got me thinking a lot about what AI critics are doing, and what they need to be doing. The gap between how the community communicates and how the average person receives and understands information is huge. The general tone I've noticed in the few media appearances I've seen recently from AI critics is simultaneously overly technical and apocalyptic, which makes it very hard for the layperson to digest and care about in any tangible way.
Taking the message directly to policymakers is all well and good, but as you mentioned, they are extremely pressed for resources, and if the constituency is not banging at their door about an issue, it's unlikely they can justify the time and resource expenditure, especially on a topic they don't fully understand.
There needs to be a sizeable public relations and education campaign aimed directly at the general public to help them understand the dangers of what is being built and what they can do about it. Because at the moment I can tell you that outside of certain circles, absolutely no one understands or cares.
The movement needs a handful of very well media-trained figureheads making the rounds of every possible media outlet, from traditional media to podcasts to TikTok. The message needs to be simplified and honed into sound bites that anyone can understand. And I think there needs to be a clear call to action that the average person can engage with.
IMO this should be a primary focus of any person or organization concerned with mitigating the risks of AI. A major amount of time and money needs to be put into this.
It doesn't matter how right you are: if no one understands or cares about what you're talking about, you're not going to convince them of anything…