Really good and well-written post. While I think it’s always good to provide vigorous, strong evidence against Gary Marcus (and others) when he defaults to promoting claims that confirm his bias, I wonder whether there are better long-term solutions to this issue. Normal, everyday people who don’t follow AI, or who show only modest interest in learning more about it, will likely never see posts like this or have even heard of LW/Alignment Forum/80k/etc. I do think the lack of public mainstream content is part of the issue, but I’m sure there are lots of difficulties that I’m naive to. Also, I sense there’s a distrust of people working in the field being salesmen trying to hype up AI or fuel their egos over how important their work is, so that might not be the best solution either. Still, I’m curious to hear whether anyone is working toward any concrete ideas for giving informed and strongly evidence-backed claims about AI progress to the mainstream (think national TV), and what the biggest roadblocks are that make it hard or impossible.
This isn’t my area of expertise, so take what I have to say with a grain of salt.
I wouldn’t say that anyone is particularly on the ball here, but there are certainly efforts to bring about more “mainstream” awareness of AI. See e.g. the recent post by the ControlAI folk about their efforts to brief British politicians on AI, in which they mention some of the challenges they ran into. Still within the greater LW/80k circle, we have Rational Animations, which has a bunch of AI explainers via animated YouTube videos. Then slightly outside of it, we have Rob Miles’s videos, both on his own channel and on Computerphile. Also, I’d take a look at the PauseAI folk to see if they’ve been working on anything lately.
My impression is that the primary problem is “Not feeling the AGI”, where I think the emphasis should be placed on “AGI”. People see AI everywhere, but it’s kinda poopy. There’s AI slop, AI being annoying, and lots of startups hyping AI, but all of those suggest that AI is mediocre rather than revolutionary. Outside of entry-level SWE jobs, I don’t think people have really felt much disruption from an employment perspective. It’s also just hard to distinguish between more and less powerful models (“o1” vs “o3” vs “gpt-4o” vs “gpt-4.5” vs “ChatGPT”; empirically, I notice that people just give up and use “ChatGPT” to refer to all OAI LMs).
A clean demonstration of capabilities tends to shock people into “feeling the AGI”. As a result, I expect this to be fixed in part over time, as we get more events like the release of GPT-3 (a clean demo for academics/AIS folk), the ChatGPT release in late 2022, or the more recent Ghiblification wave. I also think that, unfortunately, “giving informed and strongly evidence-backed claims about AI progress” is just not easy to do for as broad an audience as national TV, so maybe demos are just the way to go.
The second problem, I think, is fatalism. People switch from “we won’t get AGI, those people are just being silly” straight to “we’ll get AGI and OAI/Anthropic/GDM/etc. will take over/destroy the world, and there’s nothing we can do”. In some cases this even becomes “I’ll choose not to believe it, because if I do, I’ll give up on everything and just cry.” To be honest, I’m not sure how to fix this one (antidepressants, maybe?).
Then there’s the “but China” response, but I think that’s much more prevalent on Twitter and in the words of corporate lobbyists than in what I’ve heard from people in real life.
Since last year, when AI really took off, my workload has plummeted. I used to get up to 15 [illustration] commissions a month; now I get around five. [...] I used to work in a small studio as a storyboard artist for TV commercials. Since AI appeared, I’ve seen colleagues lose jobs because companies are using Midjourney. Even those who’ve kept their jobs have had their wages reduced – and pay in south-east Asia is already low.
People in other creative fields have also been affected:
I noticed I was getting less [copywriting] work. One day, I overheard my boss saying to a colleague, “Just put it in ChatGPT.” The marketing department started to use it more often to write their blogs, and they were just asking me to proofread. I remember walking around the company’s beautiful gardens with my manager and asking him if AI would replace me, and he stressed that my job was safe.
Six weeks later, I was called to a meeting with HR. They told me they were letting me go immediately. [...] The company’s website is sad to see now. It’s all AI-generated and factual – there’s no substance, or sense of actually enjoying gardening. AI scares the hell out of me. I feel devastated for the younger generation – it’s taking all the creative jobs.
The effect of generative AI in my industry is something I’ve felt personally. Recently, I was listening to an audio drama series I’d recorded and heard my character say a line, but it wasn’t my voice. I hadn’t recorded that section. I contacted the producer, who told me he had input my voice into AI software to say the extra line. [...]
The Screen Actors Guild, SAG-AFTRA, began a strike last year against certain major video games studios because voice actors were unhappy with the lack of protections against AI. Developers can record actors, then AI can use those initial chunks of audio to generate further recordings. Actors don’t get paid for any of the extra AI-generated stuff, and they lose their jobs. I’ve seen it happen.
One client told me straight out that they have started using generative AI for their voices because it’s faster.
When generative AI came along, the company was very vocal about using it as a tool to help clients get creative. As a company that sells digital automation, developments in AI fit them well. I knew they were introducing it to do things like writing emails and generating images, but I never anticipated they’d get rid of me: I’d been there six years and was their only graphic designer. My redundancy came totally out of the blue. One day, HR told me my role was no longer required as much of my work was being replaced by AI.
I made a YouTube video about my experience. It went viral and I received hundreds of responses from graphic designers in the same boat, which made me realise I’m not the only victim – it’s happening globally, and it takes a huge mental toll.
Oh, fair. Thanks for the correction; I didn’t realize how much artists were affected.