How strong is the constitutional case that LLM-generated text and AI-generated videos are First-Amendment protected free speech? I’m interested in both the actual constitutional case and whether there’s a practical consensus on it right now.
For context, I’m studying AI (super)persuasion risks and my colleagues and some other people I talk to are much more skeptical of legal/policy solutions than I am. They both normatively think it’d be bad for government to interfere with LLM persuasion/speech, and descriptively think it’s a non-starter in the US legal context.
Personally, I’m more skeptical that LLM text is speech, or that our moral or legal intuitions should apply to LLM text, since to me speech implies conscious intent. And obviously the Founding Fathers didn’t consider this specific use case, and it’s unclear how it ought to generalize.[1] But of course legal solutions don’t depend on my understanding of the law so much as on what the consensus of professionals and people with power believe.
To be clear I’m not particularly optimistic about legal or policy solutions either. And I probably wouldn’t recommend them at the current juncture (the AIs aren’t superpersuasive yet). But I just want to see if this option is at all open before prematurely foreclosing it.
And sure, they didn’t have typewriters or handguns or Airbnbs, but the first three amendments still apply. Nonetheless, I think the generalization is more ambiguous for AI than in most past cases.
In case it helps: I think hate speech is speech, I think talking in movies is speech, I think typing is speech, I think Facebook likes are speech, I think t-shirt designs are speech, I think Confederate flags are speech. I don’t think algorithmic recommendation is speech, I’m skeptical that code is speech, I don’t currently think LLM outputs are speech (at least before LLM personhood), and I really don’t think AI video is speech.
You might find this recent decision from the federal court for the Middle District of Florida interesting. I think this is one of the first cases where defendants have tried to argue that LLM output is speech under the 1st Am. The Court held that CharacterAI could assert the 1st Am. rights of its users, but did not find that LLM output constituted speech for constitutional purposes (pp. 27–31).
Thanks, this is a useful update!
Off the top of my head, I share your colleagues’ intuition that US courts will be extremely skeptical of a law that says you can’t publish (certain persuasion-y) media created by AI.
But I actually agree with you not to foreclose the option. If the law is narrow and the benefit is large, it could be constitutional.
One thing that might be worth looking into is mandated provenance disclosures. Assuming the shape of AI (super)persuasion that I currently think is likely (potentially more persuasive than the most persuasive human but not like arbitrary hacks), prominent provenance disclosures might be enough in the medium term, combined with voluntary technical countermeasures.
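To make the provenance-disclosure idea concrete, here is a minimal sketch of what a mandated, user-facing label could amount to. The `ProvenanceLabel` structure and its field names are hypothetical illustrations, not any existing standard (e.g., not C2PA) or proposed statutory text:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class ProvenanceLabel:
    """Hypothetical provenance record attached to a piece of published content."""
    ai_generated: bool                # was a generative model involved at all?
    model_name: Optional[str]         # model family, if the publisher discloses it
    generated_at: Optional[datetime]  # when the content was generated, if known


def disclosure_banner(label: ProvenanceLabel) -> str:
    """Render the kind of prominent, user-facing notice a disclosure mandate might require."""
    if not label.ai_generated:
        return ""  # nothing to disclose for purely human-authored content
    model = label.model_name or "an AI system"
    when = label.generated_at.date().isoformat() if label.generated_at else "an unknown date"
    return f"[AI-generated content] Produced by {model} on {when}."


# Example: what a compliant publisher might show above an AI-written article
label = ProvenanceLabel(ai_generated=True,
                        model_name="SomeModel-1",
                        generated_at=datetime(2025, 5, 1, tzinfo=timezone.utc))
print(disclosure_banner(label))
```

The display side is trivial, of course; the hard parts are verifying the label is accurate and getting publishers to attach it at all, which is where the enforcement worries below come in.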
Is cost-benefit analysis (CBA) really relevant to constitutional cases?
Kind of. Ask a chatbot about “intermediate scrutiny” in First Amendment cases.
From a “law on the books” perspective, there’s more we could do about commercial speech than non-commercial speech. I bet the courts would back a law mandating disclosure of AI writing in commercial contexts as a way to protect against misleading users (the legal rationale being that people might falsely ascribe human origins to written text). The courts already limit the copyrightability of AI text, which decreases the benefits of AI writing in many contexts.
From a “law as lived” perspective, enforcement would be difficult. Perhaps robust whistleblower laws could deter big actors, but reining in smaller actors would be very hard.
I think it would likely make a difference whether you primarily have in mind the step where an AI system generates some text or video, vs. the step where a human user receives the generated content and communicates it further.
For the output generation itself, my sense is that there are various arguments out there and it will likely get litigated in various contexts, but it’s unclear how it will pan out. Some arguments that this is protected are similar to the ones that came up in the Moody v. NetChoice Supreme Court case (although Justice Barrett explicitly notes in a concurrence that AI might change the analysis), as well as the possibility that the rights of human “listeners” would be protected.
My take here is that this issue is likely to be important but is very much in flux, and it’s worthwhile to think about how AI policy and legal actions can help this move in a positive direction, given that early cases on this topic could be highly influential on this area of law.
For human users who repost AI-generated content, my gut instinct is that this is likely to be viewed as normal speech, with the AI part not necessarily changing things all that much. I think this would mean any regulations would need to fit within existing 1A exceptions (fraud, defamation, etc.) or meet strict scrutiny.