> That is materially narrower than a blanket ban on AI “answering questions related to” medicine, law, dentistry, nursing, psychology, social work, engineering, etc.
> So the post overstates the bill’s scope: the bill does not ban AI from answering all questions related to those professions;
Nitpick: the tweet in question does not actually use the phrases “all questions” or “blanket ban”. ChatGPT is interpreting ambiguity in a hyperbolic way to make the correction look stronger.
It’s also not clear from the excerpt of the bill that you quoted that a chatbot simply prefacing its response with “I am not a lawyer / doctor / hair stylist / etc., but...” (which would be annoying but not catastrophic) is sufficient to avoid liability.
The bill’s author is a democratic socialist who has said, among other things, “People deserve real care from real people,” implying that the bill is indeed designed to limit what AIs can say, not just to require more disclaimers when discussing certain subjects. That is indeed very bad and stupid, even if it is not a “blanket ban” on “answering all questions” (which, again, are ChatGPT’s words, not the words of anyone credible who supports or opposes the bill).
> Nitpick: the tweet in question does not actually use the phrases “all questions” or “blanket ban”. ChatGPT is interpreting ambiguity in a hyperbolic way to make the correction look stronger.
I did not notice this earlier, but given Habryka’s comment, I don’t think this is a nitpick at all. I think the AI is actively trying to make the user believe that the bill doesn’t ban advice, without saying that.
Seems reasonable to me to interpret the words “ban X” as a ban on X, not as a ban on some subset of X. That is certainly how people who are responding to the tweet appear to be reading it.
> It’s also not clear from the excerpt of the bill that you quoted that a chatbot simply prefacing its response with “I am not a lawyer / doctor / hair stylist / etc., but...” (which would be annoying but not catastrophic) is sufficient to avoid liability.
Here are the attached sources that the AI gave, & its quotes & paraphrases from the sections:
NY State Senate Bill 2025-S7263
Summary: “Imposes liability for damages caused by a chatbot impersonating certain licensed professionals.” Sponsor memo: the bill would prohibit a chatbot from giving responses or advice that, if taken by a natural person, would constitute unauthorized practice or unauthorized use of a professional title.
STATE OF NEW YORK — S7263 bill text
Section 390-f(2)(a): “A proprietor of a chatbot shall not permit such chatbot to provide any substantive response, information, or advice, or take any action which, if taken by a natural person” would constitute specified crimes of unauthorized professional practice or unauthorized use of title.
AI Chatbot Ban for Minors Passes Internet & Technology Committee, among 11 Bills
The Senate press release says S7263 would “prevent AI from impersonating certain licensed professionals” and “would prohibit chatbots from giving substantive responses, including information or advice, that can be mistaken for professional counseling.”
If there is actually any ambiguity here, I am willing to bet literally anyone on this website that, if the bill goes forward, the ambiguities will be resolved in favor of a less aggressive interpretation in subsequent edits.
> Seems reasonable to me to interpret the words “ban X” as a ban on X, not as a ban on some subset of X. To ban something means to ban it.
The phrases “A ban on X related to Y” and “A blanket ban on all X related to Y” do not have identical meanings and interpretations in practice.
The former is ambiguous and therefore potentially misleading in the context of this bill, the latter is precise but definitely false.
I agree that the original tweet is bad and hyperbolic, but my point is that if you’re going to make a correction, you should correct the exact thing the person you’re correcting actually said.
> If there is actually any ambiguity here, I am willing to bet literally anyone on this website that, if the bill goes forward, the ambiguities will be resolved in favor of a less aggressive interpretation in subsequent edits.
Yes, I think it is ambiguous whether this:
“would prohibit chatbots from giving substantive responses, including information or advice, that can be mistaken for professional counseling.”
means that making the chatbot include a disclaimer that it is not a licensed professional is sufficient on its own. If you start a response with “I am not a doctor, but...” and then give 2000+ words of actionable medical advice, that could reasonably be mistaken for “professional counseling”, which means it would be prohibited.
> I am willing to bet literally anyone on this website that, if the bill goes forward, the ambiguities will be resolved in favor of a less aggressive interpretation in subsequent edits.
… because people look at the bill, find the places that it’s unreasonable, and contact legislators about it. The discussions about the ways the current bill is bad are how the bill gets better.
Yeah you’re probably right.