My understanding is that giving “substantive medical advice” is “unauthorized practice” and as such is banned.
It depends on what exactly you mean by “substantive medical advice” but you have to read these statutes against the backdrop of robust free speech protections when applied to giving advice.
This is a somewhat unsettled legal area and the Supreme Court is currently considering a case (Chiles v. Salazar) that may reshape the doctrine, but if you’re talking about professional practice that purely involves speech then it gets a lot of protection. That quickly recedes once any kind of conduct is involved (i.e., any kind of physical interaction), so the government usually wins unauthorized practice cases by pointing to something other than talking that was done.
Presumptively, if you’re only talking, the government can’t regulate the content of that speech without clearing an exceptionally high legal bar, so someone giving you purely verbal medical advice without any physical component is likely in the clear. That’s why it’s not unauthorized practice of medicine if you tell me you have a headache and I suggest you take an aspirin, and all of the various websites out there that let you submit a list of symptoms and then suggest a diagnosis are fine too.
To crack down on something that doesn’t involve fraud/impersonation, the government would have to rely on the so-called “professional speech” exception to the first amendment, used in some circuit courts. It’s not clear this is even good law; the Supreme Court’s majority has never endorsed it, and they cast significant doubt on it in NIFLA v. Becerra (2018).
To the extent the professional speech exception exists at all, it applies when there is a “personal nexus” between the professional and the client. The threshold inquiry is whether or not “a speaker … purport[s] to be exercising judgment on behalf of any particular individual with whose circumstances he is directly acquainted.” So, to the Tweet above, a chatbot is definitely allowed to answer generic questions related to these professions as much as it likes. A chatbot answering “what are the symptoms of a heart attack” is no more acting as a professional than WebMD is by publishing an article on that.
Even if the chatbot is giving particularized advice, it probably isn’t practicing a profession, because the courts are still looking for a narrow construction that avoids the first amendment issue; scholars have suggested that the right way of thinking about it is whether the advisor has a fiduciary duty. Rosemond v. Markham is a good model for thinking about that in the chatbot context, holding that even if you’re giving personalized advice in response to someone’s specific question about themself, that does not necessarily establish such a relationship. (Rosemond was a columnist who gave psychological advice to people who wrote in to him, describing himself as a “family psychologist.” Someone in Kentucky complained that, because Rosemond was not a licensed professional in Kentucky, this amounted to unauthorized practice. Applying the professional speech doctrine pre-NIFLA, the court held that just responding to questions without establishing some further relationship doesn’t qualify as professional speech, and thus Rosemond gets first amendment protection.)
God damnit, this makes me want to unredact my post again.