I didn’t see that portion of the bill you posted, and my earlier takeaway from 5.4’s objection was that it would have permitted the kinds of chat you mention. So I think your take is largely accurate, and I’m sorry for posting something like this without reading the bill all the way through; I probably wouldn’t have posted it if I had understood this beforehand.
If you read closely, the AI’s correction is really only objecting to the “related to”, which (in its interpretation) is supposed to imply that the bill would ban any advice “related to” medicine, engineering, etc. As Max H pointed out, there’s a reasonable read of the post that goes “bans AI from answering (some) questions related to several licensed professions like medicine...”, so the AI’s correction assumes a reading of the above tweet that isn’t even correct.
I think the AI’s correction is designed to give the reader the impression that the bill doesn’t ban advice, without actually claiming that it doesn’t ban advice. I suspect that AI models are just not good or trustworthy enough for this sort of thing yet. I still think, though, that the tweet would be better if it said “effectively ban” and dropped the “related to”.
to be completely honest, i wish that your original claims had been correct
because i suspect that it wouldn’t actually have made a difference at all, and that would have been a very useful way to get at the crux: that, even without such clear and direct language, an attempt to outlaw chatbots “impersonating medical professionals” would have effectively prevented AI from offering individualized medical advice, even if the bill in question were not so extreme
this is just an intuition from seeing how licensure works in other areas, but it seems to be an intuition that a lot of other people in this sphere share. it would have been very useful for epistemics, imo, if we could have gotten a version of the bill that actually looked the way you originally portrayed it, and then observed whether it did in reality lead to AI no longer giving medical, legal, etc. advice without extensive jailbreaking