Every country is different, but in the US, the natural course in those situations is that if it works, it gets normalized and brought into official channels, and if it doesn’t, a lot of people are going to get sued for malpractice. If you’re being put in that position, then you should insist on making sure the lawyers and compliance officers are consulted to refine whatever language the trainings use, or else refuse to put your name on any of it.
Some of this is pretty straightforward, if not simple to execute. For example, if you’re talking about LLMs, and you’re following the news, then you know that the US courts have ruled that the NY Times has the right to read every chat anyone, anywhere has with ChatGPT, no matter what OpenAI’s terms of service previously said, in order to look for evidence of copyright infringement. There seem (I think) to be exceptions for enterprise customers. But this should be sufficient to credibly say, “If you’re using this through a personal account on an insecure device, you’re breaking the laws on patient confidentiality, and you can be sanctioned for it just as you would if you got caught discussing a patient’s condition and identity in public.”
Good trainings build on what the audience already knows. Maybe start with something like, “Used well, you can consult LLMs with about as much trust as you would a first-day new resident.” You need to be very careful how you frame your questions: good prompting strategy, good system prompts, consistent anonymization practices, and asking for and confirming references to catch hallucinations and other errors. Those are all things you can look up in many places, here and elsewhere, but you’ll have to pick through a lot of inapplicable and poor advice, because nothing is rock-solid or totally stable right now. Frame it as a gradual rollout requiring iteration on all of those; plan for updated trainings every quarter or two; and provide a mechanism for giving feedback on how it’s going. You can use that feedback to develop a system prompt people can use with their employer-provided accounts.
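On the anonymization point specifically, even a crude automated pass can catch the most obvious identifiers before a prompt leaves the building. Here is a minimal sketch of that idea; the patterns and placeholders are illustrative assumptions on my part, not a vetted de-identification tool, and anything like this would need review by your compliance people before anyone relies on it:

```python
import re

# Hypothetical sketch: scrub a few obvious identifier formats from a prompt
# before it is sent to an LLM. Real PHI de-identification covers many more
# categories (names, addresses, ages over 89, etc.) and needs compliance review.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),               # US SSN format
    (re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE), "[MRN]"),   # medical record number
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),        # slash-formatted dates
]

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders; leaves everything else intact."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Pt MRN: 4827193, DOB 3/14/1962, presenting with chest pain."
print(redact(prompt))  # "Pt [MRN], DOB [DATE], presenting with chest pain."
```

The point is not that a regex filter is sufficient (it is not), but that a shared, employer-maintained redaction step gives you something concrete to iterate on as feedback comes in, rather than leaving anonymization entirely to each person’s judgment.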
This may be harder, but you also need to be able to collect accurate information on what people in various roles are actually doing right now, and they need to be able to tell you without fear of retroactive reprisal or social sanction. Otherwise you’re likely to drown in deception about the situation on the ground, and you won’t know what goals or use cases you need to be thinking about.
One thing to add: if you find that the whole system is demanding you, an amateur, provide training while refusing to do the bare minimum necessary to even follow the law or maintain the standard of care, then think very carefully about what you want to do in response. Try to muddle through and do the best you can without incurring personal liability? Become a whistleblower? Find a new job? At that point you might well want to consult a lawyer yourself to help navigate the situation. And make sure you document, in writing, any requests made of you that you are uncomfortable with, and your objections to them.