[Question] Request for comments/opinions/ideas on safety/ethics for use of tool AI in a large healthcare system.

I know somebody at a large healthcare system who is working on an AI roadmap/policy. He has an opportunity to do things right from the start, on a local level but with tangible real-world impact.

The primary types of AI we are looking at are LLMs (for grinding through repetitive natural-language tasks) and more traditional predictive models trained on diagnostic imaging or structured numeric data. Most will be provided by EHR and third-party vendors, possibly with some in-house development where it makes sense.

I value this community’s thoughts regarding:

  • Ethical use of AI tools

  • Safeguarding patient safety and privacy

  • Having a net-positive effect on patients, providers, and the enterprise

  • Novel use-cases

  • What, if any, overlap there is between this and the friendly AI topic

  • Non-obvious risks

Things that are already off the table for legal and common-sense reasons:

  • Uploading patient health information to any service that doesn’t have a BAA relationship with the health system

  • Making medical decisions without human supervision

I am writing this as a private individual. My views and statements do not reflect those of my employer or collaborators.

Thank you.