Imagine that the knowledge and reasoning of the world’s finest doctors are encapsulated in a single technology, ready to assist at a moment’s notice. Would you trust the technology with your healthcare needs? The use of large language models (LLMs), like ChatGPT, in the medical field is a double-edged sword, offering both groundbreaking potential and ethical concerns. As AI-driven systems begin to make their way into our everyday lives, they spark a crucial debate among healthcare workers: Are we stepping into a new era of medical care, or are we opening up a Pandora’s box of unforeseen consequences for patient care and data privacy?
Catherine Bain, AFMC’s privacy and security officer and senior vice president of administrative services, explored this delicate balance between technological advancement and ethical responsibility in patient care.
What are the Primary Concerns with AI and Large Language Models?
Safeguarding protected health information (PHI) and personally identifiable information (PII) is paramount in healthcare. “Not allowing the use of PHI or PII in LLMs is one certain way to protect that data,” Cathy says. “But it ignores an evolving tool that can help improve processes and analysis of data. So, it’s best to consider how to leverage these technologies while ensuring strong security protocols.”
The use of AI in any public industry requires transparency and informed consent from consumers. This is especially true for the healthcare industry.
While Congressional interest in establishing U.S. policies and regulations for AI continues to grow, the European Union has already taken the plunge with the EU AI Act. “The European Union established a seminal legislative framework for AI companies operating within the EU,” Cathy says. “Global AI research and the influence of the Act will impact the U.S. and the AI solutions we access.”
Potential HIPAA and Compliance Risks
There are a few risks that healthcare organizations should be aware of before implementing large-scale LLMs into their practice.
- Security Concerns (Data Breach, Exposure, and Leakage): When exploring solutions, Cathy stresses the importance of identifying the security posture of the solution your organization is considering. This includes HIPAA and General Data Protection Regulation (GDPR) compliance.
“Healthcare organizations should consider what protocols they have in place to prevent malicious attacks, unauthorized data access, and biased outputs,” Cathy adds. “Look for things like SOC 2, SOC 3, and ISO certification. Ask if the company behind the LLM will share the security report detailing how they meet those certification requirements.”
- Information Flow: It’s also important to identify where information goes once it is entered into the LLM. Closed-loop systems allow only specified users to access information and do not feed that information back into the data that trains the solution.
“Find out what happens to your data when you use the AI system or LLM,” Cathy advises. “Is it used to train the system — meaning that the information goes into its encyclopedic knowledge base and could appear in response to other queries? Or is it stored and de-identified for your company’s instance alone?”
- Hallucination, Veracity, and Bias: Some AI models and LLMs can produce inaccurate or entirely fabricated information. In the healthcare world, this can lead to serious harm.
“Learn what protocols and principles are embedded into the models to ensure responsible AI. It’s good to think of it as today’s LLM systems providing good first drafts, but that draft does require human intervention to refine key points,” Cathy explains.
LLMs and AI Can Add Value to Practice
Patient engagement and compliance have been long-standing struggles in healthcare. “By using AI technology, providers can step up the game in these areas,” Cathy says. “Communications unique to a patient and their health can prompt and encourage patient compliance and afford consumers a greater opportunity to be an active participant in their healthcare journey.”
LLMs and AI are also beneficial for automating menial tasks, which overburden the healthcare industry. “Automating repetitive actions can free up providers and their teams to focus on activities that require a human touch,” Cathy explains.
Another significant advantage is data compilation and monitoring. Combining data across sources and assessing the full picture can take a great deal of time; most, if not all, AI solutions can do this in a fraction of that time. “AI can be used to compile data from lab results, radiology, and other data sources to provide a more comprehensive picture of a person’s health status,” Cathy says. “AI can enable providers to see a holistic view of a patient’s data within minutes, if not seconds.”
Best Practices for Using AI and Large Language Models
Practice Transparency and Education. Patients and healthcare teams may raise some concerns about utilizing AI/LLMs. That’s expected! “Keeping your team and patients informed of what you’re doing will help to ease these concerns,” Cathy says. Sharing the benefits gained from using AI and the steps the team can take to mitigate risks will help with the adoption of the solution. “Sharing real-world testimonials and experiences will help demystify your chosen solution,” Cathy adds.
Remember: AI Tools Augment Human Effort. AI tools are just that — tools. As with every tool, to truly benefit from using them, we must use them correctly. It’s no different with AI. “A consistent message I’m hearing is that when AI/LLMs are used to produce natural language, it should be considered a starting place or first draft that a human reviews and edits as needed,” Cathy says. “So, don’t have an LLM generate text that you don’t vet in some way.”
Start Small, Then Expand. Leveraging a small group who can kick-start or try out the solution will help to identify corrections to be made and barriers to be overcome. “Once you make the small adjustments, launch another small group, then another, and eventually, each launch will have fewer and fewer tweaks until you’re able to launch larger groups more successfully,” Cathy says. Initially, it’s important to take incremental steps to prevent failure, loss of reputation, and potential catastrophe with patient information in the long run.
Focus on Value and Viability. AI has access to a universe of data it can pull from, and many LLMs incorporate user conversations into the data that shapes future responses. Knowing this, it can be easy for anyone to get lost in the possibilities. “Don’t try to do everything at once,” Cathy says. “Resist the urge to get lost in the data and possibilities. Focus on the greatest immediate value and return on effort.”
Implement a Thoughtful, Comprehensive Approach
Successfully adding AI and LLMs into a healthcare setting requires collaboration and openness with staff and patients. While it can seem overwhelming, there are tips to use to make it a little easier to start and manage the conversation.
- Do the prep work ahead of time. “Set up governance, parameters, and expectations of what you’re looking for in a solution. Doing the work on the front end will make everything that comes later an easier and safer lift,” Cathy says. Needs analyses are great for optimizing strategies that appeal to the staff and the organization as a whole.
- Involve critical teams. Be sure to involve the IT and frontline teams in the selection, training, and launching of AI/LLMs. This is key to a successful implementation that protects patient PHI and PII. “Integrating providers and representative consumers will also help you anticipate barriers and design an education campaign,” Cathy adds.
- Education, transparency, and informed consent are crucial. “Your reputation is your greatest asset,” Cathy says. “The average consumer is leery of AI/LLM, so dispelling myths and educating on how it will be used before you use it is vital to ensuring your patients make the transition with you.”
For more important healthcare topics, follow AFMC on Facebook, Instagram, LinkedIn, X, and YouTube.
Subscribe to our newsletter for the latest news and updates, including the most recent episode of AFMC TV.
Meet Catherine Bain
Ms. Bain began her AFMC career 30 years ago. As a member of the executive leadership team, she has supported the development, implementation, and achievement of the corporation's goals. In her roles as Senior Vice President of Administrative Services and Privacy and Security Officer, she is responsible for overseeing special projects, corporate legal and contract matters, and operational maintenance needs across two campuses, as well as ensuring data security and confidentiality.
During her time with AFMC, Ms. Bain has managed the medical affairs office and has coordinated and facilitated special events.
As a certified foresight practitioner, Ms. Bain applies forward-thinking and institutional knowledge to facilitate individual and organizational discovery at AFMC. This allows AFMC to explore and challenge assumptions, uncover hidden opportunities, and apply strategic thought and insight to build a vibrant future for AFMC and the clients we serve.