As a company whose primary business is clinical content creation, you might think we’re worried about the advent of generative AI in healthcare. We’re not – in fact, we’re cautiously excited about its possibilities. Let us explain why.
In healthcare, more than 80% of clinicians report having communication barriers with their patients. A fifth of hospital readmissions are said to be due to a communication issue.
Effective communication with patients is paramount. It can mean the difference between a patient understanding their treatment options clearly and feeling confused and anxious, which makes for a poor patient experience. Poor communication can also cause patient harm, through poor medication adherence or confusion over treatment guidance.
Technology that can enhance patient communication is much in demand, as we well know, but does AI provide the answer?
AI is already delivering positive results
For some, generative AI and large language models (LLMs) such as ChatGPT have the potential to revolutionise the way healthcare professionals communicate with patients.
We are already working with generative AI technologies to expand CardMedic’s capabilities, such as creating photo-realistic video content. Human experts still drive all content to ensure it is safe and empathetic.
Meanwhile, human translators are leveraging generative AI to enhance their productivity, for example by using ChatGPT to critique or shorten content, and telemedicine providers can use it to give patients tailored information and support.
However, getting caught up in the hype risks oversimplifying the complexity of healthcare. We believe it is essential to tread carefully and understand the clinical and cultural nuances when considering these technologies.
The challenges of generative AI
Generative AI doesn’t solve all communication challenges in healthcare. It’s a tool, albeit a powerful one, with its own set of limitations and considerations.
The effectiveness of generative AI hinges on the quality of the data it’s trained on. Without careful consideration, AI algorithms may inadvertently perpetuate biases in the source data.
For example, a review by the NHS Race and Health Observatory found that tests and assessments that indicate the health of newborns, moments after birth, are limited and not fit for purpose for Black, Asian and ethnic minority babies, as they are based on the skin tones of White European babies.
Could AI make recommendations based on such data, ignorant of the colour of the patient’s skin, and perpetuate health inequity? Addressing these possible issues requires vigilance and intentional efforts to mitigate bias and make equitable patient care possible. After all, this is our primary goal.
In addition, there are concerns over the usefulness and accuracy of ChatGPT in clinical practice. Researchers have said that current forms of AI chatbots should not be used for diagnostic or treatment purposes without human expert oversight.
Human oversight is essential
Writing in Forbes, health AI expert Dr Lance Eliot reiterates the point in his article on the use of AI in diagnostic decision-making. He explains that whilst AI can assist in generating content to support clinical decision-making, it still requires human oversight to ensure accuracy and relevance to the patient’s situation.
At CardMedic, we fully support this approach. We use machine learning to suggest suitable content, which is then subjected to thorough review by experienced clinical translators. We want to ensure our content is clinically sound and safe, and human involvement is critical to that process.
Currently, there is no substitute for such clinical peer review. Human experts understand the context and emotions that are central to healthcare communication.
Like Google Translate, ChatGPT and similar models can provide a foundation for translations, but in healthcare their output requires clinical peer review to achieve optimal results for patients. This is exactly the role that we expect AI to play in CardMedic’s journey – it will support clinicians, but also be driven by their healthcare expertise and contextual insight.
We proceed with caution
However, as data sets grow and deployment models evolve, the potential for safer and more reliable AI-driven communication tools grows too. We are continually developing more clinical interactions and scaling content on the app. In doing so, we are working at the intersection of human expertise and technology to achieve optimal results with the tools we know we can trust.
For now, we must guard against any incorrect clinical recommendation and take a cautious approach to the integration of generative AI in healthcare communication. You would not rely solely on autopilot to fly a plane from take-off to landing, and the same goes for how we use technology in healthcare. AI can help steer us in the right direction, but it should be driven by the expertise and judgment of healthcare professionals.
So, as we navigate the intersection of AI and patient communication, we proceed with caution while building a nuanced understanding of the technology. Generative AI holds promise, but it requires human oversight and continuous refinement to truly revolutionise healthcare communication. Once it does, we’ll be there to ensure it does so safely, sensibly and equitably.