
Using LLMs / GenAI to Improve Healthcare Communications

The US healthcare system is notoriously complex. One of the ways this complexity shows up is in claims – what gets paid, by whom, when, for what, and at what level. While every claim is legally required to come with an Explanation of Benefits, these documents are often opaque – so much so that call centers regularly provide an explanation of the explanation of benefits! Given how good LLMs and Generative AI are at answering questions and chatting with humans, claims explanation is a hot topic for AI.

There are two main problems. The first, of course, is hallucination. How can you be sure your AI is not hallucinating when it explains a claim to your member? What if it says you denied the claim for a reason that is actually illegal? Even if you didn't, you might end up in serious regulatory trouble. The second problem is that even a rational, coherent explanation might not match what your claims system actually did – and that kind of inconsistency is a problem too.

The solution, as it so often does, requires mixing technologies in a decision-centric way.

The decision here is claims adjudication – should this claim be paid, and in what amount? It needs to be automated in a way that is transparent and manageable, so the business knows it is correct, and it needs to be able to explain each decision it makes. Using a decision platform to automate the decision and a decision model to specify it ensures transparency and business manageability while allowing the creation of a structured decision log that shows how each element of the decision was made.
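To make this concrete, here is a minimal Python sketch of the idea. The `Claim`, `DecisionLog`, and `adjudicate` names are illustrative inventions, not any particular decision platform's API, and the coverage and coinsurance rules are made up; a real decision platform would generate this kind of executable logic, and the log format, from a business-owned decision model.

```python
from dataclasses import dataclass, field

# Illustrative structures only -- a real decision platform generates the
# executable logic and the log format from a business-owned decision model.

@dataclass
class Claim:
    procedure_code: str
    billed_amount: float
    in_network: bool
    deductible_remaining: float

@dataclass
class DecisionLog:
    entries: list = field(default_factory=list)

    def record(self, decision: str, outcome, reason: str) -> None:
        # Every sub-decision is captured with its outcome and the rule that fired.
        self.entries.append({"decision": decision, "outcome": outcome,
                             "reason": reason})

def adjudicate(claim: Claim) -> tuple[float, DecisionLog]:
    """Decide whether and how much to pay, logging each element of the decision."""
    log = DecisionLog()

    # Made-up exclusion list standing in for real plan coverage rules.
    covered = claim.procedure_code not in {"X999"}
    log.record("Coverage", covered,
               "Procedure is on the covered-services list" if covered
               else "Procedure is excluded from the plan")
    if not covered:
        return 0.0, log

    # Made-up coinsurance rates standing in for real network contracts.
    rate = 0.8 if claim.in_network else 0.5
    log.record("Network rate", rate,
               "In-network coinsurance applies" if claim.in_network
               else "Out-of-network coinsurance applies")

    payable = max(claim.billed_amount - claim.deductible_remaining, 0.0) * rate
    log.record("Payment", round(payable, 2),
               "Coinsurance applied after remaining deductible")
    return round(payable, 2), log
```

The point of the sketch is the return value: every adjudication produces both a number and a structured, auditable record of how that number was reached.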

This log explains to the business why the system made the decision it did. It can also be used to explain the "why" to members. A subset of the log – the top-level decisions, if you like – is selected based on criteria set by the business owners. An LLM or GenAI model can then be prompted or fine-tuned to take this structured information and turn it into a human-readable explanation. You get the chattiness and readability of AI but the solidly compliant decisioning of business rules.
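As a sketch of that hand-off, the hypothetical `explain` function below filters the log to the business-approved subset and asks a chat model to narrate only those facts. It assumes the OpenAI Python SDK and an `OPENAI_API_KEY` in the environment; any LLM provider would work the same way.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Which sub-decisions are member-facing is set by the business owners.
MEMBER_VISIBLE = {"Coverage", "Network rate", "Payment"}

def explain(log) -> str:
    # Only the vetted, structured subset of the decision log reaches the LLM,
    # so the model can phrase the explanation but cannot invent the reasons.
    facts = [e for e in log.entries if e["decision"] in MEMBER_VISIBLE]
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any capable chat model works
        messages=[
            {"role": "system",
             "content": ("Rewrite these claim decisions as a short, friendly "
                         "explanation for a health-plan member. Use ONLY the "
                         "facts provided; do not add reasons of your own.")},
            {"role": "user", "content": str(facts)},
        ],
    )
    return response.choices[0].message.content
```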

For bonus points, you can expose a "skinny" version of the claims decision that lets your chatbots ask your members a few questions and then say, "If you've answered my questions accurately, that would be covered because XXX and YYY." You're upleveling the interaction from raw data to a conversation while reusing the same decision-making for cross-channel consistency.
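A toy version of that "skinny" flow might look like the following, reusing the same hypothetical `adjudicate` sketch from above so the chatbot and the claims system can never disagree. The questions asked and the met-deductible assumption are purely illustrative.

```python
def coverage_chat() -> None:
    # The chatbot collects just the inputs the decision needs, then reuses
    # the same adjudicate() sketch the claims system would run.
    code = input("What is the procedure code? ")
    amount = float(input("Roughly how much will it cost? "))
    in_net = input("Is the provider in network? (y/n) ").lower().startswith("y")

    claim = Claim(procedure_code=code, billed_amount=amount,
                  in_network=in_net, deductible_remaining=0.0)  # deductible assumed met
    payable, log = adjudicate(claim)

    reasons = " and ".join(e["reason"].lower() for e in log.entries)
    print(f"If you've answered my questions accurately, we'd pay about "
          f"${payable:.2f} because {reasons}.")
```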

Contact us today if you’d like to discuss how we can help.

Amit Rawool

AI/ML Developer

Amit Rawool is a seasoned Python developer with nearly a decade of experience in machine learning, computer vision, natural language processing (NLP), reinforcement learning, and large language models (LLMs). His technical prowess is complemented by his ability to develop scalable applications using modern technologies like FastAPI, React, and Next.js.

Amit’s academic journey includes a Master of Technology from the Indian Institute of Technology Bombay and a Machine Learning Certification from Stanford University. Throughout his career, Amit has held various impactful roles, including Consultant Machine Learning Engineer, Research Engineer, and Lead Engineer, at organizations such as General Electric, General Motors, and Sandvik Asia. He has led and contributed to several high-profile projects, including a real-time data processing system and an AI-based product recommendation system.

Amit’s work is characterized by his innovative approach to solving complex problems and his commitment to integrating cutting-edge technology solutions. His expertise extends across a broad spectrum of software skills, including Python, JavaScript, PyTorch, TensorFlow, and OpenAI GPT models.