
Interface ML/AI

In the first post in this series, I introduced the different ways ML/AI can be used to improve business operations. Perhaps the most accessible and understandable is what we call Interface AI. Interface AI includes natural language processing, image and handwriting recognition, video processing, generative AI/Large Language Models, and sentiment analysis: a wide range of ML and AI tools that make it easier to interact with computers.

These kinds of ML and AI allow computers to access data that used to be “off limits,” like video, audio, images, and handwriting. They can allow computers to understand that content in new ways, for instance by detecting its sentiment. And they can make it more efficient to process this data in an existing way, by extracting actionable information from the source and passing it to a system that can act on it.

This kind of ML/AI can save a great deal of money by eliminating data entry costs and preventing data entry errors. It can also make new kinds of systems possible by allowing direct interaction with images or text about real things that is not in the slightly artificial format most business systems use for such data.

Much of the recent enthusiasm for ML and AI has come from these technologies, and many companies have succeeded at deploying this kind of ML/AI even while failing to deploy other kinds. It can often be trained using fairly general data sets, the problems are often widely replicable, and the end result is increasingly easy to buy and use.

Let’s use a claims handling system as a concrete example. How could I improve my claims handling with Interface AI, assuming I had already gotten basic claims decision automation working?

  • Use image recognition to identify the contents of evidence and match it to the claim.
  • Use natural language processing to extract a structured claim request from an email or chat text that describes the claim.
  • Build a chat bot to interact with a claimant and gather this data interactively.
  • Scan an image of a bill and turn it into the structured information needed to calculate the reimbursement.
  • Summarize key facts from a medical or other report so they can be used in the claim assessment.
  • Describe rejection reasons in natural language so that a customer understands why their claim was not paid.

And so on. All of these make it easier for the user to interact with the system.
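To make the second bullet concrete, here is a minimal sketch of turning free-form email text into a structured claim request. In a real deployment this step would be done by an NLP model or LLM; the regexes, the `ClaimRequest` fields, and the sample email below are all illustrative assumptions, not a real claims schema.

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClaimRequest:
    """Hypothetical structured record a claims system could act on."""
    policy_number: Optional[str]
    amount: Optional[float]
    description: str

def extract_claim(email_body: str) -> ClaimRequest:
    # Simplistic stand-in for NLP/LLM extraction: pull a policy number
    # and a dollar amount out of free-form text with regexes.
    policy = re.search(r"policy\s*(?:number|no\.?|#)?\s*[:#]?\s*([A-Z0-9-]+)",
                       email_body, re.IGNORECASE)
    amount = re.search(r"\$\s*([\d,]+(?:\.\d{2})?)", email_body)
    return ClaimRequest(
        policy_number=policy.group(1) if policy else None,
        amount=float(amount.group(1).replace(",", "")) if amount else None,
        description=email_body.strip().splitlines()[0],  # first line as summary
    )

email = (
    "My dishwasher flooded the kitchen on May 3rd.\n"
    "Policy number: HO-123456. The repair bill came to $1,250.00."
)
claim = extract_claim(email)
# claim.policy_number == "HO-123456", claim.amount == 1250.0
```

The point of the exercise is the output shape: once the request is structured, the existing claims decision automation can consume it with no human data entry in between.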

Next up, research AI.