In previous posts I introduced our overall approach to ML/AI and discussed the role of Interface AI in operations. Research AI is perhaps the most common use case for ML and AI – and the one least likely to generate value.
Research AI is all about the insight it generates. Generally, it is research into an open-ended question – something people don't know, such as "someone said X and we wondered if it was true," "can you find any surprising patterns?" or "we've never been able to tell these things apart before – can we now?" These questions are not very targeted or directed, but they can be a lot of fun for the researchers and fascinating for the business sponsors. This is also, of course, why such projects are both widespread and prone to failure.
When the research is done, there are three possible outcomes:
- The team can't answer the question, or the answer is ambiguous or unsurprising, so there's nothing to do.
- An immediately actionable result emerges from the research – a recommendation to "do this now" or "stop doing that." Perhaps the analysis proves that a particular product line is not profitable or that a service is completely mispriced. This is actually relatively rare in complex, mature organizations.
- An insight is discovered that requires an operational change to exploit it. Perhaps a product is only profitable in certain very specific customer, geographic, and business environments. This means we shouldn't sell it unless those conditions hold, which would require a more nuanced, fine-grained customer eligibility decision in operations.
This last outcome is common but complicated. Sometimes the result can be implemented explicitly – the insight drives a mechanical change to operational behavior. More often, the result requires that an additional ML/AI project be undertaken, generally with similar data. For instance: we believe we can predict which customers will be profitable using only the data available when they first apply for the product, so we should build a model to predict likely profit from that data and operationalize it. Essentially, the research is setting you up for a valuable opportunity – one that will require operationalization (change and adoption) and often additional analytic modeling to work.
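A follow-on modeling project of this kind might look like the sketch below. This is a minimal illustration, not a production approach: the application-time features, labels, and thresholds are all invented, and a real project would use a proper modeling library and the organization's own data.

```python
# Hypothetical sketch: predicting likely customer profitability using
# only data available at application time. All features and data are
# invented for illustration.
import math

def train_logistic(rows, labels, lr=0.1, epochs=500):
    """Fit a simple logistic regression by gradient descent."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability
            err = p - y                       # gradient of log loss w.r.t. z
            b -= lr * err
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w, b

def predict_profitable(w, b, x):
    """Return the model's probability that this applicant is profitable."""
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Toy application-time features: [credit_score / 850, requested_amount / 10000]
rows = [[0.9, 0.2], [0.8, 0.3], [0.3, 0.9], [0.4, 0.8]]
labels = [1, 1, 0, 0]  # 1 = customer later proved profitable
w, b = train_logistic(rows, labels)
print(predict_profitable(w, b, [0.85, 0.25]) > 0.5)  # True: resembles profitable applicants
```

The point is that the model scores applicants using only application-time data, so the eligibility decision can be made in operations before the product is sold – which is exactly the operationalization step the research result demands.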
One of the reasons ML/AI continues to have a high failure rate is that too many projects are commissioned as research projects and the operationalization step is then omitted, even when the research outcome is positive. Everyone gets excited about the insight, but nothing changes.
Returning to our claims example:
- Research whether claim complexity and claim wastage both need to be calculated, or whether one acts as a proxy for the other.
- See if it is possible to detect that a piece of photographic evidence has been used before (it is, by the way – ask me if you ever need to do this).
- Determine whether we can use document storage access logs and data entry timestamps to tell which documents are being consulted for which claims.
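The last of these could be explored with a simple timestamp join. Below is a minimal sketch under assumed inputs: the record shapes, the example IDs, and the ten-minute attribution window are all hypothetical, not a real schema.

```python
# Hypothetical sketch: attributing document views to claims by matching
# document-access timestamps to claim data-entry timestamps.
from datetime import datetime, timedelta

# (document_id, time the document was opened in storage) -- invented data
access_log = [
    ("doc-17", datetime(2024, 5, 1, 9, 2)),
    ("doc-18", datetime(2024, 5, 1, 9, 4)),
    ("doc-17", datetime(2024, 5, 1, 14, 30)),
]

# (claim_id, time data was entered for that claim) -- invented data
entry_log = [
    ("claim-A", datetime(2024, 5, 1, 9, 5)),
    ("claim-B", datetime(2024, 5, 1, 14, 32)),
]

def documents_per_claim(access_log, entry_log, window=timedelta(minutes=10)):
    """Attribute a document to a claim when it was opened within
    `window` before that claim's data-entry timestamp."""
    result = {claim: set() for claim, _ in entry_log}
    for doc, opened in access_log:
        for claim, entered in entry_log:
            if timedelta(0) <= entered - opened <= window:
                result[claim].add(doc)
    return result

print(documents_per_claim(access_log, entry_log))
# claim-A gets doc-17 and doc-18 (opened minutes before entry);
# claim-B gets only the afternoon doc-17 view
```

Even a crude join like this would tell the research team whether the two data sources line up well enough to answer the question before anyone invests in a fuller analysis.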
Next, and last: Operational AI.