
What is an Acceptable Analytic Failure?

Many speakers on predictive analytics, machine learning (ML) and AI talk about the need to allow data science teams to fail. Without a willingness to fail sometimes, it's very hard to build a successful data science program. This is true, and it's often a barrier for companies that find it hard to accept that not all analytics initiatives succeed.

But…

Just because some analytic failures are necessary does not mean that all analytic failures are acceptable. In fact, in our experience, many analytic projects fail for reasons that could easily have been prevented had the data science team followed some best practices.

First, some reasons it’s OK to fail:

  • If the model just doesn’t work 
  • If the data doesn’t support the accuracy you need
  • If the premise of the approach turned out to be false (i.e., the data contradicted it)
  • If there's not enough data, not the right data, or the data is just no good

Why are these OK? Because it is the nature of analytic initiatives that you don't know what the data is going to say until you ask. That means that sometimes you are going to get an unacceptable answer no matter how well you run the project.

We also hear about lots of analytics projects that fail for unacceptable reasons:

  • The algorithm works but doesn’t really help
  • The business users don’t understand the algorithm
  • The business owner doesn’t trust the algorithm
  • The business users don’t use the algorithm
  • IT can’t deploy the algorithm

These are failures of analytic requirements. They show that the data science team didn't engage the business experts early in a discussion of the decision-making involved. Business understanding should come first, and for data science initiatives this means understanding the business's decision-making. Failing to begin with the decision in mind means:

  • The data science team doesn't really know what algorithm is needed or what it must do
  • The business users don’t have any shared understanding with the data science team
  • The business team can’t explain how the algorithm will improve their decision-making to the business owner 
  • Any algorithm developed will be misaligned with, or even at odds with, the way business users make decisions
  • There's no basis for talking to IT about the other elements of the decision-making process that will need to be in place (such as eligibility or suitability rules), and no engagement of IT in collaboratively building a solution

Our DecisionsFirst™ approach begins data science initiatives the right way, building a decision model with the business owners. It then uses this decision model to drive the implementation forward in parallel, capturing and automating the business rules for the decisions that can and should be automated. Business engagement, IT engagement and clear framing for the data science team all help avoid unnecessary analytic failures.
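To make that concrete, here is a minimal sketch of what "capturing and automating the business rules" alongside a model might look like. It is an illustration only: the names (Applicant, is_eligible, score_risk, decide) and the thresholds are hypothetical assumptions, not part of DecisionsFirst™ or any specific product.

```python
# Hypothetical decision: an eligibility rule captured from a decision
# model is applied before a predictive model's score is consulted.
from dataclasses import dataclass

@dataclass
class Applicant:
    age: int
    months_as_customer: int
    requested_amount: float

def is_eligible(a: Applicant) -> bool:
    """A 'hard' business policy rule that can and should be automated."""
    return a.age >= 18 and a.months_as_customer >= 6

def score_risk(a: Applicant) -> float:
    """Placeholder for the data science team's predictive model."""
    return min(1.0, a.requested_amount / 50_000)

def decide(a: Applicant) -> str:
    """Decision logic: eligibility rules first, model score second."""
    if not is_eligible(a):
        return "decline: ineligible"  # the model is never consulted
    return "approve" if score_risk(a) < 0.7 else "refer"

print(decide(Applicant(age=30, months_as_customer=12, requested_amount=20_000)))
# -> approve
```

Framing the rule and the score as separate, explicit pieces of one decision gives business users and IT a shared, testable artifact while the data science work proceeds in parallel.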

To learn more about how leading with a focus on decisions drives success, read this whitepaper.