Mike Leslie

Bad AI Is a Thing - What's Preventing Teams from Monitoring It?

We've been talking to Artificial Intelligence (AI) companies across many different verticals, and we're hearing some common, yet distinct, reasons preventing them from monitoring their Machine Learning (ML) models in production. Whatever the industry - Business Intelligence, Energy, Biotech, Fintech, etc. - the challenges around establishing the individual pieces of the MLOps toolchain are universal. Here's a quick breakdown of the factors we're seeing at play.


Universal Truth

The one thing every Data Scientist we've talked to can agree upon is that model monitoring is important. And virtually everyone also admits (sometimes rather sheepishly) that their organization isn't doing enough to make sure their production models are performing as intended. With more and more ML models reaching production and being used in the real world, the consequences of unattended models are very real. "AI gone wrong" is a favorite topic in today's media, as the public continues to fixate on the doom and gloom of robots taking over humanity rather than the very real benefits this technology is now bringing to almost every aspect of life. From discriminatory algorithms wrongly accusing families of childcare fraud to AI showing bias against women candidates, the examples of AI gone wrong seem endless.


So this raises the question: Why are companies not doing everything in their power to keep a watchful eye on their ML models?


Maturity is Key

Some industries have been deploying ML models for years now and therefore have more mature MLOps processes. For example, ML algorithms have been used in the financial industry since the 1990s for tasks such as predicting market trends and detecting fraud, so large financial institutions have a long head start in getting models into production over, say, Biotech companies just now opening the door to using ML for drug discovery. These more mature industries have learned the hard way why it's crucial to implement strict quality control measures for ML models going into deployment, and it took them years to establish the appropriate monitoring tools. Unfortunately, that's a timeframe and cost that newer AI companies surging into new frontiers can't stomach.


To In-house or Not to In-house?

We've heard it multiple times now from Data Science teams: "We've been trying to set up a model monitoring framework, but we are too busy developing the ML for our product." No one wants to spend time building and maintaining the tools that they will use to build and maintain their product. Companies trying to get ML models into production are forced to either (a) build these model monitoring tools themselves so they can have peace of mind about how their models are behaving, or (b) simply cross their fingers and pray that their models never have an issue that impacts their customers. Our research shows that scenario (a) can consume up to 30% of a Data Science team's time and cost a company hundreds of thousands of dollars annually in maintenance, and Data Scientists don't really want to be doing this kind of custodial work anyway. Scenario (b) doesn't immediately cost any money, but it presents a potentially huge risk to the end users of the product, who may be unknowingly relying on predictions from a bad model.
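
To make scenario (a) concrete, here's a minimal sketch of one small piece teams often end up building in-house: a data-drift check that compares a production feature's distribution against its training baseline using the Population Stability Index (PSI). The function name, the synthetic data, and the 0.2 alert threshold are all illustrative assumptions, not a reference to any particular team's tooling.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray,
                               production: np.ndarray,
                               bins: int = 10) -> float:
    """Return the PSI between a baseline sample and a production sample."""
    # Bin edges come from the baseline so both samples share the same bins.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_counts, _ = np.histogram(baseline, bins=edges)
    prod_counts, _ = np.histogram(production, bins=edges)

    # Convert counts to proportions; a small epsilon avoids log(0)
    # and division by zero for empty bins.
    eps = 1e-6
    base_pct = base_counts / max(base_counts.sum(), 1) + eps
    prod_pct = prod_counts / max(prod_counts.sum(), 1) + eps

    return float(np.sum((prod_pct - base_pct) * np.log(prod_pct / base_pct)))

# Hypothetical usage: flag a feature whose live distribution has shifted.
rng = np.random.default_rng(0)
training_feature = rng.normal(0.0, 1.0, 10_000)  # training-time baseline
live_feature = rng.normal(0.5, 1.2, 10_000)      # drifted production data

psi = population_stability_index(training_feature, live_feature)
if psi > 0.2:  # 0.2 is a common rule-of-thumb alert threshold
    print(f"Drift alert: PSI={psi:.3f}")
```

A check like this is trivial on its own; the cost comes from running dozens of them per model - feature drift, prediction drift, data quality, latency - plus the alerting, dashboards, and upkeep around them, which is a big part of where that 30% figure comes from.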


In Conclusion

Outsourcing more of the individual steps of the MLOps process will continue to help companies get their ML models into production faster, more cost-effectively, and, above all, more responsibly. It's typical for the early pioneers of an industry to do everything themselves, but as the industry matures it will naturally split into two camps: those building the products that serve end users, and those providing the infrastructure and tools to support them.