ModelOps – AI Model Operationalization for the Enterprise

Over the past few years, we’ve seen a growing number of large enterprises working to scale up their use of machine learning algorithms (statistical models). Increasingly, enterprises rely on machine learning models to turn huge volumes of data into competitive insights and information. However, one big bump in the road is the ability to “operationalize” these models in order to apply them to a growing number of use cases across the enterprise. Rumor has it that 50% of models never make it into production, and those that do take a minimum of three months to deploy. This time and effort translates into real operational cost coupled with slower time to value.

The key to operationalizing models is the ability to address the critical challenges centered on governance and scale required to effectively unlock the transformational value of enterprise AI and machine learning investments. As a result, there is a lot of buzz around new “ModelOps” platforms designed to help manage the many models floating around enterprises these days. 

ModelOps represents a holistic approach for quickly and iteratively advancing models through the machine learning life cycle so they are deployed more rapidly and deliver desired business value. 

In this article, we’ll take a dive into what these platforms are all about and why they’re needed. We’ll also take a high-level view of the technologies that support this effort. We’ll consider some use case examples of where ModelOps makes a difference. And then we’ll wrap up with a short-list of important players in this space.

What is ModelOps and why is it needed?

Enterprises continue to report optimistic goals for the amount of AI adoption they expect moving forward, yet when asked to divulge how many projects were actually deployed, the adoption rate was a fraction of what was planned. Undeployed and unrefreshed models represented sizable unrealized investments. Moreover, if market conditions change, enterprises that fail to move on these investments may never realize any significant level of ROI. 

Unlike traditional software, models “decay” over time, requiring retraining with new data and transparent key performance metrics for the line of business and compliance departments. Many models also require re-running production data pipelines on a periodic basis, e.g. every month, quarter, or year.

Model performance decays over time due to changing data, code, users, system environment, and other external factors. Machine learning models must be monitored in production and retrained or redeveloped periodically. (Source: Forrester)

ModelOps represents an evolution of MLOps that goes beyond the routine deployment of machine learning models to include important features like continuous retraining, automated updating, and synchronized development of more complex machine learning models. According to Gartner, a fully operationalized analytics capability places ModelOps directly between DataOps and DevOps (see image below).

ModelOps allows analytical models to move from the data science team to the IT production team in a regular cadence of deployment and updates (validation, testing, and production) as quickly as possible while ensuring quality results. Further, it enables models to be managed and scaled to match demand, and continuously monitored to identify and remedy early signs of degradation.

ModelOps (and its MLOps subset, which focuses on ML models only) is a key capability required for successful AI/ML model operations once models have been developed. It is a discipline that is separate and apart from model development. Industry experts and analysts are recognizing that model development and model operations are different disciplines, requiring different capabilities, tools, and even teams. Gartner, in a recent article [1], states, “Platform independence: AI pipelines span multiple environments from developer notebooks to edge to data center to cloud deployments. A true ModelOps framework allows you to bring standardization and scalability across these disparate environments so that development, training, and deployment processes can run consistently and in a platform-agnostic manner.”

“AI model operationalization (ModelOps) is primarily focused on the governance and life cycle management of all AI and decision models (including models based on machine learning, knowledge graphs, rules, optimization, linguistics, and agents). In contrast to MLOps, which focuses only on the operationalization of ML models, and AIOps which is AI for IT Operations, ModelOps focuses on the operationalization of all AI and decision models.” — Gartner, Innovation Insight for ModelOps, August 2020. Farhan Choudhary, Shubhangi Vashisth, Arun Chandrasekaran, Erick Brethenoux

Here is a useful RFP template for addressing ModelOps (and MLOps) functional requirements. It is the result of interviews with several industry experts and analysts, conducted by ModelOp, a provider of ModelOps software for major enterprises. ModelOp also sponsored a recent conference, the 2021 ModelOps Summit, featuring some very timely content, sessions, and panel discussions, available on demand.

Additionally, ModelOp announced the release of the first annual “State of ModelOps 2021” report [2]. The report summarizes research into the state of model operationalization and details the challenges faced by AI-focused executives from top global financial services companies as they scale their AI initiatives. Highlights from the survey of 100 AI-focused executives from F100 and Global financial services companies include the following:

  • They have an average of 270 models in production, representing a wide range of model types
  • Their data scientists are using 5-7 different tools to develop models
  • Only 25% rate their existing processes for inventorying models in production as very effective
  • 80% say difficulty managing risk and ensuring compliance is a key barrier to AI adoption
  • 69% say improving the enforcement of AI governance processes is a key reason to invest in a ModelOps platform
  • 76% of respondents say achieving cost reductions is at least a ‘very important’ benefit of such an investment, with 42% describing it as crucial
  • 90% have or expect to have a dedicated budget for ModelOps within 12 months

Another recent report was developed by Forrester, “Introducing ModelOps to Operationalize AI.” [3] The Forrester analysts identify several types of so-called “drift” effects that enterprises need to be aware of including: data drift, prediction distribution drift, concept or business KPI drift, and explainability and fairness. AI has many moving parts, so monitoring is key to ensuring that everything is functioning as planned.

Common Problems ModelOps Can Solve

Here is a short-list of three common problems the ModelOps approach can help solve. 

One of the reasons a ModelOps strategy is needed is a characteristic of machine learning called “model decay.” All models decay, and if they are not given regular attention, performance suffers. A common failure mode: a data science team evaluates model performance during the early stages of a project (say, on a test set), sees good accuracy, and decides to move on. Unfortunately, machine learning models interact with real-life conditions, and their accuracy can degrade over time. ModelOps makes it possible to automatically detect model decay, update the model, and redeploy it to production.

ModelOps enables you to manage and scale models to meet demand and continuously monitor them to spot and fix early signs of degradation. Without ModelOps capabilities, an enterprise finds it is not able to scale and govern AI initiatives. The answer to model decay (or drift) is creating a strong approach to model stewardship in your organization. 
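As a minimal sketch of what automated decay detection might look like (the function name, window size, and threshold here are illustrative assumptions, not any vendor's API), one can compare a model's recent accuracy against its baseline:

```python
def detect_decay(accuracies, window=3, threshold=0.05):
    """Flag decay when the average accuracy of the most recent `window`
    scores drops more than `threshold` below the first window's average.

    `accuracies` is a chronological list of evaluation scores gathered
    from production monitoring (illustrative; thresholds are assumptions).
    """
    if len(accuracies) < 2 * window:
        return False  # not enough history to compare yet
    baseline = sum(accuracies[:window]) / window
    recent = sum(accuracies[-window:]) / window
    return (baseline - recent) > threshold


# A model holding steady vs. one drifting downward:
stable = detect_decay([0.90, 0.90, 0.90, 0.90, 0.90, 0.90])   # False
drifting = detect_decay([0.90, 0.91, 0.90, 0.89, 0.80, 0.79])  # True
```

A real ModelOps platform would wire a check like this into scheduled monitoring and trigger retraining automatically; the point of the sketch is only the before/after comparison.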

Another problem rests with data quality. Minute variations or shifts in data may have a substantial effect on machine learning model accuracy. It’s important to properly assess the data sources and feature variables available for use by your models so you can find answers to the following questions:

  • What are your model’s data sources?
  • What level of transparency are you willing to share with your customers in terms of decisions made with the data sets used?
  • Are you able to reproduce your feature engineering process in production?
  • How frequently are new feature variables added to the mix or changed?
  • What steps have you taken to address model bias?
  • Do you provide model explainability/interpretability?
  • Do your data violate (either directly or indirectly) any regulations?
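To make the data-quality questions above concrete, here is a hedged sketch (all names and the tolerance are illustrative assumptions) that compares per-feature means between a training batch and a production batch and flags meaningful shifts:

```python
def data_quality_report(train_rows, prod_rows, columns, shift_tol=0.10):
    """Compare per-column means between training and production batches.

    Flags any column whose mean shifted more than `shift_tol` (relative)
    from training to production. Rows are lists of numeric values aligned
    with `columns`; the tolerance is an illustrative assumption.
    """
    report = {}
    for i, col in enumerate(columns):
        t_mean = sum(r[i] for r in train_rows) / len(train_rows)
        p_mean = sum(r[i] for r in prod_rows) / len(prod_rows)
        rel_shift = abs(p_mean - t_mean) / (abs(t_mean) or 1.0)
        report[col] = {
            "train_mean": t_mean,
            "prod_mean": p_mean,
            "shifted": rel_shift > shift_tol,
        }
    return report


# "age" drifts upward in production; "income" stays put:
report = data_quality_report(
    train_rows=[[30.0, 100.0], [40.0, 100.0]],
    prod_rows=[[50.0, 100.0], [60.0, 100.0]],
    columns=["age", "income"],
)
```

Production systems would typically use distribution tests rather than raw means, but even a check this simple answers the "are my feature variables drifting?" question at a first pass.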

Lastly, another problem is time to deployment. Since the model deployment cycle can be long, it’s important to first assess how long that cycle is for your organization, and then follow up by establishing benchmarks to measure improvement. Additionally, any effort to identify best and worst practices is beneficial: break your process down into distinct steps, then measure and compare results. A ModelOps solution can help automate some of these actions.
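As a sketch of that benchmarking idea (step names are illustrative assumptions), a small timing helper can record how long each stage of the deployment cycle takes, giving you a baseline to measure improvement against:

```python
import time
from contextlib import contextmanager


@contextmanager
def timed_step(name, durations):
    """Record the wall-clock duration of a named deployment step, in seconds."""
    start = time.perf_counter()
    try:
        yield
    finally:
        durations[name] = time.perf_counter() - start


durations = {}
with timed_step("validation", durations):
    time.sleep(0.01)  # stand-in for a real validation step
with timed_step("deployment", durations):
    time.sleep(0.01)  # stand-in for pushing a model to production

# `durations` now maps each step name to its elapsed time,
# ready to be logged and compared across releases.
```

Collected over many releases, numbers like these show exactly which step of the cycle is the bottleneck.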

ModelOps Methods

Now let’s focus on what ModelOps methodologies do in support of the above needs. In essence, the ModelOps process includes automating the governance, management, and monitoring of models in production across the enterprise. 

It is important to monitor the ModelOps practice itself. ModelOps is a continuous cycle of development, testing, deployment, and monitoring; however, it can only be effective if it makes progress toward the scale and accuracy your organization requires.

It’s also important to determine the effectiveness of your ModelOps practice. Specifically, determine whether the implementation of ModelOps methods helped you achieve the scale, accuracy, and process thoroughness you need. You can set accuracy targets and track them through development, validation, and deployment for dimensions such as drift and degradation.

Further, you need to monitor the performance of each individual model. As they degrade, they will need to be retrained and redeployed. You can identify business metrics affected by the model in operation, e.g. if a model is designed to identify churn, is it having a positive effect on subscription rates? 
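Following the churn example above, here is a minimal sketch (function name and numbers are illustrative assumptions) of tying a model to a business KPI by comparing the metric's average before and after the model went live:

```python
def kpi_shift(pre_values, post_values):
    """Average of a business KPI (e.g. monthly churn rate) before vs.
    after a model went live. A negative result means the KPI dropped,
    which for churn is the desired direction.
    """
    pre = sum(pre_values) / len(pre_values)
    post = sum(post_values) / len(post_values)
    return round(post - pre, 6)


# Churn averaged 11% before the retention model and 8% after:
change = kpi_shift([0.10, 0.12, 0.11], [0.08, 0.09, 0.07])  # -0.03
```

A before/after average is only a first-order signal (seasonality and other changes confound it), but it is the kind of business metric a ModelOps dashboard should surface alongside technical accuracy.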

Although there is no hard-and-fast strategy for implementing a ModelOps framework, it’s possible to identify a short-list of basic elements:

  • Model retraining and redeployment – as model drift is encountered, you should be prepared to retrain the model using new data and then redeploy it to production. 
  • Model monitoring – it is critical to regularly monitor model performance in order to help ensure accurate results are produced. This addresses the inevitability that business conditions are constantly changing, which can render data used in the initial training process obsolete. 
  • Scalability – once deployed in production, the ModelOps framework should embrace rapid scalability as demand dictates. Model scalability is vital since large enterprises may ultimately create hundreds or thousands of models. 
  • Diversity – it is important that enterprises discover different data sets and machine learning algorithms that can solve the same business problems. Reproducibility in data experiments is imperative, and versioning each data set, algorithm, and data pipeline is important for creating the desired results. 
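The versioning point in the last bullet can be sketched with a tiny in-memory registry (class and field names are illustrative assumptions, not any vendor's schema): each registered version records a hash of its training data plus its hyperparameters, so any result can be traced back to the exact inputs that produced it.

```python
import hashlib


class ModelRegistry:
    """Minimal in-memory registry for reproducibility: every version
    stores a content hash of its dataset and a copy of its parameters."""

    def __init__(self):
        self.versions = []

    def register(self, name, dataset_bytes, params):
        entry = {
            "name": name,
            "version": len(self.versions) + 1,      # monotonically increasing
            "data_hash": hashlib.sha256(dataset_bytes).hexdigest()[:12],
            "params": dict(params),                 # defensive copy
        }
        self.versions.append(entry)
        return entry


# Two training runs of the same model on different data:
registry = ModelRegistry()
v1 = registry.register("churn-model", b"training-data-v1", {"depth": 4})
v2 = registry.register("churn-model", b"training-data-v2", {"depth": 6})
```

Real platforms persist this lineage in a database and hash code and pipelines too, but the principle is the same: no deployed version without a reproducible record of what built it.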

Use Case Examples

Now let’s take a look at one general and one specific case study that demonstrate the success of ModelOps methods. 

Financial Services  

In the financial services sector, large numbers of time-series models are used alongside stringent rules for bias and auditability. Here, ModelOps automates the model life cycle to ensure models are reliable, fair, and accurate. Such automation ensures that technical, business, and compliance metrics govern each model as it runs, monitoring for anomalies and updating the model where required without disrupting production applications. 

Domino’s Pizza

Domino’s, the largest pizza company in the world based on retail sales, is taking advantage of Datatron’s ModelOps platform to improve the performance of its AI and ML efforts to automate and standardize the deployment, monitoring, management, governance, and validation of AI models.

Domino’s was eager to have the ability to monitor models in real-time and understand how predictions were changing over time. The company understood that as the truth or the reality changes, they needed a way for their models to be refreshed with new data. They also wanted to minimize the involvement needed by their data science resources as new models were being rolled out.

As a result, Domino’s was able to constantly monitor and update models and create a multi-level view of key metrics to monitor how their models perform in production. Through automation and standardization of ML operations, the company was able to increase the efficiency of managing multiple models at scale to optimize outcomes, reduce the efforts required by data scientists and other hard-to-find IT resources, and identify areas of growth.


Key Players in the ModelOps Space

There are several companies that have become players in this new space. Many are only a couple of years old. Here is a short-list to consider, in alphabetical order:

Datatron – Production AI model management at scale

DataKitchen – Operationalizing machine learning at scale 

Domino Data Lab – ModelOps platform

ModelOp – Enterprise ModelOps platform

Modzy – ModelOps platform for enterprise and teams

Quickpath – Operationalizing machine learning models

RapidMiner – Automated ModelOps

SAS – Model Manager 

superwise.ai – Optimal and risk-free use of AI at scale


The predictive power of models in the enterprise, in conjunction with the availability of big data along with accelerated compute resources, will continue to be a source of competitive advantage for intelligent organizations. Businesses that fail to embrace the benefits of ModelOps face increasing challenges in scaling their analytics and will fall short in the marketplace.

ModelOps works to remove a key point of friction in the machine learning life cycle, helping to ensure that the investment in this technology delivers business value at a faster pace. ModelOps methodologies get machine learning solutions out of the lab and into use more quickly, giving the enterprise a distinct competitive advantage.


[1] Gartner, “Innovation Insight for ModelOps,” Farhan Choudhary, Shubhangi Vashisth, Arun Chandrasekaran, Erick Brethenoux. August 6, 2020

[2] ModelOp, “State of ModelOps 2021” report. April 15, 2021

[3] Forrester, “Introducing ModelOps to Operationalize AI,” Kjell Carlsson, PhD, Mike Gualtieri, Srividya Sridharan, Jeremy Vale. August 13, 2020

Editor’s note 7/19/2021: How to Learn More about ModelOps and MLOps

At our upcoming event this November 16th-18th in San Francisco, ODSC West 2021 will feature a plethora of talks, workshops, and training sessions on MLOps. Some highlighted sessions include:

  • Tuning Hyperparameters with Reproducible Experiments: Milecia McGregor | Senior Software Engineer | Iterative
  • MLOps… From Model to Production: Filipa Peleja, PhD | Lead Data Scientist | Levi Strauss & Co
  • Operationalization of Models Developed and Deployed in Heterogeneous Platforms: Sourav Mazumder | Data Scientist, Thought Leader, AI & ML Operationalization Leader | IBM
  • Develop and Deploy a Machine Learning Pipeline in 45 Minutes with Ploomber: Eduardo Blancas | Data Scientist | Fidelity Investments

Daniel Gutierrez, ODSC

Daniel D. Gutierrez is a practicing data scientist who’s been working with data long before the field came in vogue. As a technology journalist, he enjoys keeping a pulse on this fast-paced industry. Daniel is also an educator having taught data science, machine learning and R classes at the university level. He has authored four computer industry books on database and data science technology, including his most recent title, “Machine Learning and Data Science: An Introduction to Statistical Learning Methods with R.” Daniel holds a BS in Mathematics and Computer Science from UCLA.