Monitoring in-production ML models at large scale using Amazon SageMaker Model Monitor
Machine learning (ML) models are impacting business decisions of organizations around the globe, from retail and financial services to autonomous vehicles and space exploration. For these organizations, training and deploying ML models into production is only one step towards achieving business goals. Model performance may degrade over time for several reasons, such as changing consumer purchase patterns in the retail industry and changing economic conditions in the financial industry. Degrading model quality has a negative impact on business outcomes. To proactively address this problem, monitoring the performance of a deployed model is a critical process. Continuous monitoring of production models allows you to identify the right time and frequency to retrain and update the model. Retraining too frequently can be unnecessarily expensive, whereas not retraining often enough can result in less-than-optimal predictions from your model.
Amazon SageMaker is a fully managed service that enables developers and data scientists to quickly and easily build, train, and deploy ML models at any scale. After you train an ML model, you can deploy it on SageMaker endpoints that are fully managed and can serve inferences in real time with low latency. After you deploy your model, you can use Amazon SageMaker Model Monitor to continuously monitor the quality of your ML model in real time. You can also configure alerts to notify and trigger actions if any drift in model performance is observed. Early and proactive detection of these deviations enables you to take corrective actions, such as collecting new ground truth training data, retraining models, and auditing upstream systems, without having to manually monitor models or build additional tooling.
In this post, we discuss monitoring the quality of a classification model through classification metrics like accuracy, precision, and more.
Solution overview
The following diagram illustrates the high-level workflow of Model Monitor. You start with an endpoint to monitor and configure a fraction of inference data to be captured in real time and stored in an Amazon Simple Storage Service (Amazon S3) bucket of your choice. Model Monitor allows you to capture both input data sent to an endpoint and predictions made by the model. After that, you can create a baseline job to generate statistical rules and constraints that serve as the basis for your model analysis later. Then, you define a monitoring job and attach it to an endpoint through a schedule.
Model Monitor starts monitoring jobs to analyze the model prediction data collected during a given period. For monitoring model performance characteristics such as accuracy or precision in real time, Model Monitor allows you to ingest the ground truth labels collected from your applications. Model Monitor automatically merges the ground truth information with prediction data to compute the model performance metrics.
Model Monitor offers four different types of monitoring capabilities to detect and mitigate model drift in real time:
- Data quality – Helps detect changes in the statistical properties of independent variables and alerts you when a drift is detected.
- Model quality – Monitors model performance characteristics such as accuracy and precision in real time and alerts you when there is a degradation in model performance.
- Model bias – Helps you identify unwanted bias in your ML models and notifies you when bias is detected.
- Model explainability – Detects drift in feature attributions and alerts you when there is a change in the relative importance of individual features.
For more information, see Amazon SageMaker Model Monitor.
The rest of this post dives into a notebook with the various steps involved in monitoring a pre-trained and deployed XGBoost customer churn binary classification model. You can use a similar approach for monitoring a regression model for increased error rates.
For detailed notebooks on other Model Monitor capabilities, see the data drift and bias notebook examples on GitHub.
Beyond the steps discussed in this post, the notebook contains additional steps, such as importing libraries, setting up AWS Identity and Access Management (IAM) permissions, and defining utility functions, which this post doesn't cover. You can walk through and run the code with the notebook in the GitHub repo.
Monitoring model quality
To monitor our model quality, we complete two high-level steps:
- Deploy a pre-trained model with data capture enabled
- Generate a baseline for model quality performance
Deploying a pre-trained model
In this step, you deploy a pre-trained XGBoost churn prediction model to a SageMaker endpoint. The model was trained using the XGB Churn Prediction Notebook. If you have a pre-trained model that you want to monitor, you can use your own model in this step.
- Upload a trained model artifact to an S3 bucket:
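The exact code is in the notebook; a minimal sketch with the SageMaker Python SDK might look like the following, where the bucket, prefix, and local file name are placeholders:

```python
import sagemaker
from sagemaker.s3 import S3Uploader

session = sagemaker.Session()
bucket = session.default_bucket()                    # placeholder bucket
prefix = "sagemaker/DEMO-xgb-churn-model-monitor"    # placeholder prefix

# Upload the pre-trained model artifact to S3; the local path is an assumption
model_url = S3Uploader.upload(
    local_path="model/xgb-churn-prediction-model.tar.gz",
    desired_s3_uri=f"s3://{bucket}/{prefix}",
)
print(model_url)
```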
You should see output similar to the following code:
- Create a SageMaker model object:
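A sketch of this step, assuming the built-in XGBoost inference container (the container version is an assumption):

```python
import sagemaker
from sagemaker import image_uris
from sagemaker.model import Model

role = sagemaker.get_execution_role()

# Built-in XGBoost serving image; the version is an assumption
image_uri = image_uris.retrieve(
    framework="xgboost", region=session.boto_region_name, version="1.2-1"
)

model = Model(
    image_uri=image_uri,
    model_data=model_url,        # S3 URI of the uploaded model artifact
    role=role,
    sagemaker_session=session,
)
```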
- Create a variable to specify the data capture parameters. To enable data capture for monitoring the model data quality, you specify the capture option called DataCaptureConfig. You can capture the request payload, the response payload, or both with this configuration.
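A sketch of configuring data capture and deploying the model; the sampling percentage, instance type, and endpoint name are assumptions:

```python
from sagemaker.model_monitor import DataCaptureConfig

endpoint_name = "xgb-churn-model-quality-demo"       # assumed endpoint name
s3_capture_upload_path = f"s3://{bucket}/{prefix}/datacapture"

data_capture_config = DataCaptureConfig(
    enable_capture=True,
    sampling_percentage=100,                         # capture every request for the demo
    destination_s3_uri=s3_capture_upload_path,
)

model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",                    # assumed instance type
    endpoint_name=endpoint_name,
    data_capture_config=data_capture_config,
)
```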
- Create the SageMaker Predictor object from the endpoint to use for invoking the model:
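A sketch, assuming the endpoint accepts CSV input:

```python
from sagemaker.predictor import Predictor
from sagemaker.serializers import CSVSerializer

predictor = Predictor(
    endpoint_name=endpoint_name,
    sagemaker_session=session,
    serializer=CSVSerializer(),      # send requests as CSV rows
)
```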
Generating a baseline for model quality performance
In this step, you generate a model quality baseline that you can continuously monitor model quality against. To generate the baseline, you first invoke the endpoint created earlier using validation data; the predictions the deployed model makes on this validation data serve as the baseline dataset. You can use either the training or validation dataset to create the baseline. You then use Model Monitor to run a baseline job that computes model performance data and suggests model quality constraints based on the baseline dataset.
- Invoke the endpoint with the following code:
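A minimal sketch, assuming a local validation.csv whose first column is the label and whose remaining columns are the features (the file name and layout are assumptions):

```python
# Send each validation record to the endpoint and collect the predicted churn probabilities
predictions = []
with open("validation.csv") as f:                    # assumed local validation file
    for line in f:
        label, features = line.rstrip("\n").split(",", 1)
        response = predictor.predict(features)
        predictions.append(float(response.decode("utf-8")))
```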
- Examine the predictions from the model:
You see output similar to the following code:
Next, you configure a processing job to generate statistical rules and constraints (referred to as your baseline) against which the model quality drift can be detected. Model Monitor suggests a set of default baseline statistics and constraints. You can also bring in custom baseline constraints.
- Start by uploading the validation data and predictions to Amazon S3:
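The baseline job needs a dataset containing the model's probability output, the predicted label, and the ground truth label. A sketch, where the column names and file paths are assumptions (they must match the attributes passed to the baseline job later):

```python
import csv

baseline_s3_uri = f"s3://{bucket}/{prefix}/baselining"

# Combine the validation labels with the endpoint's predictions into one CSV
with open("validation.csv") as src, open("baseline.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    writer.writerow(["probability", "prediction", "label"])
    for probability, line in zip(predictions, src):
        label = line.split(",", 1)[0]
        writer.writerow([probability, int(probability > 0.5), label])

baseline_dataset_uri = S3Uploader.upload("baseline.csv", baseline_s3_uri)
```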
- Create the model quality monitor:
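A sketch of creating the monitor object; the instance settings are assumptions:

```python
from sagemaker.model_monitor import ModelQualityMonitor

model_quality_monitor = ModelQualityMonitor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",        # assumed instance settings
    volume_size_in_gb=20,
    max_runtime_in_seconds=1800,
    sagemaker_session=session,
)
```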
- Run the baseline suggestion processing job:
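A sketch of the baseline suggestion job, reusing the column names from the baseline dataset above:

```python
from sagemaker.model_monitor.dataset_format import DatasetFormat

baseline_job = model_quality_monitor.suggest_baseline(
    baseline_dataset=baseline_dataset_uri,
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri=f"s3://{bucket}/{prefix}/baselining/results",
    problem_type="BinaryClassification",
    inference_attribute="prediction",        # column holding the predicted label
    probability_attribute="probability",     # column holding the predicted probability
    ground_truth_attribute="label",          # column holding the true label
)
baseline_job.wait(logs=False)
```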
When the baseline job is complete, you can explore the generated metrics and constraints.
- View the binary classification metrics with the following code:
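A sketch of one way to inspect the suggested metrics; it assumes the statistics report contains a binary_classification_metrics section:

```python
import pandas as pd

# Baseline statistics include metrics such as accuracy, precision, recall, and f-scores
statistics = model_quality_monitor.baseline_statistics().body_dict
pd.json_normalize(statistics["binary_classification_metrics"]).T
```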
The following screenshot shows your results.
- View the constraints generated:
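Similarly, a sketch for the suggested constraints:

```python
# Each constraint pairs a metric with a threshold and a comparison operator
constraints = model_quality_monitor.suggested_constraints().body_dict
constraints
```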
From the constraints generated, you can see that Model Monitor alerts you if the recall score from your model regresses and drops below 0.571. Similarly, it alerts you when precision falls below 1.0. This may be too aggressive, but you can modify the generated constraints based on your use case and business needs.
Setting up continuous model monitoring
Now that you have the baseline of the model quality, you set up a continuous model monitoring job that monitors the quality of the deployed model against the baseline to identify model quality drift.
In addition to the generated baseline, Model Monitor needs two additional inputs: predictions made by the deployed model endpoint and the ground truth data to be provided by the model-consuming application. Because you already enabled data capture on the endpoint, prediction data is captured in Amazon S3. The ground truth data depends on what your model is predicting and what the business use case is. In this case, because the model predicts customer churn, ground truth data indicates whether the customer actually left the company. For the purposes of this notebook, you generate synthetic data as ground truth.
- First, generate traffic to the deployed endpoint. If there is no traffic, the monitoring jobs are marked as Failed because there is no data to process. See the following code:
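A sketch of a simple traffic generator; the notebook runs something like this repeatedly, and the file name is an assumption. The InferenceId sent with each request is what lets Model Monitor later join each prediction with its ground truth label:

```python
import boto3

runtime = boto3.client("sagemaker-runtime")

with open("validation.csv") as f:
    for i, line in enumerate(f):
        payload = line.rstrip("\n").split(",", 1)[1]   # drop the label column
        runtime.invoke_endpoint(
            EndpointName=endpoint_name,
            ContentType="text/csv",
            Body=payload,
            InferenceId=str(i),                        # used to merge with ground truth later
        )
```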
- View the data captured with the following code:
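A sketch that lists the capture files (it can take a few minutes for files to appear in Amazon S3):

```python
from sagemaker.s3 import S3Downloader

capture_files = S3Downloader.list(s3_capture_upload_path)
print("\n".join(capture_files))
```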
You see output similar to the following:
- View the contents of a single file:
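And a sketch for reading one capture file, which stores one JSON line per request with the captured input and output:

```python
# Each line is a JSON record with captureData (endpointInput/endpointOutput) and eventMetadata
print(S3Downloader.read_file(capture_files[-1]))
```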
You see output similar to the following:
Next, you generate synthetic ground truth. Model Monitor allows you to ingest the ground truth data collected periodically from your application and merge it with prediction data to compute model performance metrics. You can periodically upload the ground truth labels to Amazon S3 as they arrive. Model Monitor automatically merges the ground truth with prediction data and evaluates model performance against ground truth. The merged data is stored in Amazon S3 and can be accessed later for retraining your models. You can encrypt the data in this bucket and configure fine-grained security, access control mechanisms, and data retention policies.
- Enter the following code to generate ground truth in the way that the SageMaker first party merge container expects:
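A sketch of generating synthetic ground truth; the record layout follows the format the merge container expects (an eventId matching the InferenceId sent earlier plus the label), the churn rate is arbitrary, and the record count is an assumption that should match the traffic you generated:

```python
import json
import random
from datetime import datetime, timezone

ground_truth_upload_path = f"s3://{bucket}/{prefix}/ground-truth"
num_records = 334                          # assumption: match the number of requests sent

def ground_truth_record(inference_id):
    random.seed(inference_id)              # deterministic, purely synthetic labels
    return {
        "groundTruthData": {
            "data": "1" if random.random() < 0.2 else "0",   # ~20% churn, arbitrary
            "encoding": "CSV",
        },
        "eventMetadata": {"eventId": str(inference_id)},
        "eventVersion": "0",
    }

# Ground truth is expected under an hourly yyyy/mm/dd/hh prefix in UTC
records = "\n".join(json.dumps(ground_truth_record(i)) for i in range(num_records))
now = datetime.now(timezone.utc)
S3Uploader.upload_string_as_file_body(
    records, f"{ground_truth_upload_path}/{now:%Y/%m/%d/%H}/records.jsonl"
)
```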
The model quality job fails if either the data capture or ground truth data is missing.
Next, you set up a monitoring schedule that monitors the real-time performance of the model against the baseline.
- Set the name of the monitoring scheduler:
You now create the EndpointInput object. For the monitoring schedule, you need to specify how to interpret an endpoint’s output. Because the endpoint in this notebook outputs CSV data, the following code specifies that the first column of the output, 0, contains a probability (of churn in this example). You further specify 0.5 as the cutoff used to determine a positive label (that is, predict that a customer will churn).
- Create the EndpointInput object with the following code:
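A sketch of the EndpointInput object; the container destination path is an assumption:

```python
from sagemaker.model_monitor import EndpointInput

endpoint_input = EndpointInput(
    endpoint_name=endpoint_name,
    destination="/opt/ml/processing/input_data",   # assumed path inside the processing container
    probability_attribute="0",                     # first CSV column holds the churn probability
    probability_threshold_attribute=0.5,           # cutoff for a positive (churn) label
    start_time_offset="-PT1H",                     # analyze the previous hour of captured data
    end_time_offset="-PT0H",
)
```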
- Create the monitoring schedule. You specify how frequently the monitoring job runs using ScheduleExpression. In the following code, we set the schedule to one time per hour. For MonitoringType, you specify ModelQuality.
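A sketch of the schedule creation, reusing the monitor, endpoint input, constraints, and ground truth path from earlier; the schedule name and output path are assumptions:

```python
from sagemaker.model_monitor import CronExpressionGenerator

mon_schedule_name = "xgb-churn-model-quality-schedule"   # assumed schedule name

model_quality_monitor.create_monitoring_schedule(
    monitor_schedule_name=mon_schedule_name,
    endpoint_input=endpoint_input,
    ground_truth_input=ground_truth_upload_path,
    problem_type="BinaryClassification",
    output_s3_uri=f"s3://{bucket}/{prefix}/monitoring/results",
    constraints=model_quality_monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),   # run one time per hour
    enable_cloudwatch_metrics=True,
)
```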
Each time the model quality monitoring job runs, it first runs a merge job and then a monitoring job. The merge job combines two different datasets: the inference data collected by data capture enabled on the endpoint and the ground truth data provided by your application.
- Examine a single run of the scheduled monitoring job:
- Check the violations against the baseline constraints:
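A sketch for checking the latest scheduled run and its violations, continuing with the monitor object created earlier:

```python
import pandas as pd

# Wait for at least one scheduled execution to complete before running this
executions = model_quality_monitor.list_executions()
latest_execution = executions[-1]
print(latest_execution.describe()["ProcessingJobStatus"])

# Violations are reported when a metric breaches its baseline constraint
violations = model_quality_monitor.latest_monitoring_constraint_violations()
pd.json_normalize(violations.body_dict["violations"])
```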
The following screenshot shows the various violations generated.
From this list, you can see that the false positive rate and false negative rate both exceed the thresholds in the constraints generated or modified during the baselining step. Similarly, the accuracy and precision metrics are lower than expected, indicating model quality degradation.
Analyzing model quality with Amazon CloudWatch metrics
In addition to the violations, the monitoring schedule also emits Amazon CloudWatch metrics. In this step, you view the metrics generated and set up a CloudWatch alarm to trigger when the model quality drifts from the baseline thresholds. You can also use CloudWatch alarms to trigger remedial actions such as retraining your model or updating the training dataset.
- To view the list of the CloudWatch metrics generated, enter the following code:
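A sketch using boto3; the namespace shown is what I'd expect for model quality metrics, but treat it as an assumption and confirm it on the CloudWatch console:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Assumed namespace for model quality metrics emitted by the monitoring schedule
namespace = "aws/sagemaker/Endpoints/model-metrics"

for metric in cloudwatch.list_metrics(Namespace=namespace)["Metrics"]:
    print(metric["MetricName"])
```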
You see output similar to the following:
- Create an alarm for when a specific metric doesn’t meet the threshold configured. In the following code, we create an alarm if the F2 value of the model falls below the threshold suggested by the baseline constraints:
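A sketch of such an alarm; the metric name, dimensions, and threshold are assumptions, so take the actual threshold from your suggested constraints and the dimension names from the metrics you listed above:

```python
cloudwatch.put_metric_alarm(
    AlarmName="xgb-churn-model-quality-f2-alarm",     # assumed alarm name
    MetricName="f2",                                  # assumed metric name
    Namespace=namespace,
    Statistic="Average",
    Dimensions=[                                      # assumed dimension names
        {"Name": "Endpoint", "Value": endpoint_name},
        {"Name": "MonitoringSchedule", "Value": mon_schedule_name},
    ],
    Period=3600,                                      # one datapoint per hourly run
    EvaluationPeriods=1,
    Threshold=0.6,                                    # replace with the f2 threshold from your baseline
    ComparisonOperator="LessThanOrEqualToThreshold",  # alarm when f2 drops to or below the threshold
    TreatMissingData="breaching",
)
```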
In a few minutes, you should see a CloudWatch alarm created. The alarm first shows the status Insufficient data and then changes to In alarm. You can view its status on the CloudWatch console.
After the alarm is created, you can decide what actions to take when it fires. A possible action could be updating the training data and retraining the model.
Visualizing the reports in Amazon SageMaker Studio
You can collect all the metrics that Model Monitor emits and view them in Amazon SageMaker Studio, a fully integrated development environment (IDE) for ML, so you can visually analyze your model performance without writing code or using third-party tools. You can also run ad hoc analysis on the generated reports in a SageMaker notebook instance.
The following figure shows sample metrics and charts in Studio. Run the notebook in the Studio environment to view all metrics and charts related to the customer churn example.
Conclusion
SageMaker Model Monitor is a powerful tool that enables organizations employing ML models to create a continuous monitoring and model update cycle. This post discusses the monitoring capability with a focus on monitoring the quality of a deployed ML model. The notebook included with the post provides detailed instructions for monitoring an XGBoost binary classification model, along with a view into the generated baseline constraints and the violations against them, and shows how to configure automated responses to those violations using CloudWatch alarms. This end-to-end workflow enables you to build continuous model training, monitoring, and model update pipelines. Give Model Monitor a try and leave your feedback in the comments.
About the Authors
Sireesha Muppala is an AI/ML Specialist Solutions Architect at AWS, providing guidance to customers on architecting and implementing machine learning solutions at scale. She received her Ph.D. in Computer Science from University of Colorado, Colorado Springs. In her spare time, Sireesha loves to run and hike Colorado trails.
David Nigenda is a Software Development Engineer in the Amazon SageMaker team. His current work focuses on providing useful insights on production machine learning workflows. In his spare time he tries to keep up with his kids.
Archana Padmasenan is a Senior Product Manager at Amazon SageMaker. She enjoys building products that delight customers.