Optimizing the cost of training AWS DeepRacer reinforcement learning models
AWS DeepRacer is a cloud-based 3D racing simulator, an autonomous 1/18th scale race car driven by reinforcement learning, and a global racing league. Reinforcement learning (RL), an advanced machine learning (ML) technique, enables models to learn complex behaviors without labeled training data and make short-term decisions while optimizing for longer-term goals. But as we humans can attest, learning something well takes time—and time is money. You can build and train a simple “all-wheels-on-track” model in the AWS DeepRacer console in just a couple of hours. However, if you’re building complex models involving multiple parameters, a reward function using trigonometry, or generally diving deep into RL, there are steps you can take to optimize the cost of training.
As a Senior Solutions Architect and an AWS DeepRacer PitCrew member, I rack up a lot of training time. Recently I shared tips for keeping it frugal with Blaine Sundrud, host of DeepRacer TV News. This post discusses that advice in more detail. To see the interview, check out the August 2020 Qualifiers edition of DRTV.
Also, look out for the cost-optimization article coming soon to the AWS DeepRacer Developer Guide for step-by-step procedures on these topics.
The AWS DeepRacer console provides many tools to help you get the most out of training and evaluating your RL models. After you build a model based on a reward function (the incentive plan you create for the agent, your AWS DeepRacer vehicle), you need to train it. This means letting the agent explore various actions in its environment, which for your vehicle is the track, and attempt actions that result in rewards. Over time, the agent learns the behaviors that lead to a maximum reward. That training takes machine time, and machine time costs money. My goal is to share how avoiding overtraining, validating your model, analyzing logs, using transfer learning, and creating a budget can help keep the focus on fun, not cost.
In this post, we walk you through some strategies for training better performing and more cost-effective AWS DeepRacer models:
- Avoid overtraining your RL model
- Validate each RL model early in the process
- Analyze logs to further assess success
- Experiment with transfer learning
- Create a budget using the AWS Billing and Cost Management dashboard on the AWS Management Console
Avoid overtraining your model
When training an RL model, more isn’t always better. Training longer than necessary can lead to overfitting, which means the model doesn’t adapt, or generalize well, from the environment it’s trained in to a novel environment, whether simulated or real. For AWS DeepRacer, a model that is overfit may perform well on a virtual track, but conditions like gravity, shadows on the track, the friction of the wheels on the track, wear in the gears, degradation of the battery, and even smudges on the camera lens can lead to the car running slowly or veering off a replica of that track in the real world. Even when training and racing exclusively in the AWS DeepRacer console, a model overfitted to an oval track won’t do as well on a track with s-curves. In practical terms, think of an email spam filter that has been overtrained on messages about window replacements, credit card programs, and rich relatives in foreign lands. It might do an excellent job detecting spam on those topics, but a terrible job finding spam about scam insurance plans, gutters, home food delivery, and more original get-rich-quick schemes. To learn more about overfitting, watch AWS DeepRacer League – Overfitting.
We now know overtraining that leads to overfitting isn’t a good thing, but one of the first lessons an ML practitioner learns is that undertraining isn’t good either. So how much training is enough? The key is to stop training at the point when performance stops improving. With AWS DeepRacer, the Training Reward graph shows the cumulative reward received per training episode. You can expect this graph to be volatile initially, but over time the graph should trend upwards and to the right, and, as your model starts converging, the average should flatten out. As you watch the reward graph, also keep an eye on the agent’s driving behavior during training. You should stop training when the percentage of the track the car completes is no longer improving. In the following image, you can see a sample reward graph with the “best model” indicated. When the model’s track completion progress per episode continuously reaches 100% and the reward levels out, more training will lead to overfitting, a poorly generalized model, and wasted training time.
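That stopping rule can be sketched as a simple convergence check. The function and thresholds below are illustrative, not part of the DeepRacer API; it assumes you track per-episode completion percentages yourself:

```python
def should_stop_training(completion_history, window=20, min_delta=1.0):
    """Return True when mean track completion has stopped improving.

    completion_history: per-episode track completion percentages (0-100).
    Compares the mean of the most recent `window` episodes against the
    mean of the `window` episodes before them.
    """
    if len(completion_history) < 2 * window:
        return False  # not enough episodes yet to judge convergence
    recent = sum(completion_history[-window:]) / window
    previous = sum(completion_history[-2 * window:-window]) / window
    return recent - previous < min_delta  # no meaningful improvement

# Example: completion rises, then plateaus at 100% -- time to stop.
plateaued = [50.0 + i for i in range(50)] + [100.0] * 50
print(should_stop_training(plateaued))  # True once progress levels off
```

A model still making steady progress (say, completion climbing a few percent per window) keeps returning False, so training continues only while it pays off.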
Validate your model
A reward function describes the immediate feedback, as a reward or penalty score, your model receives when your AWS DeepRacer vehicle moves from one position on the track to a new one. The function’s purpose is to encourage the vehicle to make moves along the track that reach a destination quickly, without incident or accident. A desirable move earns a higher score for the action, or target state, and an illegal or wasteful move earns a lower score. It may seem simple, but it’s easy to overlook errors in your code or find that your reward function unintentionally incentivizes undesirable moves. Validating your reward function both in theory and practice helps you avoid wasting time and money training a model that doesn’t do what you want it to do.
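As a concrete starting point, here is a minimal centerline-following reward function, in the shape the console expects: a `reward_function(params)` that returns a float, reading the documented `params` keys `all_wheels_on_track`, `track_width`, and `distance_from_center`:

```python
def reward_function(params):
    """Reward staying close to the centerline; penalize leaving the track."""
    if not params["all_wheels_on_track"]:
        return 1e-3  # near-zero reward for going off track

    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    # Reward bands: highest near the center, lower toward the edges.
    if distance_from_center <= 0.1 * track_width:
        reward = 1.0
    elif distance_from_center <= 0.25 * track_width:
        reward = 0.5
    elif distance_from_center <= 0.5 * track_width:
        reward = 0.1
    else:
        reward = 1e-3  # likely about to leave the track
    return float(reward)
```

Even a simple function like this can hide surprises, which is why the validation steps below matter.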
The validate function is similar to a Python lint tool. Choosing Validate checks the syntax of the reward function, and if successful, results in a “passed validation” message.
After checking the code, validate the performance of your reward function early and often. When first experimenting with a new reward function, train for a short period of time, such as 15 minutes, and observe the results to determine whether the reward function is performing as expected. Look at the reward results and percentage of track completion on the reward graph to see that they’re increasing (see the following example graph). If it looks like a well-performing model, you can clone that model and train for additional time, or start over with the same reward function. If the reward doesn’t improve, you can investigate and make adjustments without wasting training time and putting a dent in your pocketbook.
Analyze logs to improve efficiency
Focusing on the training graph alone does not give you a complete picture. Fortunately, AWS DeepRacer produces logs of actions taken during training. Log analysis involves a detailed look at the outputs produced by the AWS DeepRacer training job. Log analysis might involve an aggregation of the model’s performance at various locations on the track or at different speeds. Analysis often includes various kinds of visualization, such as plotting the agent’s behavior on the track, the reward values at various times or locations, or even plotting the racing line around the track to make sure you’re not oversteering and that your agent is taking the most efficient path. You can also include Python print() statements in your reward function to output interim results to the logs for each iteration of the reward function.
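For example, a few `print()` calls inside the reward function surface intermediate values in the training logs for each step. The `MY_TRACE` prefix here is just a convention to make the lines easy to grep out later; `speed` and `progress` are documented `params` keys:

```python
def reward_function(params):
    speed = params["speed"]        # current speed in m/s
    progress = params["progress"]  # percent of track completed (0-100)

    # Simple illustrative reward: faster is better, weighted by progress.
    reward = speed * (progress / 100.0)

    # Emit one tagged line per step; filter later during log analysis.
    print(f"MY_TRACE progress={progress:.1f} speed={speed:.2f} reward={reward:.4f}")
    return float(reward)
```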
Without studying the logs, you’re likely only making guesses about where to improve. It’s better to rely on data to make these adjustments. You usually get a better model sooner by studying the logs and tweaking the reward function. When you get a decent model, try conducting log analysis before investing in further training time.
The following graph is an example of plotting the racing line around a track.
For more information about log analysis, see Using Jupyter Notebook for analysing DeepRacer’s logs.
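As a sketch of that kind of visualization, suppose you have already parsed `(x, y, speed)` tuples out of the training logs (the parsing itself depends on the log format, and the values below are made up for illustration). A scatter plot colored by speed makes the racing line and braking points visible:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display required
import matplotlib.pyplot as plt

# Hypothetical values extracted from training logs: one (x, y, speed) per step.
steps = [(0.0, 0.0, 1.0), (0.5, 0.1, 1.5), (1.0, 0.4, 2.0),
         (1.4, 0.9, 1.2), (1.6, 1.5, 0.8)]

xs, ys, speeds = zip(*steps)
fig, ax = plt.subplots()
sc = ax.scatter(xs, ys, c=speeds, cmap="viridis")
fig.colorbar(sc, label="speed (m/s)")
ax.set_xlabel("x (m)")
ax.set_ylabel("y (m)")
ax.set_title("Racing line colored by speed")
fig.savefig("racing_line.png")
```

Slow points mid-corner or oscillations around the centerline show up immediately in a plot like this, long before they are obvious from the reward graph alone.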
Try transfer learning
In ML, as in life, there is no point in reinventing the wheel. Transfer learning involves relying on knowledge gained while solving one problem and applying it to a different, but related, problem. The shape of the AWS DeepRacer Convolutional Neural Network (CNN) is determined by the number of inputs (such as the cameras or LIDAR) and the outputs (such as the action space). A new model has weights set to random values, and a certain amount of training is required to converge to get a working model.
Instead of starting with random weights, you can copy an existing trained model. In the AWS DeepRacer environment, this is called cloning. Cloning works by making a deep copy of the neural network—the AWS DeepRacer CNN—including all the nodes and their weights. This can save training time and money.
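Conceptually, cloning just seeds the new training job with the old network’s weights instead of random values. This toy illustration is not the actual DeepRacer internals, only the idea of a deep copy of all node weights:

```python
import copy
import random

def random_weights(layer_sizes):
    # A fresh model: every layer starts from small random values.
    return {name: [random.gauss(0, 0.1) for _ in range(n)]
            for name, n in layer_sizes.items()}

layers = {"conv1": 8, "conv2": 8, "fc": 4}
trained_model = random_weights(layers)  # stand-in for a converged model

# Cloning: a deep copy of all nodes and their weights, used as the
# starting point for further training.
cloned_model = copy.deepcopy(trained_model)
assert cloned_model == trained_model  # identical starting weights

cloned_model["fc"][0] += 0.05  # further training updates the clone...
assert trained_model["fc"][0] != cloned_model["fc"][0]  # ...not the original
```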
The learning rate is one of the hyperparameters that controls the RL training. During each update, a portion of the new weight for each node results from the gradient-descent (or ascent) contribution, and the rest comes from the existing node weight. The learning rate controls how much a gradient-descent (or ascent) update contributes to the network weights. If you are interested in learning more about gradient descent, check out this post on optimizing deep learning.
You can use a higher learning rate to include more gradient-descent contributions for faster training, but the expected reward may not converge if the learning rate is too large. Try setting the learning rate reasonably high for the initial training. When it’s complete, clone and train the network for additional time with a reduced learning rate. This can save a significant amount of training time by allowing you to train quickly at first and then explore more slowly when you’re nearing an optimal solution.
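The effect is easy to see on a toy gradient-descent problem, with nothing DeepRacer-specific about it: a high learning rate makes fast early progress, and a lower rate on the second pass refines the result, mirroring the train-then-clone workflow:

```python
def descend(w, lr, steps):
    """Minimize f(w) = (w - 3)**2 by gradient descent; gradient is 2*(w - 3)."""
    for _ in range(steps):
        w -= lr * 2 * (w - 3)  # with lr too large (> 1 here), w diverges
    return w

w = 10.0
w = descend(w, lr=0.4, steps=10)   # phase 1: large steps toward the minimum
coarse = w
w = descend(w, lr=0.05, steps=50)  # phase 2: "cloned" run at a reduced rate
print(abs(w - 3) < 1e-3)  # fine-tuned weight sits very close to the optimum
```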
Developers often ask why they can’t modify the action space during or after cloning. Cloning produces a duplicate of the original network, so both the inputs and the action space are fixed. If you enlarge the action space, the additional output nodes have no trained connections to the other layers and no meaningful weights, so their behavior is unpredictable; nodes with weights of zero may even be effectively deactivated (recall that 0 times anything is 0). Likewise, pruning one or more nodes from the output layer drives unknown outcomes. Both situations require additional training to ensure the model works as expected, and there is no guarantee it will ever converge. Radically changing the reward function can also result in a cloned model that converges slowly or not at all, which wastes time and money.
To try transfer learning by following the steps in the AWS DeepRacer Developer Guide, see Clone a Trained Model to Start a New Training Pass.
Create a budget
So far, we’ve looked at things you can do within the RL training process to save money. Beyond those options in the AWS DeepRacer console, there is another tool in the AWS Management Console that can help you keep your spend where you want it: AWS Budgets. You can set monthly, quarterly, and annual budgets for cost, usage, reservations, and Savings Plans.
On the Cost Management page, choose Budgets and create a budget for AWS DeepRacer.
To set a budget, sign in to the console and navigate to AWS Budgets. Then select a period, effective dates, and a budget amount. Next, configure an alert so that you receive an email notification when usage exceeds a stated percentage of that budget.
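You can also define the same budget and alert programmatically. The dictionaries below follow the shapes the AWS Budgets API expects for `create_budget`; the amount, account ID, and email address are placeholders, so substitute your own:

```python
# Monthly $50 cost budget with an email alert at 80% of actual spend.
budget = {
    "BudgetName": "deepracer-monthly",
    "BudgetLimit": {"Amount": "50", "Unit": "USD"},
    "TimeUnit": "MONTHLY",
    "BudgetType": "COST",
}

alert = {
    "Notification": {
        "NotificationType": "ACTUAL",
        "ComparisonOperator": "GREATER_THAN",
        "Threshold": 80.0,  # percent of the budget amount
        "ThresholdType": "PERCENTAGE",
    },
    "Subscribers": [
        {"SubscriptionType": "EMAIL", "Address": "you@example.com"},
    ],
}

# With boto3, these would be passed to the Budgets service, for example:
# boto3.client("budgets").create_budget(
#     AccountId="123456789012",
#     Budget=budget,
#     NotificationsWithSubscribers=[alert],
# )
```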
Clean up when done
When you’re done training, evaluating, and racing, it’s good practice to shut down unneeded resources and clean up. Storage costs are minimal, but delete any models or log files you no longer need. If you used Amazon SageMaker or AWS RoboMaker directly, save and stop your notebooks and, if they’re no longer needed, delete them. Make sure you end any running training jobs in both services.
In this post, we covered several tips for optimizing your AWS DeepRacer spend, many of which apply to other ML projects as well. Try any or all of these tips to minimize your expenses while having fun learning ML by getting started in the AWS DeepRacer console today!
About the Authors
Tim O’Brien brings over 30 years of experience in information technology, security, and accounting to his customers. Tim has worked as a Senior Solutions Architect at AWS since 2018 and is focused on Machine Learning and Artificial Intelligence.
Previously, as a CTO and VP of Engineering, he led product design and technical delivery for three startups. Tim has served numerous businesses in the Pacific Northwest conducting security related activities, including data center reviews, lottery security reviews, and disaster planning.
A wordsmith, futurist, and relatively fresh recruit to the position of technical writer – AI/ML at AWS, Heather Johnston-Robinson is excited to leverage her background as a maker and educator to help people of all ages and backgrounds find and foster their spark of ingenuity with AWS DeepRacer. She recently migrated from adventures in the maker world with Foxbot Industries, Makerologist, MyOpen3D, and LEGO robotics to take on her current role at AWS.