The development of Bundesliga Match Fact Passing Profile, a deep dive into passing in football
This post was authored by Simon Rolfes. Simon played 288 Bundesliga games as a central midfielder, scored 41 goals, and won 26 caps for Germany. Currently, he serves as Sporting Director at Bayer 04 Leverkusen, where he oversees and develops the pro player roster, the scouting department, and the club’s youth development. Simon also writes weekly columns on Bundesliga.com about the latest Bundesliga Match Facts powered by AWS. There he offers his expertise as a former player, captain, and TV analyst to highlight the impact of advanced statistics and machine learning in the world of football. In this post, Simon analyzes the importance of some of the new Bundesliga Match Facts powered by AWS that fans can see during the 2021-2022 season. The AWS Professional Services team then details the AWS technology used behind these advanced stats.
Passing the ball is one of the most common actions on the pitch. It’s simply one player maneuvering the ball to another on his team. “Simple” is a word every coach uses daily when it comes to passing: “Keep it simple.” Yet, looking at the effect a pass can have on a match, nothing is simple.
Consider this example: in the 2020-2021 Bundesliga season, an average of 917 passes were completed per match. The records for the highest and lowest number of completed passes were both set in the same match, on Match Day 26, when Arminia Bielefeld hosted RB Leipzig. Arminia completed 152 passes compared to 865 by Leipzig. In traditional football analysis, the number of completed passes is widely seen as an indicator of team dominance. Given that, you might be surprised that Leipzig won that match only by the slightest of margins: 1:0.
Or take an individual player’s performance: in 2020-2021, the average Bundesliga player completed 86% of his passes, but individual completion rates varied from 22% to 100%. If a higher completion rate really indicates dominance, what does it mean if the player at 22% brought a striker into a scoring position with every pass, ideally bypassing several defenders each time? And what does it say if the player with the 100% rate was a defender passing the ball horizontally to another defender because he couldn’t find a teammate further up the pitch? Then again, a horizontal pass can be a great tool for offense too, opening up the field by moving the ball to the other side of the pitch.
So, there are myriad types of passes, played for a great variety of reasons. What is it then that makes a pass special, and how are players using passes in different situations to move the ball around?
The new Bundesliga Match Fact Passing Profile uncovers exactly that by providing real-time insights into the passing capabilities of all players and teams in the Bundesliga. Machine learning (ML) models trained on Amazon SageMaker analyzed nearly 2 million passes from previous Bundesliga seasons to construct an algorithm that can compute a difficulty score for each pass at any moment in time. It does this by first computing 26 pass characteristics for each pass in AWS Glue. These passing features, developed in collaboration with a group of football experts, include the distance to the receiver, the number of defending players in between, the pressure the passer is under, and many more. They're then used as input to train an ML model in SageMaker that calculates the effect of each feature on the chance that a pass is completed.
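The full set of 26 features isn't published, but the following minimal sketch illustrates how a few representative ones (pass distance, defenders near the passing lane, a pressure proxy) could be derived from positional tracking data. The data layout, thresholds, and function names are illustrative assumptions, not the production feature set.

```python
import numpy as np

def pass_features(passer_xy, target_xy, defenders_xy, opponent_dists):
    """Sketch of a few illustrative pass features derived from tracking data.

    passer_xy, target_xy : (x, y) pitch coordinates in meters
    defenders_xy         : array of shape (n_defenders, 2)
    opponent_dists       : distances (m) of opponents to the passer at pass time
    """
    passer = np.asarray(passer_xy, dtype=float)
    target = np.asarray(target_xy, dtype=float)
    defenders = np.asarray(defenders_xy, dtype=float)

    # Feature 1: distance the pass has to travel.
    pass_vector = target - passer
    pass_distance = np.linalg.norm(pass_vector)

    # Feature 2: number of defenders close to the passing lane.
    # Project each defender onto the pass line and measure the perpendicular distance.
    direction = pass_vector / (pass_distance + 1e-9)
    rel = defenders - passer
    along = rel @ direction                                  # position along the pass line
    perp = np.linalg.norm(rel - np.outer(along, direction), axis=1)
    in_lane = (along > 0) & (along < pass_distance) & (perp < 2.0)  # assumed 2 m corridor
    defenders_in_lane = int(in_lane.sum())

    # Feature 3: simple pressure proxy, opponents within an assumed 3 m radius of the passer.
    pressure = int((np.asarray(opponent_dists) < 3.0).sum())

    return {
        "pass_distance": pass_distance,
        "defenders_in_lane": defenders_in_lane,
        "pressure": pressure,
    }
```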
After the model is trained, it can estimate a difficulty score for each pass it sees. These difficulty scores can then be used in a variety of ways. For example, they can be aggregated for each player to form a passing profile, showing which passing decisions players make. Do they prioritize an offensive pass? Are they looking for a safe option by passing the ball back? Or do they seek to open up the play with a long ball? All of this is captured in the Match Fact “Passing Profile”, which shows the direction of passes and the difficulty score for each player live on television and in the Bundesliga app.
In the following section, the AWS Professional Services team, who worked with Bundesliga to bring these Match Facts to life, explains how this advanced stat came to fruition.
How does it work?
Building an ML model that can predict the difficulty of a given pass requires us to create a large dataset filled with both successful and unsuccessful passes from the past. Although much is known about successful passes (for example, the receiver, the location where the ball was controlled, the duration and distance of the pass), little is known about an unsuccessful pass because it simply didn’t reach its intended target. We therefore adopt an approach proposed by Anzer and Bauer (2021) to identify the intended receiver of an unsuccessful pass using a ball trajectory and motion model so that we can add these entries into our passing dataset.
Identifying the intended target
Although it sometimes doesn’t look like it, a ball has to adhere to the laws of physics. We can use gravity, air drag, and rolling drag to map the trajectory of a pass. With a physical model as proposed by Spearman and Basye (2017), we can use the first 0.4 seconds after a pass is played to map the entire trajectory of the ball. The physical model in the following figure estimates the trajectory based on this 0.4-second timeframe. The computed path is shown in the image on the left in orange. In this example, player 11 from the blue team attempts to initiate an attack with a pass to player 32, who is making a run on the right flank. To evaluate our physical model, we can compare the estimated ball trajectory to the actual trajectory provided by the tracking data, shown in black. Comparing both trajectories shows that the estimated trajectory is fairly close to reality. However, the model doesn’t account for the change in drag after the ball meets the pitch (which depends on weather conditions), and doesn’t consider curved balls because this information isn’t available in the tracking data.
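As a simplified illustration of such a physical model, the sketch below forward-simulates a ball path under gravity and quadratic air drag, starting from a position and velocity estimated from the first ~0.4 seconds of tracking data. The drag coefficient, time step, and function name are illustrative assumptions; spin and pitch conditions are ignored, as noted above.

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])   # gravity (m/s^2)
DRAG_COEFF = 0.02                 # illustrative quadratic air-drag coefficient (1/m)

def simulate_trajectory(p0, v0, dt=0.02, t_max=4.0):
    """Forward-simulate a ball path from an initial position/velocity estimate.

    p0, v0 : 3D position (m) and velocity (m/s) estimated from the first
             ~0.4 s of tracking data after the pass.
    Returns an array of positions sampled every dt seconds.
    """
    p, v = np.asarray(p0, float), np.asarray(v0, float)
    path = [p.copy()]
    for _ in range(int(t_max / dt)):
        accel = G - DRAG_COEFF * np.linalg.norm(v) * v   # gravity + quadratic drag
        v = v + accel * dt
        p = p + v * dt
        p[2] = max(p[2], 0.0)                            # ball cannot go below the pitch
        path.append(p.copy())
    return np.array(path)

# Example: a driven pass from midfield toward the right flank
trajectory = simulate_trajectory(p0=[0.0, 0.0, 0.2], v0=[18.0, 6.0, 2.5])
```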
After the trajectory of the ball is modeled, we know where it’s estimated to land. The next step is to calculate who could reach the ball. This is done with a motion model, which estimates the area a player can reach within a pre-defined time window and is largely based on the player’s speed and direction. To understand how players actually move, the model is compared against player movement data from the previous three Bundesliga seasons. The results can be visualized as four circles around each player, representing the area they can reach within 0.5, 1, 1.5, and 2 seconds.
Each player’s potential movement is computed and compared to the estimated landing location of the ball. Given the assumption that a ball can be controlled when it’s below 1.5 meters in height, we can estimate which player could reach the ball first. Now, to determine the intended receiver of a given pass, we combine the ball trajectory model with the motion model. If we map the trajectory of an unsuccessful pass, we can use our motion model to determine which player could have reached the ball first. This player is likely the intended receiver of the pass. We can then use this information to add the relevant data points (such as the receiver, the location where the ball would have been controlled, and the duration and distance of the pass) for unsuccessful passes to our dataset.
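The following minimal sketch combines the simulated ball path with a very simple reachability model to guess the first player who could control the ball below 1.5 meters. The real motion model is fitted to three seasons of player movement; here a flat maximum speed is assumed purely for illustration.

```python
import numpy as np

MAX_SPEED = 8.0        # assumed maximum player speed (m/s); the real model is data-driven
CONTROL_HEIGHT = 1.5   # a ball below 1.5 m is considered controllable

def intended_receiver(trajectory, player_positions, dt=0.02):
    """Return the index of the player who can reach the ball first.

    trajectory       : array (T, 3) of simulated ball positions, dt apart
    player_positions : array (n_players, 2) of player (x, y) at pass time
    """
    players = np.asarray(player_positions, float)
    for step, ball in enumerate(trajectory):
        if ball[2] > CONTROL_HEIGHT:
            continue                       # ball still too high to control
        t = step * dt
        # Distance each player can cover by time t under the simple motion model.
        reach = MAX_SPEED * t
        dists = np.linalg.norm(players - ball[:2], axis=1)
        candidates = np.where(dists <= reach)[0]
        if candidates.size:
            # First moment anyone can intercept: pick the closest of them.
            return int(candidates[np.argmin(dists[candidates])])
    return None  # no one reaches the ball (e.g., out of play)
```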
Passing difficulty
We can use ML to estimate the difficulty of each pass. We use the passing dataset containing nearly 2 million passes from previous Bundesliga seasons to train a supervised ML model that computes a pass completion probability for each of those passes. The probability is computed by finding patterns in a set of tailored features that are available at the time of a pass. These features were developed in collaboration with football experts to capture the relevant aspects impacting the difficulty of a pass. The ML algorithm decides which features truly have an impact and which are negligible. The result is a model that takes a pass and predicts its chance of completion.
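The exact algorithm used in SageMaker isn't named here, so as an illustration the sketch below trains a gradient-boosted classifier on a table of pass features to predict completion probability. The file name and column names ("passes.csv", "completed") are placeholders for the historical passing dataset.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# passes.csv is a placeholder for the historical passing dataset:
# one row per pass, the engineered features as columns, and a
# binary "completed" label (1 = pass reached its receiver).
passes = pd.read_csv("passes.csv")
features = [c for c in passes.columns if c != "completed"]

X_train, X_val, y_train, y_val = train_test_split(
    passes[features], passes["completed"], test_size=0.2, random_state=42
)

# Gradient boosting learns how each feature shifts the completion probability.
xpass_model = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05, max_depth=4)
xpass_model.fit(X_train, y_train)

# The predicted completion probability is the expected-pass (xPass) value;
# 1 - probability can be read as the difficulty of the pass.
val_probs = xpass_model.predict_proba(X_val)[:, 1]
print("Validation AUC:", roc_auc_score(y_val, val_probs))
```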
Passing profile and efficiency
We can use this passing or xPass (expected passes) model to estimate the passing profile of a player and his passing efficiency. The passing profile consists of the passing decisions a player makes: does the player look for short balls or long balls, does he pass left or right, and how difficult are the passes he attempts? We can use the xPass model to evaluate how effective a player is in his passing decisions and therefore estimate his impact on the game.
The passing profile is displayed in two ways in the live broadcast. The graphic on the left displays the direction of play a player favors in the current game, featuring the main passing direction and the distribution of passes up to that moment in time. The graphic on the right shows additional statistics that complement the passing direction, such as the number of difficult passes a player has attempted so far, their completion rate, and the ratio between short and long passes.
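The following sketch shows how per-pass difficulty scores and pass geometry could be aggregated into profile statistics like the ones shown on screen. The column names, the 0.7 difficulty threshold, and the 30-meter long-ball cutoff are assumptions for illustration, not the broadcast definitions.

```python
import numpy as np
import pandas as pd

def passing_profile(player_passes: pd.DataFrame) -> dict:
    """Aggregate one player's passes into profile statistics.

    Expected columns (illustrative): dx, dy (pass vector in meters, attacking
    direction = positive dx), completed (0/1), xpass (predicted completion prob.).
    """
    df = player_passes
    difficult = df[df["xpass"] < 0.7]                       # assumed "difficult pass" threshold
    long_passes = df[np.hypot(df["dx"], df["dy"]) > 30.0]   # assumed long-ball cutoff (m)

    return {
        "passes_attempted": len(df),
        "completion_rate": df["completed"].mean(),
        "share_forward": (df["dx"] > 0).mean(),
        "main_direction_deg": float(np.degrees(np.arctan2(df["dy"].mean(), df["dx"].mean()))),
        "difficult_passes_attempted": len(difficult),
        "difficult_completion_rate": difficult["completed"].mean() if len(difficult) else np.nan,
        "long_pass_ratio": len(long_passes) / max(len(df), 1),
    }
```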
These statistics are further explored during the end of the first half and start of the second half using a graphical comparison format. In this comparison template, two players can be compared in their passing profile, showcasing the difference in passing choices as well as completion rate.
In addition to the passing profiles, fans can also explore the passing efficiency of players across Bundesliga games. In the stats section of the Bundesliga app, viewers can see the efficiency of players by comparing their actual completed passes with their expected completed passes (as predicted by the xPass model). This provides a much more objective view of the passing capabilities of players than simply looking at the number of passes and the completion rate.
In this overview, the difficulty of a pass is taken into account. For example, let’s say we have two players who both complete two passes. Player A completes two difficult passes with an expected pass completion rate of 40%. Player B completes two simple passes with an expected completion rate of 95%. Evaluating both players using the old metrics would result in both players having completed two passes with a completion rate of 100%. With the new xPass model, we can see that Player A was expected to complete 0.8 passes (40% + 40%) but actually completed two, which results in an efficiency score of 2 – 0.8 = 1.2 passes. He is therefore over-performing by 1.2 passes. Player B completed two 95% passes, so we expected him to complete 1.9 passes. He actually completed two. This results in an over-performance of 2 – 1.9 = 0.1 passes. Player B is performing pretty much as expected, whereas Player A is putting on a top performance.
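The same arithmetic fits in a few lines of code. The following minimal sketch reproduces the Player A and Player B example; the helper name is our own.

```python
def passing_efficiency(xpass_probs, completed):
    """Over/under-performance = completed passes - expected completed passes."""
    expected = sum(xpass_probs)
    actual = sum(completed)
    return actual - expected

# Player A: two difficult passes (40% each), both completed
print(passing_efficiency([0.4, 0.4], [1, 1]))    # 2 - 0.8 = +1.2

# Player B: two simple passes (95% each), both completed
print(passing_efficiency([0.95, 0.95], [1, 1]))  # 2 - 1.9 = +0.1 (up to floating-point noise)
```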
Let’s look at an example of two players who play in the same position (right-back), Lars Bender (Bayer 04 Leverkusen) and Stefan Lainer (Borussia Mönchengladbach), on Match Day 23 of the 2020-2021 season. Looking at pure pass completion rates, Bender seems to be outperforming Lainer by completing about 90% of his passes; Lainer only completes 60% of his passes and seems to be falling behind. However, if we take a closer look, we find that Lainer plays about 80% of his passes forward. Bender, on the other hand, plays only 15% of his passes forward and seems to prioritize safe passes backward. Capturing this risk-taking behavior and spotting a player’s attacking intent wasn’t possible before with the standard metrics.
Passing profile and efficiency allow us to make comparisons between players that weren’t previously possible. They show us which players are demonstrating exceptional passing skills and which players aren’t finding their teammates.
Training the passing profile model
The passing profile model is only the tip of the iceberg; behind the scenes we need to account for several important operations, such as continuous training, continuous improvements to the model, continuous deployment of new models, model monitoring, metadata tracking, model lineage, and multi-account deployment. To address these particularities of industrializing ML models, we created training and deployment pipelines. Moreover, looking towards the future development of additional Match Facts, we invested additional time in developing reusable model training and deployment pipelines. These generic pipelines are designed and implemented using the AWS Cloud Development Kit (AWS CDK). Templatizing these pipelines ensures the consistent development of new Match Facts while reducing effort and time to market.
Our architecture spans three environments: development, staging, and production. Given the experimental nature of model training, the actual training pipeline resides in our development environment. This allows our data and ML engineers to work and experiment freely with new features and analyses.
After the team tests the new model and is satisfied with the results, we promote the model from development to staging through an approval chain (pull requests) on Bitbucket. After we test further on staging, we use the same process from staging to production to make the new model available for a live setting.
For the end-to-end workflow, we use AWS Step Functions; all the steps are defined using the AWS CDK. The AWS CDK generates an AWS CloudFormation template containing the final state definitions for the Step Functions state machine in Amazon States Language.
Using AWS CDK and Step Functions allows us to instantiate the same base training pipeline definition for different Match Facts. This setup is flexible and adapts to different Match Fact requirements. For example, we can adjust parameters in a certain step, such as the underlying type of ML algorithm. We can also add, remove, and adjust new steps without needing to change the underlying core structure of the training pipeline. In this manner, our data scientists can focus on creating the best models for the Match Facts, without the burden of creating infrastructure and handling operations.
We have two main workflows (state machines) for any given Match Fact model training pipeline instance: one for the data preprocessing pipeline, and another for the actual training pipeline. This setup avoids running the preprocessing over thousands of matches every time we want to train a new model. Therefore, we can experiment with different parameters for training the model while saving time and money on data preprocessing. Conversely, we can experiment with creating new features without needing to incur costs for training the model immediately afterwards.
The following diagram shows our data preprocessing pipeline.
The state machines consist of various jobs in AWS Glue, functions in AWS Lambda, and SageMaker jobs to provide end-to-end flexibility for our data scientists. The preprocessing workflow is responsible for the data preprocessing: the defined Lambda function (Step 1) dynamically fetches the match data from the stored match information, which is then fed to a processing job in AWS Glue (Step 2) that handles the feature extraction from the fetched raw match data. Given the nature of positional match data, there is a large amount of data to preprocess before training. Thanks to the Map state of Step Functions, we can run jobs in AWS Glue in parallel, which saves time in preprocessing. Finally, AWS Glue saves the processed match data to Amazon Simple Storage Service (Amazon S3) to be used by the model training state machine.
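As an illustration of how such a state machine can be assembled with the AWS CDK in Python, the following condensed sketch wires a Lambda step to a Map state that runs a Glue feature-extraction job once per match in parallel. The construct names, the inline Lambda body, and the Glue job name are placeholders, not the production pipeline definition.

```python
from aws_cdk import Stack, aws_lambda as lambda_
from aws_cdk import aws_stepfunctions as sfn, aws_stepfunctions_tasks as tasks
from constructs import Construct

class PreprocessingPipelineStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Step 1: a Lambda function that returns the list of matches to process (placeholder body).
        fetch_matches = lambda_.Function(
            self, "FetchMatches",
            runtime=lambda_.Runtime.PYTHON_3_9,
            handler="index.handler",
            code=lambda_.Code.from_inline(
                "def handler(event, ctx):\n    return {'matches': event.get('matches', [])}"
            ),
        )
        fetch_step = tasks.LambdaInvoke(
            self, "Fetch match data", lambda_function=fetch_matches, output_path="$.Payload"
        )

        # Step 2: run the Glue feature-extraction job once per match, in parallel.
        extract_features = tasks.GlueStartJobRun(
            self, "Extract pass features",
            glue_job_name="pass-feature-extraction",             # placeholder job name
            integration_pattern=sfn.IntegrationPattern.RUN_JOB,  # wait for the job to finish
        )
        per_match = sfn.Map(
            self, "Per-match preprocessing", items_path="$.matches", max_concurrency=10
        ).iterator(extract_features)

        sfn.StateMachine(
            self, "PreprocessingStateMachine",
            definition=fetch_step.next(per_match),
        )
```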
The following diagram illustrates our training pipeline.
The training pipeline workflow starts the training with a single AWS Glue job (Step 3) that aggregates all the processed match data from the previous step, and shuffles and splits the data into three datasets: training, validation, and test.
The training and validation datasets are used to train the model and find its best hyperparameters using SageMaker automatic model tuning (Step 4). The test dataset is used by our data scientists to evaluate and analyze the model’s outcomes and metrics; for instance, to detect problems in training such as overfitting or underfitting. The outcome of the SageMaker tuning job is the model with the best-performing hyperparameters.
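The exact estimator isn't named here, so as an illustration the following sketch tunes the built-in SageMaker XGBoost algorithm with automatic model tuning. The role ARN, S3 paths, instance type, and hyperparameter ranges are placeholders.

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter, IntegerParameter

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role ARN

# Built-in XGBoost container as an illustrative choice of algorithm.
image_uri = sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, "1.5-1")
estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/xpass/models/",   # placeholder bucket
    hyperparameters={"objective": "binary:logistic", "num_round": 300},
)

tuner = HyperparameterTuner(
    estimator,
    objective_metric_name="validation:auc",
    objective_type="Maximize",
    hyperparameter_ranges={
        "eta": ContinuousParameter(0.01, 0.3),
        "max_depth": IntegerParameter(3, 8),
        "min_child_weight": IntegerParameter(1, 10),
    },
    max_jobs=20,
    max_parallel_jobs=4,
)

# Train/validation splits produced by the aggregation Glue job (Step 3); placeholder paths.
tuner.fit({
    "train": "s3://my-bucket/xpass/train/",
    "validation": "s3://my-bucket/xpass/validation/",
})
```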
After we produce the best model, we use several Lambda functions (Step 5) to clean the output and start the process of verifying and registering the new model in the SageMaker Model Registry (Step 7). This allows us to promote the same successfully verified and tested model to the other environments, such as staging and production, while also having a conditional state that can deploy or update the corresponding SageMaker model endpoints (Step 6).
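For illustration, the registration step could look roughly like the following boto3 sketch; the model package group name, container image, and S3 path are placeholders.

```python
import boto3

sm = boto3.client("sagemaker")

# Register the tuned model as a new version in the SageMaker Model Registry (Step 7).
response = sm.create_model_package(
    ModelPackageGroupName="passing-profile-xpass",            # placeholder group name
    ModelPackageDescription="xPass model from the latest training pipeline run",
    InferenceSpecification={
        "Containers": [{
            "Image": "123456789012.dkr.ecr.eu-central-1.amazonaws.com/xgboost:1.5-1",  # placeholder
            "ModelDataUrl": "s3://my-bucket/xpass/models/best/model.tar.gz",           # placeholder
        }],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["text/csv"],
    },
    # New versions wait for the pull-request approval before they can be deployed.
    ModelApprovalStatus="PendingManualApproval",
)
print("Registered:", response["ModelPackageArn"])
```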
We train the models in one account (development) and deploy them to different accounts because there’s no need to retrain the models. The deployment pipeline (see the following diagram) allows us to move the trained ML model to other accounts and is driven by the SageMaker Model Registry and Bitbucket custom pipelines.
For governance purposes, we defined a manual approval process using pull requests that can be approved by product owners. After the pull request is approved in Bitbucket (Step 8), we perform a cross-account deployment of the desired model, via the SageMaker Model Registry, to the target environment, such as staging or production (Step 9). This gives us a single source of truth, with a consistent model that is tested from the beginning and that we can trace back to its initial release. It also provides an approval process whenever we want to release a new model to the live production environment, for example.
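As an illustration, the following boto3 sketch shows how a target account (staging or production) could deploy an approved model package from the registry as a real-time endpoint. The ARNs, resource names, and instance type are placeholders, not the production configuration.

```python
import boto3

sm = boto3.client("sagemaker")

# ARN of the approved model package shared from the development account (placeholder).
model_package_arn = "arn:aws:sagemaker:eu-central-1:111111111111:model-package/passing-profile-xpass/3"
role_arn = "arn:aws:iam::222222222222:role/SageMakerExecutionRole"  # placeholder target-account role

# Create a model in the target account straight from the registered package.
sm.create_model(
    ModelName="xpass-prod",
    ExecutionRoleArn=role_arn,
    Containers=[{"ModelPackageName": model_package_arn}],
)

# Expose it behind a real-time endpoint used for live inference.
sm.create_endpoint_config(
    EndpointConfigName="xpass-prod-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "xpass-prod",
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
    }],
)
sm.create_endpoint(EndpointName="xpass-prod", EndpointConfigName="xpass-prod-config")
```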
With this training and deployment architecture, the Passing Profile Match Fact benefits from faster modifications, faster bug fixes, faster promotion of successful experiments to other environments, and lower operational and development costs.
Summary
In this post, we demonstrated how the Bundesliga Match Fact Passing Profile makes it possible to objectively compare the difficulty of passes. We used historical data of nearly 2 million passes to build an ML model on SageMaker that computes the difficulty of a pass. The model is based on 26 factors, such as the distance the ball travels or the pressure the passer is under (for more information about pressure, see the Match Fact Most Pressed Player). We’ve also shown how to build a reusable model training pipeline and facilitate multi- and cross-account deployments of ML models with the click of a button.
Passing Profile will be on display in Bundesliga broadcasts and the Bundesliga app starting September 11, 2021. We hope you enjoy the insights this advanced stat will provide. Learn more about the partnership between AWS and Bundesliga by visiting the webpage!
About the Authors
Simon Rolfes played 288 Bundesliga games as a central midfielder, scored 41 goals, and won 26 caps for Germany. Currently, Rolfes serves as Sporting Director at Bayer 04 Leverkusen, where he oversees and develops the pro player roster, the scouting department, and the club’s youth development. Simon also writes weekly columns on Bundesliga.com about the latest Bundesliga Match Facts powered by AWS.
Luuk Figdor is a Senior Sports Technology Specialist in the AWS Professional Services team. He works with players, clubs, leagues and media companies such as the Bundesliga and Formula 1 to help them tell stories with data using machine learning. In his spare time he likes to learn all about the mind and the intersection between psychology, economics, and AI.
Gabriel Anzer is the lead data scientist at Sportec Solutions AG, a subsidiary of the DFL. He works on extracting interesting insights from football data using AI/ML for both fans and clubs. Gabriel’s background is in Mathematics and Machine Learning, but he is additionally pursuing his PhD in Sports Analytics at the University of Tübingen and working on his football coaching license.
Gabriella Hernandez Larios is a data scientist at AWS Professional Services. She works with customers across industries unveiling the power of AI/ML to achieve their business outcomes. Gabriela loves football (soccer) and in her spare time she likes to do sports like running, swimming, yoga, CrossFit and hiking.
Jakub Michalczyk is a Data Scientist at Sportec Solutions AG. Several years ago, he chose Math studies over playing football, as he came to the conclusion he was not good enough at the latter. Now he combines both these passions in his professional career by applying machine learning methods to gain a better insight into this beautiful game. In his spare time, he still enjoys playing seven-a-side football, watching crime movies, and listening to film music.
Murat Eksi is a full-stack technologist at AWS Professional Services. He has worked with various industries including finance, sports and media, gaming, manufacturing, and automotive to accelerate their business outcomes through Application Development, Security, IoT, Analytics, DevOps and Infrastructure. Outside of work, he loves traveling around the world, learning new languages while setting up local events for entrepreneurs and business owners in Stockholm. He also recently started taking flight lessons.