Expected goals
Expected goals (xG) is a statistical metric in association football that assigns to each shot a probability of resulting in a goal. Summing these probabilities across a match, a season, or any other set of shots gives an estimate of how many goals a team or player would be expected to score from the chances created, independent of whether those chances were actually converted.
xG values are produced by statistical or machine-learning models trained on historical shot data. Implementations differ in the data they use and in which shot features are included; as a result, xG figures from different providers are not necessarily directly comparable.
The same general approach has also been applied in ice hockey analytics, where “expected goals” models have been used as an alternative to goals for evaluating team and player performance in a low-scoring sport.
Meaning
In association football, expected goals assigns each shot a value between 0 and 1 representing the estimated probability that the shot becomes a goal. The sum of these shot probabilities is an expected value for goals scored over a set of shots, so team and player xG totals are commonly reported alongside goals scored.
xG values are produced by statistical models trained on historical shot outcomes. Models typically include features describing the shot attempt and its context, such as shot location, body part used, and type of assist or phase of play, though the exact inputs and definitions depend on the data source and provider.
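As a concrete illustration of this summation, team and player xG totals are simply the sum of per-shot probabilities. A minimal Python sketch, using invented per-shot values rather than output from any real model:

```python
# Hypothetical per-shot xG values for one match; the probabilities themselves
# would come from an xG model, the figures below are illustrative only.
shots = [
    {"team": "Home", "player": "A", "xg": 0.76},  # e.g. a penalty-like chance
    {"team": "Home", "player": "B", "xg": 0.08},
    {"team": "Home", "player": "A", "xg": 0.31},
    {"team": "Away", "player": "C", "xg": 0.05},
    {"team": "Away", "player": "D", "xg": 0.12},
]

def total_xg(shots, key):
    """Sum per-shot goal probabilities, grouped by the given key."""
    totals = {}
    for shot in shots:
        totals[shot[key]] = totals.get(shot[key], 0.0) + shot["xg"]
    return totals

print(total_xg(shots, "team"))    # e.g. {'Home': 1.15, 'Away': 0.17} (up to rounding)
print(total_xg(shots, "player"))  # e.g. {'A': 1.07, 'B': 0.08, 'C': 0.05, 'D': 0.12}
```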
As a probability, an xG value of 0.3 is commonly interpreted as meaning that shots of similar characteristics would be expected to be scored around 30% of the time over many repeated instances; it is not a statement about the outcome of any single shot.
Because xG is model-based, different implementations can assign different probabilities to the same shot, particularly when they use different event definitions or additional information such as contextual or positional data.
History and application of xG
Association football
There is some debate about the origin of the term expected goals. Vic Barnett and his colleague Sarah Hilditch referred to "expected goals" in their 1993 paper that investigated the effects of artificial pitch surfaces on home team performance in association football in England. Their paper included this observation:
Quantitatively we find for the AP group about 0.15 more goals per home match than expected and, allowing for the lower than expected goals against in home matches, an excess goal difference of about 0.31 goals per home match. Over a season this yields about 3 more goals for, an improved goal difference of about 6 goals.
Jake Ensum, Richard Pollard and Samuel Taylor reported their study of data from 37 matches in the 2002 World Cup in which 930 shots and 93 goals were recorded. Their research sought "to investigate and quantify 12 factors that might affect the success of a shot". Their logistic regression identified five factors that had a significant effect on determining the success of a kicked shot: distance from the goal; angle from the goal; whether or not the player taking the shot was at least 1 m away from the nearest defender; whether or not the shot was immediately preceded by a cross; and the number of outfield players between the shot-taker and goal. They concluded "the calculation of shot probabilities allows a greater depth of analysis of shooting opportunities in comparison to recording only the number of shots". In a subsequent paper, Ensum, Pollard and Taylor combined data from the 1986 and 2002 World Cup competitions to identify three significant factors that determined the success of a kicked shot: distance from the goal; angle from the goal; and whether or not the player taking the shot was at least 1 m away from the nearest defender. More recent studies have identified similar factors as relevant for xG metrics.
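The general form of such a model can be illustrated with a logistic regression that maps shot features to a goal probability. The sketch below is not a reconstruction of the cited studies; the column names, data values and use of scikit-learn are assumptions made for the example:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row per shot, labelled 1 if it was scored.
# Column names and values are illustrative, not taken from the cited studies.
shots = pd.DataFrame({
    "distance_m":         [8, 25, 11, 16, 6, 30, 12, 20],
    "angle_deg":          [45, 10, 30, 20, 60, 8, 35, 15],
    "defender_within_1m": [0, 1, 0, 1, 0, 1, 0, 1],
    "preceded_by_cross":  [1, 0, 0, 1, 0, 0, 1, 0],
    "players_in_path":    [1, 3, 2, 2, 0, 4, 1, 3],
    "goal":               [1, 0, 1, 0, 1, 0, 0, 0],
})

features = ["distance_m", "angle_deg", "defender_within_1m",
            "preceded_by_cross", "players_in_path"]

# Fit a goal/no-goal classifier on the shot features.
model = LogisticRegression(max_iter=1000)
model.fit(shots[features], shots["goal"])

# Predicted goal probability (an xG value) for a new shot.
new_shot = pd.DataFrame([[10, 40, 0, 0, 1]], columns=features)
print(model.predict_proba(new_shot)[0, 1])
```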
Howard Hamilton proposed "a useful statistic in soccer" that "will ultimately contribute to what I call an 'expected goal value' — for any action on the field in the course of a game, the probability that said action will create a goal".
Sander Itjsma discussed "a method to assign different value to different chances created during a football match" and in doing so concluded:
we now have a system in place in order to estimate the overall value of the chances created by either team during the match. Knowing how many goals a team is expected to score from its chances is of much more value than just knowing how many attempts to score a goal were made. Other applications of this method of evaluation would be to distinguish a lack of quality attempts created from a finishing problem or to evaluate defensive and goalkeeping performances. And a third option would be to plot the balance of play during the match in terms of the quality of chances created in order to graphically represent how the balance of play evolved during the match.
Sarah Rudd discussed probable goal-scoring patterns in her use of Markov chains for tactical analysis of 123 games from the 2010-2011 English Premier League season. In a video presentation of her paper at the 2011 New England Symposium on Statistics in Sports, Rudd reported her use of analysis methods to compare "expected goals" with actual goals and her process of applying weightings to incremental actions for P outcomes.
In April 2012, Sam Green wrote about 'expected goals' in his assessment of Premier League goalscorers. He asked "So how do we quantify which areas of the pitch are the most likely to result in a goal and therefore, which shots have the highest probability of resulting in a goal?". He added:
If we can establish this metric, we can then accurately and effectively increase our chances of scoring and therefore winning matches. Similarly, we can use this data from a defensive perspective to limit the better chances by defending key areas of the pitch.
Green proposed a model to determine "a shot's probability of being on target and/or scored". With this model "we can look at each player's shots and tally up the probability of each of them being a goal to give an expected goal value".
Ice hockey
In 2004, Alan Ryder shared a methodology for the study of the quality of an ice hockey shot on goal. His discussion opened with the sentence: "Not all shots on goal are created equal". Ryder's model for the measurement of shot quality was:
- Collect the data and analyze goal probabilities for each shooting circumstance
- Build a model of goal probabilities that relies on the measured circumstance
- For each shot, determine its goal probability
- Expected Goals: EG = the sum of the goal probabilities for each shot
- Neutralize the variation in shots on goal by calculating Normalized Expected Goals
- Shot Quality Against
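Read as a recipe, these steps amount to estimating an empirical goal probability for each shooting circumstance and summing those probabilities over a set of shots. A minimal sketch, assuming shot-distance bins are the only circumstance (a simplification of the factors Ryder actually used):

```python
from collections import defaultdict

# Hypothetical league-wide shot data: (distance bin, scored?) pairs.
league_shots = [("close", 1), ("close", 0), ("close", 1), ("close", 0),
                ("mid", 0), ("mid", 1), ("mid", 0), ("mid", 0),
                ("long", 0), ("long", 0), ("long", 0), ("long", 1)]

# Steps 1-2: estimate a goal probability for each circumstance from the data.
counts, goals = defaultdict(int), defaultdict(int)
for dist_bin, scored in league_shots:
    counts[dist_bin] += 1
    goals[dist_bin] += scored
goal_prob = {b: goals[b] / counts[b] for b in counts}

# Steps 3-4: expected goals for one team = sum of per-shot probabilities.
team_shots = ["close", "close", "mid", "long", "long"]
expected_goals = sum(goal_prob[b] for b in team_shots)
print(goal_prob)       # {'close': 0.5, 'mid': 0.25, 'long': 0.25}
print(expected_goals)  # 1.75
```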
Ryder concluded:
The model to get to expected goals given the shot quality factors is simply based on the data. There are no meaningful assumptions made. The analytic methods are the classics from statistics and actuarial science. The results are therefore very credible.
In 2007, Ryder issued a product recall notice for his shot quality model. He presented “a cautionary note on the calculation of shot quality” and pointed to “data quality problems with the measurement of the quality of a hockey team’s shots taken and allowed”.
He reported:
I have been worried that there is a systemic bias in the data. Random errors don’t concern me. They even out over large volumes of data. But I do think that... the scoring in certain rinks has a bias towards longer or shorter shots, the most dominant factor in a shot quality model. And I set out to investigate that possibility.
The term 'expected goals' appeared in a paper about ice hockey performance presented by Brian Macdonald at the MIT Sloan Sports Analytics Conference in 2012. Macdonald's method for calculating expected goals was reported in the paper:
We used data from the last four full NHL seasons. For each team, the season was split into two halves. Since midseason trades and injuries can have an impact on a team’s performance, we did not use statistics from the first half of the season to predict goals in the second half. Instead, we split the season into odd and even games, and used statistics from odd games to predict goals in even games. Data from 2007-08, 2008-09, and 2009-10 was used as the training data to estimate the parameters in the model, and data from the entire 2010-11 was set aside for validating the model. The model was also validated using 10-fold cross-validation. Mean squared error of actual goals and predicted goals was our choice for measuring the performance of our models.
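The odd/even split and mean-squared-error evaluation described in the quotation can be sketched as follows; the per-game records and the simple goals-per-shot predictor are invented for illustration and are not Macdonald's actual variables or model:

```python
import numpy as np

# Hypothetical per-game records for one team over an 82-game season:
# (game number, shots for, goals for). Values are invented.
games = [(g, 28 + (g % 5), 2 + (g % 3)) for g in range(1, 83)]

# Split the season into odd- and even-numbered games, as described above.
odd = [rec for rec in games if rec[0] % 2 == 1]
even = [rec for rec in games if rec[0] % 2 == 0]

# A deliberately simple predictor: goals per shot in odd games,
# applied to shot counts in even games.
goals_per_shot = sum(g for _, _, g in odd) / sum(s for _, s, _ in odd)
predicted = np.array([s * goals_per_shot for _, s, _ in even])
actual = np.array([g for _, _, g in even])

# Mean squared error between predicted and actual goals,
# the measure mentioned in the quotation.
mse = np.mean((predicted - actual) ** 2)
print(mse)
```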
Model inputs and methods
xG models are typically trained on historical data in which each shot is labelled by whether it resulted in a goal. Many implementations rely on event data that describe the shot and its immediate context, such as distance and angle to goal, body part, type of assist, and whether the attempt was a set piece. Other approaches use synchronised positional data to incorporate spatial context, such as the locations of defenders and the goalkeeper at the time of the shot, with the aim of improving probability estimates compared with models based on event data alone.
A variety of modelling techniques have been used, ranging from logistic regression and other probabilistic classifiers to more complex machine-learning approaches. Some studies extend shot-based models by incorporating information from sequences of actions leading to the shot, reflecting the view that chance quality can depend on the build-up as well as the shot itself. Research has explored more interpretable formulations, such as Bayesian mixed models, to make the influence of shot characteristics and surrounding opponents easier to communicate to practitioners.
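As an illustration of combining event-level and positional features in a more flexible classifier, the sketch below trains a gradient-boosted model on synthetic shots; the feature set, the synthetic labelling rule and the choice of scikit-learn's HistGradientBoostingClassifier are assumptions for the example, not any provider's actual pipeline:

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

rng = np.random.default_rng(0)
n = 500

# Hypothetical shot features: event-level (distance, angle, header flag)
# plus positional context (space to nearest defender, goalkeeper depth).
X = np.column_stack([
    rng.uniform(5, 35, n),   # distance to goal (m)
    rng.uniform(5, 80, n),   # shooting angle (degrees)
    rng.integers(0, 2, n),   # header (1) or kicked shot (0)
    rng.uniform(0, 6, n),    # distance to nearest defender (m)
    rng.uniform(0, 8, n),    # goalkeeper distance off the line (m)
])

# Synthetic labels: closer shots, wider angles, more space from the nearest
# defender, and kicked (non-header) attempts score more often.
logit = 1.5 - 0.15 * X[:, 0] + 0.02 * X[:, 1] + 0.3 * X[:, 3] - 0.8 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = HistGradientBoostingClassifier().fit(X, y)
xg_values = model.predict_proba(X)[:, 1]  # one xG estimate per shot
print(xg_values[:5])
```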
Because xG is a probabilistic estimate, model evaluation is commonly framed in terms of both discrimination, how well a model separates goals from non-goals, and calibration, how well predicted probabilities align with observed scoring frequencies. Differences in underlying data and in modelling choices can therefore lead to systematic differences between xG values produced by different models for the same set of shots.
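A minimal sketch of these two evaluation angles, assuming arrays of predicted probabilities and goal/no-goal outcomes are already available (the values here are invented):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss

# Invented predictions and outcomes for illustration.
xg_pred = np.array([0.05, 0.12, 0.31, 0.76, 0.08, 0.44, 0.02, 0.19])
scored = np.array([0, 0, 1, 1, 0, 0, 0, 1])

# Discrimination: how well the model separates goals from non-goals.
print("ROC AUC:", roc_auc_score(scored, xg_pred))

# Calibration: how closely predicted probabilities match observed frequencies.
print("Brier score:", brier_score_loss(scored, xg_pred))

# A simple reliability check: mean predicted xG vs. goal rate per bin.
bins = np.digitize(xg_pred, [0.1, 0.3, 0.5])
for b in np.unique(bins):
    mask = bins == b
    print(f"bin {b}: mean xG {xg_pred[mask].mean():.2f}, "
          f"goal rate {scored[mask].mean():.2f}")
```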