Adjusted Plus-Minus (often abbreviated APM) is a basketball analytic that attempts to estimate the impact of an individual player on the scoring margin of a game by controlling for the other players on the court at any given time. The metric is derived from play-by-play data that records every substitution and possession-ending action. It was first implemented by the Dallas Mavericks in the early 2000s after owner Mark Cuban commissioned data scientists Jeff Sagarin, Wayne Winston, and Dan Rosenbaum, who developed the metric as part of their WINVAL system to aid in player salary determination.[1] In combination with other innovations, this gave the Mavericks one of the most progressive front offices in the league at the time. Since APM's creation, several derivative metrics have attempted to improve on its skeleton.[2]
The APM skeleton has been utilized so heavily since the metric's conception because, assuming a large enough sample size, it offers the most comprehensive methodology for player performance evaluation. Because APM only measures how a player affects his team's scoring margin, in effect it accounts for everything that happens on the court. More traditional metrics used for player evaluation, such as John Hollinger's Player Efficiency Rating (PER), are limited to the scope of the box score, and particularly struggle in evaluating a player's defensive contributions. Additionally, because APM is derived from the record of every possession played over the course of an NBA season, it evaluates players on a per-possession basis instead of a per-minute basis, making it agnostic to variation across teams and eras such as pace of play or league-average scoring efficiency.
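The underlying estimation can be illustrated as a least-squares regression: each row of the design matrix is a "stint" (a stretch of play with no substitutions), with +1 for home players on the court, −1 for away players, and 0 for everyone on the bench, and the target is the scoring margin over that stint. The sketch below uses simulated toy data (three-man lineups from a ten-player pool, made-up impact values) rather than real NBA play-by-play logs, purely to show the shape of the computation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_players, n_stints = 10, 500

# True (unobserved) per-100-possession impact of each player,
# centered so that impacts are relative to the pool average.
true_impact = rng.normal(0, 3, n_players)
true_impact -= true_impact.mean()

# Build stints: +1 for home players on court, -1 for away players.
# (Toy setup with 3-man lineups; real APM uses 5-man NBA lineups.)
X = np.zeros((n_stints, n_players))
y = np.zeros(n_stints)
for i in range(n_stints):
    players = rng.choice(n_players, 6, replace=False)
    X[i, players[:3]] = 1.0    # home lineup
    X[i, players[3:]] = -1.0   # away lineup
    y[i] = X[i] @ true_impact + rng.normal(0, 10)  # noisy observed margin

# Least squares recovers each player's adjusted plus-minus while
# controlling for everyone who shared the court with him. Because
# every row sums to zero, only differences between players are
# identified; lstsq returns the minimum-norm (mean-zero) solution.
apm, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With enough stints the estimates converge toward the true impacts; with few stints the noise term dominates, which is the small-sample weakness discussed below. Regularized variants (ridge regression) shrink the estimates toward zero to stabilize them.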
Adjusted Plus-Minus particularly struggles with small sample sizes and with distinguishing statistical noise from long-term trends. It takes several seasons of data for APM to paint an accurate picture of a player's impact, and while single-season figures can be generated, they often fail to pass the "laugh test," mistaking auxiliary role players for high-impact stars.[3] Because roster construction and a player's quality of play can change drastically over several seasons, this multi-season requirement harms the overall efficacy of the metric. The sample-size restriction also limits the metric's utility in determining candidates for league superlatives such as the often contentious and prestigious Most Valuable Player award. Even over large samples, the natural margin of error makes it difficult to separate role players from one another, though high-impact stars still stand clearly apart from the pack. The presence of star players can also bias the ratings of role players who spend a disproportionate amount of time on the court with them, since the star's large-scale impact inflates the apparent productivity of his teammates. APM also fails to account for the impact of coaching and the synergistic effects of roster construction, ultimately making it more an indicator of player performance than of talent level.[4]
Due to the limitations of APM, several attempts have been made to improve the methodology since its inception. Popular examples include: