20 Second Timeout is the place to find the best analysis and commentary about the NBA.

Thursday, March 08, 2012

The Strengths and Limitations of "Advanced Basketball Statistics"

Frank Herbert's masterpiece Dune series features many themes involving politics, religion and ecology/scarcity of resources but Herbert's biggest message is that it is extremely dangerous for any society to elevate one person to hero or messiah status; Paul Muad'Dib's holy war overthrew the repressive regime of the Padishah Emperor Shaddam Corrino IV but then Muad'Dib's followers--blinded with messianic dreams of his supposed infallibility and drunk with their newfound power--brought forth their own brand of tyranny. What does this have to do with basketball? There is no doubt that scouting methods and player evaluation techniques in the NBA have been improved over the past few decades and that they can and should continue to be improved. Four decades ago, the expansion Cleveland Cavaliers built their roster in no small part by relying on statistics found on the back of basketball cards; obviously, that is not a very effective scouting method or even a very effective use of basketball statistics. Within the past two decades, technology has transformed NBA scouting and game preparation and executives/coaches now have access to a proliferation of statistical information that would have been impossible to gather and organize before. I am not opposed to the use of statistics--or even "advanced basketball statistics"--to evaluate basketball players individually or collectively. I am opposed to any form of thought or analysis that lacks rigor, logic and consistency. I am opposed to theories that are presented as definitive fact without any testable hypotheses. In short, I am opposed to the way that some "stat gurus" are trying to replace previous player evaluation methods with some kind of blind, unquestioning certainty that anything that appears in a spreadsheet must be treated as holy gospel: these "stat gurus" are overthrowing the "Padishah Emperor" only to go on a holy war to wipe out any beliefs about basketball that do not rigidly conform with what appears on their spreadsheets.

"Advanced basketball statistics" can be useful as a supplement to traditional box score data and to the observations of trained scouts/coaches--but some "stat gurus" (and their media sycophants) do a disservice to their cause by overstating the meaning and reliability of their data (I suspect that legitimate researchers into basketball statistics cringe every time they read one of Henry Abbott's biased, tendentious rants). Published reports indicate that the 2011 NBA champion Dallas Mavericks used data from Roland Beech about the effectiveness of various lineup combinations to help decide how to allocate minutes during their playoff run; plus/minus numbers and adjusted plus/minus numbers for various lineups can be useful information for a coach to consider, provided that the data is from a large enough sample size and that there is some other corroborating information--such as observations about mismatches generated by certain lineups--that confirm what the data suggests. In Dallas' case, plus/minus information apparently confirmed what could also be seen visually: Dallas' playoff opponents had trouble matching up with J.J. Barea's speed and quickness. However, that data neither proved nor disproved that Barea is an All-Star caliber player or even that he is a better overall player than the players whose minutes he took; the data merely suggested that, paired with four other particular Dallas players, Barea helped Dallas to exploit certain matchup advantages against various lineups being used by opposing teams.

The problem--the tipping point where the necessary revolution overthrowing the old Emperor transforms into a bloody holy war--is when "stat gurus" who have proprietary numbers that they have created and used to sell books/articles start loudly and repeatedly proclaiming that they can precisely rank every individual player in the NBA and that their rankings are absolutely correct and completely objective while all other rankings (including those by other competing "stat gurus") are the products of sheer ignorance. Scientists who are conducting legitimate research consistently use cautious, guarded language, while "stat gurus" are often bombastic and tend to make wild, unverifiable claims about the accuracy of their formulas; Albert Einstein's theories led to the creation of technological marvels ranging from the atom bomb to GPS and yet researchers are still running experiments to verify his predictions. In contrast, many "stat gurus" devise their own personal interpretations of which box score numbers are most important in order to create "advanced basketball statistics" that have no designated margin for error and no framework providing ways to prove or disprove their validity. How naive do you have to be to think that a basketball player's value can absolutely and precisely be calculated to the tenth or hundredth of a point? You would think that these "stat gurus" would be concerned about the demonstrated fallibility of box score numbers but far too many "stat gurus" close their eyes and pretend that the basic assist, steal and blocked shot numbers that they plug into their precious "advanced" formulas are completely accurate.
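To show what a "designated margin for error" could look like, here is a hedged sketch--my own illustration, not any published "stat guru" method--that bootstraps an interval around a simple per-game rating instead of reporting a single number to the hundredth of a point. The game-by-game values are invented.

```python
import random

random.seed(0)

# Hypothetical game-by-game ratings for one player (e.g., a simple box score composite).
# The values are invented for illustration only.
game_ratings = [14.2, 9.8, 21.5, 7.1, 18.3, 12.6, 5.9, 16.4, 11.0, 19.7,
                8.8, 13.5, 22.1, 10.2, 15.9, 6.7, 17.8, 12.1, 9.4, 20.3]

def bootstrap_mean_interval(samples, n_resamples=10000, alpha=0.05):
    """Resample the games with replacement and report a percentile interval for the mean."""
    means = []
    for _ in range(n_resamples):
        resample = [random.choice(samples) for _ in samples]
        means.append(sum(resample) / len(resample))
    means.sort()
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples)]
    return sum(samples) / len(samples), lo, hi

mean, lo, hi = bootstrap_mean_interval(game_ratings)
print(f"season rating: {mean:.2f}, 95% interval roughly {lo:.2f} to {hi:.2f}")
```

With 20 games of data the interval spans several points, which is the point: declaring that a 14.12 player is definitively better than a 14.07 player is not a claim the underlying data can support.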

The basketball "stat gurus" are trying to follow in the footsteps of Bill James and the other baseball numbers crunchers who have transformed our understanding of that sport but basketball and baseball are fundamentally different games from an analytical standpoint; it would perhaps be only a slight exaggeration to say that baseball is like checkers while basketball is like chess: computers have "solved" checkers but, even though computers have become quite proficient at playing chess, computers have not come close to "solving" chess. Similarly, baseball's number crunchers have made some valuable observations about how to properly analyze that sport but basketball's "stat gurus" are lagging far behind because their task is much more complicated: baseball consists of discrete actions that can be accurately separated and measured--pitcher throws the ball, batter hits the ball, fielder catches the ball, etc.--while basketball consists of 10 players simultaneously doing a variety of things, many of which cannot be measured.

Phil Birnbaum has worked extensively with baseball statistics but after thoroughly studying "advanced basketball statistics" he concluded that they are not particularly reliable:

You know all those player evaluation statistics in basketball, like "Wins Produced," "Player Evaluation Rating," and so forth? I don't think they work. I've been thinking about it, and I don't think I trust any of them enough [to] put much faith in their results.

That's the opposite of how I feel about baseball. For baseball, if the sportswriter consensus is that player A is an excellent offensive player, but it turns out his OPS is a mediocre .700, I'm going to trust OPS. But, for basketball, if the sportswriters say a guy's good, but his "Wins Produced" is just average, I might be inclined to trust the sportswriters.

I don't think the stats work well enough to be useful.


Nick Collison is a perfect example of what Birnbaum is talking about. Collison is a plus/minus superstar but does that mean that he is an All-Star or All-NBA caliber player? No, but it could mean any number of other things:

1) Collison very effectively fills a limited role on a team that has two All-NBA players (Kevin Durant and Russell Westbrook) plus a third high quality player (James Harden) who provide scoring and shot creation.

2) Collison is much more effective than other players on his team who play his position, so when he enters the game his team does better than it does with him off of the court.

3) Collison is not better than the other power forwards on his team, but he enjoys a bigger matchup advantage against the opposing reserves he faces than the other Thunder power forwards have against the opposing power forwards they face.

4) Collison's gaudy plus/minus numbers merely reflect a lot of noise due to an insufficiently large sample size of minutes.

For the Oklahoma City Thunder, all that matters is that lineups that include Collison are very productive; "advanced basketball statistics" can be helpful for the Thunder in terms of identifying that trend and thus confirming what the coaching staff likely already had figured out by watching the games--but "stat gurus" or media members who try to extend the use of plus/minus data from one tool that can help the coaching staff to configure a playing rotation to some kind of absolute player rating system are asking far more of the data than it can rightfully be expected to provide. Plus/minus data can be noisy and is much more applicable within a team setting than on a league-wide basis; at best, Collison's numbers just suggest that he can be an effective member of certain five man units for the Thunder--but those numbers do not prove that he is a better player than a power forward on a different team who has lower plus/minus numbers.
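Point 4 above--that a gaudy plus/minus can be mostly noise--is easy to demonstrate with a toy simulation. This is my own sketch with invented parameters, not a model of Collison's actual minutes: give a reserve a true impact of zero and see how often random scoring variance alone produces an impressive-looking on-court number over a limited run of possessions.

```python
import random

random.seed(1)

POSSESSION_PAIRS = 600   # a rough, invented figure for a reserve's on-court possessions
TRIALS = 10000

def simulate_net_rating(true_edge_per_poss=0.0):
    """Net points per 100 possessions for a player whose true impact is true_edge_per_poss."""
    net = 0
    for _ in range(POSSESSION_PAIRS):
        # Crude scoring model: each possession yields 0, 2 or 3 points.
        net += random.choices([0, 2, 3], weights=[55, 35, 10])[0]   # own team
        net -= random.choices([0, 2, 3], weights=[55, 35, 10])[0]   # opponent
        net += true_edge_per_poss
    return 100.0 * net / POSSESSION_PAIRS

results = [simulate_net_rating() for _ in range(TRIALS)]
impressive = sum(1 for r in results if r >= 5.0) / TRIALS
print(f"share of zero-impact players who still show +5 or better per 100: {impressive:.1%}")
```

Under these made-up assumptions a meaningful fraction of "no impact" players still post eye-catching on-court numbers, which is why a single season of reserve minutes cannot, by itself, separate a genuine plus/minus superstar from a lucky one.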

Last year I cited Ken Pomeroy's research about the limitations of plus/minus statistics; Pomeroy concluded, "It's true plus-minus captures everything that's happening, but that includes a whole lot of random things that lead to a hoop or a stop. Things that have nothing to do with the ability of the player you want to analyze. In basketball analysis, we should be filtering out randomness, not embracing it." In my article I added the following analysis:

Pomeroy notes that because the professional season is much longer than the college season there may be "limited use" for adjusted plus/minus in the NBA but even in that case one probably needs at least two full seasons of data to make any meaningful evaluations; in other words, most of the stat-based articles (about "clutch performance," player ratings, MVP rankings, etc.) that are popping up like dandelions in an untended yard are using data sets that are far too small to form the basis for sweeping, definitive conclusions (I realize that not all of these articles are using plus/minus or advanced plus/minus data but there is even less reason to trust the accuracy of Berri's numbers or Hollinger's numbers--both of which are based on subjective formulas that can be tweaked to reach whatever conclusions the author desires--than there is to trust plus/minus data that truly is objective in some sense even if it is only potentially meaningful when the data set is very large).

As Birnbaum mentioned, similar limitations apply to the seemingly endless number of highly touted player rating systems that have popped up in recent years. A recent article suggested that Kevin Garnett's value was not properly appreciated until some "stat gurus" created numbers that proved how effective he is. Kevin Garnett was drafted straight out of high school and shortly after that he received the biggest contract in NBA history at that time, a contract so huge that it helped lead to the 1999 lockout; Garnett was a highly valued commodity many years before "stat gurus" started touting his worth. More to the point, while the "stat gurus" declared that he was the best player in the NBA during the mid-2000s the reality that we have seen since the Boston Celtics formed their "Big Three" is that Garnett has a tremendous impact defensively and he is valuable as a screener/passer offensively but he and his teams are most effective when he is surrounded by multiple perimeter players who can create their own shots and create shots for others. Garnett's lone deep playoff run in Minnesota came when he teamed up with Sam Cassell and Latrell Sprewell and his playoff runs in Boston have been aided by the offensive skills of Paul Pierce, Ray Allen and Rajon Rondo. Regardless of what the "stat gurus" think that their numbers show, Garnett is not a dominant player in the same way that Shaquille O'Neal, Tim Duncan and Kobe Bryant have been dominant players for multiple championship teams--and despite having at least three future Hall of Famers, the Celtics have won exactly one title since Garnett arrived and they do not seem likely to add to that total. In that same time period, Kobe Bryant won two championships paired with a player who had earned one All-Star selection (and had not won a single playoff game) prior to joining the Lakers and Dirk Nowitzki won a championship paired with an aging future Hall of Famer plus a cast of good role players--and Nowitzki's squad beat the "stat guru" dream team of LeBron James, Dwyane Wade and Chris Bosh. Before the "stat gurus" get too proud of themselves for allegedly discovering Kevin Garnett they might want to try to explain why the James-Wade combination has not been nearly as dominant as they predicted it would be.

Roland Beech has done some nice research about game-winning shots but, unfortunately, a lot of people borrow his data without bothering to consider his conclusion: "Ultimately though while this kind of thing is fun, it's not to my mind particularly meaningful, other than indicating that the league as a whole could probably get more efficient in 'end game' possessions...one easy place to start might be to try and be less predictable! It's nice to have a go-to guy, but when the other team knows without much doubt that a certain guy is getting the ball, it is going to be a lot easier to defend!" Beech is right on target that this data is both "fun" and "not...particularly meaningful" though I think that he is a bit harsh regarding the alleged lack of efficiency on "end game" possessions; he fails to consider two very important points: (1) since this is a small sample size the shooting percentages are disproportionately skewed downward by desperation heaves, broken plays, etc.; (2) it is very difficult to score against a set NBA defense and it is even more difficult to do so when your time is extremely limited, particularly if you need a three pointer just to tie. When the time is limited why would a coach design a play for someone other than his best player? Anyway, most people have no idea how plays work in the first place; no NBA coach is just giving the ball to one guy and saying, "Shoot it" (unless there is only enough time to catch and shoot): you give the ball to your best player because he is most capable of creating his own shot, creating a shot for someone else if he gets trapped and making free throws if he is fouled. You don't want to give the ball to someone who cannot dribble or who cannot get a shot off or who is a bad free throw shooter. When role players hit big shots it is usually after the team's best player created an opening--but if you give the ball to the role player first then you are asking him to do something he is not comfortable doing. If "stat gurus" think that "clutch shooting" percentages are low now just imagine what those percentages would look like if coaches started drawing up plays for non-ballhandlers to catch the ball at the top of the key with five seconds remaining.
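The small-sample objection can be quantified. As a rough sketch--the attempt counts below are invented, not taken from Beech's data--a simple normal-approximation interval around a "clutch" shooting percentage built on a few dozen attempts is so wide that it cannot distinguish a genuinely good shooter from an average one, while the same percentage over a full season of attempts pins the estimate down much more tightly.

```python
import math

def shooting_interval(makes, attempts, z=1.96):
    """Normal-approximation 95% interval for a shooting percentage."""
    p = makes / attempts
    margin = z * math.sqrt(p * (1 - p) / attempts)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Invented example: 10-for-35 on shots to tie or take the lead in the final 24 seconds.
p, lo, hi = shooting_interval(10, 35)
print(f"clutch sample --  observed: {p:.1%}, 95% interval roughly {lo:.1%} to {hi:.1%}")

# Compare with a full season of regular attempts at the same observed rate.
p2, lo2, hi2 = shooting_interval(400, 1400)
print(f"season sample --  observed: {p2:.1%}, 95% interval roughly {lo2:.1%} to {hi2:.1%}")
```

The 35-attempt interval spans roughly thirty percentage points; ranking players by differences far smaller than that is exactly the kind of over-interpretation Beech himself warns against.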

I have consistently maintained that Being a Clutch Player is More Significant than Just Making Clutch Shots; I have never pretended to know or even care which NBA player is the best at making last second shots--but I am perplexed that so many "stat gurus" (other than Beech) think that this is an important topic to investigate ("stat gurus" famously do not believe in the so-called "hot hand" so there is no reason for them to believe that a player will perform much differently in some arbitrarily defined "clutch" moment than at any other time); I am also amazed at the lack of intellectual rigor displayed by the conclusions that have been loudly and repeatedly stated in some quarters about this issue. Setting aside for a moment the fact that "clutch shots" have not been universally defined in terms of time remaining/score differential, regardless of how such shots are categorized they comprise just a tiny, unrepresentative portion of a player's total shot attempts--and within that small subset of "clutch shots" there are in fact many different kinds of shots that cannot reasonably be lumped together. For instance, consider two "clutch shots" that Kobe Bryant recently attempted; near the end of the fourth quarter versus Detroit, Bryant received the ball outside the three point line in the top of the key area, took two strong dribbles and drained a midrange pullup jumper to send the game into overtime; near the end of overtime, with the Lakers trailing by three and the Pistons possibly ready to foul rather than permit a three point attempt, Bryant caught the ball well behind the three point line and quickly fired a shot that missed. If you are a "stat guru" measuring "clutch shots" then you lump in Bryant's desperation three pointer with his two dribble pullup, combine it with some half court shots and other miscellaneous attempts taken against a variety of defenses with differing amounts of time on the clock and then you produce one field goal percentage that supposedly provides a definitive measurement of Bryant's "clutchness." Does anyone measure the "clutchness" of NFL quarterbacks by looking at their completion percentages on "Hail Mary" passes? This stuff is so foolish that I cannot believe that it is a topic for supposedly serious discussion; the problems with sample size are so obvious that it should be readily apparent that "clutch shot" data is, at best, a fun, frivolous stat to consider lightly, and not something that is worthy of in depth debate. If someone nails a lucky half court shot does that prove that he is "clutch"? The reality is that most shots taken in the final few seconds against a set defense are inherently low percentage shots--but it should not be surprising to anyone that in the same game Bryant calmly nailed a two dribble pullup (a shot that is a normal part of his repertoire) and then missed a twisting, rushed, long three point attempt; anyone who combines those two attempts into one "clutch shooting percentage" and takes that number seriously is an idiot.
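The lumping problem that the Detroit example illustrates can also be shown with simple arithmetic. In this hedged sketch--all of the per-shot-type percentages and attempt mixes are invented--the blended "clutch" percentage swings dramatically with the mix of shot types even though the player's skill on each shot type never changes.

```python
# Invented per-shot-type percentages for one player; only the mix changes between scenarios.
shot_types = {
    "two-dribble pullup": 0.45,
    "contested catch-and-shoot three": 0.32,
    "desperation heave / broken play": 0.05,
}

def blended_pct(mix):
    """Weighted 'clutch FG%' given a dict of shot type -> share of attempts."""
    return sum(share * shot_types[name] for name, share in mix.items())

mix_a = {"two-dribble pullup": 0.6, "contested catch-and-shoot three": 0.3,
         "desperation heave / broken play": 0.1}
mix_b = {"two-dribble pullup": 0.3, "contested catch-and-shoot three": 0.3,
         "desperation heave / broken play": 0.4}

print(f"mix A blended 'clutch FG%': {blended_pct(mix_a):.1%}")   # more normal-repertoire shots
print(f"mix B blended 'clutch FG%': {blended_pct(mix_b):.1%}")   # more heaves and broken plays
```

The same hypothetical player grades out twelve percentage points "worse" in mix B simply because more of his attempts were desperation heaves, which is why a single blended "clutch shooting percentage" says more about shot selection circumstances than about "clutchness."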

***************

Further Reading:

The Counterfeit Currency of David Berri's Wages of Wins

Economics is Not a Science, Nor is Basketball Statistical Analysis

Economics is Not a Science, Nor is Basketball Statistical Analysis, Part II

Economics is Not a Science, Nor is Basketball Statistical Analysis, Part III

The Difference Between Measuring Defense in Basketball and Baseball


posted by David Friedman @ 7:20 AM



Tuesday, March 08, 2011

MLB "Stat Guru" Phil Birnbaum Explains Why "Advanced Basketball Statistics" Don't Work

I have written several articles detailing the flawed methodologies of "advanced basketball statistics," including Economics is Not a Science, Nor is Basketball Statistical Analysis and Economics is Not a Science, Nor is Basketball Statistical Analysis, Part II. Phil Birnbaum is a "stat guru" who primarily focuses on baseball, a sport whose discrete, one on one encounters between pitchers and batters lend themselves much more readily to accurate statistical analysis than does a free flowing, five on five sport like basketball. Birnbaum has taken a look at "advanced basketball statistics" and he is not impressed by what he found:

You know all those player evaluation statistics in basketball, like "Wins Produced," "Player Evaluation Rating," and so forth? I don't think they work. I've been thinking about it, and I don't think I trust any of them enough [to] put much faith in their results.


That's the opposite of how I feel about baseball. For baseball, if the sportswriter consensus is that player A is an excellent offensive player, but it turns out his OPS is a mediocre .700, I'm going to trust OPS. But, for basketball, if the sportswriters say a guy's good, but his "Wins Produced" is just average, I might be inclined to trust the sportswriters.

I don't think the stats work well enough to be useful.

Please click on the above link and read Birnbaum's article in its entirety, because he does an excellent job of explaining exactly how difficult it is to correctly assign individual credit for team success in basketball--and Birnbaum does not even address an issue that I have brought up several times: the raw box score numbers themselves are very subjective (I have mainly focused on assists but the same could be said for blocked shots, steals and, to some degree, even rebounds, depending on how the official scorekeepers define tips, etc.).

Birnbaum cites a study by David Lewin and Dan T. Rosenbaum which shows that minutes played by players in a preceding season is at least as good a predictor of team performance in the subsequent season as the so-called "advanced basketball statistics" are. Birnbaum notes that minutes played "is probably the closest representation you can get to what the coach thinks of a player's skill," so this is an indication that--contrary to the constant bleating by "stat gurus" like Dave Berri and their media sycophants like Henry Abbott--NBA coaches actually do have some idea about what they are doing.
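For readers curious what such a comparison looks like mechanically, here is a rough sketch of the general approach--my own simplification with made-up team-level numbers, not Lewin and Rosenbaum's actual methodology or data: build one projection of next-season point differential from prior-season minutes allocations and one from a box score metric, then see which projection tracks what actually happened.

```python
# Made-up team-level data: a projection built from prior-season minutes allocations,
# a projection built from a box score metric, and the actual next-season point differential.
# None of these values are real; they only illustrate the comparison.
teams = [
    # (minutes-based projection, metric-based projection, actual next-season differential)
    (4.5, 6.2, 5.1), (2.1, 0.8, 1.9), (-3.0, -1.5, -2.6), (7.8, 9.4, 6.9),
    (-5.6, -7.2, -4.8), (0.3, 2.5, -0.4), (3.9, 1.1, 4.4), (-1.2, -3.8, -0.9),
]

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

minutes_proj = [t[0] for t in teams]
metric_proj = [t[1] for t in teams]
actual = [t[2] for t in teams]

print(f"minutes-based projection vs actual: r = {pearson(minutes_proj, actual):.2f}")
print(f"metric-based projection vs actual:  r = {pearson(metric_proj, actual):.2f}")
```

The finding Birnbaum highlights is that, on real data, the first column holds up at least as well as the second--which is precisely why he concludes that coaches' judgments are not so easily dismissed.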

Birnbaum expresses some hope that plus/minus statistics could be useful if the sample sizes are large enough but, as I previously reported, "stat guru" Ken Pomeroy has studied plus/minus stats and is very skeptical of their usefulness. Birnbaum concludes, "But, just picking up a box score or looking up standard player stats online, and trying [to figure out] from that which players are how much better than others (the approach that 'Wins Produced' and other stats take)...well, I don't think you're ever going to be able to make that work."


posted by David Friedman @ 5:24 PM

