"Stat Gurus" Forced to Consider Possibility That the "Hot Hand" Exists

"Stat gurus" puff out their chests and declare that their proprietary methods give them a significant edge over "old school" talent evaluators, but research shows that tanking does not work precisely because of just how difficult it is for anyone--even a "stat guru" armed with reams of "advanced basketball statistics"--to predict/project future player performance. Another cherished "stat guru" assumption is that the "hot hand"--also known as being in the "zone"--does not really exist. Many people who have coached, played, or even just watched basketball believe that they can recognize when a player gets "hot"--when he is in an unstoppable "zone"--but "stat gurus" dismiss such ideas.
"Stat gurus" have been mocking the "hot hand" for decades, deriding the concept as nothing more than a figment of the imagination that reveals the inherent fallibility of evaluating players by using the "eye test." Old school basketball talent evaluators say things like "Eyeball is number one" but many "stat gurus" believe that the "eye" lies and that it is more effective to read spreadsheets than to watch games. Of course, a wise talent evaluator combines the knowledge he gains from the "eye test" with the information he gleans from pertinent statistics to paint a full picture of a player's strengths and weaknesses.
"Stat gurus" cheered when a 1985 study conducted by Thomas Gilovich, Robert Vallone, and Amos Tversky indicated that what may look like a "hot hand" is really just a random occurrence. One major problem with that study, though, is that it did not represent a meaningful sample size. The researchers focused on the shooting statistics of the Philadelphia 76ers because the 76ers were the only NBA team at that time that kept complete shot-by-shot data. That is kind of like looking for your lost keys in one small area not because that is where you think that you lost them, but because that is the only place where there is enough light to conduct your search.
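The small-sample concern is easy to demonstrate with a quick simulation: a shooter whose make probability never changes--no hot hand by construction--will still produce streaks, and in a sample the size of one team's shot log his hit rate after several straight makes can drift well above or below his true percentage purely by chance. The sketch below is illustrative only; the function name and parameters are assumptions, not anything from the studies themselves:

```python
import random

def conditional_hit_rate(p=0.5, shots=1000, streak=3, seed=42):
    """Simulate a shooter whose make probability is a constant p (i.e., no
    hot hand exists by construction), then compare his overall hit rate
    with his hit rate on shots taken right after `streak` straight makes."""
    rng = random.Random(seed)
    makes = [rng.random() < p for _ in range(shots)]
    # Collect the outcomes of shots attempted immediately after a streak.
    after_streak = [makes[i] for i in range(streak, shots)
                    if all(makes[i - streak:i])]
    overall = sum(makes) / shots
    conditional = (sum(after_streak) / len(after_streak)
                   if after_streak else float("nan"))
    return overall, conditional, len(after_streak)
```

Running this with different seeds shows the post-streak rate bouncing above and below the true 50% even though the shooter's probability never changes--which is exactly why one team's worth of shot data is too little light to search by.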
The amount of available statistical data has exploded in recent years and, after examining more than 70,000 NBA shots from the 2012-13 season, three Harvard researchers concluded that a player who has made his previous several shots is at least slightly more likely to make his next shot. Does this study conclusively prove that the "hot hand" exists? Of course not. The scientific method requires that hypotheses be repeatedly tested; Albert Einstein's Theory of Relativity is perhaps the best known and most successful theory in scientific history but researchers to this day still test Einstein's hypotheses regarding space, time and gravity. Any "stat guru" who asserts that he has created the definitive player rating system is not practicing science; he is peddling snake oil.
When I criticize the flawed reasoning employed by many "stat gurus" and when I point out the inherent limitations of "advanced basketball statistics," some people misinterpret my analysis to mean that I harbor some reflexive bias against using the best possible statistical tools to better understand basketball. My main point is that "advanced basketball statistics" should not be worshiped as some infallible bastion of truth; "stat gurus" should habitually create testable hypotheses and then see if the best, most comprehensive data that can be gathered supports or refutes those hypotheses. If Player X supposedly has a "rating" of 33.8 and is supposedly exactly 2.5 rating points better than Player Y, what is the margin of error in that rating system? If a player rating system cannot be tested objectively then it is of limited use; anyone can juggle certain basic box score numbers to create a rating system that is biased toward particular statistics at the expense of other statistics.

For instance, a player who sports a relatively high field goal percentage may be a very limited offensive player, while a player who has a relatively low field goal percentage may be a very dangerous and versatile offensive player whose skills force the opposing team to trap him. A "stat guru" who favors "efficiency" (as defined by his own preferred rating system) will be unduly swayed by the gaudy shooting percentages of an offensively challenged big man, while a shrewd talent evaluator will see that big man for who he is: a player who is dependent on other players to create his scoring opportunities.
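The margin-of-error question can be made concrete with a standard bootstrap: resample a player's games with replacement, recompute the rating each time, and report the spread of the results. The sketch below assumes a toy per-game rating list and a made-up function name; it is not any "stat guru's" actual system, just an illustration of the kind of uncertainty estimate such systems rarely publish:

```python
import random

def bootstrap_rating_ci(game_ratings, n_boot=2000, alpha=0.05, seed=0):
    """Bootstrap a (1 - alpha) confidence interval for a player's mean
    game rating by resampling his games with replacement."""
    rng = random.Random(seed)
    n = len(game_ratings)
    # Each bootstrap replicate: draw n games with replacement, average them.
    means = sorted(
        sum(rng.choice(game_ratings) for _ in range(n)) / n
        for _ in range(n_boot)
    )
    # Percentile interval from the sorted bootstrap means.
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

With a noisy handful of games, the resulting interval can easily be wider than the 2.5-point gap separating Player X from Player Y--in which case the claim that X is "exactly" 2.5 points better is not a measurement, it is noise dressed up as precision.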
Ironically, as more data about basketball is collected and analyzed, it is becoming evident that assumptions made by allegedly objective "stat gurus" are not any more trustworthy than assumptions made by supposedly subjective and/or biased observers.
posted by David Friedman @ 7:16 PM