When basketball players get hot, watch out. They’re likely to stay hot, sinking more free throws than usual. That, at least, is how many players and fans see it.
Not so fast, psychologists and behavioral economists have long contended. Beginning with a seminal 1985 study by Gilovich, Vallone and Tversky (GVT), researchers have found little statistical evidence for the hot hand in basketball. The perception of streakiness is a mass cognitive illusion, they said, the result of inferring too much from random samples.
Now, though, it turns out that it may be the researchers, not the fans, who got it wrong.
Last November, a paper published in Econometrica by two economists in Spain, Joshua B. Miller and Adam Sanjurjo, argued that GVT – and the hundreds of scholars who have cited their work over the decades – overlooked a finite sample bias in a seemingly intuitive hypothesis test. After correcting for the bias, Miller and Sanjurjo found that the data GVT used actually provide evidence for the existence of the hot hand in basketball.
Unfortunately, Miller and Sanjurjo’s paper is an arduous read – even for academics. We’ve explored the mathematics in a technical piece but summarize the findings below.
Most people know that the chance a fair coin flip comes up heads is 50%, no matter the outcome of the previous flip. However, suppose we flip a coin three times and record only the outcomes of flips that immediately follow a heads (that is, the required streak length k is 1). Surprisingly, the expected proportion of heads among the recorded flips is not 50% but about 42% (5 in 12). (See Figure 1.)
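Because the three-flip case is so small, the 5/12 figure can be checked by exhaustive enumeration. The sketch below (our illustration in Python, not from the original papers) averages the proportion of heads following a head over all eight equally likely sequences, dropping the two (TTH and TTT) in which no flip follows a head:

```python
from fractions import Fraction
from itertools import product

def prop_heads_after_heads(seq):
    """Proportion of heads among flips that immediately follow a
    head (1 = heads, 0 = tails); None if no flip qualifies."""
    follows = [seq[i] for i in range(1, len(seq)) if seq[i - 1] == 1]
    if not follows:
        return None
    return Fraction(sum(follows), len(follows))

# Average over the eight equally likely 3-flip sequences, skipping
# the two (TTH, TTT) in which no flip follows a head.
props = [p for seq in product([0, 1], repeat=3)
         if (p := prop_heads_after_heads(seq)) is not None]
print(sum(props) / len(props))  # 5/12, i.e. about 41.7%
```

The key point is that the average is taken per sequence, not per flip: short sequences in which a head is followed by a tail get the same weight as sequences full of heads, which drags the average below 50%.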
The magnitude of this downward bias tends to grow with the streak length k and to shrink as the total number of coin flips in the sequence grows. Even for a sequence of 100 coin flips, however, the bias in the proportion of heads following a streak of three heads is still substantial at about -4 percentage points (46% rather than 50%).
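The dependence on k can be computed exactly for short sequences by brute force (full enumeration is only feasible for small sequence lengths; the 100-flip figure above comes from the literature). A minimal Python sketch, using length-10 sequences as an assumed illustration:

```python
from fractions import Fraction
from itertools import product

def expected_prop(n, k):
    """Exact expected proportion of heads among flips that
    immediately follow k consecutive heads, averaged over all
    equally likely n-flip sequences with at least one such flip."""
    props = []
    for seq in product([0, 1], repeat=n):  # 1 = heads, 0 = tails
        follows = [seq[i] for i in range(k, n) if all(seq[i - k:i])]
        if follows:
            props.append(Fraction(sum(follows), len(follows)))
    return sum(props) / len(props)

for k in (1, 2, 3):
    # Each value falls below 0.50, and the shortfall grows with k
    print(k, float(expected_prop(10, k)))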
In their controlled study of 100 free throws by each of 26 players, GVT tested whether the difference between the conditional probability of a hit after three consecutive hits and that after three consecutive misses is zero. Because the bias after hits is negative while the bias after misses is positive and of roughly equal size, the two effects compound: the expected difference under the no-hot-hand null is not zero but around -8 percentage points.
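The expected value of this difference under the null (independent 50/50 shots) can be approximated by simulation. The Python sketch below is our illustration of the setup, not GVT's code: it draws many 100-shot sequences from a streak-free shooter and averages both the hit rate after three hits and the hit-minus-miss difference:

```python
import random

def prop_after_streak(shots, k, val):
    """Proportion of hits among shots that immediately follow k
    consecutive outcomes equal to val (1 = hit, 0 = miss)."""
    follows = [shots[i] for i in range(k, len(shots))
               if all(s == val for s in shots[i - k:i])]
    return sum(follows) / len(follows) if follows else None

random.seed(0)
n, k, trials = 100, 3, 50_000
after_hits, diffs = [], []
for _ in range(trials):
    shots = [random.randint(0, 1) for _ in range(n)]  # fair shooter
    ph = prop_after_streak(shots, k, 1)   # est. P(hit | 3 hits)
    pm = prop_after_streak(shots, k, 0)   # est. P(hit | 3 misses)
    if ph is not None:
        after_hits.append(ph)
    if ph is not None and pm is not None:
        diffs.append(ph - pm)

print(round(sum(after_hits) / len(after_hits), 3))  # roughly 0.46, not 0.50
print(round(sum(diffs) / len(diffs), 3))            # roughly -0.08, not 0
```

A shooter with no hot hand thus produces a difference of about -8 percentage points on average, so an observed difference near zero is itself evidence of streakiness once the bias is taken into account.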
The test based on GVT’s raw, or unprocessed, data suggests no hot hand in basketball. Simply adjusting the differences by the estimated biases, however, leads to the opposite conclusion – players can get hot.
In this case, correcting for the finite sample bias turned out to be critical. As detailed in our longer quantitative analysis, we investigated whether bias correction could reverse the result of a test designed to assess the short-term persistence of relative performance among 21 mutual funds in the Morningstar Intermediate-Term Bond category over 100 quarters from 1993 to 2017. We found that once the bias is corrected, the test supports short-run serial dependence in the relative performance of these long-surviving funds, beyond what chance alone can explain.
Note, however, that unlike a controlled basketball shooting exercise, the complexity of financial markets makes it difficult to identify variables that plausibly behave as Bernoulli trials. In addition, converting continuous data (basis points of relative performance) into binary data discards potentially useful information. This exercise therefore serves only to illustrate the bias correction methodology; it is not meant to replace more rigorous or comprehensive analysis of the topic.
In our view, the failure to recognize the finite sample bias, or its significance, relates to belief in the law of small numbers – the erroneous tendency to regard a small sample as sharing the essential characteristics of the population it is drawn from. The bias is similar in nature to a well-documented finite sample bias in time-series models, and it might have been identified earlier had researchers relied less on intuition and more on computation.
That’s easier said than done, however. In 1971, Tversky and Kahneman showed that statistical intuitions can be systematically biased and, unfortunately, that education may not make one less susceptible to these biases. The remedy is to maintain a healthy skepticism of one’s intuition and to rely on computation whenever possible.
Asset managers frequently must make decisions amid uncertainty. And we believe that identifying and rectifying our own cognitive biases has the potential to enhance PIMCO’s investment process and client outcomes. It’s one reason why we decided last year to partner with the Center for Decision Research at the University of Chicago Booth School of Business.
We all have cognitive biases. And we must avoid overconfidence in our intuition to mitigate the negative impact of biases on our decisions.
For investment professionals interested in a detailed mathematical analysis of the argument presented in this article, please read “Thoughts on the Hot-Hand Debate in Basketball.”
The PIMCO Decision Research Laboratories at the University of Chicago Booth School of Business Center for Decision Research enable academics to conduct the highest impact behavioral science experiments where people live and work. Through this innovative partnership with the University of Chicago, PIMCO supports diverse and robust research that contributes to a deeper understanding of human behavior and decision-making and helps empower leaders to make wiser choices in business and society.