Don't Teach Statistics in High School

In recent years, there have been more and more calls to prioritize statistics in high-school math curricula. The rationale is that unlike, say, calculus, statistics is supposedly much more applicable in everyday life. Sounds nice, but what does that really mean? Computing averages? Hell yeah, that’s useful¹! But most of the things taught in a typical (AP, A-level, IB) high-school stats course are not only of little practical value but, more importantly, misleading. Statistics is one of the very few subjects where studying it for a short time leaves you with worse intuition about the real world than not studying it at all.

It’s Real Hard

Analyzing noisy data and making decisions based on it is incredibly difficult, yet we have somehow convinced ourselves that this is something any 17-year-old should be able to do with the help of a few simplistic tools. It’s delusional, and that should be obvious by now. Even most of the people whose job is to use statistics every day don’t really know what they are doing.

In 2002, academics and students from the psychology departments of several German universities were asked to fill out a questionnaire [1]. It consisted of 6 statements about the concepts behind statistical significance in hypothesis testing; the respondents had to mark each statement as either true or false. The bar chart below shows what proportion of each group marked all 6 statements correctly:

[Bar chart. Proportion that marked all 6 statements correctly: academics teaching statistics (n = 30), 20%; academics not teaching statistics (n = 39), 10.3%; psychology students (n = 44), 0%.]

The fact that every student who had taken statistics courses made at least one mistake is not as scary as the fact that most of the instructors who teach those courses did too. But upon reflection, this isn’t surprising. Statistics courses in high schools and universities aren’t meant to develop understanding. They are all about “useful” procedures that make little sense but are supposed to make data analysis more rigorous². One such procedure has been half-jokingly dubbed the “null ritual” by Gerd Gigerenzer [2]:

  1. Set up a statistical null hypothesis of “no mean difference” or “zero correlation.” Don’t specify the predictions of your research hypothesis or of any alternative substantive hypotheses.
  2. Use 5% as a convention for rejecting the null. If significant, accept your research hypothesis. Report the result as p < 0.05, p < 0.01, or p < 0.001 (whichever comes next to the obtained p-value).
  3. Always perform this procedure.
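
To see how mechanical this is, here is a minimal Python sketch of the ritual in action. The data is made up, and scipy’s two-sample t-test merely stands in for whatever test a textbook prescribes; the point is the mindless decision rule, not the analysis.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Made-up measurements from a control and a treatment group.
    control = rng.normal(loc=100, scale=15, size=12)
    treatment = rng.normal(loc=100, scale=15, size=12)

    # Step 1: the null hypothesis is "no mean difference". No effect size,
    # no alternative substantive hypothesis. Nothing else is specified.
    p = stats.ttest_ind(control, treatment).pvalue

    # Step 2: 5% is the magic threshold; report the nearest conventional
    # bound above the obtained p-value.
    if p < 0.05:
        bound = next(b for b in (0.001, 0.01, 0.05) if p < b)
        print(f"Significant! Accept the research hypothesis (p < {bound}).")
    else:
        print(f"Not significant (p = {p:.3f}).")

    # Step 3: always perform this procedure.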

Better Than Nothing?

Misusing statistical methods is worse than not using them at all. They have become a way of presenting results based on small sample sizes as solid evidence. We know this happens all the time because a large number of studies cannot be replicated; in fact, far more of them fail than the tests’ own error rates can account for. In 2018, an effort was made to reproduce 28 famous psychology experiments [3]. Only half yielded significant results when repeated with large sample sizes.
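
To put a number on what randomness alone buys you, here is a back-of-the-envelope simulation of my own (not taken from the study): when the null is true in every study, about 5% of small studies still come out “significant”, and those “findings” evaporate under large-sample replication at exactly the same rate.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def study_pvalue(n):
        # Two groups drawn from the SAME distribution: the null is true.
        a, b = rng.normal(size=(2, n))
        return stats.ttest_ind(a, b).pvalue

    # 1,000 small studies (n = 10 per group) of a nonexistent effect.
    pvalues = [study_pvalue(10) for _ in range(1000)]
    hits = sum(p < 0.05 for p in pvalues)
    print(f"'Significant' small studies: {hits / 1000:.1%}")  # about 5%

    # Replicating each "finding" with n = 500 per group doesn't help:
    # the effect was never there, so again only ~5% come out significant.
    replicated = sum(study_pvalue(500) < 0.05 for _ in range(hits))
    print(f"Successful replications: {replicated} of {hits}")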

That replication rate is terrible. And it’s not just that these studies are wrong. Other scientists build on top of them, using them to explain their own questionable findings. Pretty soon you have people tweeting “New study shows that…” or, worse, politicians introducing new legislation because TRUST THE SCIENCE™.

Things to Teach

The fact that people who take university-level statistics courses still misuse statistics should make it obvious that this is not something we should be teaching high-school students. Sadly, that won’t change the minds of new-wave educators who are all about making math “more useful in the real world”. But teaching students instantiations of concepts rather than their abstractions is simply a bad strategy. I almost never see students who understand the theory well struggling to apply it to a specific problem in the real world. But I observe the opposite all the time: students learn some algorithm (“trick”, “hack”), they know how to use it to solve one particular problem, but when asked to explain it or to apply it in a different context, they almost always fail. Statistics education in high schools and universities is the most unfortunate example of this.

I believe math education should be all about enabling students to think abstractly. But if I had to compromise and develop a syllabus for high-school statistics, it would be very different from what we have now. It wouldn’t instruct students to use hypothesis testing or correlation coefficients because these can be easily misused if not understood properly. Instead, it would be a course about being skeptical, about always questioning conclusions based on data:

  • Could this be explained by randomness?
  • Was this discovered by testing a hypothesis or looking for a pattern in a large data set [4]?
  • Even if observed effects are significant, could they be explained by some other variable³? (See the sketch after this list.)
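
For the last question, a small simulation (entirely made-up numbers, in the spirit of footnote 3) shows how a lurking variable manufactures a correlation: temperature drives both ice cream sales and drownings, and the apparent link between the two collapses once temperature is held roughly fixed.

    import numpy as np

    rng = np.random.default_rng(1)

    # Made-up daily data: temperature drives BOTH ice cream sales and
    # drownings; neither causes the other.
    temp = rng.normal(loc=20, scale=8, size=365)
    ice_cream = 50 + 10 * temp + rng.normal(scale=30, size=365)
    drownings = np.clip(0.2 * temp + rng.normal(scale=1.5, size=365), 0, None)

    # The raw correlation looks alarming...
    print(np.corrcoef(ice_cream, drownings)[0, 1])  # strongly positive

    # ...but it weakens dramatically once temperature is held (roughly)
    # fixed: compare only days within a narrow temperature band.
    band = (temp > 18) & (temp < 22)
    print(np.corrcoef(ice_cream[band], drownings[band])[0, 1])  # much weaker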

Knowing how to explore these is not enough to perform statistical analyses, but asking such questions will certainly make students more confident navigating a world where dubious claims are made all the time. And for those who wish to use statistics the right way, a long road lies ahead, with lots of concepts to be mastered first. It will take time. But that’s OK.

References

  1. H. Haller and S. Krauss, Misinterpretations of significance: A problem students share with their teachers, Methods of Psychological Research, vol. 7, no. 1, pp. 1–20, 2002.
  2. G. Gigerenzer, Mindless statistics, The Journal of Socio-Economics, vol. 33, no. 5, pp. 587–606, 2004. doi:10.1016/j.socec.2004.09.033
  3. R. Klein, M. Vianello, F. Hasselman, B. Adams, R. Adams Jr, S. Alper, M. Aveyard, J. Axt, M. Babalola, Š. Bahník, et al., Many labs 2: Investigating variation in replicability across samples and settings, Advances in Methods and Practices in Psychological Science, vol. 1, no. 4, pp. 443–490, 2018. doi:10.1177/2515245918810225
  4. G. Smith and S. Ebrahim, Data dredging, bias, or confounding: They can all get you into the BMJ and the Friday papers, BMJ, vol. 325, no. 7378, pp. 1437–1438, 2002. doi:10.1136/bmj.325.7378.1437

  1. It’s also something that students already know how to do by the time they reach high school anyway. ↩︎

  2. I, too, used these procedures in my undergrad years, thinking that this was what science was all about. ↩︎

  3. “Oh boy, all these drownings seem to be really driving the ice cream sales!” ↩︎