Saturday, March 19, 2016

The Normal Lie

Normal Lies Under the Curve

There is no such thing as “normal.”


Sadly, this is a lesson I only learned as an adult, and I think it behooves us all to make sure our kids know it: not just by telling them, but by showing them in as many ways as we can.

I could write many thousands of words about how I struggled with being “weird,” “strange,” and “not normal” when I was younger. But I suspect that we all have these types of stories: of struggling to fit in, of worrying about being too different. It’s nice to see society developing an appreciation for “weirdness,” even if such efforts can be sadly misguided.

Let me instead take a mathematical perspective. In technical terms, a normal is a line which is perpendicular to the surface it intersects. An average is “a number expressing the central or typical value in a set of data, in particular the mode, median, or (most commonly) the mean, which is calculated by dividing the sum of the values in the set by their number.” [1]

These measures are fine if your numbers are all central (that is, pretty closely grouped together); in that case, the average is a good indicator of the overall trend. The big lie is pretending our numbers are central when they aren’t. When we’re talking about measuring learning, there are many different skills to measure and many different ways of measuring them. Gardner’s multiple intelligences [2] are a great example of how we already understand that intelligence (this big, messy idea) isn’t easy to quantify and doesn’t have a single “typical” characteristic.
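To make that concrete, here is a tiny Python sketch (the scores are invented for illustration): two classes with very different results share exactly the same mean, and only the spread gives them away.

```python
from statistics import mean, median, stdev

# Two invented sets of test scores (out of 100) that share the same mean.
tight = [68, 69, 70, 70, 70, 71, 72]    # closely grouped
spread = [30, 40, 45, 90, 90, 95, 100]  # all over the place

for name, scores in [("tight", tight), ("spread", spread)]:
    print(f"{name:>6}: mean={mean(scores):.1f} "
          f"median={median(scores):.1f} stdev={stdev(scores):.1f}")
# Both means are 70.0, but the two groups tell very different stories.
```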

The Average Family?

The mathematical model doesn't work on humans.


Instead, “average” or “normal” intelligence is simply a line of best fit that approximates what we have measured. Once the spread of the data gets large, the average becomes a theoretical rather than a practical piece of information. After all, who can have 1.1 children? The average can tell us a bit about a population, but nothing about the individual families that make up the statistic.
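Here is the 1.1-children problem in miniature (the household counts are made up): the mean is a perfectly reasonable summary of the group, and a description of precisely nobody in it.

```python
from statistics import mean

# Invented counts of children in ten households.
children_per_family = [0, 0, 0, 1, 1, 1, 1, 2, 2, 3]

avg = mean(children_per_family)
matches = sum(1 for c in children_per_family if c == avg)

print(f"average children per family: {avg}")        # 1.1
print(f"families with exactly {avg} children: {matches}")  # 0
```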

Normalizing against a population tells us more about the assessment itself than about any individual student. As an educator, I love using assessment to gauge my own efficacy at the macro level, but for individual students I much prefer comparing growth over time. A grade of 70% can tell you much more about a student’s ability when compared to their earlier results: Have they shown improvement? Are they regressing? Is the measure consistent? If not, what was different about this assessment?
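As a rough sketch of what that growth-over-time comparison might look like (the student records and the growth measure here are invented for illustration), the same 70% reads very differently depending on the student’s own history:

```python
# Hypothetical student records: each list is one student's results over time.
student_history = {
    "Student A": [52, 58, 63, 70],  # 70 is a personal best after steady gains
    "Student B": [88, 85, 79, 70],  # the same 70, but part of a downward slide
}

def growth(scores):
    """Latest result minus the average of that student's earlier results."""
    earlier = scores[:-1]
    return scores[-1] - sum(earlier) / len(earlier)

for student, scores in student_history.items():
    print(f"{student}: latest={scores[-1]} growth={growth(scores):+.1f}")
# Same latest mark, opposite trajectories: +12.3 for A, -14.0 for B.
```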

These are all time-intensive questions, but they can inform teaching and learning in a deep and meaningful way. In my experience, however, the time spent creating and marking group-wide assessments eats into the time available for reflecting on them.


There are so many problems with standardized assessments.


I could write an awful lot about standardized testing in education, but I'd much rather watch this video. (Warning: some content in the video may be inappropriate.) Jump to the 12:13 mark for yet another reason why these assessments are an invalid indicator of student learning (aside from notable issues with the selection of evaluators).



If we’re differentiating learning for our students, why are we assessing them against a normalized measure? Should we not be differentiating our assessment as well? When we’re looking for consistency, we should be thinking in terms of a longitudinal sample for each child rather than a population sample. Statisticians would riot in the streets, and parents would likely have a hard time letting go of what they experienced in school, but sometimes change is a good thing, even if people complain about it.

Further Reading

Willis, John O., Ron Dumont, and Alan S. Kaufman. "Factor-Analytic Models of Intelligence." In The Cambridge Handbook of Intelligence (2011): 39–57.

1 comment:

  1. Your style and passion for blogging is contagious. Thank you for sharing this way!
