On Vacations and “Learning Loss”
BY STEVE NUZUM
Since the early days of the pandemic, pundits and researchers have fretted about “learning loss.” In practice, “learning loss” is better described as a decrease in, or lack of improvement on, standardized test scores. Doubtless, someone somewhere is fretting about all of the learning that may evaporate from children’s heads over the winter holidays.
There’s even a specific phrase testing companies use for the “learning loss” that allegedly happens every summer vacation: “summer slide,” the perception that students get worse at school over a long summer break. Early projections of the potential costs of virtual or hybrid schooling during the pandemic were often based on the assumption that we could map “summer slide” onto days of in-person school missed because of the pandemic.
For example, in July 2020, SC Senate Education Chair Greg Hembree claimed, “The data is telling us that in mathematics, on average, students have lost, not just that semester, but a complete year, and that in English, students have lost a complete semester.”
Hembree’s data came from an Education Oversight Committee report that actually contained no data about e-learning losses during the pandemic.
Instead, it included two graphs from an article on EdNote, dated April 27, 2020, called “The Covid Slide and What It Could Mean for Student Achievement.” The graphs “forecast” nationwide learning loss due to COVID-19 based only on English and mathematics test scores for students in grades 3-8. The authors, rather than making claims about current learning loss, merely state, “Preliminary estimates suggest impacts may be larger in math than in reading and that students may return in fall 2020 with less than 50% of learning gains, and, in some grades, nearly a full year behind what NWEA would expect in normal conditions.”
NWEA is the company that makes the MAP assessment. The company has a vested interest in promoting the idea that test scores consistently represent “learning”.
Hembree awarded the state’s teachers at the time “an A+ for effort” in teaching students, but a “D-” for “results”. But of course, this was before we knew almost anything about the impact of the pandemic on “learning”; as questionable as test scores can be as proxies for what students have actually learned (and what they have learned to do), we didn’t even have those scores yet when Hembree and others were using this “projected learning loss” to argue for a return to in-person instruction.
Hembree’s extrapolation stretched the plausible limits of learning loss data in a way that unfortunately is all too common.
First, learning loss alarmists often conflate test achievement and learning, which are, theoretically, closely related, but are objectively not the same thing. (It is possible for a student to score poorly on a standardized test even though they have mastered the content or skills the test is supposed to measure. It is possible for a student to be very good at taking standardized tests without learning the material.)
In other words, the test might be invalid. The student may perform poorly under stress. The student may fall asleep during the test. The student may be taking an electronic test on a broken computer. (Shortly after we did return to in-person learning, I watched a student take a state test on a Chromebook with a missing J key. As she was completing the essay component of the test, every time she needed to type J, she had to pick up a pencil and insert the tip of it into the keyboard.)
Second, like Hembree, the alarmists during and after the pandemic have conflated “summer slide”-style “learning loss” with an unprecedented situation that may or may not duplicate the conditions of being on summer vacation. (How similar is a global pandemic to a summer break?)
Of course, many students did face academic obstacles during the pandemic.
In particular, students with fewer resources outside of school likely struggled the most.
Past research on “learning loss” addresses this:
“One commonly cited explanation for this phenomenon is a concept called the ‘faucet theory.’ This idea suggests that children attending the same school have relatively comparable resources available to them during the school year. However, during the summer, students from higher economic backgrounds continue to have access to materials and experiences that allow them to maintain higher rates of educational retention than their peers from resource-poor backgrounds. In other words, these children continue to have a running ‘faucet’ of resources even in the summer months—an advantage that students from lower socio-economic backgrounds lack.”
Indeed, leading up to the pandemic, South Carolina officials often heard from teachers who felt ongoing resource disparities between wealthier and poorer districts needed to be addressed in order to improve educational outcomes. These were central complaints in the May 1, 2019 rally in which 10,000 public education supporters marched on the SC capitol.
I recall telling Hembree after one particular Senate hearing, about a year before the pandemic began, that some of the test data he had been sharing was racially biased (something that was later confirmed in another Education Oversight Committee report while Hembree was a committee member). His response was that he often heard that complaint, but that many of the districts with the poorest test performance had been seeing that kind of performance “for fifty years.”
Of course, it is likely that in the heavily segregated South Carolina system, the same districts would perform poorly on racially and socioeconomically biased tests decade after decade. And it’s hard to take a “learning loss” panic seriously from anyone unwilling to acknowledge the systemic issues that might help not only explain the data, but point clearly to larger problems.
The argument against fixating on “learning loss”
As education writer and teacher Peter Greene has often pointed out, much of the learning loss panic traffics in the fallacy that correlation automatically implies causation.
As Greene explains,
“For maximum panic, some folks are claiming that a drop in test scores due to Learning Loss indicates a future loss of earnings for individuals and economic strength for countries (for example, this from one of the leading promoters of test scores = future earnings, Eric Hanushek). All of this is based on a correlation between test score and life outcomes, except that there are problems with using this correlation.
“The big one is that it is just a correlation, like noting that kids who wear larger shoes in fourth grade tend to be taller as adults. There is a connection--it's just not cause and effect.”
This doesn’t mean we shouldn’t be concerned when we see students performing badly on valid metrics of their academic performance, but it does mean we should be honest with ourselves about (1) whether those metrics are actually valid, and (2) what they are really supposed to tell us if they are.
If we authentically cared about “lost” learning, we would care first and foremost about providing equitable learning opportunities for all students.
It’s telling that for all of Senator Hembree’s probably sincere concern about improving test scores in South Carolina, the biggest policy effort he has supported in recent years has been a series of school voucher schemes that have attempted--and repeatedly failed--to get around the state constitution’s prohibition against funding private schools with state money. (Indeed, one of Hembree’s first prefiled bills of the new session adjusts the language in the state’s current voucher bill.)
The argument behind these schemes is generally some variation of the same pitch: because our schools are “failing” (again, defined almost exclusively through subgroup performance on state tests), we should spend more of our tax revenue on private schools (which, incidentally, don’t have to administer those same tests even if they accept state tax money, or provide any real accountability to the taxpayers funding them).
In other words, the argument for vouchers tacitly accepts the “faucet” theory as valid, but concludes that the solution is to create a separate “faucet” for private education providers--one which research has repeatedly demonstrated is less available to the students who need it most: the students with fewer resources who are the primary victims of “learning loss.”
It obviously makes more sense to improve access for all students, especially those who need it most. But South Carolina’s current twin obsessions, vouchers and anti-diversity initiatives, both suggest that we’re continuing to move rapidly in the opposite direction.