BY STEVE NUZUM
Disclaimer - The views expressed in all CEWL Views articles are those of the author and do not necessarily reflect those of The SCEA or NEA. We encourage you to share your comments and feedback.
South Carolina recently released its annual “school report cards.” State Superintendent of Education and noted Moms for Liberty ally Ellen Weaver wasted no time in using apparently positive growth in literacy (based solely on state test scores, which we’ll get to below) to support a pet talking point. Weaver argued that what we need is to embrace “the science of reading,” a phrase which, as Moms for Liberty generally uses it, refers to something we weren’t systemically doing while the scores were going up.
Weaver telegraphed this “science of reading” talking point heavily before the report cards ever came out, including during her campaign over a year ago and during her appearance at the Moms for Liberty “Joyful Warriors” conference alongside Oklahoma State Superintendent of Public Instruction Ryan Walters. Walters argued with a straight face that we knew phonics instruction worked because there used to be lots of “Hooked on Phonics” commercials on television. Weaver added later in the same conversation, “Instead of woke nonsense, we have got to get back to basics. We have got to teach phonics.”
Beyond the destructive “reading wars” rhetoric, Weaver’s use of report card data to bolster a position she already held illustrates the limitations and consequences (both intended and not) of the report cards themselves. When we boil complex human behaviors down into a single word, letter, or number, we create a lot of space to insert our own preconceptions and biases. Weaver isn’t alone in doing this; that’s the nature of this kind of descriptive statistic.
So why do we present school data this way?
The federal Every Student Succeeds Act (ESSA), like No Child Left Behind (NCLB) before it, is a reauthorization of the Elementary and Secondary Education Act (ESEA). ESSA requires states to regularly release school data (and the way it explicitly suggests doing this is through what it calls “report cards”). The minimum reporting requirements under federal law for state report cards are, roughly paraphrased:
a count of students in each of the “subgroups” defined by the law;
“long-term goals” for each subgroup and “measurements for interim progress” (usually, state tests) towards these goals;
a “system” for “meaningfully differentiating all schools in the state” and a description of the methods used to come up with this ranking;
a list of schools that have been identified as needing “support and improvement;”
information about family and homelessness status of students in identified subgroups;
(test) data for each of the subgroups;
measures of “school quality, climate, and safety, including rates of discipline and disciplinary infractions, including violence;”
the number of students in programs like Advanced Placement and preschool;
professional qualifications of teachers;
per-pupil expenditures for each LEA (usually, this means a school district);
“Results on the State academic assessments in reading and mathematics in grades 4 and 8 of the National Assessment of Educational Progress carried out under section 303(b)(3) of the National Assessment of Educational Progress Authorization Act (20 U.S.C. 9622(b)(3)), compared to the national average of such results.”
graduation rate.
ESSA requires this information to be presented in a way that is easy for the public to understand and that allows schools to be compared to one another. But what ESSA’s authors really seem to want is a way to rank School A against School B to see which one is “better.”
If I step into the shoes of a non-educator, particularly a parent or concerned taxpayer, I can understand the appeal of this approach. After all, when we think of ourselves as consumers, it’s perfectly natural to rank things that we intend to consume. And if anything has gained broad appeal across the political spectrum in the past several decades, it’s the idea that somehow public schools are less infrastructure, like a post office or fire department, and more a customizable “customer service” business, where you read the reviews online, drop what you want in the cart, and taxpayers collectively pay for it.
But who else benefits from ranking schools in this way? And why do we like these simplistic scores and rankings so much?
Schools and school systems are extremely complicated. Measuring academic “progress” and tying that to what schools do or don’t do is a controversial undertaking in itself, particularly because ranking schools based on student outcomes on standardized tests rests on the assumption that there aren’t major factors outside of school that play a large part in these test scores.
But schools don’t just produce “academic progress.” If we choose to try to measure their outputs, we also must recognize that schools are supposed to contribute to students’ mental, physical, social, and emotional development. They are supposed to keep children safe during the day and teach them skills to help them be safe at other times; to look for and report signs of child abuse, neglect, and other issues that aren’t directly related to academics; and to train them on how to deal with active shooters, fires, and weather threats. Regardless of whether individuals, including “parents’ rights” activists, agree that this should be the role of schools, many of these duties are required by existing laws and regulations.
ESSA requires school systems to attempt to measure some of this complex web of “outputs,” but requiring the collection of data doesn’t guarantee that it will be collected effectively or consistently (if at all), and ESSA doesn’t require much measurement at all of many of the central functions schools serve.
The South Carolina Education Oversight Committee (EOC), which Ellen Weaver, a passionate advocate for vouchers, chaired until her run for office last year, puts it this way:
This ratings system provides parents, teachers and stakeholders a simple way to evaluate overall whether a particular school or system is exceeding, meeting or not meeting the criteria required in the Profile of the South Carolina Graduate.
Interesting, if true!
But the Profile of the South Carolina Graduate requires a lot of great “outputs” that simply are not measured by state tests. For example, how do we test “World Class Skills” like “Creativity and innovation,” or “Collaboration and Teamwork” or “Knowing How to Learn,” which are all included in the Profile?
The Profile of the South Carolina Graduate (screenshot from the SC Department of Education website).
What are the other drawbacks to this approach?
Perhaps the most important one is that school report card grades in many states, like South Carolina, are based very heavily on standardized test scores. A low report card grade for a school or district often has more to do with the socioeconomic status of tested students than with the academic progress the grade is supposed to represent. As UMass Amherst education researcher Jack Schneider and his colleagues wrote a few years ago in “Building a better measure of school quality”:
Because low-income and minority students generally score lower on standardized tests, their schools remain more likely targets of highly disruptive intervention. Further, those low scores often substantiate the fears of quality-conscious, well-resourced parents who have departed in greater numbers for districts with better reputations, exacerbating segregation by race and class (Owens, 2016).
If a school educates a higher proportion of poor and minority students, it is more likely to be limited in how it can spend funds, to be taken over by state officials, and to be required to do lots of other things of questionable educational value. This can create a cycle of negative consequences, making it even more likely that higher-income families with more resources will move out of the district, that funding will decrease further, and that students will be left behind in under-resourced schools.
Perhaps anticipating this criticism, the SC EOC writes, “The economic status of students in a school does not hold students back from growing as learners.” Again, this would be great, if true, but there is research to suggest that it’s not.
Ranking schools also assumes that the best way to improve them is to pit them against one another, as if they were profit-motivated participants in a “free market.”
Of course, even if you’re a big fan of free market competition, a clear downside to this approach is that many of the measures required by ESSA, and the additional ones selected by many states, are not really under LEAs’ control, so there isn’t any way for them to “compete” in these areas. Districts can’t (and obviously shouldn’t) select students based on poverty, socioeconomic class, race, or other factors that are highly correlated with test scores. They can’t necessarily ensure that students, who have free will and lives outside of school, can or will be successful in the terms that federal and state definitions require, and they can’t force students to graduate. And they can’t even completely control the conditions under which the tests are taken.
What LEAs can do, of course, is collect and interpret data in ways that make it seem like they are improving on these metrics, and that’s often exactly what they do. A 2020 study conducted by Brown University and the Brookings Institution reached fairly positive conclusions about the connection between “graduation rate accountability” and actual graduation rates, but even the authors of that study acknowledge that “there are also other ways schools could be lowering standards, such as simply making it easy for students to pass their courses.”
It’s easy to imagine schools and districts changing discipline referrals to reflect lesser offenses, hyper-fixating on only the issues which are reported and scored, teaching excessively to standardized tests while ignoring non-tested skills and standards, and pressuring students and staff in ways that make cheating more likely.
What could we do instead?
Schneider and his co-authors identified metrics that people actually care about, which were generally related to the following domains:
Teachers and the Teaching Environment
School Culture
Resources
Academic Learning
Character and Well-Being
Some of this information was already available, and other data could be collected easily. To find ways to accurately measure what communities actually wanted to know, Schneider and his co-authors polled a representative sample of community members. Participants felt that the new data was more useful than the existing state data, and those who viewed it seemed to have fewer negative perceptions of unfamiliar schools than they did after viewing the state data. The new data also seemed to influence people who were relying on the state data system to view schools more positively.
Ideally, the public can and should provide input that shapes how data is shared, in compliance with ESSA, but it’s not always easy to do so. For example, the SC Department of Education currently has a link on its official “School Report Card” page soliciting “public comment.” The link takes you to an error page, which reads “Sorry, the file you requested has been deleted.”
School report cards in South Carolina are a relatively unaccountable, inadequate, and often misleading picture of “school quality” and educational progress. They pit schools against one another in largely apples-to-oranges comparisons built on homogeneous tests that are likely to be demographically biased, making schools fight for resources and for the autonomy to make local, personalized decisions for students. If what you want is to continue a narrative that poor schools with a large proportion of students of color are “failing,” and that only a combination of state intervention and school privatization can “fix the problem,” this works out well for you.
If, on the other hand, you want all students to have access to a high-quality education that prepares them for life and citizenship, the school report card offers you very little. After all, what do you do with the information that your local school is failing? Particularly in places like South Carolina, where traditionally “conservative” legislators with a laissez-faire approach to most kinds of regulation have now embraced a culture-war demand to meddle ever more in local decisions, it seems like your primary option is simply to move to an area with “better” schools, as research shows affluent families are already doing. Obviously, this is no option at all for most families.
The idea of providing the public with lots of information about schools is a good one, if the goal is improving all schools and if the information is valid and reliable. But just because we have a test score doesn’t mean that tying it to an evaluation of an entire school or district will tell us anything about the realities in that school or district; and even if it does tell us something, test scores are not designed to tell us many of the things we actually want to know.
Step one is educating our neighbors about the relative meaninglessness of many of those state report card rankings and metrics. Having new and better metrics is a great long-term goal, but short-term, I would encourage anyone who wants to understand more about a school to talk to people who attend school there, work there, and send their children there. Try to avoid the loudest voices, no matter what they’re saying, and listen to people who have a vested interest in the quality of specific schools. Generally, assume good intent from most people who have chosen to devote themselves to low-paying, increasingly thankless, and difficult careers; there are bad educators (and administrators, and board members, and other elected officials) out there, but they are not the norm.
We invite you to join the only movement in South Carolina that can create the kind of schools educators and students deserve.