Neal McCluskey

Last month I wrote about creating an updated K‑12 productivity chart. The difficulty with any chart is balancing ease of understanding against accuracy and nuance, and that applies here.

A major concern with the trial chart I produced was its complexity. It was not intuitive, especially its use of the share of high school seniors surpassing certain performance benchmarks as the outcome measure. The goal was to have roughly comparable spending and outcome measures: percentage change in spending versus percentage-point change in student performance. The primary source of confusion is that the “main” National Assessment of Educational Progress (NAEP) exam, which was added to extend the performance trend from 2012 to 2019, has a “proficiency” benchmark, but the long-term trend (LTT) exam does not.

To show progress, I used the change in the percentage of students scoring “proficient” or above on the main NAEP, and the change in the percentage passing the middle cut score on the NAEP reporting table for the LTT. But that cut score could seem arbitrary (though it is actually the second-highest cut score overall). NAEP does provide descriptions of the benchmarks, but those would be difficult to explain on the chart.

A simpler version of that chart appears next. It still uses the share of students surpassing cut scores, but it compares spending and scores to a base year, which varies by trend because the tests were first administered in different years.
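To make that base-year comparison concrete, here is a minimal sketch in Python. The numbers are invented placeholders, not actual NAEP or spending data; only the method, expressing each series as percentage change from its own first (base) year, mirrors the chart’s construction.

```python
# Minimal sketch of base-year indexing. The series below are invented
# placeholders; each trend is expressed as percent change from its own
# first (base) year, so trends starting in different years stay comparable.

def pct_change_from_base(series):
    """Return each value as percent change from the series' first value."""
    base = series[0]
    return [round(100.0 * (v - base) / base, 1) for v in series]

spending = [9000, 11000, 13500, 15500]  # hypothetical per-pupil spending
passing = [38.0, 39.0, 41.0, 40.0]      # hypothetical % of students above a cut score

print(pct_change_from_base(spending))  # [0.0, 22.2, 50.0, 72.2]
print(pct_change_from_base(passing))   # [0.0, 2.6, 7.9, 5.3]
```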

To remedy the benchmark problem, the next chart is essentially the original productivity chart that ended in 2012, except that rather than using inflation-adjusted total funding for a high school senior’s K‑12 career, it plots the percentage change in NAEP scale scores against the percentage change in inflation-adjusted per-pupil spending, and it adds main NAEP scores to extend the trend to 2019.
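As a companion sketch, a chart of that kind might be drawn as below. The matplotlib calls are standard, but nearly every value is an invented placeholder; only the 2012 points echo the 2 percent score rise and 76 percent spending growth cited in this post.

```python
# Illustrative version of the percentage-change productivity chart.
# The 2012 values echo the 2% and 76% figures in the text; all other
# points are invented placeholders, not actual NAEP or spending data.
import matplotlib.pyplot as plt

years = [1978, 1990, 2004, 2012, 2019]
score_pct = [0.0, 0.7, 1.3, 2.0, 1.8]         # % change in scale scores
spending_pct = [0.0, 25.0, 55.0, 76.0, 85.0]  # % change in per-pupil spending

plt.plot(years, score_pct, marker="o", label="NAEP scale scores")
plt.plot(years, spending_pct, marker="o", label="Per-pupil spending")
plt.xlabel("Year")
plt.ylabel("Percent change from base year")
plt.title("Spending vs. scores, percent change (illustrative)")
plt.legend()
plt.tight_layout()
plt.show()
```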

The problem is that a percentage change in a scale score has indeterminate meaning; a small percentage change could reflect either a small or a large gain in learning. That said, as a rule of thumb, a year of learning equals about 10 scale-score points. The rule is itself controversial, but using it, the 6‑point increase from the baseline of 300 on LTT math, a 2 percent rise between the first administration in 1978 and the last in 2012, works out to 0.6 years of learning. At 180 school days per year, that is 108 days, suggesting 17-year-olds in the last year had about 3.6 more months’ worth of learning (counting 30 days per month) than those in the first.

Was that a good return coinciding with a 76 percent increase in inflation-adjusted per-pupil spending—from $8,879 to $15,588? That’s a judgment call.
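For readers who want to check the arithmetic in the last two paragraphs, here is a short sketch. The only assumptions are the ones already flagged above: roughly 10 scale-score points per year of learning, and months counted as 30 days of a 180-day school year (the latter implied by the 3.6-month figure).

```python
# Worked arithmetic behind the figures above, under the stated
# assumptions: ~10 scale-score points per year of learning, and a
# 180-day school year with "months" counted as 30 days apiece.

POINTS_PER_YEAR = 10         # rule-of-thumb conversion (controversial)
SCHOOL_DAYS_PER_YEAR = 180
DAYS_PER_MONTH = 30          # implied by the 3.6-month figure

score_gain = 6                                          # LTT math, 1978-2012
pct_score_rise = 100 * score_gain / 300                 # 2.0 percent
years_of_learning = score_gain / POINTS_PER_YEAR        # 0.6 years
extra_days = years_of_learning * SCHOOL_DAYS_PER_YEAR   # 108 days
extra_months = extra_days / DAYS_PER_MONTH              # 3.6 months

spending_growth = 100 * (15588 - 8879) / 8879           # ~75.6, i.e. ~76 percent

print(f"Score rise: {pct_score_rise:.1f}%")
print(f"Extra learning: {extra_months:.1f} months")
print(f"Spending growth: {spending_growth:.1f}%")
```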

While these ways of presenting the productivity evidence have meaningful differences, the ultimate story seems to be the same. For high school seniors, essentially our education system’s “final” products, outcomes in reading have been stagnant or falling. In math, they rose until 2012 but largely stagnated afterward, while spending has ballooned.

Of course, as bears constant repeating, NAEP, and standardized testing generally, likely does not capture nearly all that people think education should be about. Meanwhile, numerous variables beyond spending affect NAEP scores, which is one reason a version of the chart (see below) includes a line for changing GDP; other things being equal, an increase in material well-being should foster an increase in academic outcomes.

In the end, life is complicated, so any chart should be taken as just one piece of evidence in analyzing broad educational outcomes.