Rick Hess Straight Up

Education policy maven Rick Hess of the American Enterprise Institute think tank offers straight talk on matters of policy, politics, research, and reform.


Education Reforms Should Obey Campbell’s Law

By Rick Hess — June 11, 2018

The first time I heard of Campbell’s Law was in a college class in public policy. The professor asked, “Can data ever cause problems? Can it ever hurt?” It seemed like a trick question. Pretty much in unison, the class uncertainly mumbled a version of, “I don’t think so.”

The professor then asked, “What if a police department decides to evaluate officers based on the number of traffic tickets they write? Could anything go wrong?” Someone observed that cops would try to write lots of tickets—including for people who might not deserve them.

The professor asked, “Okay, so what if they flip it? What if they reward cops who issue fewer tickets?” Well, duh. Police might turn a blind eye to real problems.

The instructor smiled and said, “See, you can think of lots of ways where data might hurt.” With that, he introduced us to Campbell’s Law. Formulated in 1976 by social psychologist Donald Campbell, it reads, “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.” Put simply: “When a measure becomes a target, it ceases to be a good measure.”

One student said, “But you’re not showing that the data itself actually hurts. These are all examples of where it’s only a problem if you use it in dumb ways.” She was right, of course. Data itself can’t usually hurt you (unless a stack of printouts falls on your foot), but it does turn out to be remarkably easy to use data in dumb and destructive ways.

When you put a lot of weight on a given measure, people react. When they do, it can mess up the measure while creating incentives you never anticipated. What at first seems like a smart, clever, and cutting-edge way to use data can look very different in hindsight.

As I relate in Letters to a Young Education Reformer, this happens in all walks of life. In 1999, a flight from New York to Seattle took 22 minutes and 48 seconds longer than it had a decade earlier—even though planes were more sophisticated and stayed in the air for the same amount of time. What had happened? In the interim, Washington had started reporting airline “on-time arrival rates.” Airlines responded by boarding earlier and pushing back expected arrival times in order to make it easier for flights to be reported as “on time.” So passengers spent more time sitting on the tarmac. That emphasis on “timeliness” lengthened scheduled travel time by 130 million minutes between 1989 and 1999. Whoops.

The former Soviet Union was another kind of case study in Campbell’s Law, with factories under the gun to meet arbitrary government production targets. Factory directors were judged on whether they hit their targets, rather than on things like customer satisfaction or product quality. When five-year plans set targets in terms of tonnage, factories made things that were comically heavy—chandeliers that pulled down ceilings and roofing metal that collapsed buildings. At other times, auto factories hit production quotas by manufacturing cars without key components—like engines.

In education, improvement efforts have frequently been blindsided by Campbell’s Law. Attempts to evaluate schools and teachers using a few simple metrics, primarily reading and math scores, have given educators cause to do everything possible to boost those results—to the point that it has sometimes brought to mind those old Soviet factories.

Campbell’s Law applies to a lot more than test scores, of course. The No Child Left Behind Act stipulated that school violence be tracked in order to identify “persistently dangerous” schools. What happened? In hundreds of districts, reported incidents of school violence dropped nearly to zero. Had schools suddenly become much safer? Nah. A lot of schools just stopped reporting incidents. And the same concern has arisen in places that managed seemingly miraculous leaps in high school graduation rates.

Today, there are those eager to incorporate measures of “social and emotional learning” into state accountability systems. Measures of things like persistence and growth mindset are just fine in themselves. But building them into an accountability system gives educators an incentive to do anything they can to juice the reported outcomes—even if the strategies are silly, boring, or a waste of class time. The same problems await when we start trying to set the right benchmarks for career and technical education or kindergarten readiness.

Now, I can hear you thinking, “Wait a minute. We want kids to know reading and math and we want them to be persistent. Shouldn’t we measure these things?”

Sure, we should. But here’s the thing. There can be a big difference between a school being safe and officials reporting that it is safe, or between a student being persistent and scoring well on a persistence assessment (just as there is between making cars and making cars with engines).

None of this means that we should shy away from accountability or measuring outcomes. But it does mean we need to be a heck of a lot more thoughtful about how measures get used than has been the norm. It’s not enough for accountability policies or district evaluation practices to be “directionally correct.” They need to be designed with an eye to ensuring that they don’t break the law. Campbell’s law, that is.


The opinions expressed in Rick Hess Straight Up are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.