Opinion Blog


Rick Hess Straight Up

Education policy maven Rick Hess of the American Enterprise Institute think tank offers straight talk on matters of policy, politics, research, and reform.


More on “The New Stupid”

By Rick Hess — December 21, 2011

Note: This week, I’m giving RHSU readers a look at my essay from Educational Leadership entitled “The New Stupid.” For day one, see here.

The second element of the new stupid is Translating Research Simplistically. For two decades, advocates of class-size reduction have referenced the findings from the Student/Teacher Achievement Ratio (STAR) project, a class-size experiment conducted in Tennessee in the late 1980s. Researchers found significant achievement gains for students in small kindergarten classes and additional gains in 1st grade, especially for black students. The results seemed to validate a crowd-pleasing reform and were famously embraced in California, where in 1996 legislators adopted a program to reduce class sizes that cost nearly $800 million in its first year and billions in its first decade. The dollars ultimately yielded disappointing results, however, with the only major evaluation (a joint American Institutes for Research and RAND study) finding no effect on student achievement.

What happened? Policymakers ignored nuance and context. California encouraged districts to place students in classes of no more than 20--but that class size was substantially larger than those for which STAR found benefits. Moreover, STAR was a pilot program serving a limited population, which minimized the need for new teachers. California’s statewide effort created a voracious appetite for new educators, diluting teacher quality and encouraging well-off districts to strip-mine teachers from less affluent communities. The moral is that even policies or practices informed by rigorous research can prove ineffective if the translation is clumsy or ill considered.

When it comes to “research-based practice,” the most vexing problem may be the failure to recognize the limits of what even rigorous scientific research can tell us. For instance, when testing new medical treatments, randomized field trials are the research design of choice because they can help establish cause and effect. Efforts to adopt this model in schooling, however, have been plagued by a flawed understanding of just how the model works in medicine and how it translates to education. The randomized field trial model, in which drugs or therapies are administered to individual patients under explicit protocols, is enormously helpful when recommending interventions for particular medical conditions. But it is far less useful when determining how much to pay nurses or how to hold hospitals accountable.

In education, curricular and pedagogical interventions can indeed be investigated through randomized field trials, with results that can serve as the basis for prescriptive practice. Even in these cases, however, there is a tendency for educators to be cavalier about the elements and execution of research-based practice. When medical research finds a certain drug regimen to be effective, doctors do not casually tinker with the formula. Yet, in areas like reading instruction, districts and schools routinely alter the sequencing and elements of a curriculum, while still touting their practices as research based.

Meanwhile, when it comes to policy, officials must make tough decisions about governance, management, and compensation that cannot be examined under controlled conditions and for which it is difficult to glean conclusive evidence. Although research can shed light on how policies play out and how context matters, studies of particular merit-pay or school-choice plans are unlikely to answer whether such policies “work”--largely because the particulars of each plan will prove crucial.

A third and final element of the new stupid is Giving Short Shrift to Management Data. School and district leaders have embraced student achievement data but have paid scant attention to collecting or using data that are more relevant to improving the performance of schools and school systems. The result is “data-driven” systems in which leaders give short shrift to the operations, hiring, and financial practices that are the backbone of any well-run organization and that are crucial to supporting educators.

Existing achievement data are of limited utility for management purposes. State tests tend to provide results that are too coarse to offer more than a snapshot of student and school performance, and few district data systems link student achievement metrics to teachers, practices, or programs in a way that can help determine what is working. More significantly, successful public and private organizations monitor their operations extensively and intensively. FedEx and UPS know at any given time where millions of packages are across the United States and around the globe. Yet few districts know how long it takes to respond to a teaching applicant, how frequently teachers use formative assessments, or how rapidly school requests for supplies are processed and fulfilled.

For all of our attention to testing and assessment, student achievement measures are largely irrelevant to judging the performance of many school district employees. It simply does not make sense to evaluate the performance of a payroll processor or human resources recruiter--or even a foreign language instructor--primarily on the basis of reading and math test scores for grades 3 through 8.

Just as hospitals employ large numbers of administrative and clinical personnel to support doctors and the military employs accountants, cooks, and lawyers to support its combat personnel, so schools have a “long tail” of support staff charged with ensuring that educators have the tools they need to be effective. Just as it makes more sense to judge the quality of army chefs on the quality of their kitchens and cuisines rather than on the outcome of combat operations, so it is more sensible to focus on how well district employees perform their prescribed tasks than on less direct measures of job performance. The tendency to casually focus on student achievement, especially given the testing system’s heavy emphasis on reading and math, allows a large number of employees to either be excused from results-driven accountability or be held accountable for activities over which they have no control. This undermines a performance mindset and promises to eventually erode confidence in management.

Ultimately, student achievement data alone yield only a “black box.” They illustrate how students are faring but do not enable an organization to diagnose problems or manage improvement. It is as if a CEO’s management dashboard consisted of only one item--the company’s stock price.

Data-driven management should not simply identify effective teachers or struggling students but should also help render schools and school systems more supportive of effective teaching and learning. Doing so requires tracking an array of indicators, such as how long it takes books and materials to be shipped to classrooms, whether schools provide students with accurate and appropriate schedules in a timely fashion, how quickly assessment data are returned to schools, and how often the data are used. A system in which leaders possess that kind of data is far better equipped to boost school performance than one in which leaders have a palette of achievement data and little else.

The opinions expressed in Rick Hess Straight Up are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.