Rick Hess Straight Up

Education policy maven Rick Hess of the American Enterprise Institute think tank offers straight talk on matters of policy, politics, research, and reform.

How to Better Connect Research With the People Who Use It

By Guest Blogger — November 20, 2018

For a couple more weeks, Rick will be out discussing his new edited volume, Bush-Obama School Reform: Lessons Learned. While he’s away, several of the contributors are stopping by and offering their reflections on what we’ve learned from the Bush-Obama era. This week, you’ll hear from Bob Pianta, dean at the University of Virginia’s Curry School of Education, and Tara Hofkens, a postdoctoral research associate at the University of Virginia. They will discuss the attempts by the Bush and Obama administrations to enhance educational research, what those efforts yielded, and what lessons we should learn.

In our analysis of the Bush-Obama years, we concluded that the unprecedented amount of money flowing to educational research yielded mixed results. We attributed the lack of traction in part to a top-down model of science and dissemination, concluding that the research enterprise needed to reorient around understanding and replicating the local conditions that foster students’ learning and educators’ success.

One obvious follow-up question to our chapter concerns the infrastructure required to support a reoriented form of science. Currently, education science capacity is centered primarily in academia and its adjacencies, with very little of the work ending up in the hands of educators in usable form. In response to this gap, there is growing interest in reorienting information flow in education research, with local practitioners, local conditions, and local phenomena in the driver’s seat.

Localized capacity for scientific inquiry is radically different from capacity centered in universities and labs; it requires an infrastructure for building and sustaining networks and distributed capacity, and that infrastructure does not yet exist. Yet precisely because education decisions, designs, and actions are so highly localized, one can see the appeal of research and development models designed around that reality.

What would enable authentic links between research and its use? One necessary (but not sufficient) condition will be to elevate and invest in measurement. We know this sounds boring and out of touch, but if we want the authors of a future analysis of education research to conclude something other than “mixed results,” then measuring well will be key.

In nearly every other industry or sector in which we have witnessed notable advances in knowledge, tools, and performance, measurement of the core phenomena in that sector has been fundamental to progress. Measurement creates the common lens required for focusing attention and resources, and the common language needed for transmitting information across stakeholders and activities (a core requirement for networks). It anchors evaluation of progress and change, and it contributes to building a taxonomy of relevant processes and their relationships, which is at the heart of building new knowledge and understanding. Measurement creates the basis for moving from arguments and policies based on rhetoric and opinion to advancements based on evidence.

In the many sectors in which science has a more direct role in driving progress (engineering, medicine, environmental change), the ability to generate and use knowledge to solve problems is directly tied to having the ability to measure the important processes.

Here’s a test—ask an education researcher, parent, student, teacher, school leader, school board member, and chair of the state assembly committee on education to identify the indicators they use to know if a student has mastered algebra or has the social skills to perform well on teams. I suspect the result will be a hodge-podge of definitions, metrics, and terms not held in common across the stakeholders, most lacking a precise, observable anchor. This is a problem.

Education is woefully underdeveloped in measuring the phenomena relevant to knowledge generation or practice and application. For example, although it is widely recognized that the success of education investment in student learning is tied to the instructional and social skills of teachers, there are virtually no standard measurements of instruction, much less of teachers’ social interactions with students.

Measures that do exist vary widely in their technical properties, ease of use, and adoption. The consequences include little agreement on what makes teachers effective, principals who don’t know what to look for when they enter a classroom, teacher preparation that isn’t aligned to any industry standard for teacher performance, and professional development that mostly wastes time and money.

As another example, there is general agreement that the tests currently used to measure student learning (the goal of our education system) are poor approximations of the skills and knowledge that enable young adults’ success in life, work, and family. This is true despite 30 years of time, effort, and dollars invested in standards-based assessment and reform.

As a final note on the importance of measurement, consider that socioemotional learning (SEL) has emerged recently as a focus for helping solve the problems of educating children and youths. Yet ask any educator to define it, measure it, or say whether their school is effective at promoting it, and you could soon conclude (wrongly) that SEL is merely the next education fad.

We have not invested enough in measuring the things we care about. For this reason, we lack the very tools that can bind research with application; with development, implementation, and evaluation; and with networked stakeholders. Whether in a networked or top-down model, scientific progress takes time, and measurement is at the core; it forces definition, clarity, precision, focus, and informed debate. In a networked model of distributed research capacity, measurement will be essential because of the need for a common language and may be the only tool for ensuring the knowledge generated has any value beyond the hyper-local.

The Bush-Obama years greatly increased the amount and improved the quality of educational research, and still yielded mixed results and too little impact. A lot was invested in finding “what works,” and too little was invested in measuring the “what” and the “works” and most everything in between. Becoming more precise, efficient, accurate, and grounded in how we measure would be a big step toward progress.

Bob Pianta and Tara Hofkens

The opinions expressed in Rick Hess Straight Up are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.