
Why Education Research Has So Little Impact on Practice: The System Effect

Thomas Kane has a piece in Education Next in which he concludes that American education research has failed to have any positive effect on American education policy or practice.  He would "...characterize the past five decades as a near-complete failure."

Kane seems to suggest that the problem is not with the quality of the research but with the lack of an effective way to get the research to practitioners. "The What Works Clearinghouse," he says, "is essentially a warehouse without a distribution system."  But then he goes on to say something a little different: that there is no effective demand for the product the researchers are producing.  He wants the federal government to "support a culture of evidence discovery within the school agencies."  If there were such a culture, he concludes, then school people would be looking to the research and researchers for solutions to the problems they face.  The proposals Kane makes are mostly made in the service of this goal.

So this reader gathers that the problem is some combination of lack of effective demand for the research and lack of an effective way to get the research to the market if demand could be produced.  But earlier in his article, Kane also says, "There is little consensus among policymakers and practitioners on the effectiveness of virtually any type of educational intervention.  We have learned little about the most basic questions...."  If that's the case, it is not clear why we should expect any demand. The problem is the research itself.

This is puzzling. When those of us who do international comparative research on national and state education systems visit the top-performing countries, their policymakers and practitioners often point to American researchers as among the most important sources of the ideas they are using.  So there is demand, but it is from foreigners.  And it must be of some use, because their systems are performing at high levels. And there is evidently no problem with distribution, because the infrastructure for international distribution of our research results is even weaker than the national structure we have here, but it seems to be strong enough to produce results in those countries. In this context, Kane's guesses about what the problem is don't seem to hold up.

I conclude that Thomas Kane is right about the disappointing influence of American education research on American education policy and practice over the last five decades, but wrong in his diagnosis of the problem and the solutions he proposes to deal with it.

In my view, the problem lies in the failure of our researchers to adequately deal with education system effects and the lack of interest among our researchers in comparative methods of research.  I will explain what I mean by each of these terms.

Foreigners use our research to their advantage because they have more effective education systems than we do. What do I mean by systems?  Suppose you have five different keys and one lock.  You insert each key in the lock and none will turn it.  It is possible, however, that one or more keys has one point that fits the lock exactly.  But those keys won't turn because the whole profile of the key must fit the profile of the tumblers in the lock if the key is to turn.  The lock and key are designed as one integrated system, all the parts of which must be right if the key is to work at all.

Now let's apply the metaphor.  Suppose that your aim is to create a new assessment that will match your new standards. The standards are meant to embody your desire to have students deeply understand the conceptual structure of the subject to be studied and to use that deep mastery to solve complex problems.  The assessments are designed to capture the full range of knowledge and skills implied by the standards.  You pilot the new assessments and standards and find that the assessments show very little gain for the students.  Both the standards and the assessments are deemed failures.


Yet it turns out that both the standards and the assessments are very similar to those used in countries in which student performance is far higher than in the United States.  How could the same standards and assessments be judged failures in the United States and successes in these other countries?  If you were to visit the countries in which virtually the same standards and assessments are used with great success, you would discover that the state has specified curriculum frameworks that describe the order in which the topics in the standards are to be taught.  Our states, by and large, have no such curriculum frameworks.  These other countries have developed course syllabi based on the standards and frameworks.  Because all schools use the same syllabi set in the same frameworks, the range of student performance in any one school and any one classroom is far smaller than in the typical American classroom, so many fewer students are falling behind because they cannot follow what is going on in class.  The assessments are directly based on the syllabi.  So teachers in these top-performing countries are being held accountable for student performance on courses they have been trained to teach.  The teachers have the knowledge and skills needed to teach an intellectually demanding curriculum, because they have been recruited from the top half of high school graduating classes, rather than the bottom half, as in the United States.

You get the point.  The model American research paradigm is designed to gauge the independent effect of particular, defined interventions.  It is as if the assignment is to gauge the independent effect of the first big point on the key coming back from the tip.  Well, it turns out that, even if that point is exactly the right shape and located at just the right distance from the end of the key, it won't open the lock unless all the other parts of the key profile are right too. 

But the American style of research was never designed to compare the effectiveness of systems or to enable educators to make informed judgments about which system designs are most likely, in their particular context, to work the best for them.

But it is worse than that.  Few American researchers have any interest in doing research comparing American systems with the systems used in the top-performing countries.  We think of the 50 American states as encompassing enormous variation.  And that is true in many respects.  But it is not true in the arena that matters most.  The basic design of the system is very similar across the United States, and, crucially important, that design is performing badly and it is very different from the basic design found in the top-performing countries. 

Why would we be doing all our research in a country with a poorly performing, largely dysfunctional system when we have the option of doing research on systems that are performing at much higher levels?  Sad to say, not only will we not find out what really works, we will find practices that actually will work in high-performing systems but do not work in our poorly performing system, and we will not even recognize those features of our systems that could work if the rest of the system were designed to support them. 

We know that features that work very well in high-performing systems typically work badly in dysfunctional systems.  But the converse is true, too.  What is likely to produce better results in a dysfunctional system will work badly in a well-functioning system.  The outstanding principal in a poor-performing inner-city system is usually a highly charismatic, driven person who is impervious to the incentives that ordinary mortals usually respond to in such a system and, marching to her own drummer, burns her candle at both ends until she burns out and leaves and the school reverts to its former self.  Well-functioning systems have high-performing schools that are run by ordinary mortals.  We learn the wrong lessons from studying modest successes in dysfunctional systems, and we do not learn how to build successful systems that way.  The best way to learn how to build successful systems is to study them.  But few American researchers do that.

You will never find out in the United States what would happen if a state had an oversupply of teachers most of whom were among the top half of college-going high school graduates, or what would happen if the resources behind each student were not a function of their zip code.  Or what would happen if a state had internationally competitive student performance standards, a well thought through curriculum framework based on those standards, course syllabi based on the framework, world class examinations based on those syllabi and teachers all of whom went to research universities to learn their content and were trained to teach the courses using the state-provided syllabi. There is no such U.S. state. 

The best American education researchers are highly competent, often remarkable people.  But, in my view, they are caught in a trap that makes them far less effective than they could be.  There is no single intervention of the usual sort that will make much difference for most students at scale, no matter how good in principle, because no intervention of limited scope can overcome the effects of a dysfunctional system.  If that is so, then what American education researchers should be studying is what makes for an effective education system.  And they should be using the comparative method to do that. But they are not doing that.  When they do, American teachers and policymakers will start demanding and applying their research. 


The opinions expressed in Top Performers are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.
