Opinion

The Brown Center Report: When Will American Researchers Take on the Issues That Really Matter?

By Marc Tucker — February 29, 2012

Last week, I wrote about one aspect of Tom Loveless’ report titled “How Well Are American Students Learning?” In that post, I addressed his assertion that the academic standards set by a state or nation have no bearing on the academic achievement of its students. This week, I examine another part of Loveless’ report, in which he comments on what he takes to be typical errors made by pundits, policy makers, reporters and others with an ax to grind when interpreting international test scores.

At first blush, Loveless’ comments on this score are nothing if not reasonable. He reminds us:
  • that attributing cause to any combination of factors that might have been involved in some outcome of interest is difficult to do;
  • that it is neither useful nor plausible to attribute cause to some feature of a successful system if that feature can be found in all systems, whether successful or unsuccessful;
  • that factors not studied at all might be as responsible for significant performance changes as those that were studied;
  • that there is a difference between rankings when the field is all PISA-participating countries and when the field is just the OECD countries;
  • that the rankings in the OECD/PISA reports do not always imply statistically significant differences in performance among countries that are close to each other in the rankings;
  • and that the gap in actual performance between countries adjacent to one another in the rankings may be very different at different points along the list.

All true. But the PISA reports state quite clearly, right up front, that they are not claiming cause and effect. They also indicate which of the differences they report are statistically significant and which are not, and how large the field of countries is from which each league table is constructed.

But more importantly, I wonder what the difference is, when you get right down to it, between showing that United States students score right in the middle of the distribution of the advanced industrial (OECD) countries and showing that they score higher up the distribution when a number of developing countries are included in the rankings. Whichever way the results are presented, they amount to pretty much the same sorry story.

The PISA rankings are reported the way they are because the participating countries want to know where they come out, whether or not the reported differences are statistically significant. If that is the way they feel, is it then wrong for their journalists and education policy analysts to report it that way?

Loveless makes much of the point that, since (he maintains) all countries have national curriculums, one cannot attribute the success of some of those countries to their use of national curriculums. And, anyway, he says, those countries with federal systems of education (Australia, Germany and the United States) have been taking steps toward developing national curriculums or reducing differences among state or provincial curriculums. So what’s the big fuss anyway? He does not tell you that Germany only started down this road when it discovered that it was far behind its peers in the PISA rankings (a discovery that the Germans still refer to as “PISA Shock”). The Germans concluded that the absence of national standards closely linked to a national curriculum and national assessments was a primary cause of their poor performance, and they worked hard to do something about it. There is considerable evidence that they were right, judging by the performance of German students on PISA after these reforms were implemented.

Loveless’ statement that all countries have national curriculums is misleading. What appears to make a difference is not national curriculums per se, as I pointed out in my last blog, but well-crafted national, state or provincial instructional systems, of which curriculum is just one component. It is most certainly not the case that all nations have such systems. Few do, and the best of them are concentrated among the top-performing countries. That fact should concern the United States greatly, because, Loveless notwithstanding, this country is a long way from having comprehensive, well-crafted instructional systems at either the state or the national level.

The tone of Loveless’ critique suggests that he sees little to like in international comparisons of student achievement and even less to like in the analyses of the causes of national success in the creation of high-performance education systems.

To my way of thinking, the most important research question in the whole field of education is why some national, state and provincial education systems produce both more equity and higher student performance than others. Nations that figure out how to create and maintain mass education systems that consistently perform at high levels are likely to have higher standards of living, more political stability, more freedom and more opportunity for more people than nations that fail to do so. The United States is rapidly falling behind in this race.

Loveless’ cautions are all in order. They are not wrong. But they are also pretty obvious and not particularly helpful. It is time, in my view, for the American education research community to start mobilizing its considerable intellectual resources to figure out what makes the world’s most effective and most efficient education systems tick. The research techniques most highly regarded by the Brookings team and many other American researchers are of little value in teasing out the factors that are most responsible for the success of the best national and state education systems. Maybe it is time for them to get out from behind the curtain, admit that they know less than they would like, and start to develop techniques that would be more useful for finding out what we really need to know.

The opinions expressed in Top Performers are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.