What do PISA and TIMSS Tell Us?
Today's guest contributor is William Schmidt, University Distinguished Professor, Michigan State University; Director, Center for the Study of Curriculum.
Growing awareness of the crucial role of education in a country's economic competitiveness has made the results of international assessments a major public event. The Trends in International Mathematics and Science Study (TIMSS) and the Programme for International Student Assessment (PISA) have been used as fodder for political and policy debates, in particular the claim that the mediocre performance of U.S. students in mathematics on these tests is evidence that significant changes are required in the American educational system.
Unfortunately, much of the discussion of the TIMSS and PISA results involves hasty generalizations based on country-level averages and an obsessive focus on international rankings. Far too often we find simplistic imitation of a high-ranking country's educational policies, sweeping assertions that U.S. performance is determined by student poverty, or the false belief that higher-performing countries test only their brightest students. The two tests are also often confused with one another, partly because they happen to have similar scales with averages around 500. Yet there are important differences between them. The different rankings of the U.S. on the TIMSS and PISA are due in part to the different sets of countries that take each test - fewer wealthy countries take the TIMSS, which makes the U.S. look a bit better - but the tests also differ in content. The TIMSS assesses mathematical knowledge, while the PISA assesses mathematical literacy (how mathematics is applied). Although most countries perform similarly on both tests, there are important exceptions in which a country does quite well on the TIMSS but poorly on the PISA, or vice versa. Finally, it is important to remember that the international average is the mean across countries, not the mean of all students in every country. The latter is somewhat lower because of the below-average performance of some larger countries, such as the U.S.
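The distinction between the two averages is easy to see with a toy calculation. The sketch below uses invented country names, sample sizes, and scores purely for illustration - they are not TIMSS or PISA figures - but it shows how a large below-average country pulls the pooled student mean below the mean of country means.

```python
# Illustration (invented numbers): the reported "international average" is the
# mean of country means, where each country counts once. The mean over all
# students pooled together weights large countries more heavily.

countries = {
    # name: (students sampled, country mean score) -- hypothetical values
    "Country A": (100_000, 480),  # large country, below-average score
    "Country B": (10_000, 530),
    "Country C": (10_000, 520),
}

# Mean across countries: each country counts equally.
mean_of_country_means = sum(m for _, m in countries.values()) / len(countries)

# Mean across all students: large countries carry more weight.
total_students = sum(n for n, _ in countries.values())
pooled_student_mean = sum(n * m for n, m in countries.values()) / total_students

print(mean_of_country_means)  # → 510.0
print(pooled_student_mean)    # → 487.5
```

Because the hypothetical large country scores below average, the pooled student mean (487.5) sits well below the mean of country means (510), which mirrors the effect described above.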
However, we should be paying far less attention to country averages and rankings anyway. It is a mistake to characterize a country's students with a single number; the reality is much more complicated. In fact, most of the variation in student performance on both the TIMSS and PISA lies within countries, not across them. Just because Japan's students have a higher average test score does not mean that every Japanese student outperforms every U.S. student - in both countries there is wide variation in student outcomes. These variations exist between schools but also within schools. We are used to talking about "good schools" and "bad schools" as if every student attending them performed at the same level, yet the TIMSS and PISA demonstrate that this simply is not true. Research based on the TIMSS suggests there is also wide variation in performance across classrooms. Unfortunately, the PISA does not sample by classroom, concealing this important source of inequality.
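The claim that most variation lies within countries is a statement about variance decomposition, which can be sketched in a few lines. The scores below are invented for illustration, not drawn from either assessment; the point is only that the within-country share of total variance can dominate even when country means differ.

```python
# Hypothetical sketch: splitting total test-score variance into a
# between-country part and a within-country part. All scores are invented.

import statistics

scores = {
    "Country X": [420, 480, 510, 560, 630],
    "Country Y": [450, 500, 540, 590, 670],
}

all_scores = [s for group in scores.values() for s in group]
grand_mean = statistics.fmean(all_scores)

# Between-country variance: spread of country means around the grand mean.
between = sum(
    len(g) * (statistics.fmean(g) - grand_mean) ** 2 for g in scores.values()
) / len(all_scores)

# Within-country variance: spread of students around their own country's mean.
within = sum(
    sum((s - statistics.fmean(g)) ** 2 for s in g) for g in scores.values()
) / len(all_scores)

total = statistics.pvariance(all_scores)  # equals between + within

print(within / total)  # → 0.96
```

In this invented example the two country means differ by 30 points, yet 96% of the total variance is within countries - the same pattern the TIMSS and PISA reveal at scale.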
The key lesson to be drawn from international assessments is that the system of education - the package of educational policies - has a major impact on both the average performance of students and the inequality among them. Student poverty is an important contributor to both, but U.S. performance cannot be attributed solely to the number or distribution of poor and disadvantaged students. Some countries do a much better job of mitigating the effects of student poverty, and poverty cannot explain why even affluent U.S. students trail their peers in other countries. Other nations have much greater equality in educational outcomes, and we should study their approaches and adapt them to our own circumstances.
There is a great deal to learn from international assessments like the PISA and TIMSS, and the overall performance of a country can give us a clue as to which nations to focus our attention on. However, we must take care in how we interpret the results of these tests if we are to avoid drawing misleading conclusions.