
Back to the Future in International Assessments

By Oren Pizmony-Levy — April 10, 2014

Today’s guest contributor is Oren Pizmony-Levy, Assistant Professor of International and Comparative Education, Teachers College, Columbia University.


International Large-Scale Assessments (ILSAs) are everywhere. You can find them in front-page stories (with dramatic headlines) and in opinion and editorial sections. Although ILSAs involve dense and heavy technical issues, the studies are mentioned throughout popular culture. For example, The Onion ran a report titled “Chinese Third-Graders Falling Behind U.S. High School Students in Math, Science” (with 158K likes on Facebook), and ExxonMobil’s ad campaign, Let’s Solve This, repeatedly features rankings of countries based on student achievement in math and science. ILSAs are so embedded in our lives that we often take them for granted and overlook important questions of how and why they came about.

Fascinated by the immense growth and visibility of ILSAs, I have spent the past few years exploring the socio-historical roots of ILSAs through archival materials and interviews with key informants. Here, I discuss two major changes that took place in the world of ILSAs over the past 50 years. This perspective allows us to better understand the phenomenon of ILSAs and perhaps their intended and unintended consequences.

First Change: From Researchers to Governments
The International Association for the Evaluation of Educational Achievement (IEA), established in 1958, was the first organization to conduct ILSAs. The IEA emerged from a working group of scholars under the auspices of the UNESCO Institute for Education in Hamburg, Germany. Key figures in the group included Professor Benjamin Bloom (University of Chicago), Professor Torsten Husén (Stockholm University), and Professor Arthur Foshay (Teachers College, Columbia University). For these scholars, comparing educational systems using large-scale quantitative data was driven by an intellectual objective and clear research questions:

“If custom and law define what is educationally allowable within a nation, the educational systems beyond one's national boundaries suggest what is educationally possible.” (Foshay 1962)

For many years, a majority of the countries were represented at the IEA General Assembly by individuals affiliated with an academic or research institute, while the rest of the countries were represented by individuals affiliated with governmental agencies.

Since the mid-1990s, that pattern has been reversed. In 1986 the proportion of representatives from governmental agencies was 43.3 percent; this figure jumped to 59.6 percent in 1998 and to 73.4 percent in 2012. Representatives from academic and research institutes correspondingly declined from 56.7 percent to 40.4 percent and then to 26.6 percent. In an interview, a high-ranking official at the IEA commented:

“When the first math study started [1964], these were researchers who became interested in exploiting the variance between countries. And then, more countries joined, because they thought that would be a good thing. Now, it became much of a more governmental thing, not so much a research thing. Governments, misguidedly, I think, jumped in on the bandwagon to see who is better or worse than Americans.”

This transformation is also evident in other organizations that conduct ILSAs. The Programme for International Student Assessment (PISA) is run by the Organisation for Economic Co-operation and Development (OECD), which is an inter-governmental organization. Moreover, the Governing Board of PISA is mostly populated by individuals affiliated with governmental agencies.

Second Change: From Research to Educational Quality Indicators
In the early decades, the leading rationale for conducting ILSAs was basic educational research. Indeed, the IEA mission statement from 1968 places a strong emphasis on scholarship:

“[The] IEA is an international, non-profit-making, scientific association [...] whose principal aims are: (a) to undertake educational research on an international scale; (b) to promote research aimed at examining educational problems common to many countries, in order to provide facts which can help in the ultimate improvement of educational systems [...]”

At that time the IEA had a diverse portfolio of studies on topics such as literature education, English and French as foreign languages, classroom environment, and computers in education. The IEA publications of that time avoided the “horse race” discourse and results were presented in alphabetical order of countries.

“The aim is to develop a systematic study of educational outcomes in the school systems of the cooperating countries. The question we wish to ask is not ‘Are the children of country X better educated than those of country Y?’ To us this seemed a false question begging all the important issues we need to study.” (Minutes of The IEA Project, 17-22 October 1960)

In the past two decades, however, this research-oriented rationale has been replaced by a more policy-oriented one, linked to accountability, educational quality indicators, and, sadly, international competition. The current version of the IEA’s mission statement emphasizes the provision of “international benchmarks to assist policymakers” and the collection of “high-quality data” for various policy purposes. The portfolio of topics covered by ILSAs is less diverse and tends to focus on the basics: mathematics, science, and reading. Finally, ILSA publications reinforce the “horse race” discourse by presenting results in “league” tables and “report cards” of countries ranked by improvement or decline in student achievement, regardless of the technical defensibility of such inferences.

So where do we go from here? The two interrelated changes in the world of ILSAs are important because they unmask the link between politics (in a broad sense) and the practices of ILSA programs.

We should aim to bring research and researchers back to the front lines of ILSA programs. Indeed, there are preliminary signs that this is happening. For example, the IEA now organizes a biennial International Research Conference (IRC) intended to create a forum for scholars from different countries who are interested in further exploring ILSA data. The 5th IEA IRC was held in Singapore in 2013 with over 140 participants. Another example is the OECD’s Thomas J. Alexander Fellowship, funded by the Open Society Foundations, which supports innovative analysis of data from PISA and other OECD-sponsored ILSAs.

Oren Pizmony-Levy
Teachers College, Columbia University

The opinions expressed in Assessing the Assessments are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.