How the Brown Center Report Got it Wrong: No Relationship Between Academic Standards and Student Performance?
This is the first of two pieces I intend to write on Loveless' report. In this one, I will address his assertion that the academic standards a state or nation sets have no bearing on the academic achievement of its students. Loveless comes to that conclusion by analyzing the relationship between the average scores of students in several states on NAEP assessments and the ratings the Fordham Foundation assigned to those states' official academic standards. He finds no consistent relationship between these two sets of data. Students in states with highly rated standards achieve at widely differing levels on the NAEP assessments, and states with high average NAEP scores have adopted standards whose Fordham ratings vary widely. Ergo: the achievement of a state's students is unrelated to the rigor of its formal academic standards. The reasonable conclusion to be drawn from this finding, of course, is that we can add school reform strategies based on setting internationally benchmarked academic standards to the large and growing scrap heap of failed education strategies. "Despite all the money and effort devoted to developing the Common Core State Standards--not to mention the simmering controversy over their adoption in the several states--the study foresees little to no impact on student learning," Loveless writes in the introduction of the report.
Few among those of us who have advocated the use of academic standards to raise student achievement over the years have ever suggested that raising standards alone would accomplish that objective. In my own case, my commitment to this strategy came from years of study of the countries with the most successful education systems in the world. Virtually all, I noted, included a focus on academic standards as a core strategy for improving student performance. But I also reported that raising academic standards was, without exception, an integral component of a much more complex web of strategies adding up to the development of a powerful state instructional system.
Countries that have had success with this strategy--and I know of none that have been successful that have not employed it--have carefully documented the standards of the leading countries and set their own standards in the light of those adopted by their principal competitors. They have defined the end-point of the common curriculum, usually at about the end of what we call 10th grade or age 16, and then worked backward, grade by grade, or by groups of grades, to define a curriculum framework describing the topics to be studied in each subject for each grade or for each small group of grades. At the high school level, they have specified syllabi for each course across the whole required curriculum (far more than just the native language and mathematics). They have developed very high-quality examinations for each of the courses defined by those syllabi. And they have made a major investment in making sure that their teachers are well qualified and trained to teach those courses.
What I am describing is a whole instructional system, each part of which is essential to the success of the others, of which standards are only a part.
No one in his right mind who has studied this system would believe that doing only one part of it would result in any significant change in student performance. It is the system, not its individual components, that produces the effects we see in the international league tables of student performance, in nation after nation.
But the entire discussion in "How Well Are American Students Learning?" is about dyadic relationships--between standards and student performance, between student cut scores and student performance, between standards and within-state variation in student performance.
Toward the end of section one of his report, Loveless asks why standards don't make much of a difference to student achievement. Answering his own question, he says that, in the United States, standards are actually mostly aspirational and are not operationalized in a way that is likely to make a difference. To illustrate his point, he cites the work of the International Association for the Evaluation of Educational Achievement, which distinguishes among the intended, implemented and achieved curriculums. The intended curriculum is what the state says it is. The implemented curriculum is what teachers teach and the achieved curriculum is what students learn. And he goes on to say that, in the United States, the implemented and achieved curriculums bear very little relationship to the intended curriculum.
That is true. So what we are left with is a very interesting problem. If standards continue to be largely aspirational, then they are likely to continue to be largely unrelated to student achievement and mostly a waste of time. However, if our aim is to produce the kinds of improvement in student performance we see in the most successful nations, and if we want standards to have an important effect on student performance, then they have to be operational, not aspirational. That means we will have to put in place whole aligned instructional systems of the kind I described above, because no national or state education system anywhere in the world has achieved high performance without developing such a system.
Loveless got the question wrong. It is not whether standards can by themselves greatly improve student performance. It is what other elements must be combined with what features of a standards system to make standards a powerful tool for improving student performance. Loveless is hardly alone. Americans are in a never-ending search for silver bullets, single-factor solutions to problems that will yield only to systemic solutions.
The search for silver bullets is chimerical and doomed. We are not likely to move beyond the current impasse until American researchers refocus on methodologies designed to assess the effectiveness of systemic solutions. Right now, that seems to be a distant prospect.