Do Quality Reviews Lead to Increased Student Achievement?
Do quality reviews lead to increased student achievement? There’s been surprisingly little research that addresses this question. Most research on quality reviews has examined the school inspection process in Great Britain managed by the Office for Standards in Education (Ofsted), a national agency that reports to Parliament. Since school inspections for primary and secondary schools were instituted in 1993, there have been several iterations of the school inspection process. But I haven’t found any persuasive evidence that inspections improve student achievement. Some teachers and administrators report that they intend to change their practices in response to the inspection report, but I’ve not seen studies that examine whether those intentions translate into improved practice.
You might get the impression from my postings this week that I think quality reviews are a bad idea. Not necessarily! But there are some conditions I think are essential if quality reviews are to be a good idea. Here’s a brief list:
The purpose of the review must be clear. Sociologist Gary Natriello has written about four potential purposes for evaluations in schools: motivation, direction, certification, and selection. The first two can contribute to school improvement, whereas the latter two are more concerned with regulation, accountability, and control; and it’s desirable to confront the tensions between improvement and control directly. If the purpose of a quality review is to improve how schools work, then all phases of the review process need to be oriented toward this purpose.
Definitions of quality must be clear and transparent. If there are clear criteria and standards for what constitutes school quality, then both educators and inspectors can orient their activities toward those criteria and standards. Unclear standards and definitions undermine the legitimacy of the quality review process. My impression is that the Ofsted criteria are a lot clearer than those I’ve seen stateside. Quality teaching is a particularly challenging phenomenon to articulate; but if the goal is to improve teaching, we’ve got to be able to articulate it.
The quality review process must be designed to collect a sufficient amount of data on quality. If, for example, the purpose of the quality review is to improve teaching, then presumably there should be sustained collection of data on teaching quality, primarily through direct observation, but perhaps in other ways as well. Ms. Frizzle recently commented that in her New York City school, the quality reviewer was planning to observe 9 different classrooms in 30 minutes. Not much data on teaching quality will come from such a process. The intensity of data collection is a recurring challenge in evaluation research that involves site visits, because they are labor-intensive. “Drive-by” site visits just aren’t very useful, even when conducted by well-trained observers, because they don’t gather enough data on the things that matter.
The frequency of quality reviews should be synchronized with a theory of how fast school quality changes. This is Social Research 101: phenomena that change more quickly need to be measured more frequently to detect such changes, and phenomena that change more slowly don’t need to be measured as often. How frequently should we assess school quality? The school year is an arbitrary metric, and it may be wasteful and counterproductive to conduct school quality reviews on an annual basis. (In Great Britain, Ofsted inspects primary schools every three years.) Given a choice, I’d rather have less frequent, but more intensive, quality reviews.