Reviewing External Quality Reviews, or: Consultant Whack-a-Mole!
Most external reviews begin with a self-study, which typically has three major dimensions: (a) What are your unit's goals? (b) How well are you meeting these goals, and what's the evidence? (c) What are you going to do about it? This is followed by the proverbial "site visit," in which an individual or team from outside the institution reviews the self-study, comes to the campus for a day or two, pokes around and asks questions, and retreats to write a report that is shared with the institution and its leaders. Often, the institution will then write a response to the report. Then the report goes on the shelf.
The composition of the site visit team can arouse some passion. In postsecondary institutions, site visitors typically are conceived of as peers of the faculty; but who counts as a peer is a matter of debate. How can someone from Eastern Podunk College ever understand how we at Elite University do business? Is a site visitor who studies 18th-century English literature really a peer of the faculty in an English department that focuses on contemporary American fiction?
I'm intrigued by the fact that in New York City and Washington, DC, the site visitors are external management consultants who are not educators within the system, and in fact may not be teachers or administrators in other systems. Consultants such as these would be laughed out of the room in a review of a college department; but nobody's laughing in large urban districts. I think this is because college faculty are assumed to have stronger claims to disciplinary knowledge and expertise than K-12 teachers and administrators have, and because the shared governance model in colleges and universities gives faculty more control over academic decision-making than K-12 educators are typically granted.
Scholars of organizations make sense of external reviews by drawing on institutional theory. Institutional theory focuses on the relationship between organizations and their external environments, including the ways in which organizations come to be perceived as legitimate by those environments. An organization (e.g., a school, district, or college) that is perceived to be high-performing generally doesn't have to worry about its legitimacy. But many educational organizations are not seen as high performers. In that case, they must rely on some means other than a demonstration of good outcomes to be seen as legitimate. A common strategy is to imitate the practices of other social institutions that are seen as legitimate, in the hope that the legitimacy will "rub off."
Many cases of education imitating the business world can be explained in this way. (Not that the business world has such a great track record to warrant serving as the ideal standard.) So, for example, because it's seen as rational for organizations to set goals and measure progress toward them, this is an integral part of most external review processes, much more so than direct inspection of what the organization is actually doing to meet those goals. This would account for the use of management consultants as external reviewers in New York City and Washington. In this sense, external reviews are mostly symbolic rather than substantive.
This is, of course, a highly cynical view of external reviews, perhaps more cynical than is warranted. I'd like to pose a couple of questions to eduwonkette's readers: (1) What are some legitimate purposes of external reviews of K-12 schools? (2) Given these purposes, what should the composition of an external review team look like? My purpose in asking these questions is not to play whack-a-mole with consultants (although that may be a consequence), but rather to introduce a topic that I hope to post a bit more about over the next couple of days. I'm also curious whether readers know of any evidence of external reviews actually improving teaching and learning in K-12 schools. Please feel free to e-mail me at skoolboy2 (at) gmail (dot) com to point me in a fruitful direction.