
The Letter From: “In short, I see no problem with research INITIALLY becoming public with little or no review.” (II)

By Marc Dean Millot — July 17, 2008

An Update. Yesterday, I examined the arguments offered by Jay P. Greene in defense of the Manhattan Institute’s release of his unreviewed report (Building on the Basics: The Impact of High-Stakes Testing on Student Proficiency in Low-Stakes Subjects) with the same fanfare that education policy organizations provide for reports subject to peer review. Greene has commented on that post, accusing me of being “fundamentally dishonest” because I left out the capitalized word in the title of this post.
I’ve responded to the charge in the comments section of yesterday’s post. I’ve conceded that it’s possible I misquoted Greene, although there are other potential explanations. I stated that I don’t see how the omission advantaged me and so reject Greene’s label. More important, I don’t see how it affects the matter at hand. Readers can decide if Greene’s accusation is reasonable or relevant.

Last week’s Letter From covered why I found Greene’s statement - absent the word “INITIALLY” - incredible. I addressed his substantive case for the release, as well as other plausible substantive arguments, and found them wanting. My response to Greene’s comment yesterday explained that nothing about the word “INITIALLY” would change what I wrote last week or in the post on which Greene was commenting.

Back to Plan. Today’s subject is why I was not surprised to see Greene write either “In short, I see no problem with research becoming public with little or no review” or “In short, I see no problem with research INITIALLY becoming public with little or no review.”

I ended last week’s Letter as follows:

Two unreasonable arguments to forgo review come to mind. Researchers may not want their work to be subject to review prior to publication because they fear it might not be published sufficiently close to “as is” to support their conclusions and recommendations. Alternatively, they might be “above the law” in fact - subject to no real world penalties for disregarding professional norms…

The Market for Research-Assisted Policy Advocacy. Greene and others have asked readers to consider the questions raised by Manhattan’s release of the unreviewed report in the context of a market. I agree with the proposition, but I’d choose a different definition than my colleagues. I don’t see the relationship between the purveyors of research-assisted policy advocacy and the broad range of people who “consume” their products as much of a “market,” and that relationship is definitely not the best frame for a market analysis.

The reporters who make others aware of this research, the policy wonks who analyze it and incorporate it into their own work, and the decision makers who consider its relevance to policy do incur search costs, but they do not buy the product from providers. I know that these visible relationships suggest the market characterization, that the best nonprofit leaders aspire to treat consumers like customers, and that it is a very convenient fiction for some. Nevertheless, the real customers are the individuals and foundations that purchase research services to pursue their own social objectives.

There’s nothing unusual or wrong about this. It has been common practice for industries, unions, political parties, and a wide range of membership organizations to establish similar “research” operations working on their behalf. Private citizens and foundations are no less entitled to employ these means of advancing political interests as broad as the general diffusion of knowledge or as specific as opposition to or support for school vouchers. Like advertising, it’s a form of free speech protected by the Constitution.

Nevertheless, it’s important to understand that sellers respond first and foremost to the demands of those paying the bills, not the third-party “beneficiaries” of these transactions. Irrespective of the missions pursued by private foundations, the people targeted to receive the information produced by policy advocacy organizations are no more the clients of these outfits than you or I are the clients of advertising agencies or trade groups.

The “Value” of Avoiding Review. This is not to say that the reactions of those consuming the information are irrelevant - after all, many are the objects of persuasion. This fact imposes some practical limits on the production of research by policy advocacy organizations. Specifics vary by timeline, topic, and audience, but the logic must be plausible on its face, arguments must be stated in ways that attract more people than they turn off, and supporting evidence must withstand some level of scrutiny. On the other hand - and the extent of this varies by purchaser - buyers with a social agenda do not jump at the possibility of purchasing arguments that are nuanced in ways that make logical weaknesses clear, give equal opportunity to opposing perspectives, and explain the various shortcomings of the evidence marshaled to make the case.

It is probably also worth pointing out that while persuasion of the undecided is one objective - maybe the prime objective, maybe not - buyers also seek to 1) mobilize their base and arm it for political purposes, and 2) undermine the credibility of their partisan and ideological opposition. These objectives reinforce buyers’ lack of interest in purchasing programs of research that lean more toward the Marquess of Queensberry rules of boxing than toward the adage that “all’s fair in love and war.”

Readers will recognize where the concept of peer review intersects this discussion. Formal review brings logical errors, ambiguous writing, and weak evidence to the attention of authors. If the authors are academics, professional norms require them to address the questions and, if necessary, adjust their findings to fit the facts. Having been through quite a few such reviews at RAND, I’d say the result is almost always a better report, but one with less rhetorical dash. Dispensing with formal review favors an outcome where the report’s discussion of evidence supports the purchaser’s hopes for punchy, unambiguous conclusions and recommendations.

The minimum standards of production for research-assisted policy analysis have implications for policy advocacy organizations and their staff. Buyers of policy research are attracted to organizations that have either the highest level of general consumer credibility after discounting for their political leanings or that have the highest level of credibility within their base. The first handles persuasion, the second mobilization. On the political left, the Center on Education Policy is an example of the first; FairTest, the second. Analogues on the right might be the American Enterprise Institute and the Center for Education Reform. There are organizations in the first category that offer funders higher credibility but a bit more risk – the Center on Reinventing Public Education and EdSector come to mind. As far as I know, organizations built for mobilization do not straddle the left-right divide. For what it’s worth, I would place Manhattan in the same category as AEI.

Author brand is also important. This is particularly true when the product will be released to the media with some fanfare as a report addressing some policy matter. The value of a report resides not only in its content, but in the publicity for the cause that it can create and the organization can leverage. Factors bearing on this, not encompassed by the value-added of organizational brand identity, include the public affairs plan for release and, of course, the author. An ideal “talent package” might include a doctorate relevant to the discipline from a well-known university, employment as tenured faculty at the same, honors suggesting the respect of academic peers regardless of their leanings, a skill set that matches the analytical requirements, a following within the political base the purchaser supports, and the willingness and ability to write in ways that meet both the buyer’s broad political objectives and the specific political purpose of the report itself.

Different People Bear Different Costs. If you are an academic in the business of selling education research to foundations directly, or, more likely, through policy advocacy organizations, you must balance satisfying the client against your professional norms. If the organization has a review process, depending on its rigor for the particular product you write, you may have no problem. If there is no review process, or if it is mere window dressing, you either take the work and accept the consequences, or you walk away from the offer.

Jay P. Greene took the work, and he’s hardly alone. But what are the consequences?

Aside from being harassed by eduwonkette, me, and others, so far, precisely none. Greene practically grew up with Manhattan, and my guess is that he constitutes the quality control process for his own work. He has been well-rewarded for his efforts on behalf of the Institute’s agenda. Today he is a tenured professor at the University of Arkansas, holding a chair endowed by the Walton Foundation and directing a research center supported by the same patron. In short, he’s very much what the mafia called a “made man” - untouchable. He literally need “see no problem with research INITIALLY becoming public with little or no review.” There are no problems.

How does he rationalize his approach? You may recall Greene’s reference to two dozen peer-reviewed publications, many of which were earlier released directly to the public without review. This process suggests the Manhattan Institute “Report” actually serves the function of a working paper, but here the working paper gets the publicity that would normally accompany a report. What most of us would consider a legitimate report is published in journals for peers, and gets no publicity. This attempt at “having your cake and eating it too” is pretty darn convenient, and a significant departure from the professional norm - what the Army might call “back asswards.” It’s almost funny. It doesn’t help reduce confusion about the work at the moment it has its broadest impact, but I’ll admit it is better than nothing. On the other hand, it’s not the best example for those seeking a doctorate and planning to teach in any subject.

Money talks, but some academics do walk, and many do not play the game. There was a time when few young academics intent on a teaching career would take such work, for fear of harming their long-term prospects in the academy. My sense is that they are more willing to do so today. For one thing, they can find tenured professors who have done so and follow in their footsteps. It is probably still the case that the academic committees of the more prestigious universities, considering an assistant professor’s elevation to full professorship, would counsel against what Greene did, and would consider a situation similar to Greene’s disturbing - but probably not dispositive. And that’s how standards begin to decline.

What Is To Be Done?
There is value in pointing out the problems created every time some policy advocacy institution invites the press to the unveiling of a report lacking peer review. We need to make those who engage in this behavior uncomfortable. That’s the professional responsibility of any policy analyst who values the role of research in policymaking. We’d like the press to ask why such publications should be given the same fanfare and credibility as “real” reports, and why they deserve more attention than a routine working paper, the work in progress some professor left on her website, or the latest blog post. Our protests are necessary to raise the consciousness of education reporters.

Publicizing the fact that reports were not subject to peer review may also push policy advocacy organizations that value credibility to honor the value of truth in advertising. Fixing this problem is not expensive. “Reports” should be subject to peer review. Documents not subject to review should not be released in ways that mislead the consumer by silence in labeling. They might be called working papers, or stamped “unreviewed.” Indeed, the inside cover of every publication should include an explanation of the organization’s review procedures for its various kinds of publications, the true funder(s) of the work being released, and any interests the authors have in the findings that might be seen to conflict with impartiality.

A Final Point. My own experience as a reasonably savvy consumer of quantitative research for the purpose of informing consequential decisions in and out of public education is that the findings of well-constructed studies rarely supply “the answer.” Certainly in public education, the state of the quantitative arts, the tendency of data collected for one purpose to fit analytical constructs imperfectly, the limitations study budgets place on study designs, the imprecise and indistinct nature of whatever “intervention” is under review, and the dynamic nature of the education environment at whatever level of analysis do not lend themselves to crystal clarity in the end product. Moreover, if one draws on a series of studies on the same topic, the studies that do seem to supply clear answers are generally the outliers.

My point is not that research is pointless. I would not want to have been a member of teams advising the President on nuclear warhead requirements, the pursuit of ballistic missile defense, or the adequacy of warning systems with no quantitative analysis to inform our recommendations. Nor would I have felt better about the investments I recommended New American Schools make, or decline to make, in its Design Teams. Decisions should always be based on the best available information. My point is that there are still very significant limits on any research relevant to the policy decisions that must be made in public education, regardless of research quality.

If we want better research - and I certainly do - we have to value it as a decision forum rather than a political weapon. Those most involved in public education debate - partisans on the left and right - are generally far from internalizing this norm. The value of peer review is not that the result will be the true answer. It’s that we will move out of a kind of dark age, where analysis is a ritual and studies are totems, to circumstances where research sets rational bounds on policymakers’ decision space.

Note: At this point, I’ve made all the substantive arguments I consider relevant. I’ve reached the knee of the curve (the point of diminishing returns on what I can add for the time I could put in) and plan to leave this topic for edbizbuzz readers to decide. I encourage you to go back to eduwonkette’s and Greene’s first posts on the matter, noted in my initial post, and follow the links until you feel you have enough evidence to arrive at your own conclusions.

Other relevant edbizbuzz posts:

Uberblogger Alexander Russo asks: What is the role, impact or benefit of education think tanks? (Series begins here.)

Deconstructing a “Social Keiretsu” in Public Education Reform (Series begins here.)

Legislative Staff and Education Research

Education Reporters Don’t Understand Education Research

Report from AERA’s 88th Annual Meeting - Is k-12 Research Relevant?

The opinions expressed in edbizbuzz are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.
