Stop! Using Research as a Weapon (sung in the style of Pat Benatar)
Note: Heather Harding, vice president for research and public affairs at Teach For America, is guest-posting this week.
First, let me thank Rick Hess for lending me his space while he's vacationing. It's been a lot of fun to get these things off my chest and an interesting experience to be so publicly vocal and visible (thinking back to my first post). As I said in a comment responding to the hotly debated TFA-as-a-residency post, my role at Teach For America blends facilitating research on the impact of Teach For America with a responsibility to engage various stakeholders who care about empirical evidence, teacher education and development, and a few stray policy issues. Today, I'd like to say a few brief words about standards of research quality in policy debates and the constant questioning of methodology.
Having been trained as a qualitative researcher, I am often troubled by the current state of policy research. It's not that I don't appreciate a good statistical model as much as the next guy; it's just that in graduate school I learned that the question should define the methodology. Currently, the narrow focus on test scores has created a context where the tail is often wagging the dog. As we all engage in the what-intervention-has-the-greatest-effect-size game, we forget to consider the more comprehensive impact of our efforts. Much of the research actively pursued is about impact on student test score gains--and while this is an important indicator, most of us would agree that it's only one piece of the puzzle. Second, because test score gains are the currency of policy influence, we grow wary of research that lacks this focus. If you are a program in this environment, it behooves you to build a robust body of evidence of your positive impact. This makes sense to me. As a vice president of research, I want to know every way that TFA is affecting the closing of the achievement gap. I want to promote measurement of that impact, and I want to aid my internal colleagues in their cycles of continuous improvement for our admissions process, our teacher training, our teacher development, and our alumni support.
Unfortunately, the larger public discourse is out of sync with this goal because a fair amount of defense is being played. The back-and-forth nature of the debate undermines our ability to consider what was learned in any given study--especially if it is perceived as negative. It cripples our desire to be collaborative in research. It also exposes the lack of trust that has developed between practitioners and researchers. We have all suffered at the hands of a political think tank research culture that has manufactured findings rather than sought truth.
To make matters worse, programs are then compelled to engage in a defensive strategy, especially in the current media world, that I'll call 'attack the methodology.' See KIPP's response to Gary Miron's study. I point to the KIPP example not to begrudge KIPP's right to a fair and accurate accounting of their work, but to highlight the difficulty of having a critical discussion of impact in an environment where a single bad result means a challenge to one's existence. We must also acknowledge that not all research is equal and that, despite the imperfect ways we rank and assess the quality of studies, we must aspire to something that reviews the match between question and methodology and reveals the underlying motives and biases of research--whether those lie in the methods or in the people.
I strongly believe that in education it's important to understand our impact broadly, and that student learning and the learning environment are both key factors in realizing the day when educational inequity is no longer so prevalent. But if we don't stop using research as a weapon, we will continue to learn only what we already know and be doomed to have the same arguments over and over with little advancement.