
Research Says...


"Research says…"

How often have you heard that phrase tossed about? As teachers, we certainly want good information about how to help our students and our profession, but too often, we struggle with the gap between what “research says” and what we experience in our own schools and classrooms.

Friday morning of the National Board for Professional Teaching Standards 2009 Conference kicked off with a panel of National Board Certified Teachers discussing a report they helped write. Measuring What Matters was commissioned by NBPTS and published last year by the Center for Teaching Quality (parent organization of the Teacher Leaders Network). In their review of all the available research on NBPTS, these teacher authors found, as did the National Research Council study published at about the same time, that National Board Certification does identify accomplished teaching.


Of greater concern to the morning's panelists, however, was the fact that researchers tend to rely on inferior indicators of student learning. The panel was introduced and moderated by CTQ President Barnett Berry, and included NBCTs Nancy Flanagan, Andy Kuemmel, and Patrick Ledesma. Flanagan pointed out that part of the problem specific to this body of research is that investigators don't always have a complete understanding of the certification process, which may compromise their interpretation of their findings. Then, standardized test scores are often treated as if they tell us more than they do, and debates ensue, sometimes with total disregard for the actual design of the test and its intended purpose. All of those flaws led to the challenge from the panel and their co-authors: measure what matters.

Audience member Christy Khan, an NBCT and doctoral student at the University of Kansas, has seen both sides of the research-practice divide. She described the problem as a gap between the quantitative terms preferred by researchers and the qualitative terms preferred by most teachers. The question posed by Khan was what specific measures of student learning could be used. The panel did not delve into details about the issue, but Ledesma was quite firm in reiterating a key point: the problem of finding quality data for broad research to measure real student learning is the researchers’ problem, not the teachers'. If they settle for limited and compromised data, they should be honest with themselves and others about those limitations.

This is a topic that I’ve written about in the past as well. When it comes to test score analysis, there is a tendency toward gross oversimplification that ignores the almost countless factors in student performance, some of which can barely be identified and most of which cannot be controlled for research purposes.

Another concern in the policy arena is whether or not NBCTs can improve high-needs schools. Reformers who suggest simply picking up NBCTs and dropping them into different schools miss the point, Nancy Flanagan noted. The scarcity of NBCTs in the neediest schools is a reflection of the working conditions in those schools. Simply changing the teachers won’t solve the underlying problems, and I hope I’m correct in sensing a growing consensus around the idea of “growing your own” when it comes to developing a quality teaching force. Good teachers don’t just arrive as finished products to start work; they start off with a certain potential and either thrive or struggle in large part based on the circumstances in which they find themselves.

But thankfully, the overall tone of the morning was not about generating anger towards researchers and policymakers. Yes, the frustration is there, but the panelists made it a point to stress what teachers should do to improve the situation. Suggestions included having teachers speak directly to researchers, form partnerships, conduct action research, take on leadership positions throughout education, and even run for public office. Kuemmel summarized it this way: “Don’t wait for something to happen. You have to make it happen.”

4 Comments

I love the idea of "having teachers speak directly to researchers, form partnerships, conduct action research, take on leadership positions throughout education, and even run for public office." I think the forming of partnerships between researchers and classroom teachers has some of the greatest potential, given the dialogue, mutual understanding, and fresh ideas and hypotheses that could result. Additionally, this could bring about a greater focus on measuring what matters, keeping in mind that not all that matters can be measured, and not all that can be measured matters. University-lab school partnerships are one obvious model, but even a person-to-person partnership could bring great benefits.

At the same time, since it seems as if teachers are being systematically excluded from the dialogue when policy is made based on what research says, or seems to say, perhaps we do need more quality teachers running for public office - except that we still need quality teachers in the classroom! If there were ways to form genuine partnerships between policy-makers and classroom teachers, that would seem to be a necessary corollary to ensuring the highest possible quality of research.

David,
This must have been a wonderful discussion and I'm thrilled that you are letting us "in" on what happened since I couldn't be there.

I think this is a huge issue. I have spent the past two years working with college professors doing research using my classroom. I have to tell you that I think it has been a fruitful exchange of ideas and ways of thinking.

In principle, I found that professors and practicing professionals agree on what they want. It's in the implementation that our ideas seem to diverge. Here's what I mean...we would develop our research protocols for how I would do things with students. And I would always start off doing exactly what we planned and how we planned I'd do it. But sure as the earth spins, something would arise (a student question, a fire drill in the middle of the lesson, students learning faster than we thought they would, students learning more slowly than we thought they would) and the protocol would be shot. I would improvise (also called adjusting to the situation), trying to stay within the guidelines we'd set forth. Sometimes I could. Sometimes I couldn't.

In the end, it made the data "messy" and difficult for them to use.

I see why it is messy....our controls weren't always exactly adhered to and the experimental group(s) sometimes did more than we thought. I always guided myself by saying...I'm to honor my students' needs first and then the researcher's needs. I'm still fine with that and so are the professors...but it made their study very difficult and publishing their findings will be hard.

I guess my personal experience backs up what this discussion was about...we need more co-partnered research projects. I think both sides of the aisle could learn more and ultimately the students would benefit from those increased insights.

Thanks again, David, for posting. I'll keep my eyes open for something else.

David,
Thank you so much for sharing the daily experiences of the conference. Your stories give us an insider's view in such a deeply personal way, making it so rich and meaningful for those of us who wish we could be there but are unable to.

I especially enjoyed your comment, "Good teachers don’t just arrive as finished products to start work; they start off with a certain potential and either thrive or struggle in large part based on the circumstances in which they find themselves." This is an important message for people to hear.

I also want to extend my deepest appreciation to Barnett, Nancy, Patrick, and Andy for sharing the words of Measuring What Matters, a document that should be required reading for all school systems hoping to make a difference in their communities.

Thanks for explaining the work of this panel again. I just read an article in Education Week about a Harvard study confirming that basing teacher effectiveness on test scores is much more problematic than it appears. And just after that I read about a study done by teachers of history on how end-of-course tests are limiting the definition of what it means to achieve in history.
It is more than time to "measure what matters" in the classroom and in the effectiveness of teaching. The profession needs to wrest this definition away from the test makers and the policy wonks.
