
What Works for Teacher Math Coaching? A First Attempt to Find Out

By Stephen Sawchuk — November 19, 2018

A few months back, my colleague Madeline Will wrote about exciting evidence that teacher coaching, as a form of professional development, can indeed improve student learning.

One of the open questions about coaching, however, has to do with the nitty-gritty details of the programs: Is online coaching as effective as face-to-face? How does coaching an early grades reading teacher differ from, say, a secondary math one?

Now, two of the top researchers are chipping away at some of these conceptual questions, hoping to find out more about which contexts and structures best support effective coaching. And in a new study, they find promise in an online model they’ve developed for math teachers—but not a silver bullet.

In a random-assignment study, they found significant changes to teacher practices thanks to the online coaching program, but the coaching didn’t immediately translate into improved student test scores.

“Developing and refining coaching models takes time,” Matthew Kraft of Brown University and Heather Hill of Harvard University concluded in their analysis. “Compared to the decades-long history of literacy coaching and its rich evidentiary base, math coaching practice and research is still in its infancy.”

Building a Math Coaching Program

The new study exists in direct dialogue with Kraft’s earlier meta-analysis of coaching research. That study found that, on balance, coaching programs did improve student learning. But it also hinted that program details could make or break effectiveness: Programs with fewer teachers tended to be more effective than those with more.

And the meta-analysis was limited in other ways. It was restricted mainly to literacy programs, on which there were simply more published studies. (In fact, Kraft and Hill write, they could find only one random-assignment math-coaching study before conducting their own.)

So they set out to create and test a web-based math coaching program. They based the program on the Mathematical Quality of Instruction, or MQI, framework, which outlines a way of observing math lessons based on levels of “richness,” such as whether teachers engage students in understanding errors and conceptual misunderstandings, and the extent to which the instruction helps students make meaning and reason mathematically.

They also decided to use a web-based coaching method, in part because a handful of other case studies showed that on-site teacher coaches were often asked to pick up non-coaching duties, like attending field trips or coordinating testing. “Together, this evidence suggests that a drawback to site-based coaching models may be, ironically, limited time for coaches to actually engage in one-on-one work with teachers,” they wrote.

Online coaching is also, potentially, a lot cheaper for school districts than hiring full-time coaches.

Each teacher participated in a coaching cycle in which they chose an MQI indicator to work on, taped their math lesson, and then received feedback from their coach on the ways the lesson did or didn’t reflect some of the indicators, along with help planning future lessons. In all, the study included about 150 teachers across two districts, and 24 coaches. Half the teachers were assigned to the coaching program; the others were in a control group that participated in their regular district-sponsored PD.

Analyzing Results

Here were the results after two years: Teachers did indeed double the rate at which they used MQI practices in their lessons, and that improvement was sustained into a second year, long after the coaching stopped. In the first year, students also perceived that their teachers’ instruction improved.

But neither year-end state test results nor a second standardized assessment showed a significant increase in student learning.

That’s not a particularly surprising finding for PD research, which has in general long struggled to move the needle on test-score-based measures. The findings could be interpreted a few different ways, Hill said in an interview.

For one, the researchers were designing the program while testing it, so a more fully developed version of the program might have had a stronger impact. The second issue concerns measurement: It could be that the state tests, which often prioritize easy-to-measure computational skills, simply don’t do a good job of picking up the kinds of math instruction that the MQI calls for.

Still, the program does create a potential template for how districts might think about structuring a coaching model—a benefit for a field that has struggled to boost effectiveness while controlling PD costs.

“I think the lessons learned—and this is going to sound navel-gaze-y—but we did a lot of work developing our coaches and a lot of time monitoring and improving their own practice. That’s maybe why we found the instructional impact we did,” Hill said. “We didn’t sort of just say, ‘Here’s an instrument, go have fun.’ ”


A version of this news article first appeared in the Curriculum Matters blog.