Pilot Will Explore Early Supports for Urban Teachers
While large randomized controlled trials have become the gold standard of education research, a group of researchers led by the Carnegie Foundation for the Advancement of Teaching is taking the opposite tack, running short, intense micro-trials in real classrooms, to tackle one of the thorniest problems in urban schooling: the recruitment and retention of great teachers.
Stanford, Calif.-based Carnegie partnered in 2009 with the Washington-based Aspen Institute and the American Federation of Teachers to launch the Building a Teaching Effectiveness Network, or BTEN, intended to increase the number and quality of new teachers and enhance their ability to improve student achievement in those first, most often frustrating years in the profession. This spring, backed by a $4 million grant from the Seattle-based Bill & Melinda Gates Foundation, researchers will begin a three-district pilot program to identify and test best practices in the field, rather than in a lab.
The BTEN project will use a 90-day "quick-dive" cyclical research method that I reported on in January. The Institute for Healthcare Improvement originally developed the method to solve problems of practice in medicine, but it recently has caught the interest of education researchers, too. A training seminar held earlier this month by AdvancED and the Knowledge Alliance drew leaders of both federal and private research labs, and follow-up training seminars are in the works.
"So many of these innovations die on the vine, as it were," said David Yeager, an associate researcher in applied social psychology at Carnegie. "The goal is to find innovations that are effective and figure out how to make them work in the classroom."
An example of the process
Mr. Yeager and Alicia Grunow, an associate partner at Carnegie, have already conducted one 90-day cycle to identify and test ways to persuade community college students to persist in a remedial math class, a common roadblock for new and returning students.
Deciding on the specific question to be answered will be the "trickiest" part of the process, Mr. Yeager told me. "It's easy for the question to be too big," he explained. "It needs to be super-concrete, rooted in practice, doable in 90 days. The entire 90-day cycle is driven by what you want to do; there's an actual audience that wants to know this."
In the first 30 days, researchers conducted a review, using experts in the field to guide a targeted literature search. Those experts included mathematicians and psychologists who study academic persistence, as well as longtime teachers and students. The team identified the critical areas that kept coming up across studies and expert interviews. By the end of the first phase, the team had narrowed down the question and proposed an intervention to be tested.
In the community college study, the team identified a few ways that teachers interact with students in the first weeks of class that can encourage students to attend class, ask questions, and ask for help, three key predictors of whether a student will complete the class.
In the second 30 days, the team partnered with remedial math teachers in community colleges to test the proposed intervention. For example, a teacher might be asked to praise one or a handful of students in a particular way during one class, and then report back. The intervention was tweaked continually through these micro-tests over the course of the testing period.
"It's not that we don't value the experimental model," Mr. Yeager said, but the 90-day cycle allows researchers to gather small bits of data quickly and steadily. "Normally you have to wait to get funding for a big [randomized controlled trial], and then it can take years. Why not test it in small, measurable ways, with measurable outcomes? If it keeps working and keeps being effective, you know you've got confidence in it."
In the last 30 days, the team finalized its answer to the initial question and planned dissemination and next research steps.
The team expects about 25 percent of the cycles to fail, Ms. Grunow said, whether because the question was too broad, the intervention did not work, or other issues arose. But because each trial is part of a continuous cycle, researchers can learn from a failed cycle and apply those lessons to the next. Ms. Grunow said one of the best parts of the method is that the administrators and educators who participate in the cycles also learn to continue iterative improvements within their own schools.