Equity & Diversity Opinion

Partnering to Assess Teacher Equity Gaps in Massachusetts

By Urban Education Contributor — January 22, 2018

This week we are hearing from a partnership between the Massachusetts Department of Elementary and Secondary Education (@MASchoolsK12) and the Center for Analysis of Longitudinal Data on Education Research (CALDER, @caldercenter) at the American Institutes for Research (AIR, @Education_AIR). This post is by James Cowan, Dan Goldhaber (@CEDR_US), and Roddy Theobald from CALDER.

Today’s post is written from the researcher perspective. Stay tuned: Thursday we will share the practitioner’s perspective on this research.

Teacher quality is the mantra of the day, and with good reason. In Figure 1 below, we show that repeated exposure to more effective teachers (defined here as teachers who are more effective than 60% of other teachers) can have a profound effect on the learning trajectories of children. Given this, it makes good sense that policymakers interested in closing well-documented achievement gaps should pay attention to inequities in the distribution of teacher quality across districts, schools, and classrooms.

Figure 1: The cumulative effect of repeated exposure to more effective teachers on student learning, expressed in weeks of student learning. SOURCE: Cowan, J., Goldhaber, D., & Theobald, R. (2017). Teacher equity gaps in Massachusetts. https://www.doe.mass.edu/research/reports/2017/10teacher-equity.pdf

Concerns about equity in teacher assignments have been fueled, in part, by academic research from states like Florida, North Carolina, and Washington documenting substantial “teacher equity gaps” between advantaged and disadvantaged students. These findings are echoed in the recent policy brief we prepared for the Massachusetts Department of Elementary and Secondary Education (ESE), which examines K-12 students’ assignments to teachers and shows that low-income students are systematically exposed to less experienced and less effective (as measured by performance evaluations and value added) teachers than their peers. For instance, low-income students in Massachusetts are more than twice as likely as non-low-income students to have teachers earning the lowest two ratings on their teacher evaluations. Given the importance of teacher quality (illustrated in Figure 1), this inequity has important implications for the educational opportunities of disadvantaged students in Massachusetts.

But our work with Massachusetts differs in key ways from the various academic publications documenting the distribution of teachers. That’s because we work with Massachusetts within a research-practice partnership, having formed a long-term relationship with ESE’s Office of Planning and Research. This means that, unlike much academic research, the work was driven by a clear interest in this topic from ESE, and the findings are being used to help find solutions (more on this in their blog post to follow on Thursday). It also means that the development and dissemination of this policy brief differed substantially from how academic research is often carried out. Indeed, we can say this first-hand, having recently published an academic paper on the exact same topic (using data from North Carolina and Washington) in the American Educational Research Journal (AERJ). For this brief, we worked with ESE to identify possible policy solutions that had backing in empirical research and would most resonate with their audience of district leaders. And the brief had a built-in dissemination channel through the state’s reporting on teacher equity.

The goals of the two types of work also differ. When we prepare academic research, our goal is often to produce results that are sufficiently novel, robust, and generalizable that they contribute to the larger body of knowledge in a given area. In the case of our academic research in North Carolina and Washington, we were fortunate to receive expert reviews from three external reviewers at AERJ that pushed us to ensure that our results were robust to different models and assumptions. For example, we were asked to estimate value added in a number of different ways (e.g., controlling for classmate characteristics, controlling for multiple years of prior performance, etc.) to ensure that our results reflected true differences in teacher effectiveness and were not driven by the way we were estimating teacher effectiveness. These specification checks all pointed to the same conclusion—i.e., that disadvantaged students consistently had less effective teachers in both states—and thus strengthened the generalizability of the resulting academic paper.
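To make that concrete, here is a minimal sketch, in Python, of the kind of specification check described above. The simulated data, variable names, and the simple residual-based definition of value added are hypothetical and purely illustrative; they are not the models from the AERJ paper. The idea is simply to estimate teacher effects under a baseline specification and under a richer one that adds classmate characteristics and a second year of prior performance, then check how strongly the two sets of estimates agree.

```python
# Illustrative specification check for teacher value added.
# Simulated data and variable names are hypothetical; the models in the
# AERJ paper are more elaborate than this sketch.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_teachers, n_students = 50, 2000

df = pd.DataFrame({
    "teacher_id": rng.integers(0, n_teachers, n_students),
    "prior_score": rng.normal(0, 1, n_students),      # last year's test score
    "prior_score_2yr": rng.normal(0, 1, n_students),  # score from two years ago
    "low_income": rng.integers(0, 2, n_students),     # student poverty indicator
})
teacher_effect = rng.normal(0, 0.15, n_teachers)
df["score"] = (0.7 * df["prior_score"]
               + teacher_effect[df["teacher_id"]]
               + rng.normal(0, 0.5, n_students))

# Classmate (classroom-average) characteristics
df["class_mean_prior"] = df.groupby("teacher_id")["prior_score"].transform("mean")
df["class_frac_low_income"] = df.groupby("teacher_id")["low_income"].transform("mean")

# Baseline specification: one year of prior performance plus student poverty status
m1 = smf.ols("score ~ prior_score + low_income", data=df).fit()
df["resid1"] = m1.resid

# Richer specification: add classmate characteristics and a second prior year
m2 = smf.ols("score ~ prior_score + prior_score_2yr + low_income"
             " + class_mean_prior + class_frac_low_income", data=df).fit()
df["resid2"] = m2.resid

# A simple residual-based 'value added': each teacher's average student residual
va = df.groupby("teacher_id")[["resid1", "resid2"]].mean()
print("Correlation of teacher effects across specifications:",
      round(va["resid1"].corr(va["resid2"]), 3))
```

Production value-added models typically include richer controls and statistical adjustments, but the logic of the check is the same: re-estimate under alternative assumptions and verify that the teacher effects, and the resulting equity gaps, are stable.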

In partnership research, on the other hand, our goal is to produce results that are useful to the partnering organization. That does not mean the policy brief lacked the careful statistical attention and robustness checks that went into the AERJ paper, but the various econometric models are not nearly as well documented in the research brief. Instead, we sought to ensure that the partnership work with ESE met their needs through regular back-and-forth throughout the research process, which led to subtle but important changes in the questions, analysis, and framing of results. For example, the convention in academic work is to display results in terms of standard deviations of student performance, but in response to feedback from our partners at ESE, we re-framed these results in the policy brief in terms of “weeks of student learning” (e.g., see Figure 1) because they believed this framing would be more interpretable to policymakers and practitioners in the state.
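As a concrete illustration of that re-framing, the sketch below converts an effect size expressed in student-level standard deviations into weeks of learning. The benchmark used here (a school year of roughly 36 weeks corresponding to about 0.30 standard deviations of test-score growth) is an assumption for illustration only, not the conversion used in the brief.

```python
# Illustrative conversion from effect sizes (student-level standard deviations)
# to "weeks of student learning." The benchmark below is an assumption for
# illustration; the policy brief's actual conversion may differ.
def sd_to_weeks(effect_sd: float,
                sd_per_year: float = 0.30,    # assumed annual growth, in SD units
                weeks_per_year: float = 36.0) -> float:
    """Convert an effect size in SD units into equivalent weeks of learning."""
    return effect_sd / sd_per_year * weeks_per_year

for effect in (0.02, 0.05, 0.10):
    print(f"An effect of {effect:.2f} SD is roughly "
          f"{sd_to_weeks(effect):.1f} weeks of learning")
```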

As researchers we generally seek out opportunities to contribute to public debates through novel empirical evidence. But the partnership work served as an important reminder to us that research does not need to be novel to be useful. In this case, while the policy brief largely replicates existing findings, this type of evidence is vital to our partners in ESE because they want to ensure that their policy decisions are informed by research that is specific to the Massachusetts context. Partnership work represents a different type of contribution, but it may be a type that is every bit as (or perhaps more) likely to affect the lives of the students that we hope to help through research.

The opinions expressed in Urban Education Reform: Bridging Research and Practice are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.