DCPS Wisely Presses Pause on IMPACT's Use of Test Scores
Late last week, Chancellor Kaya Henderson announced that the DC Public Schools would be "pressing pause" on using value-added as part of its IMPACT teacher evaluation for one year. She explained, "We're doing this because of the transition to the PARCC. We have good reason to believe that the data from the PARCC won't be available until late in the summer - or early fall - which means we won't be able to tell teachers what their final IMPACT scores are until school has already started. That's just not acceptable for a variety of reasons." The pause also offers an opportunity to deal with any problems with PARCC testing that may show up next spring. Right now, Henderson is planning just a one-year pause, but she obviously has the flexibility to make a decision that will support an effective, sustainable evaluation program going forward.
There are three things worth noting here. First, this is the kind of sensible decision that we expect from level-headed, hands-on managers. The fact that some of my more excitable reform friends have been insisting on "full speed ahead" for a couple of years has put some impassioned state chiefs and superintendents in a tough place. Happily, Henderson's got the guts and smarts not to get railroaded.
Second, Henderson enjoys a critical advantage that makes it easier for her to make these kinds of decisions. DCPS built and runs its pioneering teacher evaluation system (DC's state education agency doesn't make laws and policies like its brethren do). That's the nice thing about being a district that really answers only to itself: the folks making policy are the same ones responsible for putting policy into effect--and the ones who will be accountable for the results. This makes for more nimble management, and it aligns authority with accountability in a way that promotes disciplined decisions.
Third, you want a contrast? On Capitol Hill and in state capitals, we have well-meaning lawmakers and politicos designing evaluation systems that someone else is responsible for putting into effect. When the systems and guidelines turn out to be problematic, the people who dreamed these up in state legislatures or state education agencies turn out to be remarkably unaccountable. After all, they don't have to actually do any of what they propose. This means they can push policies that sound nice or feel good, dump their handiwork on Henderson's peers, and then, when problems arise, blame "leadership" or "implementation challenges."
Ultimately, this is why I tend to think there are stark limits on policy-driven reform. If I'm making rules for you to follow, it's just too easy for me to focus on what I think is nifty and nice rather than on what makes sense and will actually prove workable.