
D.C. Unveils Complex Evaluation System


I've finally had a chance to take a look at Washington, D.C.'s new teacher-evaluation system, known as IMPACT, which generated a lot of buzz for being among the first in the nation to incorporate student test scores as part of the teacher rating. (Race to the Top, anyone?)

To be fair, IMPACT is not all about test scores: the evaluation system also includes other pieces, such as scores on a "Teaching and Learning Framework," an extensive set of observational measures similar to Charlotte Danielson's Framework for Teaching, or the rubrics used by the New Teacher Center or the Teacher Advancement Program. Teachers in the district will be observed five times before a final rating is generated, three times by a building administrator and twice by an outside "master evaluator" who is a subject-matter expert and does not report to the building administrator.

There is even a "core professionalism" component to measure whether teachers show up on time and to ensure they don't go missing without an excuse, no doubt to counteract problems with chronically absent teachers.

The Washington Post has a pretty good write-up on IMPACT here, but one thing the story doesn't really convey is that IMPACT is really a composite of 20 different evaluation systems. There are standards for teachers who teach in tested subjects and those who do not, standards for counselors, for instructional paraprofessionals, for non-instructional paraprofessionals, even for custodians.

So, if you're teaching in grades 4-8 in reading or math, 50 percent of your rating is based on an "individual" value-added score and 40 percent on observational ratings aligned to the Teaching and Learning Framework. But if you're in a subject without an accountability test in place, then 80 percent of your rating is based on the TLF and 10 percent is based on a non-value-added assessment chosen by the teacher, such as a unit test from an approved textbook. Special-ed teachers are rated in part according to their ability to turn out well-crafted individualized education plans. You get the idea.

Almost all of the teachers will have at least 5 percent of their evaluation based on schoolwide (not individual) growth.
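To make the weighting scheme above concrete, here is a minimal sketch of how such a weighted composite could be combined. It uses only the percentages mentioned in this post; the component names are hypothetical labels of my own, and the small remaining weight stands in for components the post doesn't break out.

```python
# Illustrative only: weights taken from the percentages described in the post.
# "other_components" is a placeholder for pieces of IMPACT not itemized here.
WEIGHTS = {
    "tested": {          # grades 4-8 reading/math teachers
        "individual_value_added": 0.50,
        "tlf_observations": 0.40,
        "schoolwide_growth": 0.05,
        "other_components": 0.05,
    },
    "non_tested": {      # subjects without an accountability test
        "teacher_chosen_assessment": 0.10,
        "tlf_observations": 0.80,
        "schoolwide_growth": 0.05,
        "other_components": 0.05,
    },
}

def composite_score(scores, group):
    """Weighted sum of component scores (all on the same scale)."""
    weights = WEIGHTS[group]
    assert set(scores) == set(weights), "missing or extra components"
    return sum(weights[k] * scores[k] for k in weights)

# Example: a tested-subject teacher scoring 300 on every component
print(composite_score(
    {"individual_value_added": 300, "tlf_observations": 300,
     "schoolwide_growth": 300, "other_components": 300},
    "tested"))  # 300.0
```

The point of the sketch is simply that two teachers with identical observation scores can land in very different places depending on which of the 20 sub-systems they are slotted into.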

The American Federation of Teachers and the Washington Teachers' Union are already on the record as not liking this new system. It's probably worth noting that the district was not obligated to consult with the WTU in crafting it. But the AFT has expressed discomfort with using test scores beyond the building level, and research is certainly not unequivocally supportive of individual value-added measures.

Though IMPACT was not collectively bargained, Rhee did meet with a bunch of focus groups of teachers while she was developing it. In the preamble to the IMPACT guidelines, she says that the system is first and foremost supposed to provide a pathway to teacher-effectiveness growth, and not just serve as an accountability measure.

But with hundreds of layoffs going on right now (many of which the union says could have been avoided had the district not hired so many new teachers this summer), I wonder how many teachers are going to believe her.


That last line is a gem - though I'm not a D.C. teacher, from the perspective of an interested observer, it's hard to see Rhee getting credit for saying the right things after so often saying and doing the wrong things.

This other line, however, is not such a gem: "research is certainly not unequivocally supportive of individual value-added measures." Double negative and double adverb? I suggest this replacement: "research does not support value-added measures for teacher evaluation."

Nice article. My doctoral project is about using student achievement data in teacher evaluations. Is there any specific research you used for this article and if so what?

DCPS IMPACT Guidebooks for the 20 different school staff positions being evaluated are located here:

Did you get a coherent explanation from Mr. Kamras (or anyone?) of how the "value-added" growth component will actually be calculated? So far, no one I know of in the city has been able to explain it, nor could Mr. Kamras when asked point blank (or so I'm told). He refers such niggly details to the Harvard consultants he's paying to generate it.

Additionally, what is currently printed in the high-gloss IMPACT manuals is apparently inaccurate concerning this calculation (as to how students are identified and grouped for comparison). Will the growth in one's classroom be based on the particular kids in that classroom this year, compared back to their test results from last year? (Current rumor.) Or will one's current classroom growth be compared against a composite/average of all similar students in direct proportion to one's classroom? (As published.) It's apparently a moving target and not firmly decided.

Furthermore, the "growth" calculation is supposed to use the DC-CAS "scale scores," but if you ask any testing expert or McGraw-Hill, they will tell you that you can NOT add, subtract, and average "scale scores." Yet that is the way the Guidebooks explain it.

The IVA component (50% of the total evaluation) has no benchmarks (interim calculations or reports) during the year, so there is no way to tell if one is making reasonable progress. Instead, it's calculated after the school year is over.

Make below 175 points and you are terminated. Make the next category up, and you do NOT get any step or COLA increases. Make the next-to-top level, and you do get your step or COLA pay increase. The top level qualifies one to be considered for an unspecified bonus, which may or may not exist from year to year. The current year is not specified (contract negotiations ongoing, sort of).

I, for one, would not want to be evaluated by a system that cannot be plainly explained up front, that has no benchmarks during the year, and that makes my job and pay increase directly dependent on a final score calculated after the school year ends!

But, hey, the DCPS system is so full of fear and distrust (rightly or wrongly) that everyone is probably just going to play along, because all the TFA types know they have the right answer, no real criticism is appreciated or tolerated, and we know that Ms. Rhee, Mr. Kamras, and their cohorts are all about control from the top down, so why make an effort?

So, it may look good on paper, and it may sound and read like there are lofty goals and the approach is fair, but, as usual, it will all depend on the actual implementation details and program management. (And on whether this IVA thing can really be accurately calculated. Major doubts here.) It will depend on very good and experienced local school administrators. DCPS replaces 25% of its administrators each year, and many of them are very inexperienced or completely green.

Read Guidebook #1 for an explanation of the IVA (Individual Value Add) based on test scores and see if you can make sense of it.

The other strange part of this system lies in the fact that principals are allowed to decide which of the IMPACT groups each employee belongs to.

I am a bilingual teacher, and I am being evaluated as a regular-ed teacher. This is not a true evaluation of my performance, as I am only responsible for teaching half of the standards and my teaching partner is responsible for teaching the other half. How can this be an adequate individual assessment of teaching performance if the same system is not used across the board for all bilingual teachers?

Comments are now closed for this post.

