The cost of new teacher-evaluation systems is likely to vary based on how states and districts choose to establish student-growth measures for all teachers, according to an analysis from a researcher at the Value-Added Research Center, a research evaluation firm and contractor located at the University of Wisconsin-Madison's Wisconsin Center for Education Research.
The analysis compares three different ways of creating these growth measures, a challenge nearly all states face because "value added" measures, based on standardized math and reading tests, cover only a fraction of the teaching population.
Under model 1, states expand the number of grades and subjects in which students are assessed annually. This model's costs would come from procuring and administering commercially available standardized exams in those subjects. The development and administration of new data systems also add to the cost.
Under model 2, states would convene educators to develop assessments in the non-tested grades and subjects. Costs here would come from hiring facilitators to train educators on the process of developing the tests, the actual test development, and the cost of a platform to host the assessments so that districts can administer them.
Finally, under model 3, states would implement student learning objectives, a particular kind of goal in which each teacher sets growth goals with his or her principal and selects a way of measuring growth toward those goals based on some examination of student work. (For a discussion of the research on SLOs and some of the tradeoffs associated with using them, see this blog item.) This option, the paper notes, has fewer direct costs associated with procuring or developing tests, but higher indirect costs to provide districts, principals, and teachers with guidance and training on how to craft and score the SLOs.
So how much are we looking at? The paper doesn't give actual dollar estimates, since costs would vary greatly across districts and states depending on how many teachers and principals they have, their salaries, and how much staff time is allotted to training and test development. It's clear, though, that the SLOs would be the most costly option for districts, since they would need to take a more active role in developing the measures than if they were simply administering state-selected ones. It's also clear that, assuming a mid-size district with 50 schools or so, evaluation costs could easily cross the $1 million mark in the first year alone. (I'd be interested in seeing some number-crunching based on these estimates, if any district official is so inclined to make use of these outlines.)
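Since the paper stops short of dollar figures, here is a minimal back-of-envelope sketch of how such number-crunching might look for the SLO model. Every figure below (staff counts, hourly costs, training hours, facilitator budget) is a hypothetical assumption for illustration only, not a number from the paper:

```python
# Hypothetical first-year SLO cost sketch for a mid-size district
# (~50 schools). All figures are illustrative assumptions, not data
# from the Value-Added Research Center analysis.

SCHOOLS = 50
TEACHERS_PER_SCHOOL = 30          # assumed average staffing

TEACHER_HOURLY = 40               # assumed loaded hourly cost of teacher time
PRINCIPAL_HOURLY = 55             # assumed loaded hourly cost of principal time

TRAINING_HOURS_PER_TEACHER = 10   # assumed training on writing and scoring SLOs
GOAL_SETTING_HOURS = 4            # assumed teacher-principal conference time
FACILITATOR_COST = 150_000        # assumed district-level guidance and trainers

teachers = SCHOOLS * TEACHERS_PER_SCHOOL

# Indirect costs dominate under the SLO model: staff time, not test purchases.
teacher_time = teachers * (TRAINING_HOURS_PER_TEACHER + GOAL_SETTING_HOURS) * TEACHER_HOURLY
principal_time = teachers * GOAL_SETTING_HOURS * PRINCIPAL_HOURLY
total = teacher_time + principal_time + FACILITATOR_COST

print(f"teachers covered:    {teachers}")
print(f"teacher time cost:   ${teacher_time:,}")
print(f"principal time cost: ${principal_time:,}")
print(f"first-year total:    ${total:,}")
```

Even with these modest assumed figures, the total lands above $1 million, which squares with the rough first-year estimate above; the point is how quickly indirect staff-time costs accumulate, not the particular numbers.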
New evaluation systems involve more than simply developing these measures, the paper notes. They also require new systems to manage all that data, observation protocols, and training for observers. In other words, the cost estimates in the paper are probably only the tip of the iceberg.