'Value Added' Data Need Careful Analysis, Consideration, Statisticians' Group Says
If you've been following this blog, then you know that there are few more controversial policy shifts of late than the integration of "value added" methodologies, which are based on students' standardized-test scores, into teachers' performance evaluations.
Some scholars believe that the models are too "noisy" and error-prone to be used for such purposes. Other researchers have argued that they're akin to other widely accepted statistical estimates of performance, such as a baseball player's batting average. Still others think they could be used informally, as a "check" or screen on evaluation results rather than as a component.
There are qualitative objections, as well, on the grounds that doubling down on test scores could affect teaching in unproductive ways, e.g., by promoting "teaching to the test."
Adding its perspective to this volatile mix, the American Statistical Association today issued a position statement on the use of value-added models, or VAMs. Its takeaways for the K-12 community: Proceed carefully, and make sure teachers, administrators, and parents are aware of VAMs' limitations.
For instance, the group notes, test quality and the types of controls used in the formulas will affect the end results. Moreover, the ASA says, the focus on individual teachers' performance could detract from needed attention to system improvement.
"VAMs should be viewed within the context of quality improvement, which distinguishes aspects of quality that can be attributed to the system from those that can be attributed to individual teachers, teacher preparation programs, or schools. Most VAM studies find that teachers account for about 1 percent to 14 percent of the variability in test scores, and that the majority of opportunities for quality improvement are found in the system-level conditions," the statement says.
That said, the group does believe that the information obtained through VAMs can be put to good use in education.
"For example, the models can provide information on important sources of variability, and they can allow teachers and schools to see how their students have performed on the assessment instruments relative to students with similar prior test scores. Teachers and schools can then explore targeted new teaching techniques or professional-development activities, while building on their strengths," the statement reads.
Find the full statement here.