New ESEA May Use Noncognitive Traits in Accountability. Is That a Good Idea?
Boosters of school climate, student engagement, and social-emotional learning should be closely watching the debate over replacing the federal No Child Left Behind law with a shiny, new version of the Elementary and Secondary Education Act.
That's because a late-stage draft for the proposed bill, which is due up soon for votes in both chambers of Congress, calls for states to incorporate factors beyond standardized test scores into their accountability systems.
But is that putting the noncognitive cart before the horse? In the past, big-name researchers of traits like grit, persistence, and growth mindsets have all said such factors should not be measured for accountability purposes, at least until more sophisticated measures can be developed. It's an issue worthy of further exploration.
The proposed ESEA bill would give states big discretion on accountability.
If you haven't been reading the amazing, comprehensive coverage of my colleagues at Education Week's Politics K-12, I recommend you check it out (as soon as you finish reading this post, that is). Politics K-12 posted a great cheat sheet this morning, breaking down what we know about the proposed new ESEA.
That post includes a link to draft language of the bill, which says state accountability systems would be required to include "not less than one indicator of school quality or student success that allows for meaningful differentiation in school performance and is valid, reliable, comparable, and statewide, which may include measures of - (I) Student engagement; (II) Educator engagement; (III) Student access to and completion of advanced coursework; (IV) Postsecondary readiness; (V) School climate and safety; and (VI) any other indicator the state chooses that meets the requirements of this clause."
That seems pretty broad. And it's not unreasonable to assume that states could use the "any other indicator" language to support inclusion of students' social and emotional skills, grit, or growth mindsets in their accountability models. Under a unique local-level waiver from the current version of No Child Left Behind, a group of California districts is developing a system that does just that.
This is a big win for 'whole child' advocates, right?
It's easy to see why some would frame it that way. Some education leaders and policy folks have argued for years that test scores provide too narrow a view of school success and ignore all of the hard work teachers must do to nurture the non-academic traits students need to succeed in school and in life outside of the classroom. Recently, some philanthropy groups have funded work to develop new measures for these traits, arguing that, in education, what's not counted doesn't count.
But others may argue that the proposed bill could encourage states to judge schools with flawed measures.
That's because researchers who've popularized this work have consistently warned that current measures of student traits are imprecise, imperfect, and subject to all kinds of biases. Or, as researchers Angela Duckworth and David Yeager put it in a May essay, "perfectly unbiased, unfakeable, and error-free measures are an ideal, not a reality."
Duckworth, a University of Pennsylvania associate professor of psychology known for her research on grit, and Yeager, an assistant professor of developmental psychology at the University of Texas at Austin who focuses on growth mindsets, detailed an array of biases that can affect current measures of student traits and skills, making them less accurate.
Take surveys, a common tool for measuring student growth in these areas. Because respondents judge themselves against different comparison standards, similar students can produce very different results, an effect called reference bias, the researchers wrote.
"Current data and theory suggest schools that promote personal qualities most—and raise the standards by which students and teachers at that school make comparative judgments—may show the lowest scores and be punished, whereas schools that are least effective may receive the highest scores and be rewarded for ineffectiveness," they wrote.
And Carol Dweck, the Stanford professor whose work on growth mindsets has caused many schools to rethink their approach to student engagement, told me earlier this month that she believes using mindset measures in accountability will lead to the use of shallow interventions that don't really help students.
Education leaders and researchers seem more comfortable using measures of school climate in accountability, or at least as a tool to compare the strengths and weaknesses of various schools. After all, when it comes to determining whether a student feels safe and supported, perception is reality. That's why districts like Cleveland use data from "conditions of learning" surveys to track their school climate and social-emotional learning efforts. But those efforts don't include penalties or additional requirements for schools that don't perform well.
And there are climate- and culture-related data that schools already collect consistently and that could be used to track these factors for accountability purposes: suspension rates, reports of bullying, and chronic absenteeism, to name a few.
The big questions, then, are which factors states will put in this big, ambiguous category, how they will choose to measure those traits, and what criteria will be used to determine whether those measures are "valid, reliable, and comparable."
What do you think? Is the inclusion of "other indicators" in the draft bill language a good thing? Or is it too much too soon?
Related reading on school climate, student engagement, and social-emotional learning:
- Study Measures Which Teaching Traits Boost Student Agency, Mindsets
- Walton Family Foundation Invests in Research on Measuring Grit, Character
- Carol Dweck Revisits the 'Growth Mindset'
- 'Nation's Report Card' to Gather Data on Grit, Mindset
- Urban Districts Embrace Social-Emotional Learning
- What Do Students Need to Succeed? Guide Helps Educators Navigate the Research