Zap! Pow! The Amazing Powers of Data
[David Brooks, in the New York Times] In every other job in this country, people are measured by whether they produce results. For decades, that didn't apply to schools, where people were rewarded even as student achievement stagnated. This administration has sided with reformers who want to change that -- by measuring teacher performance.
[A Blueprint for Reform: The Reauthorization of the Elementary and Secondary Education Act] Performance targets, based on whole-school and subgroup achievement and growth, and graduation rates, will guide improvement... States, districts and schools will look not just at absolute performance and proficiency, but at individual student growth and school progress over time.
["Glitches Delay FCAT Scores," The Miami Herald] Nearly all of the major testing companies have had problems since the federal No Child Left Behind law made standardized testing a national priority, said Robert Schaeffer of the National Center for Fair & Open Testing. Schaeffer said companies are eager to get in on lucrative contracts -- and sometimes bite off more than they can chew. While standardized tests are used to hold schools, students and educators accountable, "there is absolutely no accountability for the corporations who make those tests."
Figures don't lie. We need hard numbers. Data-driven accountability.
For some people, anyway. Test makers evidently get a pass, at the moment...because they have so much to do, so little time? Because we're all human? That can't be right.
And--does David Brooks seriously believe that statistical outcomes are the driving force in hiring and advancement decisions made in every American workplace? That qualities like perseverance, reliability and an engaging personality--not to mention agreeing with the boss--are less important to employers than the numbers on the spreadsheet? That it's only in the backwards, benighted education sector that people aren't relentlessly pursuing scientifically based data?
Has David Brooks ever come face to face with an array of student testing numbers that makes no sense whatsoever--delivered months past the point where it could do any good? I have. Sitting around a table, wondering why our brightest kids didn't do well on something is instructive, but wondering why kids we'd written off (or assigned to the Learning Disabilities room) were suddenly producing amazing results is beyond challenging. And it happens.
The goal is certainly not more data. Contrary to popular opinion, having all American kids taking identical tests will not give us more guidance, either--the numbers may be aligned, but they won't tell us what to do next. Creating assessments that ask students to apply what they've learned to real-world tasks links assessing to complex learning. In using performance assessments, however, the element of human judgment creeps in. Do we elevate informed evaluation over standardized numbers?
That's the thing about data: it's only as good as the storyline and implications that accompany it. Without a knowledgeable human to think about why we got those results--and develop a plan for moving forward using the information it provides--data is just a statistical beauty contest.
And, as Claus von Zastrow points out, it's easy for a number to get "...ripped from its context, cleansed of sometimes dubious origins, echoed and amplified in scads of policy reports, and finally enshrined in stories put out by major news outlets."
It was Lewis Carroll who said "If you want to inspire confidence, give plenty of statistics."
A hat tip to Roxanna Elden for the Miami Herald piece.