

Rick Hess Straight Up

Education policy maven Rick Hess of the American Enterprise Institute think tank offers straight talk on matters of policy, politics, research, and reform.


Is All of This Testing Really Necessary?

By Trenton Goble — July 16, 2012

Note: Trenton Goble, Chief Academic Officer of MasteryConnect, is guest posting this week.

I want to thank Rick for giving me the opportunity to guest blog this week, and I hope you will join in the conversation. Over the last 20 years, I have worked as an elementary school teacher, principal, and district director. In recent years, I have become very interested in how we, as educators, assess and monitor individual student performance. Three years ago, I helped co-found MasteryConnect, an online solution geared toward helping teachers monitor student mastery of essential skills and concepts. My posts this week will focus mainly on assessment, and I hope you will find them both provocative and solution-oriented.

The other day, I was having a conversation with a teacher who was passionately expressing her frustration that she didn’t have enough tests to give her students. She lamented the fact that she just didn’t have enough charts, graphs, and data points to make informed decisions about her students’ progress. Sure, she had district benchmark tests, high-stakes end-of-level tests, predictive tests, computer-based adaptive tests, practice tests, nationally normed summative tests, and a grade book full of tests and quizzes given throughout the year, but she still had a little time left for teaching, which meant she surely had time to squeeze in a few more tests. Okay, okay, I made most of that conversation up, at least the part about the teacher wishing she had more tests to give. What she actually said was, “Is all of this testing really necessary if the results are not telling me anything I don’t already know? Wouldn’t it be better to spend more time focusing on the types of assessments that would help me determine which of my students have mastered the concepts I taught, and which students need more help? Wouldn’t that make more sense?”

If the intent of all this testing is truly to improve student outcomes and inform teacher practice, I would argue that the vast majority of the tests given today do neither. If the intent is to build vast data sets that can be used as carrots and sticks to arbitrarily reward and punish, one might find real value in our current testing practices. I can remember, as a first-year principal, naively trying to organize the mountain of test data that had accumulated on my desk. I spent hours organizing the data into spreadsheets and worked with teachers on the district-assigned Data Day to try to find meaning in all of the numbers. We actively sought to target students who needed additional support and looked for ways to engage our advanced learners. Year after year, we went through this process; some years it seemed we did a little better, and other years slightly worse.

With the introduction of No Child Left Behind, the pressure to improve increased, and our focus narrowed to cover the material that would be tested. I recall sitting in a principals’ meeting with 50 of my peers as we were handed our Adequate Yearly Progress (AYP) results. The relief and pride I felt when my school made AYP were most certainly a validation of our hard work. I am sure it helped, at least a little bit, to have the district’s gifted and talented magnet program at my school. The fact that my school was also located in one of the more affluent neighborhoods probably didn’t hurt either. The reality was that every principal in that room could predict, with nearly 100 percent accuracy, which schools would pass and which schools would fail. The results of the end-of-level assessments seemed to correlate more closely with the schools’ demographics than with the amount of work a school invested in improving its results. At the time, it felt as though the only real reward for making AYP was a great sense of relief, and the consequences of failure were limited to public humiliation when the failing schools were listed in the local newspaper.

When I was asked to be the principal of a school that had rarely, if ever, made AYP, I jumped at the chance. I pored over the data and made charts and graphs to share with teachers during our Data Day discussions. If my last school could make AYP, so could this one. Once again, we narrowed our focus and invested heavily in tested subjects. After we felt the sting of not making AYP my first year, we managed to achieve a passing grade every year thereafter. Granted, we never actually met the standard, but we showed enough improvement to earn a waiver each year. It was clear that we were doing all the right things to improve our test scores, but something didn’t feel right. All of our charts and graphs pointed in the right direction, yet when we looked more closely at our results, at individual student progress, there were still far too many students at risk of failure.

When we gathered the data from all of the tests our students were subjected to each year, we were left with nothing more than a mountain of papers that only reinforced what we already knew: This student struggles with math... that student struggles with reading. When you consider all of the time, energy, and money invested in the multitude of tests given to our students, it would seem reasonable to expect a little more information. Shouldn’t we at least know the specific skills and concepts our students know and don’t know? Shouldn’t we get that kind of information while we still have time to go back and provide interventions for the students who have failed to master those skills or concepts? Why do we continue to use tests and gather data that do neither? To answer the teacher’s question from the beginning of this post: no, all of this testing isn’t really necessary. And I can’t help but believe that I am not alone in thinking that it is well past time to completely reevaluate our testing practices.

--Trenton Goble

The opinions expressed in Rick Hess Straight Up are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.