Learning From the Reform Mistakes of the Past
Today's guest contributor is James W. Pellegrino, Liberal Arts and Sciences Distinguished Professor and Distinguished Professor of Education, University of Illinois at Chicago, and co-director of the Learning Sciences Research Institute.
Those who ignore the past are destined to repeat it. We can equally well anticipate that those who ignore the present implementation challenges of the Common Core and the Race to the Top assessment program are destined to repeat those mistakes when it comes to assessing the Next Generation Science Standards. One lesson of the current experience is that implementation has moved too fast, especially in a climate of high-stakes testing and accountability. Such are the arguments and evidence emerging from various quarters, including New York State.
Let's be clear: The problem is not that new and better common standards aren't sorely needed, nor that, with new and more demanding standards, the assessments must change accordingly, regardless of their proposed use. The problem is that large-scale change can't come all at once, and it can't be implemented solely from the top down. Educators need time and resources to change their practice, and assessment developers need time to properly design and validate the materials and tools that teachers and others need to evaluate student learning effectively and fairly.
Previous entries in this blog from contributors such as Edmund Gordon and Steven Ladd have pointed out that assessment should be designed to support teaching and learning, not to undermine it. This is not to say that what the two assessment consortia have been developing these past three years, and are field testing this spring, will not be of high quality relative to the standards they purport to assess. The jury is still out on that. We will have to wait and see what the four-year Race to the Top assessment program has wrought and how it will affect students' learning and teachers' lives. The point is that we started in the wrong place. Our policy mandates at the federal level dictate that we build large-scale assessments for purposes of accountability, and then ask the system to make the transition while struggling under the weight of mandated testing using assessments not aligned to the Common Core. We have seen the results and arguments in multiple states, from New York to California. The latter, in effect, threw down the gauntlet by choosing to move forward rather than backward. And many states and educators, including associations representing teachers and administrators, are now calling for a delay in high-stakes decisions based on performance on the new assessments. Note that they are not necessarily rejecting the Common Core, nor the need for high-quality assessments that can inform and support the system as it tries to make progress toward college and career readiness.
But what if things were different? What if instead of leaping to develop large-scale tests as PARCC and SBAC have been funded to do, our government had taken the $350 million and used it in a different way? What if they had invested that money in developing and validating assessment tools and resources to help teachers in the classroom focus on the student performances and forms of deeper learning that are at the heart of the Common Core standards?
Well, we might have a chance to answer that "what if" question in the case of science education. In spring 2013, Achieve issued the Next Generation Science Standards (NGSS), grounded in the National Research Council's framework for K-12 science education. And in December 2013, the National Research Council issued a report focused on developing assessments for the NGSS. The NRC report argued that the competencies the standards call for substantially change the game for science teaching and learning. Assessment poses a major design challenge because the standards focus on students' capacities to reason with and apply core disciplinary ideas across areas of science.
And while the design challenge is great, as it is for the Math and ELA standards, the NRC report does not argue that we should abandon developing assessments that monitor how well the system is educating our youth. What it does argue for is a balanced system that includes three components: classroom assessments to support teachers and students; monitoring assessments for use at the state policy level; and indicators of opportunity to learn. Of equal significance is the argument that the assessment system should be built from the bottom up, or inside out (starting at the classroom level and working toward the monitoring level), just the reverse of what we have done with ELA and Math.
Have we learned enough from our experience with the Common Core and Race to the Top that the teaching, learning, and assessment of science can profit from hindsight and a bit of foresight? Many of us certainly hope so.
James W. Pellegrino
University of Illinois at Chicago
Learning Sciences Research Institute