
This Is Not a Test: One State’s Assessment Pilot Seeks to Grow Up and Out

By Andrew Ujifusa — November 09, 2015

Through the work of a few districts that have the federal government’s blessing, New Hampshire is moving closer to answering a question it’s been asking for a while: Can schools and districts collaborate to create standardized methods of student assessment, without deploying typical standardized tests?

I got a small taste of what that answer might look like last week, when I tagged along with members of the Council of Chief State School Officers, among others, to visit a few districts in the Granite State. What’s clear is that teachers and administrators are in many cases thrilled with the results of the pilot project, but a tremendous amount of hard and often finely balanced work goes into this new assessment system.

So, what’s going on in New Hampshire right now? Eight districts in New Hampshire are engaged in a pilot of the Performance Assessment of Competency Education, or PACE, in which schools administer performance-based assessments throughout the year, in lieu of the summative statewide assessment, Smarter Balanced, in certain grades. Districts are working with each other, as well as the state, to develop at least some common performance assessments.

The U.S. Department of Education has approved this pilot of local assessments for two years. The 2015-16 school year is the second year of the experiment, the first of its kind during the Obama administration. (The New Hampshire Department of Education published the schedule used in the 2014-15 school year for Smarter Balanced, the performance assessments, and other exams.) The PACE program builds on New Hampshire’s extensive record with competency-based education, in which students are allowed to demonstrate proficiency in a subject area without any prerequisite seat time. The state is also working with the CCSSO’s Innovation Lab Network as it fleshes out PACE.

The goals here are to get more in-depth and current information about what students know and can do than schools would get from traditional summative exams, to help students tackle material in more meaningful ways, and to have the assessments inform and shape instruction.

One other intangible point supporters make: Students taking the performance assessments don’t think they’re being tested, at least not in the way they’re used to.

But not everyone likes the idea of creating these kinds of locally driven performance assessments. Perhaps the biggest criticism is this: Scaling up the assessments, in particular, is extremely difficult, if not impossible. Critics argue that you’re essentially asking dozens, if not hundreds, of districts to perform like a flawless synchronized swimming team as they develop, administer, and score these performance assessments, without either turning teachers into a herd of drones or letting the work succumb to variability. That, in turn, undermines any meaningful statewide accountability system in which academic performance is truly comparable across schools and districts, they say.

What are some other questions or concerns that have cropped up on my trip and elsewhere? Some question whether performance assessments capture enough individual-level work in situations when many of the activities emphasize collaboration among students. There’s the possibility that only districts with decent financial backing, and with relatively few demographic challenges or other socioeconomic stressors, can do performance assessment well. (The districts I visited weren’t particularly diverse racially, and one administrator noted that her high school was a “well-resourced” one.) And parents and other community members can sometimes be puzzled if not irritated by the shift, at least at first.

‘A Lot of Patience’

So what does it take to develop and review just one performance task in PACE?

Spaulding High School math teachers Lee Sheedy and Steve Prescott talked about their work with teachers in other districts to develop a geometry performance task that asked students to design a water tower, built from certain geometric shapes, that could hold about 45,000 cubic feet of water. The performance assessment, or task, also asked students to include a cover page explaining the goal of the proposal, models or scale drawings, calculations and mathematical strategy, and an analysis of the final recommendation. And they needed to create at least two different designs for the tower, among other specifications.
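To get a rough sense of the geometry involved, consider a purely cylindrical tank; the shape and the 15-foot radius here are illustrative assumptions on my part, not details from the actual task. Setting the cylinder’s volume equal to the 45,000-cubic-foot target and choosing a radius pins down the height:

\[
V = \pi r^2 h = 45{,}000 \;\text{ft}^3
\quad\Longrightarrow\quad
h = \frac{45{,}000}{\pi r^2} \approx \frac{45{,}000}{\pi (15)^2} \approx 63.7 \;\text{ft}.
\]

Presumably, part of what students had to weigh is that many combinations of radius and height satisfy the same volume requirement, which helps explain why the task demanded at least two designs plus an analysis of the final recommendation.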

Prescott said that when he agreed to collaborate with other teachers (including those from other districts) to create a performance task at the start of the 2014-15 school year, he initially planned to spend only five school days creating and assessing it. He ended up needing about twice that much time.

“At the first meeting, we kind of had no idea what a performance assessment really was,” said Sheedy, who called the work involved “pretty staggering.”

Ultimately, over the course of several months, the team of teachers went through about 20 revisions of the water-tower performance task before putting it in front of students. Those revisions addressed concerns about clarity for teachers as well as students. And in the end, the teachers cut the assessment students received in half.

“It took a lot of patience with the four different districts, with teachers having very different opinions,” Sheedy said.

Teachers also had to develop formative assessments to prepare students along the way for the water-tower task, which was intended as a summative task. Through these formative tasks and other work, teachers wanted students to come to understand implicitly the most efficient design for a water tower, without being told explicitly what it was.

Before they tackled the water-tower assessment, students got to ask questions about the expectations for the assessment (without getting too much assistance or information from teachers), and teachers in turn clarified the scoring rubric by which students would be judged. Then, over two days, students worked on the task individually for three hours.

Experimentation Amidst Stability

That entire process, Prescott said, was designed to ensure “that the administration of this was exactly the same in all classrooms.”

And then teachers had to agree on a common method for scoring the performance tasks. In one day, for example, the team of teachers managed to score just five of the assessments. Teachers had to learn to rely on the scoring rubric that specified what students had to do in order to achieve each score.

Even after months of work developing the task, Prescott noted, another teacher “might have had completely different expectations than I had. The [scoring] process made that uniform.”

But what do Prescott and Sheedy say they got out of the entire experience? Sheedy said that last year he adjusted his curriculum to match the timing of the performance task, and that he liked the change so much he has kept it in place this year. He also said students had a better understanding of their learning trajectory and of how they would be judged.

And perhaps most importantly, according to Sheedy, “In all the years I’ve taught, I’ve never seen the students so engaged as when they engaged with these performance tasks.”

“It definitely made you stretch your mind,” said Emily Benson, a junior at Spaulding High School who took a performance assessment in science as a sophomore.

One impression left by remarks from teachers at Rochester Middle School and other PACE pilot schools is that administrators in many cases are willing to let teachers experiment. What’s more, superintendents and principals have been able to count on relatively stable district- and school-level leadership, even as some teachers decide to move on because the new system for learning isn’t for them. (In the Rochester district, 47 percent of the 4,400 students live in poverty, according to district statistics.)

New Hampshire isn’t reinventing the wheel here. Nebraska tried a similar kind of local-assessment system beginning in 2000, but pulled the plug about seven years ago. But in a different, arguably more challenging political environment for traditional summative standardized tests, New Hampshire’s budding approach might appeal to a broader audience. And in fact, states such as Colorado and Kentucky have expressed interest in doing something quite similar to what the Granite State is developing.

What if New Hampshire can’t convince the U.S. Department of Education that its local assessment system can be scaled up? It will drop PACE and administer the Smarter Balanced tests in English/language arts and math the same way states have traditionally given summative exams. Unless, of course, a reauthorization of the Elementary and Secondary Education Act is passed by Congress and signed into law by the president. That might create some more breathing room for these kinds of local assessments.

For more on performance assessments, read this September interview with New Hampshire teacher Jennifer Deenik on our “Learning Deeply” blog.