Opinion

What Assessment System Would Serve Students & Society?

By Joe Nathan — February 17, 2015

Joe Nathan opens this week’s discussion. Deborah Meier responds, and Nathan offers a brief reaction.

Dear Deb,

You asked last week whether human judgment has a place in education. My response, going back over more than 40 years, is “Absolutely yes.” To help show how that could be done, I’d like to discuss a national report entitled “What Should We Do: A Practical Guide to Assessment and Accountability in Schools.” Our Center coordinated this project on assessment in 1999-2000, working with some of the most thoughtful evaluation authorities and some of the most creative, innovative public schools in the country.

The report strongly encourages using multiple measures, including human judgment. I think it describes the kind of assessment system that makes sense both for helping students grow and for helping the broader society understand what’s happening in a school.

First, we convened a number of evaluation authorities to discuss what was vital, and what was valuable, in assessing a school. These experts included Professor Lauren Resnick, then president of the American Educational Research Association; Professor Jim Ysseldyke of the University of Minnesota, one of the nation’s leading authorities on assessment of students with special needs; Dr. Edward De Avila, a nationally known authority on assessing students for whom English is a second language; Professor James Catterall of the University of California, Los Angeles, who has studied the role of the arts in education and the assessment of students with whom traditional schools have not succeeded; and Dr. Al Ramirez, a former teacher, principal, district superintendent, and Iowa Superintendent of Public Instruction, then a professor at the University of Colorado, Colorado Springs.

Before producing a final report, we invited reactions and examples from district and charter public schools all over the US, via various networks and a November 1999 article in Education Week. Eventually we shared information from 11 district and 10 charter public schools from across the country. (One of them was Central Park East, which you founded.) We also talked with authorities from the American Federation of Teachers, American School Counselors Association, Council of Great City Schools, Charter Friends Network, Massachusetts Charter Schools Resource Center, National Association of State Boards of Education, North Central Regional Education Laboratory, Rural Trust, and Small Schools Network.

I list these individuals and organizations not to be boring, but to show that we tried to listen to and learn from a variety of thoughtful people with wide-ranging insights and experience.

Together we developed six “vital” and three “valuable” features that the report suggested be part of any and every school’s evaluation process.

The six vital features were:

  • Clear, measurable outcomes for each school.
  • Goals that are widely understood and supported by families, students, and educators.
  • Multiple measures, including use of standardized tests and applied performance measures.
  • Measurement of all students’ work, not necessarily by using the same assessment.
  • Assessment of students’ growth, including students who don’t speak English at home; again, not necessarily using the same assessment.
  • Explanation of how information gained from assessments is being used to inform school improvement efforts.

We concluded that the following were valuable:

  • Using a person or persons outside the school to help assess student work.
  • Measuring the experiences and attitudes of school graduates.
  • Creating a parent/educator/community committee to supervise the assessment effort.

Speaking specifically to your point about human judgment’s place in assessment, the report cited graduation programs at Central Park East, Urban Academy, St. Paul Open School (now known as Open World), and Minnesota New Country School. Each of these schools uses a portfolio approach to high school graduation. The first three are district schools; the fourth is a chartered public school. Though the details vary, each school relies in part on assessments of students by adults, in some cases including both educators and community experts. The report also cited a number of other performance measures in K-12 public schools.

The report also drew on the experience of Alverno College in Milwaukee. For decades, this college has used a rubric developed by faculty to measure a student’s ability to speak in public. Alverno keeps a record of how each student progresses toward various public speaking standards, measured by humans, not standardized tests. It’s a great example of measurement that uses a mixture of standards and human judgment to determine whether students are making progress, and in what ways.

To sum up, yes, I think there is a very important role for human judgment in assessing students and in the overall assessment of a school. We’ve tried hard to provide examples of how this can be done.

Deborah Meier responds:

Dear Joe,

What an amazing 10 days I’ve had: to Lima for a week with my granddaughter, and then to Texas for the always amazing meeting of the North Dakota Study Group! Our focus this time was issues related to the good fortune of having a home language that isn’t English.

The NDSG first met in North Dakota in 1972 at the request of one of my heroes, Vito Perrone, to lend support to Head Start parents in their effort to stop the use of standardized tests--IQ tests at first--to measure their children.

They found it insulting.

And we agreed, including many testing experts who joined us--like Ted Chittenden and Walt Haney, to name just two. So now, 40-plus years later, we’re back fighting an even more pervasive testing system.

I’d pay more heed to those weighty organizations and experts you mention if they had challenged, more often and more loudly, the testing mania that has undermined serious and useful education for the past 20-25 years. We’ve needed them and other experts to just plain say the truth: this is not science. I like some points more than others.

That they start by proposing that we use clear, measurable tools to judge institutions and children seems sad. Of course, it could be that they are using the term “measurement” in a way we’re not accustomed to. Measurement has become, for me, synonymous with a system for differentiating: ranking those with the most to the least “academic” smarts, objectively. In fact, such systems do neither. If they are suggesting a new paradigm--one that does not require a ranking order or a pretension of precision, and that is designed with a particular purpose and audience in mind--then I’m arguing semantics, and I apologize.

Another query. Is using multiple bad tools any better than one? Or are they recommending using a range of very different “tools”--including observation, taped reading samples, student work, etc.? What defines, in their terms, either reliability or validity? Both presently rest on what I deem to be built-in race and class biases--and ever more shall do so. That a test predicts success on similar tools given in the future says as much about the real world as about the student’s competence!

We’ve abandoned the normal curve system of scoring by percentages (which I’m not a fan of) for an even sillier one--which I call a politically set scoring system. It’s set to ensure that just the right number succeed and fail, based on the latest politically “rigorous” agenda. Not new, Joe: the NYC DOE did something similar for decades--to make each new superintendent and/or mayor look good.

So many of the issues involved in the assessment discourse speak to how we define what it means to be a well-educated citizen of a democracy--useful to oneself and to the common good. If education is a preparation for the weighty tasks of deciding on matters with earth-shaking repercussions for our common future, we’d better spend more time thinking together, community by community, about what we want to judge or measure. The beauty of the kind of work I did with Ted Sizer is that we each worked to build schools that could learn as they were developing--from themselves and from others. I borrowed from the Parker School (a charter), and they from us, and so on. We also benefited from the idea of visiting teams of colleagues. The Center for Collaborative Education in Boston is doing some interesting work--based on the original Pilots (1/3 of Boston’s schools are Pilots) and now being proposed for all Boston schools.

Such an approach might help us see when online learning can or cannot be useful, when classrooms are too large (to do x), and perhaps why ranking everything is profoundly undemocratic.

I’d like to reverse the process and have that body of experts respond to proposals put forth by teachers and parents and even kids, as they did when the Coalition schools, some 20 years ago, developed their graduation criteria. The designers are known as the New York Performance Standards Consortium. The bottom line: the designers should be as close as possible to the real data--the kids!

Deb

Joe Nathan responds:

Deb,

In this brief response, I’ll comment on a few of the concerns that you mentioned above. I hope we can continue this discussion and that many others will join in.

First, yes, Deb, you and I agree that it would have been valuable to have some families, educators and students helping develop features of each school’s assessment system. The report I described above recommended that every school have a committee of educators, family and community members helping develop and supervise its assessment program.

Second, you’ve recommended, and I agree, that educators should examine the work of the NY Performance Standards Consortium. In fact, the “What Should We Do” report included examples from one of the founders of that Consortium, Urban Academy in New York City. The report also included examples from Central Park East, which you founded.

In developing the “What Should We Do” report’s recommendations, we asked for and received terrific responses and suggestions from educators all over the U.S. Many of their insights are included in the report.

At the same time, I’m glad we included people like Lauren Resnick, then president of the American Educational Research Association. She has had a distinguished career, and she and others involved have devoted decades to helping improve public schools.

Third, you wrote we should have started with people in schools, and asked the researchers to react to recommendations from schools. If I were doing the project again, I would include, from the beginning, both researchers and people working every day in some outstanding schools. (And by outstanding, I don’t mean just those with high test scores.)

Fourth, yes, I believe that each school should have some clear, measurable goals that are well known to and supported by the faculty, families and students. Progress toward many of those goals would not be measured by traditional standardized tests.

You disagreed with this recommendation.

Why should schools have some clear, measurable goals? Families deserve to know what a school is trying to accomplish. So does the broader community. Moreover, a school is more likely to reach its goals if educators, families and students involved in the school know about and agree on what the institution’s goals are, and how they will be measured.

Dr. Wayne Jennings, founding principal of the St. Paul K-12 (district) Open School, stressed the value of having multiple goals, multiple measurements, and annual reports. You may recall Jennings as a member of the North Dakota Study Group during its first 15 years. Jennings has been a wonderful mentor for many people, including me. He is a visionary educator who has helped create several exciting public schools, district and charter.

Jennings was and is no fan of heavy reliance on standardized tests. He was very clear about that when, in 1970, he joined with hundreds of parents and community members who together convinced the St. Paul Board to establish the Open School.

He emphasized: “It’s not enough to describe what you oppose. You have to explain what you are for.”

So with educator, family, and student participation, the Open School produced a yearly report, something I’d suggest every public school do. After several years, the U.S. Department of Education recognized this school as a “carefully evaluated, proven innovation worthy of national replication.”

The Open School annual report reflected its values and goals. For example, the school believed in learning from families and students, and in listening to graduates, so it surveyed all three groups and used some of their suggestions to improve the school. One survey of graduates recommended that the school increase the amount of writing students did; educators agreed and added more writing to the curriculum.

The school also valued learning in the community, not just learning inside the building. It believed students should help improve the community. So in addition to standardized test scores, the annual report included, for example:


  • Results of parent and student surveys (identifying strengths and areas needing attention).
  • Results of surveys of graduates (once the school had some).
  • Examples of how educators used the surveys mentioned above to improve the school.
  • Examples of students’ community service projects.
  • Examples of local and national field trips.
  • The number of students who took college classes.
  • What graduates did after high school.

The annual report also included measures that parents, students, and community members had suggested.

A standardized test should be part of the assessment system for each publicly funded school. But as you know, there are many such tests. I think the best of them measure progress over the course of a year, rather than providing a single annual snapshot. And yes, we agree that the NCLB expectation that all students be proficient by 2014 was absurd.

You asked, “Is using multiple bad tools any better than one?” Of course not. But like you, I’m all for giving schools the power to select several ways to measure what’s happening in the school. Schools should not rely only on standardized test scores and, in the case of high schools, four-year graduation rates.

If I were working at Open World School today, I’d suggest adding the Hope Survey. This survey measures whether students feel they are learning to set and work toward goals, and whether they are developing a sense that they can accomplish things they value. A University of Kansas study found that this was a better predictor of college graduation than high grades in high school or high test scores. Students who work on “real world” projects develop the kind of skills and attitudes that the Hope Survey measures. The Hope Survey is available from EdVisions (which, in the interest of full disclosure, serves as the fiscal agent for our center).

Assessing student achievements, and assessing an overall school, are big subjects. I’m glad we’re discussing how this should be done. From my perspective, using a variety of measures, including some selected at the local school level by educators, families and students, is the best way to capture the broad array of things that each school is trying to do.

Joe Nathan has been an urban public school teacher, administrator, PTA president, researcher, and advocate. He directs the St. Paul, Minn.-based Center for School Change, which works at the school, community, and policy levels to help improve public schools.

The opinions expressed in Bridging Differences are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.