
Should Data Matter?


Dear Deb,

As you know, Mark Twain (or Disraeli or someone) once wrote that there are three kinds of lies: Lies, Damn Lies, and Statistics. Everyone in education, or so it seems, has learned how to present statistics in ways that bolster whatever they want to do. Either the sky is falling, or things have never been better.

Let me suggest that the statistics you offer are open to different interpretations, to be generous. I don't really know how anyone can say how students in Singapore or Sweden would perform IF they took a NAEP test, which they did not take. Why not just look at the international tests that were taken by students in the United States and many other nations? At least one need not hypothesize about what would have happened, but can look at the results of taking a common test (by the way, this makes my point about the value of having a common national test, rather than 50 different state tests).

On the TIMSS test of fourth grade science in 2003, our students scored significantly behind Singapore, Chinese Taipei, Japan, and Hong Kong. They scored significantly ahead of children from the Netherlands, Australia, New Zealand, and 13 other countries.

On the TIMSS test of fourth grade mathematics in 2003, our students scored significantly behind Singapore, Hong Kong, Japan, the Netherlands, Latvia, Lithuania, the Russian Federation, England, and Hungary. They outscored Cyprus, Moldova, Italy, and 10 other countries (including Iran and Tunisia).

On the TIMSS test of eighth grade science in 2003, our students scored significantly behind Singapore, Chinese Taipei, Korea, Hong Kong, and Japan. Our students outscored most other countries, but many of them were not comparable, industrialized nations (e.g., Jordan, Iran, South Africa).

On the TIMSS test of eighth grade mathematics in 2003, our students scored significantly behind Singapore, Korea, Hong Kong, Chinese Taipei, Japan, Belgium-Flemish, the Netherlands, Estonia, and Hungary. Our students were tied with several other nations, including Malaysia, Latvia, the Russian Federation, and the Slovak Republic, and outperformed 25 others (including many Third World nations, such as Botswana, Lebanon, Ghana, and Morocco).

On the PISA (Programme for International Student Assessment) tests, which assess 15-year-old students, the U.S. scores about average among the 27 nations in reading literacy. The highest-scoring nations in reading are Finland, Canada, and New Zealand. Among the nations that do better than the U.S. are Australia, Ireland, Korea, the United Kingdom, and Japan.

On the PISA tests of math and science literacy, the U.S. students are average. In math, our students score significantly behind Japan, Korea, New Zealand, Finland, Australia, Canada, Switzerland, and the United Kingdom. In science, our students score significantly behind Korea, Japan, Finland, the United Kingdom, Canada, New Zealand, and Australia.

The American Institutes for Research reviewed the TIMSS and PISA data and concluded that if one looked only at the 12 nations that participated in all the assessments, American high school students' performance is consistently "mediocre," ranking 8th or 9th among the 12 nations. That study can be found here.

Our fourth-grade students begin well (although the AIR authors say their performance is also "mediocre"). By the time they are 15, however, they are smack dab in the middle (or worse, according to AIR). Is that okay? Presumably we want to know whether our students are learning as much as their peers in similar nations. Maybe we can learn something from the results of these tests and improve our own curriculum and pedagogy. Maybe we can just shrug and say we don't care how they score in comparison to any other nation and that test scores don't tell us anything that we need to know.

Again, what I take away from all this is the value of a common test at different grade levels. And by the way, NAEP is not a "low-stakes test." It is a no-stakes test. It tests a sample of students. No student learns his or her score. No teacher learns his or her students' scores. You can't get closer to the definition of no-stakes than that.

As for Campbell's Law, I find it confusing. I take it to mean that any measure or information that is used for policymaking tends to be corrupted by its use. What should we base policymaking on then? Hunches, ideology, hopes, fears? I think we have to rely on some sort of reliable data. We should be talking about how to get data that can improve decisionmaking, how to make it better, how to refine it, how to make it more reliable, rather than disparaging any possibility of ever measuring anything meaningful.

Best,

Diane

5 Comments

Dear Diane & Others,

Here are some observations from your post:

1. It appears that U.S. students become less inspired by math and science as they move through the system. That would seem to mean that our middle school and h.s. math and science curricula (and perhaps teachers) are, in general, less than inspiring.
2. That lack of inspiration won't be solved by more measurements and tests. We need creative curricula in math and science that motivate students beyond the numbers. Teaching to a test won't cut it.

I firmly believe we'll start seeing higher test scores in those areas when students are made to understand, at an early age, that numbers-based disciplines are grounded in exciting humanistic or cultural questions. More students might get interested in math and science courses if they were allowed more reading time with magazines like Smithsonian, Popular Mechanics, or those related to the electronic gaming industry. I hate to sound like Dewey, but with the young you start with subjects they can relate to, and then branch into the more abstract.

Sincerely,

Tim Lacy
Chicago, IL

Tim,
I certainly agree with you that measurements and tests are no substitute for instruction. Teachers need a first-rate preparation to learn how to teach science in the K-12 years, as well as continuing professional development to learn about more effective pedagogy.
And by the way, the AIR report set out to debunk the idea that kids in the 4th grade do swell, then see their performance decline in 8th grade and then again in high school. The AIR study claims (with supporting data) that there is no such decline; that our fourth graders, our eighth graders, and our high school students are all woefully behind international benchmarks in science and mathematics.
Diane Ravitch

The mediocre standing of America's students is no surprise to me, for one reaps what one sows. NCLB seeks only a basic level of proficiency. One can watch the gifted stagnate as the nation turns its back on them while "leaving no child behind." My state accountability program then awards bonus money and shiny reputations for movement toward proficiency, adding a second impetus to make plans that target a select group of students in ways that favorably impact the teacher, principal, and district. What about the students?
My state's school grading system compounds the dismal situation by masking poor growth in high-proficiency schools, which keeps parents unaware that they should be screaming from the rooftops rather than supporting educational practices that lead to their child's underachievement. Low-proficiency schools producing high growth are sanctioned. An uninformed and complacent parent base allows this "deception" to continue, schools to profit based on faulty criteria, and our best young minds to be sacrificed under a national policy titled No Child Left Behind.

Coming from another culture to America, I can say that what distinguishes math and science education here from that of other countries is teacher competency. If you like, try an experiment: send specialists into classrooms as teacher assistants and see how it goes. It is amazing to note how low the teachers' conceptual level is and how many mistakes they make. It is astonishing. I say this because I have tried that experiment. So, as an upshot: you can make as many improvements to the curriculum as you want, but if you have teachers who don't know the content, that mission is going to fail for sure. What can be done? First, universities should not grant degrees to students who have a very poor understanding of their own areas. The institutions responsible for producing teachers, especially, have to raise their standards. Failing should be an option for those who do not demonstrate mastery; otherwise this problem is going to continue. Second, schools should be very careful about hiring people to teach subjects in which they don't have a degree. Either this should be avoided or the bar for teacher certification should be raised. Maybe the certification tests, more than the standardized tests given to students, are responsible for this mediocre level in math and science education. Last, professional development for math and science teachers needs to be provided by real professionals, not by those who themselves have a very shallow understanding of the subject.

Ilirjan Cane makes an excellent point. No amount of good curriculum--and no amount of teacher "freedom"--can substitute for a solid preparation in the subject that is taught. We need far higher standards for admission to the teaching of math and science, as well as other subjects. Of course, it would be difficult or impossible to have higher standards without knowing what teachers will be expected to teach, i.e., a curriculum.
Diane Ravitch
