
# U.S. Math and Science Skills: Improving, or Not?

Recent international assessments shed new—and seemingly contrasting—light on U.S. students' grasp of mathematics and science.



TIMSS does not measure early-grade (2-3) performance, where other nations' children gain their advantage in math. China does not participate in TIMSS; if it did, it would likely be ranked at or near the top. We are making tiny improvements compared to some of the world but falling further behind in many areas because of our overcommitment to traditional class schedules, classroom practice, and school organizational structure.

You cite "traditional" class schedules, classroom practice, and school organizational structure as the reasons for poor U.S. performance. I disagree. What we need is more emphasis in the early grades on teaching basic mathematics--not so much group and discovery learning, but actual traditional teaching of number concepts that students don't ordinarily "discover" on their own. Mathematics should be taught in a logical, hierarchical fashion--not hodge-podge skipping around. Higher concepts should build on the basics. For the past 50 years or so math has been presented largely without structure, which is why we need to return to a more traditional method. "New" isn't necessarily improved.

I am a computer teacher in an elementary school, and run a computer lab that uses software to instruct across the curriculum. I have made the following observations over my years of teaching the mathematics strand to grades 1-6:

1) When I present computer-generated problems that cannot be presented well in the "paper and pencil" offline world, students really do perform the required abstract thinking and strategizing to set up the problem, but when asked to do the final computation, they are hampered by not knowing or retaining their math facts or by the inability to count accurately.

2) I think we teachers should spend more time teaching students how to memorize and retain whatever they need to master--be it poems, prose, phone numbers, passwords, screen names, or the basic facts. We assign them to learn the facts, but assume they know how to sit, practice, repeat, develop mnemonics, and self-test until they know their math facts. Students need to be taught these skills as well.

3) We as teachers do not adequately teach the students to use the addition and multiplication tables to learn their math facts, number properties, and number patterns. More importantly, we do not use the tables to teach or present the inverse operations of subtraction and division.

4) We as teachers do not use number lines to teach fractions, except for equivalencies. Fractions in the early grades are associated with pizza pieces, and do not seem to have an address or existence between whole numbers. In fact, this "betweenness" concept is overlooked in the early grades. Because of this deficit, students enter fifth and sixth grade without a sound base for understanding the product and quotient of two fractions--mixed or proper. They are exposed to the algorithms without any models or proofs of why the algorithms work. As a result, their test scores reflect this poor achievement in fractional computation.

When we compare SAT scores among states, we generally do the comparisons with other states that test about the same percentage of their high school students, because the lower the percentage tested, the higher the skill levels of the tested students are likely to be.

When I see international comparisons I see no attempt to be sure it's apples to apples. As an example, I recently had the opportunity to speak with a teacher from Germany. They track their students into different educational programs relatively early on. He told me that non-German-speaking immigrant students must learn German before entering the regular programs. I suspect that our very democratic educational system (I'm not objecting to that) is perhaps being compared to selective segments of student populations from other countries. I would really like to see a reliable summary of how other countries' educational systems are organized and clarification on what groups of students are included in their test populations. My concern could be misplaced, but I will hold it until I see evidence that we are or are not comparing apples to apples. So far as I know, our best students still rank with the international best students, but for that, too, I have seen no reliable evidence.

I believe each of the previous comments has a degree of validity. I not only wonder about the apples to apples - I wonder if the sampling from the U.S. is broad enough to be a true sampling. Are they including parochial and prep schools in the sampling? Are they selecting schools within major cities or suburban schools? I just wonder if the U.S. sampling is targeting too small and too narrow a segment of the education system to present an accurate picture of U.S. student performance. Quote: "For 4th grade pupils, TIMSS is a 72-minute assessment, which was given to 248 randomly selected schools and 10,795 pupils; the 8th grade test takes 90 minutes and was administered at 232 schools, to 8,912 students." End Quote.

I tried to find information about the schools in the sampling and so far (in over an hour of searching through the TIMSS sampling report) have not been able to locate it. Perhaps someone from EdWeek or a visitor here could point me to the information I'm looking for.

I did find the data analysis tool on their website very interesting. I wanted to compare actual technology use for instruction in math classes with that in the countries with high scores. However, given the data demands of the NCLB act, I believe the average teacher or building principal would not have the time to adequately draw inferences from this data to influence decision-making with regard to improving student performance. In New York State, the State Report Card has been developed to compare a school's results to those of similar schools. I think it would be helpful to those struggling to make this data useful if the TIMSS survey could add this element. Certainly, comparing compulsory public-education students to what amounts to a college-prep system in another country seems askew if the survey is to be meaningful.

I also would like to see the actual questions: what type of response was expected and the thinking process required to respond appropriately. An interesting documentary available through the Annenberg website (http://www.learner.org/resources/series26.html#), called Minds of Our Own, raises some key questions about learning and instruction in science which may be much more valuable and tangible to educators than the TIMSS survey.

There is certainly more investigation needed, but I don't think the TIMSS survey offers enough answers to positively impact practicing educators who are looking to improve student performance; it merely raises questions and fodder for politicians.

One point I meant to add - I'd like to see, as part of the TIMSS, a comparison of students who received classical music instruction on a musical instrument, since the skills described by Ms. Eastman as essential to performance in math are also addressed in music instruction.

The "positive" finding that American eigth graders' performance in math (and science) is more optimistic than real,i.e., that our eighth graders have gained on other industrialized and nonindustrialized nations with whom we are compared. A closer look at our performance on the Trends in International Mathematics and Science Study yields the following very bad news.

Among the countries that continue to perform higher than we do in grade 8 math are China, Malaysia, and Latvia. While there has been no growth in normalized scores for the U.S., all but one of the countries currently performing below us are improving. Five of the seven above us in the rankings are also gaining while we are not: Singapore, Hong Kong, Latvia, England, and Hungary.

Add to this the very debilitating fact that in December 2004 it was announced that, among 40 industrialized countries, the U.S. ranked 28th in math competence (18th in reading). (http://www.nytimes.com/2004/12/07/national/07student.html?oref=login)

The worst is this. Where rote learning can earn credits, our students manage. Where thinking, understanding and application are required, our students do not.

For those who have not yet had an opportunity to read the "special issue" in Curriculum Inquiry, 34(3), Fall 2004, I would strongly recommend it. Russell's article, the subsequent debates, and Russell's rejoinder are particularly relevant to our discussions on TIMSS and PISA findings.

The lead article by Russell "Connections among Factors in Education" argues that there is a lack of fit between international tests and the local curriculum taught in the different countries. Although student scores on such examinations are "very important indicators of educational progress for individual students," making wrong inferences about "populations of students" based on scores in these examinations can be seriously misleading (similar to Susan Ciminelli's concern above). Although Russell proposes an interesting equation to deal with our educational problems and choice-making: (A x C x R)/T = k, where A is achievement rate, C is coverage, R is retentivity, and T is time, I share his concern as an educator about how we "select the most appropriate material to teach" and would add, how do we determine the most efficient and effective ways to teach this material?
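Russell's equation can be made concrete with a small sketch. All the numbers below are hypothetical illustrations; the article does not prescribe units or scales for A, C, R, or T. The point is only that the equation treats curricular choice as a trade-off: broader coverage at lower retention can yield a smaller k than narrower coverage mastered deeply.

```python
def russell_k(achievement_rate, coverage, retentivity, time):
    """Russell's k = (A * C * R) / T.

    A: achievement rate, C: coverage, R: retentivity, T: time.
    Scales here are hypothetical fractions of a school year / of
    the full curriculum; Russell's article fixes no units.
    """
    return (achievement_rate * coverage * retentivity) / time

# Hypothetical comparison over the same school year (T = 1.0):
# full coverage learned shallowly vs. 60% coverage mastered deeply.
broad = russell_k(achievement_rate=0.70, coverage=1.00, retentivity=0.50, time=1.0)
deep = russell_k(achievement_rate=0.85, coverage=0.60, retentivity=0.90, time=1.0)
print(broad, deep)  # the "deep" choice yields the larger k here
```

Under these made-up values the narrower, mastery-oriented choice wins, which is exactly the kind of selection question ("which material to teach, and how") the equation is meant to sharpen.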

In a subsequent dialogue (complaint), "The Perils and Promises of International Surveys of School Achievement," Travers raises three important questions based on Russell's equation. Based on current research,

1. Do we have the best curricular material?

2. Are we teaching as well as we can?

3. Are we reaching as many students as we can?

I would add a fourth: Are we attracting and employing the best teachers we can? For answers, we can look at Horizon Research Inc.'s report "The Status of Elementary School Science Teaching (2002)" on teacher perceptions of their own content preparedness. The report acknowledges that "fewer than one-third of elementary teachers reported feeling very well qualified to teach each of the science disciplines" (p. 5). In a subsequent report, "Looking Inside the Classroom (2003)," they observed that roughly 50% of the lessons in science and mathematics were low in quality. Science was not only infrequently taught; the weakest elements in science and mathematics lessons were the limited time, opportunity, and structure afforded to students to engage, ask questions, make sense of, and understand the material.

These observations are striking when compared with Fowler and Poetter's article in the same issue of Curriculum Inquiry. From their qualitative case study, the authors concluded that the keys to French success in elementary mathematics education were ongoing formative assessment, mathematically competent teachers, policies and practices that help disadvantaged students, and the use of constructivist methods. French teachers not only occupy a prestigious role in their society, but only those who demonstrate a high level of competence and understanding in mathematics teach at their elementary schools!

At any rate, as Leithwood indicates in his final article and commentary on the debates, state claims of progress are not reflected in NAEP results or international comparative studies. All this interest in international tests from the media, practitioners, and researchers, Leithwood asserts, could also serve two important political uses: "Avoiding parochial standard setting and providing an external check on claims about the success of reform efforts" (p. 370).

Amidst the current brouhaha over reports on the third round of the TIMSS and PISA studies, concerns about shortchanging a generation of students by focusing on the "basics" are heightened. Without a proper foundation in science and mathematics in elementary school, students' performance on PISA (2006, focused on science) and on the NCLB-mandated science tests (2007) is likely to raise more questions about their problem-solving abilities. My 15 years of classroom teaching in middle and high schools internationally have repeatedly shown that science, properly taught, engages all students, develops their analytical abilities, enhances their self-worth and confidence, and consequently improves their school-wide academic achievement.

Nancy Romance and Michael Vitale's 15 years of research on increasing student academic achievement through meaningful hands-on science in Florida elementary schools is another case in point. Focusing on best practices and their underlying concepts and principles, as in Nancy's NSF-funded Science IDEAS Project and similar efforts, could empower more teachers. Sharing our knowledge and resources about how best to teach "fundamental" ideas in science and mathematics is another way forward. In conclusion, as Russell points out, teachers' experience bears out the fact that what their students learn is very closely related to what they teach.

For three years I had the honor of observing in bilingual Laotian classrooms (Laotian teachers and Laotian students) in an American school. By the end of first grade the Laotian students were applying mental math calculations (addition, subtraction, division, and multiplication) and using their critical thinking to solve difficult problems. The teachers related that their math teaching was typical of what one would see in an Asian country, and they refused to use the University of Chicago math curriculum that served as the district's curriculum. The teachers were contemptuous of its shallow coverage.

So how did they get all of their students to such high proficiency? Beginning the first week of kindergarten, students started daily drill in addition and subtraction facts. You could hear the unison voices echoing down the hall. If one student had more difficulty, Laotian teachers tenaciously worked with him or her until mastery. After facts were learned to automaticity, then students learned to apply relevant algorithms so that they could solve two and three digit problems in their head. Once students were capable of doing that then teachers began introducing difficult word problems.

They worked with the same diligence in getting their students to solve different types of word problems as they had with the drill. In all my years in American classrooms I'd never seen anything like it.

As an aside, I should mention that years ago when I was working as a behavior consultant I never was called into math classrooms -- always, reading, gym, recess, etc. Suddenly I realized that almost half of my referrals were from math classes. What was the difference? In Illinois, U of Chicago Everyday Math had become the fad and my data showed that at any time no more than 20 - 30% of the students were above an 80% mastery level on the majority of learning tasks. When achievement drops that low students either act out or tune out. In contrast to the Laotian success based mastery learning, the children in these classes were experiencing such high levels of frustration that their behavior quickly deteriorated.

I don't know where this one writer is seeing bells, and drill and traditional math classrooms, but in my experience, they are as rare today as when I was a middle schooler in the midst of the great New Math experiment.

Susan Eastman's and Mary Damer's comments on the practical aspects of teaching mathematics are well received regardless of how our students "compare." As a high school chemistry teacher, I see students who do not have mastery of basic mathematical skills. In particular, students who have trouble with chemistry ask for the "answers" immediately and are impatient with the thinking process. My guess is that these students are looking for a plug-and-chug algorithmic experience. In other words, they want simply to follow a pattern without any understanding of how to think through or set up a problem.

There is a distinction made by chemistry teachers between formulas and equations. I imagine that mathematics students are looking for the "magic" formula - like the formula for the circumference of a circle, the volume of a sphere, or sin(theta) - where one needs only to rearrange the formula, plug in the numbers, and chug (calculate). They view these formulas the same way they view a calculator: pop in the numbers and out comes the "answer." No deep thinking or understanding is involved in the type of testing that tests for the right "answer."

With over 100 elements and the multiplicity of ways in which atoms and molecules can react, chemistry problems are several orders of magnitude more complex than what is required in the traditional math sequence. Students who cannot think through a problem, because they have succeeded in plug-and-chug mathematics classes, have severe problems in chemistry.

For some students the search for the "answer" is so well-ingrained that they complain bitterly, "But I do know mathematics, just ask my math teacher!" I believe they mean what they say. But I believe that their understanding of what mathematics truly is (a method of thinking) is distorted by their success in mathematics classes.

I am opening up a dialogue with the math teachers at our school, and they seem to be receptive (as long as it doesn't take too much work!). The science teachers, who have the greatest stake in this dialogue, will be providing the middle and high school math teachers with some problems and problem-solving thinking methods and strategies.

I am hopeful that dialogues such as these will be the key to transforming education.

All the best to all the teachers out there. We are the ones on the front lines, who know the terrain best.