SAT Scores Fall While Classroom A's Rise
At first glance, the disconnect between the decline in SAT scores and the rise in the number of students earning A's in the classroom seems to confirm that grade inflation is the cause ("U.S. Students Earn More A's, But Drop in SAT Scores," breitbart.com, Jul. 18). I don't doubt that grade inflation plays an important role, but I maintain there is another explanation worthy of consideration.
The purpose of the SAT is to rank students. If the test were loaded up with only the most important material taught effectively in classrooms, scores would likely be bunched together, making comparisons nearly impossible. To fulfill its sales pitch, therefore, SAT designers have to engineer score spread. Over the decades, they've learned that the best way of doing so is to include items that measure socioeconomic factors.
That's why classroom grades and SAT scores don't necessarily correlate. Students who have received excellent instruction don't all come from the same background, and it's these differences that the SAT exploits to achieve its goal. A fairer way of drawing valid inferences about what students have learned in the classroom, rather than what they have brought to it, would be a test designed to support absolute inferences rather than relative ones. By that, I mean a test that compares a student's performance to an absolute standard, rather than to the performance of other students. In short, no student should be hurt by another student's grade ("Why We Should Stop Grading Students on a Curve," The New York Times, Sept. 10, 2016).
I seriously doubt such a test will ever be adopted because we are obsessed with rankings. That's why the annual rankings issue of U.S. News & World Report is so eagerly anticipated.