
This Semester’s Statistics Final: The Higher Education Edition

By Eduwonkette — November 19, 2008

We’ve always had a blast writing exam questions on this blog, so let me throw out a few bones for all you wily academics teaching undergrad Stat I this fall. The reader who answers both questions correctly gets an award named after her/him, one that will commemorate all future exam excellence (i.e., the YOUR NAME HERE! Commemorative Award; though this hilarious post makes me want to name it after satirist Gary Babad, I will refrain!):

1) In a recent article in the Chronicle of Higher Education, Kevin Carey writes:

The new Cessie data also show a disconnect between students and faculty members. The view from the front of the classroom is generally rosier. Thirty percent of faculty members reported that they "often" or "very often" discussed ideas and work with students outside of class. Only about 15 percent of students said the same.

Do these data suggest that faculty and students see the educational process in fundamentally different ways? Why or why not? (Hints: How many students do faculty teach in each course? How many professors does each student have in a semester?)
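
If you want to poke at the intuition behind those hints, here is a minimal back-of-the-envelope sketch in Python. Every number in it (enrollment, faculty headcount, course loads, the 15 percent of students assumed to have these discussions) is hypothetical and chosen only to make the denominator problem easy to see; none of it comes from the survey data Carey cites.

```python
# A back-of-the-envelope sketch of the hints above. All of the numbers here
# (enrollment, faculty headcount, course loads, the 15 percent student rate)
# are hypothetical; the only point is that faculty-level and student-level
# rates sit on very different denominators.

students = 5_000
faculty = 250
courses_per_student = 4

# Every student-course pairing puts one student in front of one professor.
pairings = students * courses_per_student
students_seen_per_prof = pairings / faculty

# Suppose only 15 percent of students regularly discuss ideas and coursework
# outside of class, each with a single professor, spread evenly across the
# faculty.
engaged_students = 0.15 * students
engaged_per_prof = engaged_students / faculty

print(f"Each professor sees roughly {students_seen_per_prof:.0f} students a term,")
print(f"about {engaged_per_prof:.0f} of whom come by to talk ideas outside class.")
print("A steady trickle like that can honestly feel like 'often' from the front")
print("of the classroom, even while 85 percent of students truthfully report")
print("that it rarely happens to them.")
```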

2) In his New York Times op-ed, Peter Salins discusses shifting SAT standards at some State University of New York (SUNY) campuses. He writes:

In the 1990s, several SUNY campuses chose to raise their admissions standards by requiring higher SAT scores, while others opted to keep them unchanged. With respect to high school grades, all SUNY campuses consider applicants’ grade-point averages in decisions, but among the total pool of applicants across the state system, those averages have remained fairly consistent over time. Thus, by comparing graduation rates at SUNY campuses that raised the SAT admissions bar with those that didn’t, we have a controlled experiment of sorts that can fairly conclusively tell us whether SAT scores were accurate predictors of whether a student would get a degree.

Based on the results of this policy change, Salins makes the following causal claims:

When we look at the graduation rates of those incoming classes, we find remarkable improvements at the increasingly selective campuses. These ranged from 10 percent (at Stony Brook, where the six-year graduation rate went to 59.2 percent from 53.8 percent) to 95 percent (at Old Westbury, which went to 35.9 percent from 18.4 percent). Most revealingly, graduation rates actually declined at the seven SUNY campuses that did not raise their cutoffs and whose entering students’ SAT scores from 1997 to 2001 were stable or rose only modestly. Even at Binghamton, always the most selective of SUNY’s research universities, the graduation rate declined by 2.8 percent.

Do you accept Salins’ claim? Why or why not? (Hint: How is this similar to, and how is it different from, a true experiment?)
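
For the second question, the figures Salins quotes are enough to show why the framing matters before you decide whether this is really “a controlled experiment of sorts.” The short Python sketch below just recomputes his Stony Brook and Old Westbury numbers two ways: as percentage-point changes and as changes relative to each campus’s starting graduation rate, which matches his “10 percent” and “95 percent” improvements.

```python
# The graduation rates below are the ones quoted in the excerpt above.
# The arithmetic just separates percentage-point changes from changes
# relative to each campus's starting rate, which is how "10 percent" and
# "95 percent" improvements can sit on very different baselines.

raised_cutoff = {
    "Stony Brook": (53.8, 59.2),
    "Old Westbury": (18.4, 35.9),
}

for campus, (before, after) in raised_cutoff.items():
    points = after - before                # change in percentage points
    relative = 100 * points / before       # change relative to the baseline
    print(f"{campus}: {before}% -> {after}% "
          f"(+{points:.1f} points, +{relative:.0f}% relative)")

# A genuine experiment would assign comparable entering classes to the two
# policies at random. Here each campus chose its own policy, and raising the
# cutoff changed which students enrolled, so the before/after comparison
# mixes the effect of the SAT bar with the change in who is being counted.
```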

And the Award Will Be Named...: The Corey Bunje Bower Commemorative Award. See his comprehensive response inside.

The opinions expressed in eduwonkette are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.