
This Semester's Statistics Final: The Higher Education Edition

We've always had a blast writing exam questions on this blog, so let me throw out a few bones for all you wily academics teaching undergrad Stat I this fall. The reader who answers both questions correctly gets an award named after her/him, which will commemorate all future exam excellence (i.e. the YOUR NAME HERE! Commemorative Award - though this hilarious post makes me want to name it after satirist Gary Babad, I will refrain!):

1) In a recent article in the Chronicle of Higher Education, Kevin Carey writes:
The new Cessie data also show a disconnect between students and faculty members. The view from the front of the classroom is generally rosier. Thirty percent of faculty members reported that they "often" or "very often" discussed ideas and work with students outside of class. Only about 15 percent of students said the same.
Do these data suggest that faculty and students see the educational process in fundamentally different ways? Why or why not? (Hints: How many students do faculty teach in each course? How many professors does each student have in a semester?)

2) In a New York Times op-ed, Peter Salins discusses shifting SAT standards at some State University of New York (SUNY) campuses. He writes:
In the 1990s, several SUNY campuses chose to raise their admissions standards by requiring higher SAT scores, while others opted to keep them unchanged. With respect to high school grades, all SUNY campuses consider applicants’ grade-point averages in decisions, but among the total pool of applicants across the state system, those averages have remained fairly consistent over time.

Thus, by comparing graduation rates at SUNY campuses that raised the SAT admissions bar with those that didn’t, we have a controlled experiment of sorts that can fairly conclusively tell us whether SAT scores were accurate predictors of whether a student would get a degree.
Based on the outcomes of this policy change, Salins makes the following causal claims:
When we look at the graduation rates of those incoming classes, we find remarkable improvements at the increasingly selective campuses. These ranged from 10 percent (at Stony Brook, where the six-year graduation rate went to 59.2 percent from 53.8 percent) to 95 percent (at Old Westbury, which went to 35.9 percent from 18.4 percent).

Most revealingly, graduation rates actually declined at the seven SUNY campuses that did not raise their cutoffs and whose entering students’ SAT scores from 1997 to 2001 were stable or rose only modestly. Even at Binghamton, always the most selective of SUNY’s research universities, the graduation rate declined by 2.8 percent.
Do you accept Salins' claim? Why or why not? (Hint: How is this similar to or different from an experiment?)

And the Award Will Be Named...: The Corey Bunje Bower Commemorative Award. See his comprehensive response inside.
10 Comments

Carey argues elsewhere that we should replace more college professors with computers. It seems his statistics professor should have been the first.

If a professor has a class of 100 students and frequently speaks with 15 of them, the other 85 would report that they don't speak to their professor. If 30% of all professors do this, both groups would be telling the truth. No inconsistency.

If the grade point averages of the students accepted to SUNY with higher SAT scores were the same as those with lower SATs, then Mr. Salins’ conclusion would have some validity. I would bet, however, that their GPAs are proportionately higher, so SATs aren’t necessarily better indicators of college success than GPAs.

I'm sorry... Was this meant to be a rhetorical question? BTW, I'm an HS science teacher in NYC. Your question sounds strangely like an essay topic for a NYS teacher certification exam, like the Chemistry CST.

1) The statistics quoted suggest that only 15% of students go to office hours (which seems consistent with other data). If, in fact, only 30% of professors discussed ideas and worked with students who came to office hours, it might explain why only 15% of students bother to come. But it would be completely possible to have 100% of faculty interacting with students outside of class and still only have 15% of students involved. If you wanted to compare faculty responses to students' responses, a better question would have been "What fraction of your students do you work with or discuss ideas with outside of class?" Or they could have asked the students "What fraction of faculty members make themselves available outside of class?" But at minimum, to compare the answers to two questions, those answers have to have the same "units" (e.g. "% of students" or "% of faculty").

2) Since I'm not taking the course for a grade I'll focus on the "experiment" side of the question. It is most definitely not a "controlled" experiment. Many other things besides SAT scores may have changed at the schools that made the changes -- including the perception of the colleges that made the change as "more competitive" and more appealing to motivated, competitive students across a range of SAT scores.

A much more direct approach would be to look at the correlation between SAT score and graduation rate at a wide range of schools, and compare it to how a variety of other possible predictors (GPA, rank in high school class, parental income, parental education) correlate with graduation rate, as well as looking at the correlations between different possible predictors.
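To make that comparison concrete, here is a minimal sketch in Python of what it might look like; the campus-level numbers and column names below are entirely made up for illustration, not drawn from any real data set.

# Hypothetical sketch: compare how well several admissions measures
# correlate with graduation rate across a set of (made-up) campuses.
import pandas as pd

# Entirely fabricated campus-level data, for illustration only.
schools = pd.DataFrame({
    "median_sat":        [1050, 1120, 980, 1200, 1010, 1150, 1090, 940],
    "median_hs_gpa":     [3.1, 3.3, 2.9, 3.6, 3.0, 3.4, 3.2, 2.8],
    "median_class_rank": [55, 62, 48, 75, 52, 68, 60, 45],   # percentile
    "grad_rate":         [0.48, 0.55, 0.41, 0.67, 0.46, 0.60, 0.52, 0.38],
})

# Correlation of each candidate predictor with the graduation rate.
predictors = ["median_sat", "median_hs_gpa", "median_class_rank"]
print(schools[predictors + ["grad_rate"]].corr()["grad_rate"][predictors])

# Correlations among the predictors themselves, since they overlap heavily.
print(schools[predictors].corr())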

While we're on the subject of "causal connection" I'll bring up one of my pet peeves in the correlation-does-not-imply-causality department that I worry is becoming almost endemic in ed-policy discussions.

Even if SAT scores are a good predictor of graduation rate, focused efforts to raise SAT scores (like sending all high school students to test prep classes) will not necessarily improve overall graduation rates. That example may seem obviously silly, but in CA the logic that pushes for all 8th graders to take Algebra goes something like this: students who take Algebra II in high school tend to do well in college. Therefore students should take Algebra I in 8th grade to increase the chances that they take Algebra II in the two years of high school math that are required for graduation, to help ensure their success in college.

The logic tends to leave math teachers banging their heads against walls.

1.) This could happen for any number of reasons. Let's say that there are 100 professors and 1000 students at a college. Each professor teaches two classes per semester and each student takes four. There are 20 students in each class. Let's say that in order for a professor to say they "often" or "very often" work with students outside of class, they must regularly interact with 5 students. 30 of the professors do this, meaning that they interact with 150 students. There's your 30%/15%.

Or, if you prefer a slightly more complex (i.e. realistic) example, let's keep the same set-up and say that while 30 of the professors interact with at least 5 students, some interact with more and many with fewer. For the sake of simplicity, let's say 10 profs interact regularly with 30 students; 10 interact regularly with 15 students; 10 interact regularly with 5 students; 10 interact regularly with 1 student, and the other 60 are curmudgeons. So 30 (30%) of the profs interact regularly with at least 5 students and put down that they do so "often" or "very often." Meanwhile, professors report regularly interacting with a total of 510 students.

But, wait, some of these students are interacting with more than one professor. 30 of them are overachievers and interact regularly with all four of their professors (which shows up as 120 students in the profs' statistics); 50 interact with 3 profs (150 to profs); 70 interact with 2 profs (140); 100 interact with only one prof (100); and the other 750 don't interact at all with profs outside of class. While 25% of the students are interacting with at least one prof outside of class, students only put down that they do this "often" or "very often" if they're interacting regularly with at least two profs. In this case, that means 150 (15%) put down that they're regularly interacting with profs outside of class.
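For anyone who wants to check that these made-up numbers actually reconcile, a short Python sketch (using only the counts given in the hypothetical above) confirms the 30%, 510, and 15% figures:

# Recompute the percentages in the hypothetical above.
n_profs, n_students = 100, 1000

# Professors, grouped by how many students each interacts with regularly.
prof_groups = [(10, 30), (10, 15), (10, 5), (10, 1), (60, 0)]  # (profs, students each)
profs_reporting = sum(count for count, per in prof_groups if per >= 5)
print(profs_reporting / n_profs)   # 0.30 -> 30% of faculty answer "often"/"very often"

# Total prof-student interactions implied by the faculty side.
interactions = sum(count * per for count, per in prof_groups)
print(interactions)                # 510

# Students, grouped by how many professors each interacts with regularly.
student_groups = [(30, 4), (50, 3), (70, 2), (100, 1), (750, 0)]  # (students, profs each)
print(sum(s * p for s, p in student_groups))          # 510, matches the faculty side
students_reporting = sum(s for s, p in student_groups if p >= 2)
print(students_reporting / n_students)                # 0.15 -> 15% of students say the same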

I could make up a million other hypotheticals, but I'm tired of making numbers add up. The point is that since there are so many more students than faculty, faculty members can be regularly interacting with a small portion of the student body and keep quite busy while most of the student body is busy slacking off or partying.

2.) In a true experiment, schools would be randomly assigned to raise or keep steady their SAT cut-offs. That would make the variable exogenous -- there would be nothing else causing the rise in SAT cut-offs that would also cause an increase in graduation rates. If the randomization worked, the treatment and control groups should end up with equal levels of other variables (GPA, endowment, # of applicants, etc.). It is far from clear that this is what happened in this case.

While Salins suggests that GPAs went up in similar increments across the campuses, this could reflect a mixture of the baby boom echo, increased college enrollment rates, and grade inflation -- meaning that a college could decline in relative selectivity/prestige and still see an increase in average incoming student GPA.

What we do not know is what led these nine campuses to raise SAT cut-off scores while the other seven did not. Were these 9 on an upward trend in prestige/selectivity? Had they been gaining more attention, raising more money, building new facilities, etc. and, therefore, been drawing more applicants? Or were they simply convinced that SAT scores were more important than people at the other seven thought? Similarly, were officials at the other seven not in position to raise SAT cut-offs b/c they were failing to recruit enough applicants with high SAT scores, or b/c they truly believed that grades were more important?

If colleges that are in a position to increase selectivity decide to boost SAT cut-offs while those who are not don't, then we can predict that graduation rates will increase at the former and hold steady or fall at the latter -- not because of decisions surrounding SAT scores, but because of the circumstances that led to that decision. If it's simply a matter of preference or ideology in terms of what is more important -- grades or SAT scores -- then some sort of causal claim may not be wholly unjustified. Unless, of course, this preference is also causing a change in graduation rates unrelated to the independent effects of SAT scores -- say, for example, that accepting kids with higher SAT scores but a similar GPA means that the school has more students from upper-income families that are more likely to graduate while also being able to afford more generous grants to the smaller percentage of students that are eligible for financial aid.

I did not accept Mr. Salins' facts, figures, or analysis because his basic descriptors were so wrong. For example: he stated that Brockport and Oswego are "urban campuses."
Brockport is a village of 8,000 people still isolated from Rochester's suburbs. Basically, a fine quaint town hosting a party school.
Oswego, at 17,000 population, on the south shore of Lake Ontario approx. 40 miles NW of Syracuse (150,000?), might, at a stretch, be termed "urban."

If someone cannot get basic categorical terms accurate, how are they to analyze finer things precisely? Oh, he was provost for how many years? How stupendously has the value of a NY State education improved during that time?
How many more Oliver Norths and James Howard Kunstlers graduated from Brockport?
Enough time on that.

1) We all have the VERY SAME BRIGHT STUDENT sitting in the front of the class. I know it seems impossible that the same student is attending all of our classes all over the world, but she's a highly motivated student. No one else ever talks with us after class. Oh, I know 15% of students say they do. All but one are lying.

2) Salins is 100% correct: of COURSE you improve graduation rates if you start selecting by SAT score. The only folks who can go to college four years straight without financial difficulties are those whose parents could pay for SAT coaching while they were in high school.

I don't think it is a surprise that SAT scores are a good predictor of college graduation.

First, we accept the general principle that students who do well in school are those who are best at passively accepting information and then regurgitating what information is asked for on standardized tests. (You don't have to accept it, but I think a majority of progressive educators do.)

Second, we also accept that the SAT, like most standardized tests, tends to favor those who are best at "doing school." (Again, for the same reason, and you are welcome to disagree.)

Accept those two thoughts and it should be no surprise at all that good test takers are also good at traditional school, and thus they are more likely to graduate.

The question is: are the colleges really educating those students?

I'm not an education pro, although I did teach at a design school for about 7 years.

My question is: isn't the job of a school to graduate everyone? Predicting graduation based on what the kids were before they came seems to me an implicit acceptance of the idea that the purpose of school is to sort, not to create places to learn.

I understand that the historical function of schools has been to sort. Identify and support the smart ones to create more value for everyone. And I understand that the values of our democracy correctly identify the purpose as getting everyone to learn to think and learn.

But, it seems to me that the whole question of predicting success diverts the focus from the school to the school's customer. It's sort of a General Motors problem.

Just curious if this makes sense to readers of this blog.

Thank you for any replies.
(Note: I promise not to get into any flame wars. It's just an honest, good faith question put to people who live with this every day.)
