
In NYC, More F Schools than A Schools in Good Standing with NCLB


Some of you have asked what fraction of NYC schools receiving each Progress Report grade are in good standing with NCLB. As a refresher, NCLB labels schools in need of improvement based on overall proficiency. NYC's system is based 60% on year-to-year growth, 25% on proficiency, 5% on attendance, and 10% on surveys.
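To make the contrast concrete, here is a minimal sketch of a composite score using only the weights stated above. This is an illustration of the weighting, not the DOE's actual formula; the function name, the 0-100 scaling, and the inputs are all assumptions.

```python
def nyc_composite_score(growth, proficiency, attendance, surveys):
    """Weighted composite using the weights described above:
    60% year-to-year growth, 25% proficiency, 5% attendance,
    10% surveys. Inputs are assumed to be on a common 0-100 scale;
    the DOE's actual scoring and peer-group adjustments differ."""
    return 0.60 * growth + 0.25 * proficiency + 0.05 * attendance + 0.10 * surveys

# A school with strong growth but weak proficiency can still score well,
# which is exactly why the NYC grades and NCLB status can diverge:
nyc_composite_score(growth=90, proficiency=40, attendance=85, surveys=70)
```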

Given these differences, perhaps you won't be surprised to find that a higher fraction of F schools are in good NCLB standing than are A schools:

* 74% of A schools are in good standing with NCLB

* 67% of B schools are in good standing with NCLB

* 69% of C schools are in good standing with NCLB

* 48% of D schools are in good standing with NCLB

* 89% of F schools are in good standing with NCLB
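For what it's worth, the percentages above are just a crosstab of Progress Report grade against NCLB accountability status. A minimal sketch of the computation, using toy records in place of the DOE datafile (the real file has one row per school):

```python
from collections import defaultdict

def pct_good_standing(records):
    """Given (grade, in_good_standing) pairs, return the percent of
    schools at each grade that are in good standing with NCLB."""
    totals = defaultdict(int)
    good = defaultdict(int)
    for grade, in_good in records:
        totals[grade] += 1
        good[grade] += in_good
    return {g: round(100 * good[g] / totals[g]) for g in totals}

# Toy records standing in for the DOE datafile.
sample = [("A", True), ("A", True), ("A", False), ("F", True), ("F", True)]
pct_good_standing(sample)  # one percentage per letter grade
```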

What if we just look at the "performance grade," aka the proficiency grade, that each school received, and see how that maps onto NCLB good standing? Recall that this year, schools were also given separate grades for the performance, progress, and environment categories. I guess the peculiar results below are a function of the fact that schools are being compared to peer groups, but here's what I've got:

* 86% of schools receiving an A on proficiency are in good standing with NCLB

* 60% of B schools are in good standing with NCLB

* 60% of C schools are in good standing with NCLB

* 51% of D schools are in good standing with NCLB

* 75% of F schools are in good standing with NCLB


This would make sense, since the biggest problem with NCLB is that it measures absolute proficiency to the exclusion of growth. Looks like the DOE has figured out how to remedy that particular ill.

I'm confused. I thought NCLB measured percent change in proficiency (not absolute proficiency) from one year to the next (AYP).

That is, in any given school, if X% of the 4th graders in the 2006-2007 school year were proficient, then X+C% of the 4th graders in the 2007-2008 school year must achieve proficiency.

This is one of the major problems w/ NCLB in practice: You're not really measuring how much an individual student or class is learning. Instead, you're measuring the proficiency levels of this year's 4th graders compared to last year's 4th graders. And each year, the 4th graders in the school are supposed to do better than the 4th graders the year before - until in 2014 or so all the 4th graders are supposed to be perfect. Am I wrong?

How about looking at the correlation between Progress Report grades and school overcrowding (maybe capacity utilization in the Blue Book)? The Times' Jenny Medina today showed that schools with larger enrollments were less likely to move 2 letter grades. One wonders, if there is a link with size is there also a link with excessive size...?

Hi Socrates - I agree that NCLB's biggest problem is that it does not measure growth, but based on analyzing these data last year and again over the last day, I strongly disagree that the NYCDOE has figured out how to measure growth in a valid and reliable way. You can see skoolboy's earlier post on the flaws in their "growth model," and I'll have more on this to follow.

Attorney DC - Yes, you are right - my language is confusing. I meant to draw the contrast between proficiency and growth, and the word "absolute" makes that unclear. Sorry about that.

Maisie - Any chance you have overcrowding data in an electronic format? If so, I would definitely be interested in taking a look.

I am not quite sure that I have gotten my head around the implications of EW's analysis yet, since I don't fully understand many of the ins and outs of how NYC is measuring proficiency and growth. (Some states use a weighted figure that takes into account various levels above and below the mark--so that a district gets no credit for a kid who didn't take the test, but a percentage if they scored something, a higher percentage for close to proficient, etc.--with extra credit for super-achievers.)

However, DC, I think I am correct for all states in saying that X+C is not the measure of growth for AYP. I believe that the basic formula across the board (which states were to incorporate into their plans) was to set an initial benchmark based on data (based on the achievement level of the bottom 20th percentile of schools) and then to raise the bar incrementally to 100% between inception and 2012. So the number that is being aimed at is always fixed--and the same for all schools and groups in the state. Some states took the approach of even steps upward over all available years. Other states started more gradually with upward movement only every three years and steeper expectations as improvements were put into place.
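Margo's description of a fixed statewide bar rising to 100% can be sketched as follows. This assumes even annual steps; as she notes, some states instead held the bar flat for several years and then raised it more steeply, and the start and end years varied by state plan, so the values here are purely illustrative.

```python
def amo_schedule(start_pct, start_year, end_year):
    """Annual measurable objectives under NCLB: a fixed statewide
    proficiency bar rising in even steps from the initial benchmark
    (set from the bottom of the state's distribution) to 100%.
    Every school and subgroup is measured against the same bar."""
    years = end_year - start_year
    step = (100 - start_pct) / years
    return {start_year + i: round(start_pct + i * step, 1)
            for i in range(years + 1)}

# Illustrative: a state starting at 40% proficient, phasing up to 100%.
amo_schedule(start_pct=40, start_year=2002, end_year=2014)
```

The key point, as Margo says, is that the target is fixed for everyone in a given year, unlike a growth measure that depends on where each school started.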

Now, it does get muddied by things like "safe harbor" which allows a school district to demonstrate a percentage growth over the previous year (or averaged over three years). This allowed schools who were in really deep doo-doo to keep out of the school improvement designation if they were on a turn-around path. The problem, of course, is that it doesn't ensure that they reach the final goal.
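The safe harbor provision Margo describes is commonly stated as a required reduction in the share of students who are *not* proficient. A simplified sketch, using the usual 10% reduction figure (the full rule also involves other indicators and, as she notes, multi-year averaging):

```python
def makes_safe_harbor(prev_pct_prof, curr_pct_prof):
    """Safe harbor under NCLB: a school or subgroup that misses the
    annual target can still make AYP if the percentage of students
    NOT proficient falls by at least 10% from the prior year.
    (Simplified -- progress on the other academic indicator is
    also required.)"""
    prev_not = 100 - prev_pct_prof
    curr_not = 100 - curr_pct_prof
    return curr_not <= 0.9 * prev_not

# A school at 30% proficient (70% not proficient) must cut the
# non-proficient share to 63%, i.e. reach at least 37% proficient:
makes_safe_harbor(30, 37)
```

As the comment notes, a school can stay out of improvement status this way indefinitely without ever reaching the fixed annual target.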

Margo: Thanks for offering some more insight into how NCLB works. I left teaching right around the time NCLB was really picking up steam.

I think it's a little overly ambitious to think that 100% of children will be proficient in both math and English. The premise behind NCLB is that with incentives to teach more effectively, schools will create great scholars.

NCLB doesn't offer lesson plans or advice on HOW schools can teach more effectively - it's as if the politicians believe that all teachers KNOW how to teach well, but are choosing not to utilize this knowledge until they have really big incentives to do so. That doesn't conform with any reality I know.

Appears you're comparing data from two different years.

Hi David,
I used the NCLB variable in the datafile that the DOE itself provided yesterday - if there is more recent data available, please let me know and I will happily rerun the numbers.

DC--I agree, NCLB does presume that teachers/schools/districts know what to do and tries to reincentivize a system around a set of minimums for all students. It has certainly been surprising to me to see some of the highly counter-educational responses to NCLB--whether coming directly from teachers, or from various administrative "leaders." This would include the amount of effort that goes into teaching students how to take the tests, and "reviewing" material that was apparently never internalized on the first go around. We are still working pretty much with standards and assessments 1.0, and there is plenty of room for improvement (leaner more directive standards, more varied testing types, etc)--but I flat-out don't know where we got the impression that the learning that comes from highly responsive learning situations with real-life applications and interdisciplinary ties doesn't show up on the tests that we have.

I don't know if the fed providing lesson plans or (more) advice would have been taken any more gracefully. I think we have a little too much of "just tell me what to do so I can do it and then go on about teaching my way" for that to be very effective. I rather suspect that we are closer than we believe to having the skills and abilities to make the needed changes. It amazes me that in a building in which teachers are charged with teaching the scientific method, no one can write a meaningful hypothesis behind a desired change--or develop meaningful measures of success. Likewise, there are teachers charged with teaching democracy, but the school as a whole cannot figure out how to involve students, teachers or parents in a decision-making process. Teachers are charged with teaching mathematics and statistics, but data gathering (and dissemination) on the progress the school is making toward goals is a mystery. Teachers are teaching writing, but the school can't figure out how to communicate with parents or the community.

In short--I think the needed skills are pretty much there, but out of synch, unfocused and poorly coordinated. Imagine a middle school that made a whole-school/community project of writing and implementing its school improvement plan.

Margo: Thanks for your comments. I agree that advice from the feds on how to teach more effectively may not have been taken well by the education community.

My general feeling on improving education (as always) is that it all boils down to student motivation, effort and behavior. When the students are motivated, they learn. When they are not motivated, don't participate, and don't do their homework - they don't learn. Unfortunately, unmotivated (and often disruptive) students can make it harder for those students trying their best to get a good education.

When we can figure out a way to make all kids care about education, and realize how important it is to make every effort to succeed in school, that is when we'll see the big changes. Unfortunately, I doubt if that day will come anytime soon.

I find it weird that all of the numbers eduwonkette uses in this post are wrong, and no one here appears to care, including eduwonkette--who knew better than to post these and refuses to take them down. Small point, small but real bad faith, no rigor.


eduwonkette and I have independently produced these percentages using the datafile that the DOE posted on its website yesterday. I would be happy to sit down with any DOE analyst and show him or her how I produced these figures.

Hi David,

Again, I used YOUR file, as did the New York Times in their article yesterday. My numbers are the same as theirs - I guess they are acting in bad faith, too, right?

A question/thought for David Cantor (if he's still reading):

Let's assume for a moment that these numbers are correct, or at least really close. Why would this be upsetting to you? The NCLB definition of "good standing" is asinine. A really good model of performance would look much, much different. In other words, couldn't the fact that your grades and NCLB's grades appear uncorrelated be a good sign?

Jennifer, the Times isn't campaigning to discredit the progress report. More importantly, the Times isn't premised on providing statistical exactitude. The DOE datafile lists the most recent NCLB accountability status of each school; it doesn't suggest that these accountability statuses, drawn from the previous year's data, and the Progress Report grades "map" onto one another meaningfully. You do.

Hi David,

Here and at the Times, we both reported how school grades compared to schools' current NCLB status. When parents, teachers, and principals received the grades, they, too, were comparing their grade with other information available about the school, a key variable of which is the current NCLB status. As you know, schools' NCLB status was provided in your own datafile. When new data are available, I am happy to provide the new numbers.

I never suggested that these divergences are an indictment of the Progress Report system, nor did the Times. As Corey noted above, they are very different systems, which is why these discrepancies are not at all surprising.

David: eduwonkette's post reported that the proportion of schools receiving an F that are in good standing according to the most recent NCLB accountability status for each school is higher than the proportion of schools receiving an A that are in good standing. That is a factual statement derived from the data the NYC DOE has posted on its website and made available to the public. If it is not factual, show me how. For example, if the DOE has more recent NCLB accountability status information for each school, put them up on the website and demonstrate that her calculations are inaccurate. Calling eduwonkette's motives into question does not change the facts that she and the New York Times have reported.

Yesterday, in a comment, you asserted that "all of the numbers eduwonkette uses in this post are wrong." But you did not provide the correct numbers. What are the correct numbers? (And if you cannot tell me, how do you know that the ones reported are wrong?)

To the three jennifer jennings fans still hanging around: I'll avert a reductio ad infinitum with a final post.
Aaron Pallas's comments speak to why I find this blog unlikeable. He says eduwonkette's reporting that a higher proportion of F than A schools are in good standing is "a factual statement derived from the data the NYC DOE has posted on its website." He adds, "If it is not factual, show me how." Well, it's factual in that the data isn't invented, but it doesn't convey all the facts. I have no problem with Jennifer's reporting so far as it goes, but aren't you all obliged by training, by craft, and, especially when you are trying to discredit an argument, by integrity, to note somewhere that the NCLB status at issue was derived from a different set of student performance results? And that new status reports based on the same results used in the progress reports will appear this year? (I have no idea how the progress reports will map onto those results.) So, no, Aaron, I'm not challenging the facts behind Jennifer's post; they are largely immaterial. I'm saying the post was less forthcoming than it should have been, which is a prominent feature of this piece of real estate.

David: Please review what you actually said above. You said: "I find it weird that all of the numbers eduwonkette uses in this post are wrong, and no one here appears to care, including eduwonkette--who knew better than to post these and refuses to take them down. Small point, small but real bad faith, no rigor."

"All of the numbers eduwonkette uses in this post are wrong." eduwonkette is demonstrating "bad faith," and "no rigor." But you write, "I'm not challenging the facts behind Jennifer's post." Who do you think you're kidding? If the NCLB data aren't relevant, then why did the DOE include them on the datafile it released to the public?

Have you complained to the New York Times that it was less than forthcoming in reporting the same information that appeared here?

I am at a loss to understand why reporting the facts is evoking such a strong reaction from the Press Secretary for the NYC Department of Education. Why is it so threatening that different sources of information -- School Progress Reports, quality review scores, NCLB status, improvement targets, and so forth -- yield different judgments about school performance? Why can't the Department tolerate acknowledging this?

I read in some New York City paper last week that the NYC DOE press office has a form for employees to fill out after they've spoken to the press. They're supposed to give themselves a letter grade. (Sort of a self-evaluation of performance, not progress.) I wonder if this form is also supposed to be used when press secretaries post on blogs? If so, I'm curious to find out what David Cantor has given himself for his recent exchange with Eduwonkette. As a huge advocate for accountability and data-driven assessments, I can't figure out why the NYC DOE's Office of Accountability is so seemingly afraid of data and accountability. When evaluations are being made about schools, principals, teachers, classrooms, and children, doesn't the DOE want to make these evaluations based on good science and good math? Isn't the Office of Accountability working within a "continuous progress" model? Why so vitriolic to researchers who use the DOE data and help the public understand the DOE model? I really didn't want to believe my close friend who is a top-level employee of the DOE; when asked by me about the Office of Accountability, s/he said, "Oh, you mean the Office-That-Is-Accountable-to-No-One?" Come on Jim and David, prove my friend wrong: be accountable to your own formula and data! The public is counting on you!

Comments are now closed for this post.







