
Data Use: How are Schools Doing?


The federal No Child Left Behind Act has increased demands on school technology officials to put in place new and better systems to collect and analyze data. Now that Congress is working to renew the law, educators' hunger for data probably will grow.

But are schools collecting the right data? And even if they are collecting the right information, are they using it intelligently to improve student performance? What do you think?


From my understanding, certain states (such as Iowa and Florida) are moving toward collecting more appropriate data to measure student progress and evaluate the effectiveness of schools. The literature base supporting the use of certain assessment systems is growing constantly, and suggests that end-of-year tests can provide some information about student achievement, but not a complete picture. Additional assessment systems, such as Curriculum-Based Assessment (CBA) procedures, can help round out the assessment process teachers, schools, districts, and states use to evaluate performance.

As any psych major knows, data is worthless unless collected and analyzed in a scientific way that produces meaningful results, none of which is happening at the school level. Most educators are not trained in such a way, and shouldn't be. They are experts in their own fields. Let the psychologists and data analysts do this work at the macro level to decipher larger trends in education, if that makes them happy.

As for teachers, if I need a mountain of data to tell me that Johnny is not performing well (and why not), then I'm not really paying attention to what's going on in my classroom anyway.

Computer software and testing companies profit handsomely when they can convince the public that teachers in general are incompetent, and that the machinations of education are better left to them, for a generous fee, of course. Hence the anti-teacher rhetoric making the rounds.

We live in a data-driven society, and education is no exception. In education, however, there are three sets of data to look at for each student: class performance, personal data, and formal test data. If there is a large disparity among these data, then it is time to examine the reason for the disparity. The student's academic success is the short-term goal; the student's lifelong success is the ultimate goal. The theory and the practice often don't match up.

Many public school teachers ARE incompetent.

Data are data, plain and simple. The current drive to gather data about student achievement has, as its ultimate purpose, holding schools, districts, and teachers accountable for the achievement, or lack of it, of students. The focus is on the end product, but that is only half of the story, and the back half at that. Current data do not show the why or the when of achievement. The best data are gathered daily, in the classroom, by the teachers, as they teach. Teachers are the most qualified to assess the achievement of their students, and they are the people who will adjust their teaching and curriculum to best teach their students. The objective of teaching is to bring about learning. This is not measured on end-of-term or other standardized achievement tests. Such tests do offer a glimpse of overall achievement, but have no value beyond a quick peek at what is happening and a hint as to where improvements might be made.
Data collection as school reform is like trying to cure a fever with a thermometer.

The purpose of data collection is to improve student achievement. Data results are analyzed by school and by district, and changes are made in school programs or methodology. Ideally, funding should be available to implement the changes, but what usually happens is that programs are prioritized, those at the bottom are eliminated, and the money saved is used for school improvement plans.

The accountability side of data collection should not include publishing school results and humiliating the poorest-performing schools. The socio-economic level of the school community is the greatest factor in a school's overall average student achievement.

In Britain, teachers' organizations are lobbying the government to reduce the amount of large-scale testing (70 tests by the time a student is 16). Teachers are "fiddling" with the test results in the best interests of the children, who are streamed based on their test performance. The local paper, The Observer, reported on the stress on teachers to demonstrate growth, which led to the investigation of a particular young teacher, who hanged herself.

In the Toronto Star, testing data were used to compare two schools, publicly embarrassing and humiliating the staff, students, and families of the school that was seen as lacking.

Teachers and schools welcome the opportunity to demonstrate accountability, but publishing school scores and ranking schools harm the very people we, as educators, are working with to facilitate success.

No test is perfect. Depending on the school community, there may be cultural bias, English-as-a-Second-Language issues, and stresses brought on by parents who believe the tests will be used to determine their child's academic future. All testing, including the assessment and observation done in class by the teacher, is meant to facilitate student achievement, not hinder or impede it.

Is testing and assessment necessary? Absolutely. There are accountability issues for the student, the teacher, and the parents. Should the results be published? No.

Standardizing data collection (by requiring all districts in a state to conform to a single set of tests) and making the results public have provided me, as a parent, with important tools to support the education of my child. Annual testing did not begin with No Child Left Behind. When my older child was in elementary school, the district had annual norm-referenced tests. I (sometimes) received my child's scores. The only context for these was the national percentile rankings.

My younger child has been eased into annual criterion-referenced testing (my state began before NCLB and has progressed), and reporting of the school's, the district's and the state's scores. When he moved from a middle school that had scores for students with disabilities that rivaled those of the non-disabled students (where he also scored at a proficient level) to a high school where disabled students reached proficiency in single digit numbers (and where his non-proficient scores do not align with his passing grades in several subjects), I can reasonably suspect that the issue is not that he stopped learning, but that perhaps the school has some content/teaching issues.

Now certainly, some find this to be an embarrassing situation for the school. My questions as a parent about how the school plans to improve the situation are not necessarily welcome. But the elephant in the living room has a name and cannot be (easily) denied. Not that there are no efforts to deny it. My local school board (and the National School Boards organization) would like to amend No Child Left Behind so that students with disabilities who are also minority would count as only one-half in each category. There's no statistical basis for this; it's just a way to minimize some of the embarrassing scores. They would also like to make the "n" size for which districts are accountable bigger. And include some non-disabled kids (formerly disabled; presumably they were "cured"?) in the disability category. And allow more kids with disabilities to take alternative tests (this means that they are tested on different content, not just that their disabilities are accommodated in the testing situation). And, by the way, only have to test 90% of kids, instead of 95%. And allow waivers for some kids who are going to get bad scores anyway (chronically truant, family problems, etc.).

Personally, I find it embarrassing that educators demonstrate so little understanding of the nature of statistics.

Very interesting set of posts, indeed. The public/private nature of test results is certainly interesting. On one hand, it makes sense to inform the public, who is paying, of the results of their expenses. At the same time, the public does not often have the prerequisite knowledge to interpret those data.

Responding to Jason's and Bob's comments, I really see worthwhile points in both. Jason, I very much agree that teachers are often not equipped to collect and interpret meaningful data, and that trying to do so without such preparation can lead to inaccurate data collection and interpretation, and a distraction from instruction. At the same time, Bob, I agree that teachers should be collecting data on a daily basis and interpreting those data in light of their incredibly personal knowledge of the students they teach.

There is a way to incorporate both: reducing teacher time spent on assessment while making it more accurate and reliable, yet still letting teachers collect and interpret data. The answer, at least as I've found, is curriculum-based assessment (CBA) procedures, which are currently the most reliable and valid way to measure student classroom performance on basic skills. They are easy to collect, but only if teachers are trained in their appropriate use and interpretation (hence balancing teacher-collected data with expert analysis). These procedures also take very little time: one to two minutes per child per data point, which can be collected anywhere from weekly to quarterly.

To be sure, CBA procedures are not the answer to all assessment: they do not measure some forms of reading comprehension, problem-solving skills in math (for example), or science or social studies as effectively. For those areas, and others, teachers should continue to assess, but should work with experts in assessment and evaluation to construct instruments that reliably and validly measure those skill sets.

How are schools doing? That depends ....

It seems reasonable to assert that before we engage in the discussion of how schools are doing and what data schools should be collecting, we should answer the question, "What do we (society) expect our schools to do?"

The answer to that question allows for a more reasoned (less emotional and less subjective) dialogue.

For example, if society responds that our (high) schools should be graduating 100% of the senior class, then certainly we can collect and analyze the data to determine the effectiveness of that objective.

However, if our society deems that our schools should be graduating 100% of the senior class with the competencies to earn a living, then that's a completely different set of data ... data that, in my view, although it can exist, won't easily see the light of day.

I may be a bit off in my notes here, but generally there are about 160 million people of all ages in the American workforce. Of that population, about 28% have completed a Bachelor's degree, 17% a Master's degree, and 1% a Ph.D. Using that data alone, we can infer that the American workforce is not well educated.

At the same time, it seems reasonable to say that our workforce has done remarkably well since WWII in creativity, productivity, and moving this country, indeed the world, forward. As such, we can also argue that even without advanced formal education, things get done.

My point is that we don't need more data to know that in some school districts, on any given day, about half the population of high schoolers don't attend school. We don't need more data to show that in some districts, less than 50% of the students entering high school as freshmen graduate as seniors.

And we certainly don't need more data collected by those "outside the walls of education" to indicate (generally) that public education in America is a failure.

What we certainly need is a clear answer (from lawmakers as well as parents, from corporate execs as well as NCOs, from educators as well as the students themselves) to the question, "What do we expect our schools to do?"

Okay, as a student most people don't listen, but I'm talking anyway.
From what I've read, this is about technology for searching things we should be learning?
I feel every school should have computers, and sets of laptops for the classrooms.
But on the other hand, everything that we are learning we can get in a textbook. With this technology coming in the future, we should be learning about what's going on now! Whether you like it or not, we are and will be the future leaders of the world. We need to know what's going on and learn how to help, in order for us to be prepared for what will come.
We are forced to learn about art and music? That should be a choice. Schools should offer classes that matter, like politics and business. I think we can live without music and pictures that go on the fridge. The things we learn now will go on with us for the rest of our lives. We need to take advantage of the tech we have now and learn what we need, not what the government thinks will do us good. We're not even allowed to talk about who we think should be president, because everyone has different opinions. We future leaders should voice our opinions. They, as well as our current leaders, have the right to freedom of speech. Why are they against it?
Skyler S.

Neither Phonics First nor Whole Language is an English-appropriate approach to reading.

The data that interest me are learning gains, separated from achievement data. I also would like to see a comparison of learning gains for all the subgroups in a school.

I have great difficulty with the way Florida masks poor learning gains behind high proficiency levels, then awards such a school 100 dollars per student for creating little growth. Conversely, some low-proficiency schools with higher learning gains than the bonus-moneyed schools get sanctions.

The gifted/advanced learners have been largely forgotten, as many states track only to proficiency. Check to see how little growth these kids have made while the nation leaves no child behind. When equal and mediocre students are the target product, I guess states don't feel accountable for the growth/nurturance of these students. One may have great reservations about showing a widening achievement gap should these kids be able to develop to potential. Perhaps the poor use of data relative to the gifted is a way to keep parents uninformed... I don't know.

Growth models/value-added assessment would be better. Why were these overlooked years ago? One has to wonder if the oversight was purposeful.

Several school districts in Florida continue to dig deeper into data. We say around here, "data are." Our district has been training its administrators and staff members to understand data. (As was posted earlier, you have to know what you are looking at.) Standardized data are used as an "introduction" to district data. It has been a process for the past four years, but I can honestly say that most schools can tell you their test scores before they come in. In order to own the information, the teachers must be the ones looking at the data. The teachers must be the ones who can use their personal data (research-based assessment/progress monitoring) to back up or contest what national data are showing. It's all about an open dialogue that promotes community understanding and student gains.

Two quick comments on earlier postings:

re: Bob Frangione's statement to the effect that trying to reform education by collecting data is like trying to cure a fever with a thermometer... I couldn't agree more. I think Bob might also agree that while thermometers are not sufficient to cure a fever, they provide helpful - even critical - information to help determine what sort of cure is needed. The same applies to data collection and school reform.

re: Diane Hanfmann's comments on growth models... After attending a conference on growth models a year or so ago, my 76-year-old mother asked me to tell her what had been discussed. I told her it was a conference on how to figure out whether schooling was helping kids get smarter over time. She replied, "I would have thought someone would have figured that out a long time ago." That's the common perception. Unfortunately, the details are extraordinarily sticky. One simple example: how do you measure learning from fall of 3rd grade to spring of 5th grade if the content standards at each grade level are very dissimilar?

To the best of my knowledge, no one has intentionally "covered up" the notion of growth models. Many norm referenced tests have had vertical scales (measures of longitudinal growth) for decades, and some state criterion-referenced tests have had them for a long time, too. Other states have tried to implement them but, for complex reasons relating to both psychometrics and curriculum, they didn't work. Growth models are still a work in progress, and they may not be the holy grail that some hope they will be.
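[Editor's note: Mike's question about measuring learning from fall of 3rd grade to spring of 5th grade is, at bottom, a gain-score calculation. The toy sketch below illustrates the idea; the scale, student names, and numbers are all invented for illustration, and real vertical scales require careful psychometric linking that this sketch ignores.]

```python
# A minimal gain-score ("growth") calculation on a hypothetical vertical
# scale, i.e., a scale on which scores from different grades are assumed
# to be directly comparable. All names and values here are made up.

# Vertical-scale scores: fall of 3rd grade and spring of 5th grade.
scores = {
    "student_a": {"fall_g3": 420, "spring_g5": 510},
    "student_b": {"fall_g3": 450, "spring_g5": 495},
    "student_c": {"fall_g3": 480, "spring_g5": 560},
}

# Gain score: later score minus earlier score, per student.
gains = {name: s["spring_g5"] - s["fall_g3"] for name, s in scores.items()}

# A school's "growth" is often summarized as the mean gain across students.
mean_gain = sum(gains.values()) / len(gains)

print(gains)      # per-student growth on the vertical scale
print(mean_gain)  # average growth across students
```

The arithmetic is trivial; the hard part, which the code glosses over, is constructing a vertical scale on which a 3rd-grade 420 and a 5th-grade 510 are genuinely comparable. When grade-level content standards diverge, that linking is exactly what breaks down.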

I have to say that I'm very impressed by the diversity of views.

I am remarkably UN-impressed at these comments. First, data is a plural word in Latin, and "data are" is the classically correct usage - although most users find that usage uncomfortable.

Second, the data on individual achievement are very different - in composition as well as in uses - from the data on school achievement. The sum of the parts is a very different item, showing trends, patterns, and variations over time and circumstance. Some of those data are useful in calculating the NCLB standard of "non-performing" or "low-performing," and there most surely should be other data factored into that formula. Yet, way before we generalize, we ought to realize an individual student's test score measures a very different thing than a school's performance rate, or, for that matter, improvement rate. Testing the school is interesting, feasible, and can inform some decisions. Testing the kid is a very different task.

Third, the current spectrum of tests has very, very little utility, and is clearly not cost-effective. That is, the tests give a general profile of all schools in a state about 6 to 8 months AFTER a timely response might affect that real profile. Responses decline in utility with every month's delay, and those responses are often very late indeed. To guide the decisions that are now guided by these untimely tests, a random sample of kids could take tests every month and deliver much, much more useful information in much, much more useful form, including test scores, gain scores, income, attendance, and other unobtrusive measures to track school effectiveness. Yet no one suggests such a tactic, since there are only a few test companies, who are crawling all over the US DOE for more contract money.

Fourth, there are plenty of OTHER ways of getting much more interesting and much more useful diagnostic information on kid performance than the multiple choice, paper and pencil standard that now engages the billions of federal NCLB effort. We came up with those tests in the days when computers were capable of very limited functions, and computerized grading allowed for faster (not the currently common slower) feedback. There are now plenty of computers in the schools themselves, and plenty of bandwidth for those computers, to collect much, much more interesting information on how kids write, how they compute, how they solve realistic problems in written, video, sound, and quantitative media. Yet no one even asks the question of why these are never used.

When comments here raise interesting questions, this might be a worthwhile group.

Collecting data to see how we are doing is not half as important as how we are actually doing.
Data is as data goes, often influenced by who is collecting it. I suspect he who controls the data results has the power to control the system. When we have teachers actually seeing how students are doing by looking at their written work, we understand more in reality about that student than any testing data can give us. Data can be manipulated, and eliminating cursive writing will be a step backwards in literacy. Just remember how we used to vote with confidence with paper ballots, and now we have machines that we aren't quite sure we can trust. Standardization was the first move the communists made in education when they came into Poland.

Thanks to Mike for his thoughts; yet I remain strongly in favor of growth models. I am unsure of the state where Mike resides, but I invite him to investigate the testing regimen in Florida and explain how it is sensitive or conducive to holding schools accountable for a year's worth of growth in the gifted population. I am committed to the belief that all students, regardless of their proficiency level, deserve an opportunity to learn. I refuse to accept that our nation's gifted should comply with compulsory attendance laws only to be warehoused. What would you suggest?

When we collect this data, whatever it is, we act as if it is an absolute. It is not. We begin to treat students as if they fall into a category, and that does a disservice to all of them. When a teacher doesn't know some of this data, they can help students learn beyond what it says. Too much data can take the spark out of education, so let's not overload teachers with more papers to sort out.
Buying technology and systems to quickly record data results is the name of the game here. Very expensive, and not student-friendly. Just a way to get the numbers figured out quickly and make sure the scores go up. Collecting data is not the purpose of education, and who is to say what that data should be? This scheme of promoting the value of data to the classroom is used as one more attack on public educators, who are far too busy teaching their subjects to analyze data that may or may not apply.

There is little doubt that enormous amounts of data are being collected. The real issue at this time is whether education departments will use them correctly. Guess what? Not every kid is a college student. We have mounds of data that prove this, but we fail to act as responsible educators and provide more ROP schools in states so that non-college-bound students can thrive and survive in our society. We also still operate our schools, at best, at the industrial-revolution level, using data with a manufacturing goal and not an information goal. We still function with pre-industrial-revolution schedules when data collected in brain-based studies show again and again that summers are optimal learning times.
Data collection is a waste of time with these paradigms in place in the U.S. educational system. Until we move education into the information-age paradigm, all this data is just so much useless information and a waste of trees, if we are honest with ourselves.

Data collection and analysis to inform instruction empower teachers and students.
Will our educational system allow and support empowered teachers? Or will it merely increase teacher responsibility (accountability) without providing the additional resources necessary for improved instruction (as has historically been the case)?

Data, data, and more data.

The problem is that we are collecting mountains of data from 50 entirely different experiments trying to answer the same questions. Whatever happened to the basic scientific method, i.e., controlled, independent, and dependent variables?

Could the answer be simpler than we think?

EVERYBODY takes the EXACT SAME nationally standardized test after being taught the EXACT SAME nationally standardized curriculum in the EXACT SAME nationally standardized way. It’s only from that point that we can fine tune every aspect of our educational system to allow for specific levels of individual teacher innovation and the creation of local programs addressing local needs.

I have calculated data for principals who gave me some interesting directions on which students to exclude. Also, data was not available on every student. Hmmm. What's that all about? You tell me.

Following up on Diane Hanfmann's very thoughtful response on 7/17... My point was not to advocate for or against growth models. I quite agree that they have their uses, and that they may well be the measure of choice when answering some questions. My level of concern goes up a few notches when they are advocated as a broad-spectrum solution without careful evaluation of the nature of the question being addressed, the characteristics of the available data, and the other alternative methods that may exist. As David Tyack pointed out years ago, searching for the One Best Method is usually futile.

I appreciate Mike's response. I can easily agree that additional data can be useful as well. Thank you for the clarification!

When it comes to special-needs students, we really need to think about the value of testing and the data that we get. It is insane to expect a student reading at a 2nd-3rd grade level to pass a 6th-8th grade state reading test. The failure rate is surely close to 100%, and the results, when counted in, simply drag the school in question further into school improvement. Year after year this goes on in every state, and it needs to stop.
We need to utilize all the tools we have for special-needs children and take them out of the NCLB loop unless it is voluntary. We have worked very hard to get these students' needs met, and current testing procedures simply don't work for them.

In response to Ed Robbins--I think we need to be asking some questions about why we have some students at the 6-8 grade level who are reading at the 2-3 grade level. Certainly some students with disabilities have cognitive disabilities that would account for this. The majority of disabilities, however, are NOT cognitive. NCLB already provides for alternative testing of those with cognitive disabilities.

For the rest of the disabled population (the largest group of which would be the learning disabled), scores are all over the ballpark. At some schools they are achieving at high levels; at other schools, not at all. Likewise, the approach to learning and services is all over the map: some schools feature a high level of inclusion and do it well, while others do something more like warehousing students. Tests should be appropriately modified and accommodated to a student's particular disability, as is currently required. Until there is a great deal more regularity to both the availability of services and outcomes, taking students with special needs "out of the NCLB loop" is really just a pass that the schools don't need.

Skyler S. I read your blog and want to commend you for writing it, and expressing your opinion. You are the future and being exposed to technology, history, art and music are all valuable to you no matter what career you choose. Narrowing your field of learning when you are a young person does not give you the opportunity to learn about yourself. Of course you have special interests now, but they may change and a whole education allows you to be more. Art on the refrigerator is appreciation of creative work, and creativity is needed in all work places. So that is my opinion, Skyler. Keep writing on blogs and sharing yours.

Ed Robbins makes a strong and realistic point. What benefit are assessment results at a level the student has not yet attained (in the case of students who have learning challenges), or at a level the student exceeded long ago (in the case of some gifted students)?
I realize the testing regimen of NCLB is based on meeting age-grade proficiency. Unfortunately, that approach is not the best fit for all learners, and thus does not provide meaningful information for all students. NCLB is not grounded in the reality of individual differences. While it concerns me a great deal, it dictates that odd things happen in classrooms across the country.
