A new way to track high school students' readiness for college and trigger earlier intervention is being studied at Stanford University.
The College Readiness Indicator System initiative was developed at the John Gardner Center for Youth and Their Communities at Stanford and is being tested at schools in Dallas, Pittsburgh, San Jose, Calif., and New York City. A paper by Oded Gurantz and Graciela Borsato explores lessons from the first two years of testing the initiative.
The idea behind the system is that grades and test performance alone are not enough to determine college readiness, and that schools can be more effective using a model that allows them to engage proactively with students before they go off track.
The indicator system measures three areas:
1. Academic preparedness - as reflected in grade point average and availability of Advanced Placement courses.
2. Academic tenacity - using attendance or disciplinary infractions to demonstrate effort.
3. College knowledge - understanding financial requirements for college and other skills needed to access and navigate college.
The new system also considers progress on these indicators at the individual student level, in the classroom (resources and opportunities provided), and at the system level (policies and funding for supports, such as counselors).
Its design is intended to be flexible so districts can use indicators that work best in their local context. In the pilot, the system was adapted differently in each of the four districts, but with the same goal of better linking high school work to postsecondary expectations.
One advantage of the model, the researchers found, was that it forced administrators to think through how to use data in a more timely manner to provide interventions and resources to students. For instance, instead of reviewing attendance at the end of the semester, the model prompted one district to examine daily or monthly attendance patterns. It also led the district to reconsider raising its GPA cut score to match the performance needed for college-level placement.
Researchers reviewing the trial run of the system emphasized the need for buy-in from district leadership, IT professionals, principals, and teachers for the approach to work. They also found that staff turnover and a lack of resources in schools hurt the rollout of the model. Finally, gathering the information proved much more straightforward than deciding how to use the results.
"Collecting more data will not lead to better outcomes for youth unless a system is in place that helps turn those data into meaningful action," the researchers concluded.