
Texas Alternate Assessments Approved


My colleague Stephen Sawchuk attended the annual assessment conference of the Council of Chief State School Officers in Los Angeles. He filed this report:

Texas has become the first state to have its alternate assessment aligned to modified academic achievement standards pass the U.S. Department of Education's peer-review process. That means it can officially use the test for accountability purposes under No Child Left Behind.

The alternate assessment-modified academic achievement standard, or AA-MAAS, is part of the "2 percent flexibility" announced under former Education Secretary Margaret Spellings to measure the content knowledge of some students with disabilities who are not severely cognitively disabled, but don't appear to be able to show what they know on regular grade-level assessments.

The AA-MAAS measures grade-level content, but can be easier than the general assessment. And so states are now trying to figure out what modifications to the test items are appropriate for this population.

Once the test is operational, states must oversee districts' decisions about which students are eligible to take the test. There's no limit to the number of kids who can be assessed using an AA-MAAS, but only 2 percent of proficient scores can be factored into adequate yearly progress decisions.

Most of the states that have applied for the flexibility still don't have operational tests and are using a proxy method to account for the population. The National Center on Educational Outcomes has compiled a lot of information on these tests, which can be found here.

Last year, six states submitted their modified exams for peer review, but a number of issues prevented them from being accepted. Some, for instance, did not cover content standards to the same breadth and depth as the general assessment, as the Education Department requires. Texas appears to have cleared this hurdle, although the department has not yet posted Texas' acceptance letter, so further details are not available.

Many philosophical questions remain, despite this advance. At the CCSSO conference, there were dozens of sessions on the 2 percent assessments this year, and after attending a number of them, it was clear that what researchers, states, and policymakers know about this population of students with disabilities, and about how best to measure their abilities, remains nascent.

There has been real progress, for which researchers deserve credit. For instance, a number of state consortia have been conducting "cognitive interviews" with these students to determine which test-item modifications help them access the test. They're finding some interesting things, namely that "scaffolds" built into the tests don't always seem to be having that effect. Bold-facing key words in reading passages, for instance, appears to have mixed results. But other modifications do seem to be helping.

A number of questions remain unsettled. One common modification is removing one of the wrong answers, or "distractors," from a multiple-choice question. Texas went this route. But, as one presenter asked at the conference, does that really increase access for a student with a disability? Or does it change the construct of the question, so that it may no longer be measuring the same thing?

1 Comment

"One popular modification, removing one of the wrong answers, or "distractors," on a multiple-choice question, is a popular strategy. . . .Or does it change the construct of the question so that it may no longer be measuring the same thing?

It raises the average guessing score by 5 percent or more when the test is scored by simply counting right marks. If these tests were scored for student judgment as well as for knowledge, the change would make little difference. How the test is scored matters more than the number of answer options.
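The arithmetic behind this point can be sketched as follows (an illustrative calculation only, assuming a standard four-option item, equally plausible options, and right-count-only scoring):

```python
# Expected score from pure random guessing on a multiple-choice item:
# with k equally likely options, the chance of guessing right is 1/k.

def guessing_rate(num_options: int) -> float:
    """Probability of answering an item correctly by random guessing."""
    return 1.0 / num_options

four_option = guessing_rate(4)            # 1/4 = 25%
three_option = guessing_rate(3)           # 1/3 = about 33%
increase = three_option - four_option     # about 8 percentage points

print(f"4 options: {four_option:.1%}")
print(f"3 options: {three_option:.1%}")
print(f"increase:  {increase:.1%}")
```

Under these assumptions, removing one distractor from a four-option item raises the expected guessing score from 25 percent to roughly 33 percent, consistent with the commenter's "5 percent or more."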

Accurate, honest, and fair scores can be obtained with Knowledge and Judgment Scoring (KJS). Free software to do this, as well as advanced software for counseling students, is available from http://www.nine-patch.com. Replace dumb scoring with smart scoring.

Richard Hart
Professor Emeritus, NWMSU

