Sorting Misconceptions vs. Reality for STAAR Automated Scoring


The Texas Education Agency’s (TEA) announcement that an Automated Scoring Engine (ASE) will be used to score constructed response items on STAAR assessments has raised many questions. Maybe you’ve heard about the ASE but aren’t sure what to make of it. Or maybe you’re wondering how automated scoring will affect a student’s test scores. Today, we’re taking a closer look at the Automated Scoring Engine (ASE) and breaking down common misconceptions versus the reality of how automated scoring is actually used.

Misconception #1: Automated Scoring Engine (ASE) technology is new and untested.

Although ASE technology has been portrayed as brand new, the truth is that it’s been around for over 10 years. ASEs have been used to score a variety of assessments, including the Texas Success Initiative Assessment (TSIA). In addition, more than 21 other states, including California, Florida, and Colorado, have used ASEs in their assessment processes. ASEs are also a fundamental part of the scoring process for standardized examinations like the GRE, GED, GMAT, and TOEFL.

Misconception #2: The Automated Scoring Engine (ASE) is the sole determinant of scores.

A common belief is that the Automated Scoring Engine (ASE) is the sole determinant of test scores. In reality, a comprehensive, human-driven process is in place from the beginning of testing to its completion.

Before any Constructed Responses (CRs) are actually scored, every step of the preparation process depends on human input. The four steps of the process are:

  1. Field Testing
  2. Anchor Approval Meeting
  3. Preparing Human Scorers and ASE
  4. Scoring

During Field Testing, all responses are double human-scored. Then there’s the Anchor Approval Meeting, where humans identify example responses that set the scoring standards. Human scorers are trained using these examples, and ASEs are trained on thousands of human-scored responses.
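
To give a rough feel for what “trained on thousands of human-scored responses” means, here is a toy Python sketch of the underlying idea: fitting a model to predict the score a human would assign. The sample data, rubric, and model choice are all illustrative assumptions; TEA’s actual engine is far more sophisticated than this.

    # Toy sketch only: a model that learns to predict human-assigned scores.
    # Real ASEs use far richer features and far more training data.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline

    # Hypothetical hand-scored training data: (response text, human score).
    responses = [
        "The water cycle moves water through evaporation and condensation.",
        "Water goes up into the sky and then comes back down.",
        "I like pizza.",
    ]
    human_scores = [2, 1, 0]  # e.g., a 0-2 rubric applied by trained humans

    model = make_pipeline(TfidfVectorizer(), Ridge())
    model.fit(responses, human_scores)

    new_response = ["Evaporation lifts water into clouds, and rain returns it."]
    print(model.predict(new_response))  # the model's estimate of a human score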

During scoring, it’s not just the ASE making all the scoring decisions. Human raters provide support through calibration checks, and humans review at least 25% of responses for quality control. So, there’s plenty of human oversight throughout the process.
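
To put that 25% figure in concrete terms, here is a minimal Python sketch of one way a quality-control sample could be drawn. The random-sampling approach and the function name are illustrative assumptions, not TEA’s actual selection method.

    import random

    def select_for_human_review(response_ids, review_rate=0.25, seed=None):
        # Illustrative only: choose at least `review_rate` of responses
        # for human quality-control review.
        rng = random.Random(seed)
        k = max(1, round(len(response_ids) * review_rate))
        return rng.sample(response_ids, k)

    # Example: out of 1,000 scored responses, at least 250 get a human look.
    audited = select_for_human_review(list(range(1000)), seed=42)
    print(len(audited), "responses routed to human reviewers")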

Misconception #3: Automated Scoring Engine (ASE) prioritizes quantity over quality.

Another common misconception is that ASEs care more about word count than actual content. In fact, ASEs use condition codes, such as a blank response field, too few words, or duplicated text, to flag responses that don’t fit the instructions given on the test. Other factors that can flag a response include repeating the question without answering it, using vocabulary unfamiliar to the ASE, or simply being off-topic.

These flagged responses are then given to trained human scorers to ensure they’re assessed fairly and accurately.
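
As a rough illustration of how condition-code flagging might work, here is a short Python sketch. The code names and thresholds are hypothetical, invented for this example; the actual condition codes and rules used in STAAR scoring may differ.

    def flag_response(text, prompt, min_words=10):
        # Illustrative sketch: return hypothetical condition codes for a
        # constructed response. Flagged responses are routed to a trained
        # human scorer instead of being scored by the engine alone.
        flags = []
        words = text.split()
        if not words:
            flags.append("BLANK")
        elif len(words) < min_words:
            flags.append("TOO_FEW_WORDS")
        # Crude duplicate check: the same sentence repeated several times.
        sentences = [s.strip().lower() for s in text.split(".") if s.strip()]
        if len(sentences) > 2 and len(set(sentences)) == 1:
            flags.append("DUPLICATED_TEXT")
        # Crude parroting check: the response just restates the question.
        if words and prompt.strip().lower() in text.strip().lower():
            flags.append("REPEATS_PROMPT")
        return flags

    print(flag_response("Why? Why? Why?", prompt="Explain the water cycle."))
    # ['TOO_FEW_WORDS'] -- this response would go to a human scorer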

Misconception #4: The Automated Scoring Engine (ASE) grades tougher than humans.

Although many perceive that ASEs make it harder for students to score well, this is another misconception. A proof-of-concept study conducted by the TEA showed that ASEs were on par with human scorers.

In fact, when ASE scores were compared to those given by humans, the ASE matched human scores at a rate statistically equal to, or even greater than, the rate at which human scorers matched each other.

Misconception #5: You’re stuck with your Automated Scoring Engine (ASE) score.

If parents or educators think there has been a scoring error, a rescore can be requested. Human scorers review these requests and may adjust scores accordingly. A rescore request involves several steps and may incur a fee. The process proceeds as follows (a simple sketch of the fee and reporting logic appears after the list):

  1. The District Testing Coordinator completes a rescore request.
  2. A purchase order is required when the rescore request is made.
    • If the score improves, there is no fee.
    • If there is no change in the score, the school is charged $50.
  3. If the rescore request is made during the rescore window and the score improves, the new score is reflected in the school’s accountability ratings.
  4. If the request is made after the rescore window has closed and the score improves, the change is not reflected in the school’s accountability ratings but will be reflected in the student’s final overall test score.
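
The fee and reporting rules above reduce to a couple of simple branches. Here is a minimal Python sketch of that logic; the function and field names are made up for illustration, but the branching mirrors the list above.

    def rescore_outcome(score_improved, within_rescore_window):
        # Sketch of the rescore rules above (names are illustrative).
        if not score_improved:
            # No change in score: the school is charged $50.
            return {"fee": 50, "reflected_in": []}
        if within_rescore_window:
            # Improved in-window: counts for accountability ratings too.
            return {"fee": 0, "reflected_in": ["accountability ratings",
                                               "student's final score"]}
        # Improved after the window closed: student's final score only.
        return {"fee": 0, "reflected_in": ["student's final score"]}

    print(rescore_outcome(score_improved=True, within_rescore_window=False))
    # {'fee': 0, 'reflected_in': ["student's final score"]}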

Conclusion

While there are common misconceptions about the automated scoring engines used on STAAR tests, once you know the facts, ASEs are far less intimidating. Most importantly, human involvement will continue to play a crucial role in the scoring process.

Explore the ESC Region 13 guide to the STAAR Automated Scoring Engine (ASE) for Constructed Response Scoring. This guide clears up common misconceptions and offers insights into the ASE scoring process.

To learn more about the STAAR test and resources created by Region 13, visit our website or read one of our STAAR blogs. Still have questions? Contact one of our team members.

Butch has worked with testing and accountability for over 15 years at the campus, district, regional, and state levels. Originally from North Carolina, Butch is the State Assessment Specialist for the Education Service Center Region 13, helping district test coordinators and others navigate the world of STAAR and TELPAS testing. He is available to answer any of your state testing policy and procedure questions.
