Material Detail

"Can Automated Scoring Surpass Hand Grading of Students' Constructed Responses and Error Patterns in Mathematics?" icon

Can Automated Scoring Surpass Hand Grading of Students' Constructed Responses and Error Patterns in Mathematics?

A unique online parsing system that produces partial-credit scoring of students' constructed responses to mathematical questions is presented. The parser is the core of a free college-readiness website in mathematics. The software generates immediate error analysis for each student response, which is scored on a continuous scale based on its overall correctness and the fraction of correct elements. The parser's scoring was validated against human scoring of 207 real-world student responses (r = 0.91). Moreover, in some cases the software generates more consistent scores than teachers do. The parser's analysis of students' errors on 124 additional responses showed that the errors factored into two groups: structural (possibly conceptual) and computational (possibly resulting from typographical errors). The two error groups explained 55% of the variance in students' scores (structural errors: 36%; computational errors: 19%). In contrast, these groups explained only 33% of the variance in teachers' scores (structural: 18%; computational: 15%). Agreement among teachers on error classification was low, and their classifications were only weakly correlated with the parser's error groups. Overall, the parser's total scoring closely matched human scoring, but the machine was found to surpass humans in systematically distinguishing between students' error patterns.
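The continuous partial-credit scheme the abstract describes can be sketched in outline. The following is a hypothetical simplification, not the paper's actual parser: it assumes a response has already been parsed into elements (terms), awards full credit for a fully correct response, and otherwise scores by the fraction of reference elements matched.

```python
def partial_credit(student_terms, reference_terms):
    """Hypothetical partial-credit score on [0, 1].

    Full credit (1.0) if every reference element appears in the student's
    response; otherwise, the fraction of reference elements matched.
    The real parser in the paper is more sophisticated (it analyzes the
    response's structure, not just element overlap).
    """
    if not reference_terms:
        return 0.0
    matched = sum(1 for term in reference_terms if term in student_terms)
    return matched / len(reference_terms)


# Example: reference answer 2x + 3, parsed into elements ["2x", "3"].
full = partial_credit(["2x", "3"], ["2x", "3"])     # fully correct -> 1.0
partial = partial_credit(["2x", "5"], ["2x", "3"])  # one element right -> 0.5
```

A scheme like this yields a continuous score rather than all-or-nothing grading, which is what lets the system give partial credit for partially correct work.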


