Automated essay graders (AEG), sometimes called “robo-graders,” are cheaper and faster than human readers, and testing is a rapidly growing industry. These systems are programmed to “read” for certain types of words that signal the content and structure of an essay. AEG looks for a specific kind of organization and is limited in the types of essays it can effectively score; its use therefore puts an emphasis on argumentative and informational essays, styles that are evidence-based. Building on that concept, I began researching the essay types and organizational structures best suited to an automated essay grader. The difficulty in trying to identify “best practices” for helping students prepare to face robo-grading is that much of the information about how these systems are designed is proprietary: it is the intellectual property of the testing industry, not something to be shared. In effect, the testing industry is privatizing educational preparation for the very tests it administers.
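As a toy illustration of the idea that a grader can “read” for signal words (not a description of any real, proprietary AEG system), consider a naive scorer that counts transition and evidence words; the word lists, function name, and scoring rule below are all invented for this sketch:

```python
import re

# Hypothetical signal-word lists -- invented for illustration only;
# real AEG systems are proprietary and far more sophisticated.
TRANSITION_WORDS = {"first", "second", "finally", "however", "therefore", "moreover"}
EVIDENCE_WORDS = {"because", "evidence", "research", "example", "study", "data"}

def naive_structure_score(essay: str) -> float:
    """Score an essay by the density of structural and evidence signal words."""
    words = re.findall(r"[a-z']+", essay.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in TRANSITION_WORDS or w in EVIDENCE_WORDS)
    return hits / len(words)

essay = ("First, research shows that practice helps. "
         "Moreover, the evidence suggests students improve. "
         "Therefore, schools should act.")
print(naive_structure_score(essay))
```

Even this crude sketch shows why such systems favor evidence-based, formulaically organized writing: an essay stuffed with transitions and evidence markers scores higher regardless of whether its argument is any good.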
Based on the research I have been able to find, my goal for this book is to build an understanding of what AEG can assess and to offer the best practices and skills to develop when facing AEG systems. There are many arguments that teaching to a test, and robo-grading in particular, is harming writing instruction; but regardless of those opinions, students are being evaluated by artificial intelligence, and their transitions to college or the workplace are being affected. The testing industry is the clear winner in the standardized-testing movement: rather than making software recognize “good” writing, it will redefine “good” writing according to what the software can recognize. Considering the resources being put into perfecting robo-grading, we are likely to see rapid expansion in the use of artificial intelligence as an evaluation tool. It is important to give students a chance to learn to “think” like a robo-grader.