<< But can a machine that cannot draw out meaning, and cares nothing for creativity or truth, really match the work of a human reader?
<< In the quantitative sense: yes, according to a study released Wednesday by researchers at the University of Akron. The study, funded by the William and Flora Hewlett Foundation, compared the software-generated ratings given to more than 22,000 short essays, written by junior high school students and high school sophomores, to the ratings given to the same essays by trained human readers.
<< The differences, across a number of different brands of automated essay scoring (AES) software and essay types, were minute. "The results demonstrated that over all, automated essay scoring was capable of producing scores similar to human scores for extended-response writing items," the Akron researchers write, "with equal performance for both source-based and traditional writing genre." >>
Source:
http://www.insidehighered.com/news/2012/04/13/large-study-shows-little-difference-between-human-and-robot-essay-graders
I find this hard to believe. When I write complex sentences, I notice that the grammar-checking software in word processors frequently flags constructions that are not actually errors. I fear that computer grading of writing would encourage formulaic writing while discouraging creativity.
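For what it's worth, "similar scores" in studies like this is usually quantified with an agreement statistic; a common one in the automated-scoring literature is quadratic weighted kappa, which penalizes large disagreements between two raters more heavily than near-misses. Here is a minimal sketch of that computation in Python, purely my own illustration with made-up scores on a 1-6 scale (the article above does not say which metrics the Akron study actually used):

from collections import Counter

def quadratic_weighted_kappa(human, machine, min_score=1, max_score=6):
    """Agreement between two raters: 1.0 = perfect, 0.0 = chance level."""
    n = max_score - min_score + 1
    total = len(human)
    # Observed frequency of each (human, machine) score pair,
    # and the marginal frequency of each score for each rater.
    observed = Counter(zip(human, machine))
    h_counts = Counter(human)
    m_counts = Counter(machine)
    num = den = 0.0
    for i in range(min_score, max_score + 1):
        for j in range(min_score, max_score + 1):
            weight = ((i - j) ** 2) / ((n - 1) ** 2)  # quadratic penalty
            o = observed[(i, j)] / total              # observed proportion
            e = (h_counts[i] / total) * (m_counts[j] / total)  # chance proportion
            num += weight * o
            den += weight * e
    return 1.0 - num / den

# Hypothetical scores for ten essays:
human_scores   = [4, 3, 5, 2, 4, 6, 3, 4, 5, 2]
machine_scores = [4, 3, 4, 2, 5, 6, 3, 4, 5, 3]
print(quadratic_weighted_kappa(human_scores, machine_scores))

A kappa near 1.0 would mean the software agrees with human readers about as often as a second human reader would, which is essentially the study's claim. Note that a metric like this only measures agreement with the assigned numbers; it says nothing about whether either rater rewarded creativity, which is exactly my worry.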