Project Essay Grade (PEG®)

Student writing responses are scored by MI’s automated essay scoring engine, Project Essay Grade (PEG®). Since acquiring the PEG technology from Dr. Ellis Batten Page in 2003, MI has focused on incorporating the latest advances in natural language processing, semantic and syntactic analysis, and classification methods to produce a state-of-the-art automated scoring engine. Today’s PEG software delivers valid and reliable scoring that is unrivaled in the industry.

As with most automated scoring software, PEG uses a set of human-scored training essays to build a model with which to assess unscored essays. Using advanced statistical techniques, PEG analyzes the training essays and calculates more than 500 features that reflect the intrinsic characteristics of writing, such as fluency, diction, grammar, and construction. Once the features have been calculated, PEG uses them to build statistical and linguistic models that accurately predict essay scores. MI further improves scoring accuracy with extensive custom dictionaries and word lists, producing results comparable to those of MI’s well-trained, expert human readers.
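The sketch below is only a generic illustration of that workflow, not MI’s proprietary implementation: it extracts a few toy numeric features from human-scored training essays, fits a regression model to the human scores, and then predicts a score for an unscored essay. The feature definitions, the scikit-learn Ridge model, and the sample data are all assumptions made for illustration.

    # Illustrative sketch only: PEG's actual features and models are proprietary.
    import re
    from sklearn.linear_model import Ridge

    def extract_features(essay: str) -> list[float]:
        """Toy stand-ins for writing features such as fluency, diction, and construction."""
        words = re.findall(r"[A-Za-z']+", essay)
        sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
        n_words = len(words) or 1
        n_sents = len(sentences) or 1
        return [
            n_words,                                    # essay length (fluency proxy)
            len({w.lower() for w in words}) / n_words,  # type-token ratio (diction proxy)
            n_words / n_sents,                          # mean sentence length (construction proxy)
            sum(len(w) for w in words) / n_words,       # mean word length
        ]

    # Human-scored training essays (placeholder data).
    train_essays = ["First sample training essay ...", "Second sample training essay ..."]
    train_scores = [3.0, 4.0]

    # Fit a regression model that maps essay features to human scores.
    model = Ridge(alpha=1.0)
    model.fit([extract_features(e) for e in train_essays], train_scores)

    # Predict a score for a new, unscored essay.
    print(model.predict([extract_features("A new, unscored essay ...")]))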

For more information about PEG, please contact us at marketing@measinc.com.


MI’s scoring engine has provided over three million scores to students in formative and summative writing assessments over the past six years. Our results have been validated in independent third-party studies and in research that we have conducted on behalf of our clients. In 2012, MI achieved the highest agreement index of the nine vendors participating in the Automated Student Assessment Prize (ASAP) competition sponsored by the Hewlett Foundation.
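As a point of reference for what an agreement index measures, the sketch below computes quadratic weighted kappa between human and machine scores, the agreement metric used in the ASAP competition. The scikit-learn call and the placeholder score lists are assumptions for illustration; the exact data and evaluation procedures of the studies cited above are not reproduced here.

    # Quadratic weighted kappa: 1.0 means perfect human-machine agreement,
    # 0.0 means agreement no better than chance.
    from sklearn.metrics import cohen_kappa_score

    human_scores   = [2, 3, 4, 3, 5, 4, 2, 3]   # placeholder human ratings
    machine_scores = [2, 3, 4, 4, 5, 4, 2, 2]   # placeholder engine scores

    qwk = cohen_kappa_score(human_scores, machine_scores, weights="quadratic")
    print(f"Quadratic weighted kappa: {qwk:.3f}")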