The Research Shows...

PEG Writing can be a reliable tool to help educators make important intervention decisions.

Wilson, J. (2018). Universal screening with automated essay scoring: Evaluating classification accuracy in Grades 3 and 4. Journal of School Psychology, 68, 19-37.

Writing proficiency is essential for success in K-12 and post-secondary education. The inclusion of more constructed-response items in the Common Core State Standards (CCSS) and associated assessments underscores this fact. Yet many students struggle to develop the skills they need to become successful writers. They may be at greater risk of failing summative CCSS-based assessments, the results of which are often used to inform decisions about placing students on academic tracks that can have a significant impact on their lives. However, struggling writers can be identified early to reduce their risk of course failure or even dropout.

Nearly half of U.S. states have implemented programs such as Response to Intervention (RTI) to identify at-risk students and provide the support they need to progress academically. Programs such as RTI start with general education and early identification efforts through universal screening, by which brief academic assessments, called screeners, are administered to an entire school population. Students scoring below a specific level may then be referred for supplemental instruction at secondary or tertiary levels.

At all levels, these programs require frequent screening, progress monitoring, and evaluation through reliable measures. According to the Commission on Excellence in Special Education (2001), many students are “placed into special education without adequate documentation of their responsiveness to scientific, research-based instruction.” Federal policies, such as those reflected in the Individuals with Disabilities Education Improvement Act (IDEA, 2004), now encourage local school systems to use data-driven decision making when placing and referring students into different classes.

PEG Writing can be a reliable tool to help educators make the best decisions. In a recent study published in the Journal of School Psychology, Joshua Wilson (2018) evaluated the use of PEG (Project Essay Grade) automated essay scoring as a screener to identify struggling writers as part of a universal screening system. Wilson sampled a diverse group of 100 Grade 3 students and 130 Grade 4 students, each of whom completed an informal writing prompt in the Fall and Spring and then took their state test, the Smarter Balanced Assessment Consortium’s English Language Arts test. The scores from the informal writing prompts were correlated with performance on the state test. 

Findings indicated that students scoring in the lower range of PEG (scores lower than 12 on a first draft of a 30-minute essay) had a higher likelihood of subsequently failing the state test. Similarly, students scoring in the upper range of PEG (scores above 18) had a high likelihood of subsequently passing the summative test.
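
To make those bands concrete, here is a minimal sketch (in Python) of how cut scores like these could be applied when reviewing screener results. The band labels, helper names, and roster data are hypothetical illustrations, not part of the study; only the thresholds (below 12, 12-17, above 18) come from the findings reported above.

    # Sketch only: band labels and helper names are hypothetical; the thresholds
    # (below 12, 12-17, above 18) are the ones reported in the study.
    def risk_band(peg_score):
        """Map a first-draft PEG score on a 30-minute essay to a screening band."""
        if peg_score < 12:
            return "at risk"            # higher likelihood of failing the state test
        elif peg_score <= 17:
            return "middle band"        # 12-17: outcomes varied by grade and season
        else:
            # "Above 18" is the reported upper band; grouping a score of exactly
            # 18 with it here is an assumption.
            return "likely on track"    # high likelihood of passing the state test

    def flag_for_follow_up(roster_scores):
        """Return students whose screener score falls in the at-risk band."""
        return [name for name, score in roster_scores.items()
                if risk_band(score) == "at risk"]

    # Hypothetical fall screener scores for a small roster
    fall_scores = {"Student A": 10, "Student B": 15, "Student C": 21}
    print(flag_for_follow_up(fall_scores))   # -> ['Student A']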

Notably, the study's observed outcomes suggest increased risk for students with lower screener scores:

  • Scores below 12 on the Spring screener – Of these students, 58% in Grade 3 and 87% in Grade 4 did not pass the state summative test.
  • Scores between 12 and 17 on the Fall and Spring screeners – Failure rates rose from 14% (Fall) to 22% (Spring) in Grade 3, and from 50% (Fall) to 61% (Spring) in Grade 4.

This pattern indicates an achievement gap that grows larger when struggling writers do not improve. Results of the 2009 NAEP achievement tests suggest the same conclusion: while 18% of all fourth-grade students scored “below basic,” the proportion rose to 27% among eighth-grade students.

The consequences are clear, but preventive measures may not be. A single screener cannot perfectly identify which students are actually at risk. For accurate classification, Dr. Wilson points out that a combination of screeners testing both reading and writing has been shown to be more effective than either type of screener alone. He proposes using PEG Writing as the initial screener in a two-stage system, or designing a screener that includes stimulus material students read before responding to a prompt. The stimulus material could even include recorded segments for prompts designed to test listening skills. Also, because PEG Writing offers unlimited opportunities for practice, teachers can better monitor progress and collect more data.
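
As a rough illustration of what a gated, two-stage screen of this kind might look like, the sketch below scores every student with PEG Writing first and consults a second screener only for students the writing screener flags. The reading measure, both cut scores, and all names here are assumptions for illustration, not details specified in the study.

    # Sketch of a gated two-stage screen (details are assumptions, not the
    # study's specification): stage 1 is a PEG Writing score for every student;
    # stage 2 runs only for students flagged at stage 1.
    WRITING_CUT = 12       # assumed stage-1 cut, taken from the lower band above
    READING_CUT = 0.40     # hypothetical stage-2 cut on a 0-1 reading measure

    def needs_supplemental_instruction(peg_score, reading_screener):
        """Refer a student only if both screeners place them below their cuts."""
        if peg_score >= WRITING_CUT:
            return False                         # cleared at stage 1; stage 2 not needed
        return reading_screener() < READING_CUT  # stage 2 runs only for flagged students

    # Hypothetical example: a flagged writing score and a stand-in reading screener
    print(needs_supplemental_instruction(10, reading_screener=lambda: 0.35))  # True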

These findings show that PEG can be used to guide decisions about which students are at risk and may require supplemental instruction. Furthermore, these findings are promising because alternative writing screeners are time-consuming to administer and score and are subject to poor reliability (i.e., their scores may be untrustworthy). In contrast, PEG scores students’ essays with a high degree of reliability and does so immediately, allowing educators to reduce the gap between assessment, identification, and intervention.