MI Write is Measurement Incorporated's automated writing evaluation (AWE) program. AWE programs support the teaching and learning of writing by providing automated scores and feedback on students' writing. By easing the burden of providing feedback, MI Write allows teachers to assign more writing and focus their own feedback where it is most needed. In turn, MI Write affords students the increased writing practice they need to improve writing quality. Moreover, MI Write's automated writing quality scores provide timely and reliable assessment data that can be used to examine changes in performance over time, and its automated feedback helps students improve their knowledge of writing quality criteria. MI Write is distinguished by the following features:
- Appropriate for grades 3-12
- Immediate scores and feedback aligned with Education Northwest's 6+1 Trait Writing Model
- Pre-packaged writing prompts—many including stimulus material—for a range of content areas
- Capabilities for teachers to create and assign custom prompts
- A library of pre-writing tools to support writing planning
- Peer review tools
- Interactive student lessons
- Usage and performance reports (for students and teachers)
- Integrated teacher feedback and communication tools
- Tools to support differentiation (prompt recommendations, grade level scoring options, and personalized feedback)
- Accessibility resources such as adaptable font size, background color, and highlighting
- Rostering and class management tools
Most importantly, MI Write is supported by an extensive research base. Researchers have examined (1) the efficacy of automated scoring and feedback in improving writing outcomes, (2) the accuracy of automated scoring as a screener for at-risk writers, (3) effects of AWE in naturalistic implementation contexts, and (4) best practices in AWE implementation to improve writing instruction. Links to select peer-reviewed publications are available below.
Efficacy of AWE in improving writing outcomes
This research uses experimental and quasi-experimental designs to evaluate the efficacy of MI Write in improving writing outcomes.
Cruz Cordero, T., Wilson, J., Myers, M., Palermo, C., Eacker, H., Potter, A., & Coles, J. (2023). Writing motivation and ability profiles and transition after a technology-based writing intervention. Frontiers in Psychology, 14. https://doi.org/10.3389/fpsyg.2023.1196274
Delgado, A., Wilson, J., Palermo, C., Cruz Cordero, T., Myers, M., Eacker, H., Potter, A., Coles, J., & Zhang, S. (2024). Relationships between middle-school teachers' perceptions and application of automated writing evaluation and student performance. In M. Shermis & J. Wilson (Eds.), The Routledge International Handbook of Automated Essay Evaluation (pp. 261-277). Routledge. https://doi.org/10.4324/9781003397618-17
Palermo, C., & Thomson, M. M. (2018). Teacher implementation of self-regulated strategy development with an automated writing evaluation system: Effects on the argumentative writing performance of middle school students. Contemporary Educational Psychology, 54, 255-270. https://doi.org/10.1016/j.cedpsych.2018.07.002
Wilson, J., & Czik, A. (2016). Automated essay evaluation software in English language arts classrooms: Effects on teacher feedback, student motivation, and writing quality. Computers & Education, 100, 94-109. https://doi.org/10.1016/j.compedu.2016.05.004
Wilson, J., Palermo, C., & Wibowo, A. (2024). Elementary English learners' engagement with automated feedback. Learning and Instruction, 91, 101890. https://doi.org/10.1016/j.learninstruc.2024.101890
Wilson, J., & Roscoe, R. D. (2020). Automated writing evaluation and feedback: Multiple metrics of efficacy. Journal of Educational Computing Research, 58(1), 87-125. https://doi.org/10.1177/0735633119830764
Wilson, J., Zhang, F., Palermo, C., Cruz Cordero, T., Myers, M., Eacker, H., Potter, A., & Coles, J. (2024). Predictors of middle school students' perceptions of automated writing evaluation. Computers & Education, 211, 104985. https://doi.org/10.1016/j.compedu.2023.104985
Writing screening with automated scoring
This research examines the viability of MI Write as a screener for at-risk writers.
Chen, D., Hebert, M., & Wilson, J. (2022). Examining human and automated ratings of elementary students' writing quality: A multivariate generalizability theory application. American Educational Research Journal. https://doi.org/10.3102/00028312221106773
Wilson, J. (2018). Universal screening with automated essay scoring: Evaluating classification accuracy in Grades 3 and 4. Journal of School Psychology, 68, 19-37. https://doi.org/10.1016/j.jsp.2017.12.005
Wilson, J., Chen, D., Sandbank, M. P., & Hebert, M. (2019). Generalizability of automated scores of writing quality in grades 3-5. Journal of Educational Psychology, 111, 619-640. https://doi.org/10.1037/edu0000311
Wilson, J., Olinghouse, N. G., McCoach, D. B., Andrada, G. N., & Santangelo, T. (2016). Comparing the accuracy of different scoring methods for identifying sixth graders at risk of failing a state writing assessment. Assessing Writing, 27, 11-23. https://doi.org/10.1016/j.asw.2015.06.003
Wilson, J., & Rodrigues, J. (2020). Classification accuracy and efficiency of writing screening using automated essay scoring. Journal of School Psychology, 82, 123-140. https://doi.org/10.1016/j.jsp.2020.08.008
Naturalistic implementation contexts
This research examines outcomes associated with naturalistic and large-scale implementation of MI Write.
Huang, Y., & Wilson, J. (2021). Using automated feedback to develop writing proficiency. Computers and Composition, 62, 102675. https://doi.org/10.1016/j.compcom.2021.102675
Palermo, C., & Thomson, M. M. (2019). Classroom applications of automated writing evaluation: A qualitative examination of automated feedback. In L. Bailey (Ed.), Educational Technology and the New World of Persistent Learning (pp. 145-175). IGI Global. https://doi.org/10.4018/978-1-5225-6361-7.ch008
Potter, A., & Wilson, J. (2021). Statewide implementation of automated writing evaluation: Analyzing usage and associations with state test performance in grades 4-11. Educational Technology Research and Development, 69(3), 1557-1578. https://doi.org/10.1007/s11423-021-10004-9
Wilson, J. (2017). Associated effects of automated essay evaluation software on growth in writing quality for students with and without disabilities. Reading and Writing, 30, 691-718. https://doi.org/10.1007/s11145-016-9695-z
Wilson, J., Ahrendt, C., Fudge, E. A., Raiche, A., Beard, G., & MacArthur, C. (2021). Elementary teachers' perceptions of automated feedback and automated scoring: Transforming the teaching and learning of writing using automated writing evaluation. Computers & Education, 168, 104208. https://doi.org/10.1016/j.compedu.2021.104208
Wilson, J., & Andrada, G. N. (2016). Using automated feedback to improve writing quality: Opportunities and challenges. In Y. Rosen, S. Ferrara, & M. Mosharraf (Eds.), Handbook of research on technology tools for real-world skill development (pp. 678-703). IGI Global. https://doi.org/10.4018/978-1-4666-9441-5.ch026
Wilson, J., Huang, Y., Palermo, C., Beard, G., & MacArthur, C. A. (2021). Automated feedback and automated scoring in the elementary grades: Usage, attitudes, and associations with writing outcomes in a districtwide implementation of MI Write. International Journal of Artificial Intelligence in Education, 31, 234-276. https://doi.org/10.1007/s40593-020-00236-w
Wilson, J., Myers, M. C., & Potter, A. (2022). Investigating the promise of automated writing evaluation for supporting formative writing assessment at scale. Assessment in Education: Principles, Policy & Practice, 29(2), 183-199. https://doi.org/10.1080/0969594X.2022.2025762
Wilson, J., Olinghouse, N. G., & Andrada, G. N. (2014). Does automated feedback improve writing quality? Learning Disabilities: A Contemporary Journal, 12, 93-118.
Best practices in AWE implementation
This research investigates how best to implement MI Write to improve writing instruction.
Palermo, C., & Wilson, J. (2020). Implementing automated writing evaluation in different instructional contexts: A mixed-methods study. Journal of Writing Research, 12(1), 63-108. https://doi.org/10.17239/jowr-2020.12.01.04
Wilson, J., Potter, A., Cordero, T. C., & Myers, M. C. (2022). Integrating goal-setting and automated feedback to improve writing outcomes: A pilot study. Innovation in Language Learning and Teaching, 1-17. https://doi.org/10.1080/17501229.2022.2077348
Wilson, J., Zhang, S., Palermo, C., Cruz Cordero, T., Zhang, F., Myers, M., Potter, A., Eacker, H., & Coles, J. (2024). A latent Dirichlet allocation approach to understanding students' perceptions of automated writing evaluation. Computers and Education Open. https://doi.org/10.1016/j.caeo.2024.100194
Scoring Services
We offer on-demand essay scoring services to researchers and others seeking reliable, generalizable essay scores. These services use the same automated essay scoring models as MI Write, which can score essays written by students in grades 3-12 in response to any informational, narrative, or persuasive/argumentative prompt. How it works:
- Contact us at MIMarketing@measinc.com with your scoring request.
- Send us your essays using the formatting and secure delivery specifications we provide.
- Receive essay trait scores for each of Conventions, Ideas, Organization, Sentence Fluency, Style, and Word Choice.
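For researchers planning downstream analyses, the sketch below shows one way returned trait scores might be loaded and summarized. It is a minimal Python example under assumed conditions: the file name trait_scores.csv, the essay_id column, and the numeric score scale are hypothetical placeholders rather than part of our delivery specification; only the six trait names match the list above.

```python
# Hypothetical sketch only: the file name, column headers, and score
# scale are illustrative assumptions, not MI's actual delivery format.
import csv
from dataclasses import dataclass

TRAITS = ["Conventions", "Ideas", "Organization",
          "Sentence Fluency", "Style", "Word Choice"]

@dataclass
class EssayScores:
    essay_id: str
    traits: dict[str, float]  # trait name -> automated score

def load_scores(path: str) -> list[EssayScores]:
    """Read one row per essay from a hypothetical CSV of trait scores."""
    with open(path, newline="") as f:
        return [
            EssayScores(row["essay_id"],
                        {t: float(row[t]) for t in TRAITS})
            for row in csv.DictReader(f)
        ]

if __name__ == "__main__":
    for essay in load_scores("trait_scores.csv"):
        mean = sum(essay.traits.values()) / len(TRAITS)
        print(f"{essay.essay_id}: mean trait score = {mean:.2f}")
```

The across-trait mean is computed purely for illustration; whether traits should be averaged or analyzed separately depends on your research design.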