Supportive Automated Feedback for Short Essay Answers
1st September 2012 - 31st October 2014
The aim of this research was to produce an effective automated interactive feedback system that provides an acceptable level of support for university students writing essays in a distance-learning or e-learning context.
The natural language processing problems at the centre of this project are how to ‘understand’ a student essay well enough to provide accurate, individually targeted feedback, and how to generate that feedback automatically. The educational problem is how to develop and evaluate effective models of feedback. This requires research into the selection of feedback content, its mode of presentation, and its delivery.
The research questions include:
- Can we extend existing essay-marking techniques to detect passages on which a human marker would usually give some feedback?
- Can we adapt existing information extraction, summarisation, and key-phrase extraction methods to select content for such feedback?
- Can automatic sentence generation methods deliver this feedback in a natural way?
- What effect does summarisation have on essay improvement?
- How does the provision of hints affect both the essay being written and essay writing in the future?
- What effect does automated feedback have on the student's levels of self-regulation and metacognition in the writing task?
We have produced:
- an essay assessment engine that generates a profile of an essay, providing the basis for feedback;
- a feedback generator that uses customised, corpus-informed sentence generation, summarisation, and key-phrase extraction;
- a systematic evaluation of the feedback generated under various conditions in terms of its reliability and validity;
- a working system incorporating these components (OpenEssayist) that is suitable for use by both learners and teachers;
- a field evaluation of the effectiveness of automated feedback on short essay answers.
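To make the key-phrase extraction component concrete: the project's actual methods are not detailed here, but a minimal frequency-based baseline (an illustrative sketch only, not the OpenEssayist implementation; the stop-word list and function name are assumptions) could look like:

```python
import re
from collections import Counter

# A tiny stop-word list for illustration; real systems use much fuller lists.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "are",
             "that", "this", "it", "on", "for", "with", "as", "be", "by"}

def key_phrases(text, top_n=5):
    """Return the top_n most frequent non-stop-word terms in an essay.

    A deliberately simple frequency baseline; graph-based methods such
    as TextRank are a common, stronger alternative.
    """
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [w for w, _ in counts.most_common(top_n)]

essay = ("Feedback helps students revise their essays. Automated feedback "
         "systems analyse essay structure and suggest revisions, so students "
         "can improve each essay draft before submission.")
print(key_phrases(essay, top_n=3))
```

Such a term profile could feed the feedback generator, e.g. by checking whether the essay's most frequent terms match those expected for the assignment topic.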