Performance Enhancement of Automatic Short Answer Grading (ASAG) Using Deep Learning

Rupal Chaudhari, Manish Patel, Ankur Goswami

Abstract

The number of blended learning courses has grown considerably, even before accounting for the impact of the pandemic, prompting interest in how appropriate and beneficial automated assessment can be in such settings. In this study, we investigate automatic short answer grading (ASAG): the application of machine learning, and in particular deep learning techniques, to grade student responses that are subject to strict length limits. Although ASAG has been studied for more than half a century, it remains one of the most active areas of research in natural language processing (NLP), because it provides a foundation for reasoning about answers that are more conversational or open-ended. One of the most fundamental challenges ASAG faces is the lack of sufficient training data, including labeled data and domain-relevant data. This work investigates these questions using a variety of deep learning approaches; specifically, it presents a comprehensive analysis of deep learning models, the curation of datasets, and evaluation criteria for ASAG tasks. The study concludes by developing guidelines for educators, with the goal of enhancing the utility of ASAG research materials.
