Automatic assessment of short answer questions: Review

Abstract

With the rapid development of technology, automated assessment systems have become an essential tool for grading short-answer questions. This research explores ways to improve the accuracy of automated assessment using natural language processing techniques such as Latent Semantic Analysis (LSA) and the Longest Common Subsequence (LCS) algorithm, while highlighting the challenges associated with the scarcity of Arabic-language datasets. These methods assess both lexical and semantic agreement between student submissions and benchmark answers. The overarching objective is to establish a scalable and precise grading mechanism that reduces the time cost and subjectivity of manual evaluation. Notwithstanding significant advances, obstacles such as the scarcity of Arabic datasets persist as a principal impediment to effective automated grading in languages other than English. This review scrutinizes contemporary strategies within the domain, highlighting the need for more sophisticated models and more extensive datasets to bolster the precision and adaptability of automated grading frameworks, particularly for Arabic text.
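The two techniques named in the abstract can be sketched briefly. LCS measures lexical overlap by finding the longest in-order word sequence shared by a student answer and a reference answer; LSA measures semantic similarity by factoring a term-document count matrix with SVD and comparing documents in the reduced latent space. The function names, the toy documents, and the rank-2 truncation below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lcs_length(a, b):
    # Standard dynamic-programming table for the Longest Common
    # Subsequence of two token lists.
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def lcs_similarity(student, reference):
    # Normalized lexical similarity in [0, 1]: 2*LCS / (|s| + |r|).
    s, r = student.lower().split(), reference.lower().split()
    if not s or not r:
        return 0.0
    return 2 * lcs_length(s, r) / (len(s) + len(r))

def lsa_similarity(docs, i, j, k=2):
    # Build a term-document count matrix, truncate its SVD to rank k,
    # and return the cosine similarity of docs i and j in latent space.
    vocab = sorted({w for d in docs for w in d.lower().split()})
    X = np.array([[d.lower().split().count(w) for d in docs]
                  for w in vocab], dtype=float)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    k = min(k, len(S))
    coords = (np.diag(S[:k]) @ Vt[:k]).T  # one row per document
    a, b = coords[i], coords[j]
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```

In a grading pipeline the two scores would typically be combined (e.g. a weighted average) and mapped to a mark; for Arabic text, tokenization and morphological normalization would need language-specific preprocessing, which the sketch above omits.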


Citation (APA)
Mahmood, S. A., & Abdulsamad, M. A. (2024). Automatic assessment of short answer questions: Review. Edelweiss Applied Science and Technology, 8(6), 9158–9176. https://doi.org/10.55214/25768484.v8i6.3956
