A Review of Automated Essay Scoring (AES)

Authors

Y. Al Moaiad, M. Alobed, M. Alsakhnini, W. Al-Haithami

Keywords:

Automatic essay scoring; automatic essay grading; semantic analysis; feedback; natural language processing; deep learning; evaluation metrics; transformer models.

Abstract

Student achievement is commonly measured through written tests, which are still assessed by human graders. Manual evaluation becomes harder as teacher-to-student ratios grow, and it is slow and prone to inconsistency. Although online examinations have largely replaced pen-and-paper testing, computer-based assessment typically handles only multiple-choice questions, not essays and short answers. Researchers have worked on automated essay grading and short-answer scoring for decades, yet assessing an essay across all of its dimensions remains challenging: many studies have evaluated style, comparatively few have evaluated content, and aspects such as relevance and coherence are rarely scored. Automated Essay Scoring (AES) is the difficult task of grading student writing automatically; it reduces human error, fairness concerns, and grading time. Techniques from natural language processing, machine learning, deep learning, and related fields can be applied to this task, and the quality of each component is essential to the overall effectiveness of such systems. This article surveys essay-grading automation, reviewing artificial intelligence and machine learning essay-scoring techniques and their research limitations, with the primary objective of assessing various AES approaches in both intra-domain and inter-domain settings.
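Because the keywords above highlight evaluation metrics, the sketch below illustrates quadratic weighted kappa (QWK), the agreement measure most commonly used to compare AES systems against human raters. It is a minimal illustration only, not the metric definition of any specific system surveyed in this review; the function name and the 0-3 score scale in the example are hypothetical.

import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, min_rating, max_rating):
    """Quadratic weighted kappa (QWK) between two sets of integer ratings."""
    rater_a = np.asarray(rater_a, dtype=int)
    rater_b = np.asarray(rater_b, dtype=int)
    n = max_rating - min_rating + 1

    # Observed agreement: normalized confusion matrix of the two raters.
    observed = np.zeros((n, n))
    for a, b in zip(rater_a, rater_b):
        observed[a - min_rating, b - min_rating] += 1
    observed /= observed.sum()

    # Expected agreement under chance: outer product of the marginals.
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))

    # Quadratic penalty: zero on the diagonal, growing with score distance.
    grid = np.arange(n)
    weights = (grid[:, None] - grid[None, :]) ** 2 / (n - 1) ** 2

    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Hypothetical example: human scores vs. model scores on a 0-3 scale.
human = [0, 1, 2, 3, 2, 1]
model = [0, 1, 1, 3, 2, 2]
print(round(quadratic_weighted_kappa(human, model, 0, 3), 2))  # 0.82

QWK penalizes a prediction more the farther it falls from the human score, which is why it is preferred over plain accuracy in AES evaluation, where human raters themselves often disagree by a point.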

Published

2025-01-03

How to Cite

Al Moaiad, Y., Alobed, M., Alsakhnini, M., & Al-Haithami, W. (2025). A Review of Automated Essay Scoring (AES). International Journal on Contemporary Computer Research (IJCCR), 1(1), 68–86. Retrieved from https://ojs.mediu.edu.my/index.php/IJCCR/article/view/5658

Issue

Vol. 1 No. 1 (2025)

Section

Natural Language Processing
