A Review of Automated Essay Scoring (AES)


Yazeed Al Moaiad
https://orcid.org/0000-0002-0801-9887
Mohammad Alobed
Mahmoud Alsakhnini
Wafa Al-Haithami

Abstract

Student achievement is commonly measured through tests, and essays are still assessed manually by humans. Manual evaluation becomes harder as teacher-to-student ratios grow, and it is slow and unreliable. Online examinations have largely replaced pen-and-paper tests, but computer-based testing typically evaluates only multiple-choice questions, not essays and short answers. Researchers have worked on automated essay grading and short-answer scoring for decades, yet assessing an essay across all its elements remains challenging: few studies have evaluated content, while many have focused on style, and attributes such as relevance and coherence are rarely rated. Automated Essay Scoring (AES) is the difficult task of grading student writing automatically; it reduces human error, fairness issues, and grading time. Many methods can be applied to this task, including natural language processing, machine learning, and deep learning, and high-quality components are essential to the overall efficiency of such systems. This article reviews essay-grading automation, exploring artificial-intelligence and machine-learning scoring techniques and the limitations of existing research. Its primary objective is to assess various AES approaches in both intra- and inter-domain contexts.
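To make the machine-learning approach concrete, the sketch below shows one minimal AES pipeline of the kind surveyed in such reviews: essays are converted to TF-IDF features and a regularized linear regressor predicts a holistic score. This is an illustrative assumption, not the specific method of any system discussed in the paper; the scikit-learn components, the toy essays, and the scores are all hypothetical.

```python
# A minimal sketch of a machine-learning AES pipeline (illustrative only):
# essays are vectorized with TF-IDF and a ridge regressor predicts holistic scores.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical toy data; a real system would train on a scored essay corpus.
train_essays = [
    "The experiment shows that plants grow faster with more light.",
    "light good plant grow",
]
train_scores = [5.0, 2.0]  # human-assigned holistic scores

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),  # word and bigram features
    Ridge(alpha=1.0),                               # regularized linear regression
)
model.fit(train_essays, train_scores)

new_essay = ["More light makes plants grow faster in the experiment."]
print(model.predict(new_essay))  # predicted score for the unseen essay
```

Surface-feature pipelines like this capture style and vocabulary but not relevance or coherence, which is precisely the limitation the abstract notes; deep-learning approaches attempt to address it with learned text representations.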

Article Details

How to Cite
Al Moaiad, Y., Alobed, M., Alsakhnini, M., & Al-Haithami, W. (2025). A Review of Automated Essay Scoring (AES). International Journal on Contemporary Computer Research (IJCCR), 1(1), 68-86. Retrieved from http://ojs.mediu.edu.my/index.php/IJCCR/article/view/5658
Section
Natural Language Processing
