A Review of Automated Essay Scoring (AES)
Abstract
Student achievement is commonly measured through written tests, but manual evaluation becomes increasingly difficult as teacher-to-student ratios grow, and it is slow and prone to inconsistency. Online examinations have largely replaced pen-and-paper testing; however, computer-based testing readily evaluates only multiple-choice questions, not essays and short answers. Researchers have worked on automated essay grading and short-answer scoring for decades, yet assessing an essay across all of its elements remains challenging: many studies have evaluated style, few have evaluated content, and attributes such as relevance and coherence are rarely scored at all. Automated Essay Scoring (AES), the difficult task of grading student essays automatically, reduces human error, fairness concerns, and grading time. Techniques from natural language processing, machine learning, and deep learning, among others, may be applied to this task, and the quality of the individual components is essential to the overall effectiveness of such systems. This article reviews the automation of essay grading, surveying artificial intelligence and machine learning approaches to essay scoring and the limitations of existing research. The paper's primary objective is to assess various AES techniques in both intra-domain and inter-domain contexts.