Exploring Explainability and Transparency in Automated Essay Scoring Systems: A User-Centered Evaluation

Abstract

In recent years, rapid advances in computer science, including the increased capabilities of machine learning models such as Large Language Models (LLMs) and the accessibility of large datasets, have driven the widespread adoption of AI technology, underscoring the need to design and evaluate these technologies ethically, with attention to their impact on students and teachers. In particular, the rise of Automated Essay Scoring (AES) platforms has made it possible to provide real-time feedback and grades for student essays. Despite the increasing development and use of AES platforms, little research has examined AI explainability and algorithmic transparency and their influence on the usability of these platforms. To address this gap, we conducted a qualitative study of an AI-based essay writing and grading platform, Packback Deep Dives, focusing on the experiences of students and graders. The study aimed to explore the system’s usability as it relates to explainability and transparency and to uncover the resulting implications for users. Participants took part in surveys, semi-structured interviews, and a focus group. The findings reveal several important considerations for evaluating AES systems: the clarity of feedback and explanations, the effectiveness and actionability of that feedback, perceptions and misconceptions of the system, evolving trust in AI judgments, user concerns and fairness perceptions, system efficiency and feedback quality, user interface accessibility and design, and design priorities for system enhancement. These key considerations can help guide the development of effective essay feedback and grading tools that prioritize explainability and transparency to improve usability.

Citation (APA)

Hall, E., Seyam, M., & Dunlap, D. (2024). Exploring Explainability and Transparency in Automated Essay Scoring Systems: A User-Centered Evaluation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 14724 LNCS, pp. 266–282). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-61691-4_18
