Investigating the Utility of Self-explanation Through Translation Activities with a Code-Tracing Tutor


Abstract

Code tracing is a foundational programming skill that involves simulating a program's execution line by line, tracking how variables change at each step. To code trace, students need to understand what a given program line means, which can be accomplished by translating it into plain English. Translation can be characterized as a form of self-explanation, a general learning mechanism that involves making inferences beyond the instructional materials. Our work investigates whether this form of self-explanation improves learning from a code-tracing tutor we created with the CTAT framework. We built two versions of the tutor. In the experimental version, students were asked to translate lines of code while solving code-tracing problems; in the control version, students were asked only to code trace, without translating. The two versions were compared in a between-subjects study (N = 44). The experimental group performed significantly better on translation and code-generation questions, whereas the control group performed significantly better on code-tracing questions. We discuss the implications of this finding for the design of tutors providing code-tracing support.
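
For readers unfamiliar with the activity, the following minimal sketch illustrates what code tracing and line-by-line translation look like. The program and its plain-English translations are hypothetical examples for illustration only; they are not taken from the authors' CTAT-based tutor.

    # A short hypothetical program of the kind used in code-tracing exercises.
    # Each line carries a plain-English "translation" (the experimental activity)
    # and a trace of variable values after the line executes (the tracing activity).

    total = 0              # "Create a variable named total and set it to 0."
                           # trace: total = 0
    for i in range(1, 4):  # "Repeat the indented line with i taking the values 1, 2, 3."
        total = total + i  # "Add the current value of i to total and store the result back in total."
                           # trace: i = 1, total = 1
                           #        i = 2, total = 3
                           #        i = 3, total = 6
    print(total)           # "Display the value of total."  -> prints 6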

Citation (APA)

Caughey, M., & Muldner, K. (2023). Investigating the Utility of Self-explanation Through Translation Activities with a Code-Tracing Tutor. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13916 LNAI, pp. 66–77). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-36272-9_6
