Document Type : Research Paper
University of Isfahan
Abstract
The study examined how effectively ChatGPT, compared to human raters, scores writing tasks when the tasks are sequenced from simple to complex (S-C) or from complex to simple (C-S). A correlational design was employed with 113 EFL learners. Two sets of writing tasks were designed according to the SSARC (simplify, stabilize, automatize, reconstruct, complexify) model, incorporating the resource-dispersing and resource-directing components of Robinson's Triadic Componential Framework (TCF). The participants were divided into two groups and took a pre-test; one group then performed the tasks in the S-C sequence, while the other performed them in the C-S sequence. The participants revised their texts in response to feedback on the tasks and then took a post-test. The tests were scored by human raters and by ChatGPT, and a Pearson correlation test was run to measure the agreement between the two. The results indicated a strong positive correlation between IELTS scores assigned by human raters and those assigned by ChatGPT under both the S-C (r = .968, p < .05) and the C-S (r = .860, p < .05) sequencing. These findings suggest that ChatGPT can be an effective tool for writing assessment.
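As a minimal illustration of the statistical procedure the abstract describes, the sketch below computes a Pearson product-moment correlation between two sets of ratings. The score lists are hypothetical stand-ins, not the study's data, and the function is a plain-Python implementation rather than the authors' actual analysis tool (such analyses are typically run in SPSS or with `scipy.stats.pearsonr`).

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Covariance numerator and the two standard-deviation terms
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical IELTS band scores for five essays (illustrative only)
human_scores = [6.0, 6.5, 7.0, 5.5, 8.0]
chatgpt_scores = [6.0, 7.0, 7.0, 5.0, 8.5]

print(f"r = {pearson_r(human_scores, chatgpt_scores):.3f}")
```

An r value close to 1 indicates that the two raters rank and space the essays almost identically, which is the pattern the study reports for both task sequences.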