Le and colleagues examined whether learners' preferences for feedback from human instructors versus generative artificial intelligence (AI) would change after they received feedback from different sources and interface types in an academic English writing task. The study recruited 114 university students who were non-native English speakers and randomly assigned them to four groups: no feedback (control), human instructor feedback, ChatGPT 4.0 in a free-conversation interface, and a structured writing analysis tool powered by ChatGPT. Learners' preferences were measured before and after the task using rating scales and binary-choice questions, and the four groups were compared on post-task preference and preference change.
The results showed that learners already clearly preferred human instructors before the task (87.2% chose human), and this preference remained stable afterward (86.0% chose human), reflecting algorithm aversion in educational settings. However, post-test preference scores differed significantly across the four groups: the human instructor group's ratings were significantly higher than those of both the free-conversation AI group and the control group. Significant differences also emerged on the binary human/AI choice measure: the human instructor and structured AI tool groups both scored higher than the free-conversation AI group. Regarding preference change, the overall mean shift was close to zero, but the differences among groups were significant: the free-conversation AI group showed a slight increase in preference for AI, whereas the human instructor and structured AI tool groups remained more favorable toward humans. In other words, although all three feedback types were effective, the free-conversation interface was the only one that reduced algorithm aversion and increased learners' acceptance of AI, while the structured, one-time feedback tool further reinforced their preference for human instructors.
Based on these findings, the authors argue that enhancing the interactivity and dialogic nature of AI-based learning tools may shape learners' preferences more effectively than improving their technical performance alone. Interactive dialogue allows for clarification and correction, which tempers learners' unrealistic expectation that algorithms must be perfect and mitigates distrust. Overall, the study situates human preference within the context of interface design, offering both empirical insights and cautions for the adoption, product design, and pedagogical integration of AI in education.
Source (Open Access): Le, H., Shen, Y., Li, Z., Xia, M., Tang, L., Li, X., … & Fan, Y. (2025). Breaking human dominance: Investigating learners’ preferences for learning feedback from generative AI and human tutors. British Journal of Educational Technology.

