Using conversation analysis to investigate student–AI chatbot interaction
Micro-interactional features and interactional competence (IC) research based on conversation analysis (CA)
This group forms the core of the topic. Using classic conversation analysis methods, these studies examine turn-taking, correction mechanisms, discourse markers, displays of acceptance, and repair strategies in students' interactions with AI. They aim to reveal the structural features of human-machine dialogue and to assess how learners construct and display pragmatic and interactional competence (Interactional Competence) in AI-mediated environments.
- Conversational Analysis of Learner – AI Chatbot Interactions in Developing Spoken Fluency (A. Matiienko-Silnytska, N.M. Mikava, Iryna Savranchuk, Neonila Tkhor, Hanna Poliakova, 2025, Arab World English Journal)
- explain. write. edit. summarise: An Exploratory Study on Agency Negotiations in Student-Chatbot Conversations (Á. Einarsson, Ekaterina Pashevich, 2026, Human-Machine Communication)
- Intent‐Based Versus GPT‐Based Conversational Agents: Benefits and Challenges for Practicing and Assessing Oral Interaction (V. Timpe‐Laughlin, Rahul Divekar, Tetyana Sydorenko, Judit Dombi, Saerhim Oh, 2025, TESOL Quarterly)
- Situated L2 pronunciation instruction during small-group robot-assisted language learning activities (Teppo Jakonen, Derya Duran, Pauliina Peltonen, 2025, Language Teaching Research)
- Assessments in L2 conversation-for-learning discussions (Eunseok Ro, Josephine Mijin Lee, 2025, Language Teaching Research)
- Establishing recipiency in divergent L2 contexts of classroom interaction: A conversation analysis (Mengistu Anagaw Engida, Haile Kassahun Bewuket, Mekonnen Esubalew Tariku, Wondiyfraw Mhiret Dessie, 2024, Heliyon)
- L2 grammar‐for‐interaction: Functions of “and”‐prefaced turns in L2 students’ collaborative talk (František Tůma, Leila Kääntä, Teppo Jakonen, 2023, The Modern Language Journal)
- Locating Chinese L2 interactional competence and intercultural communicative competence in turn management (Yi Wang, 2026, Chinese as a Second Language (漢語教學研究—美國中文教師學會學報). The journal of the Chinese Language Teachers Association, USA)
- I Wanna Talk Like You: Speaker Adaptation to Dialogue Style in L2 Practice Conversation (Arabella J. Sinclair, Rafael Ferreira, D. Gašević, Christopher G. Lucas, Adam Lopez, 2019, No journal)
- Metadiscourse in Simulation: Reflexivity of/as Communication Skills Learning (G. Peters, 2022, Teaching and Learning in Medicine)
- Human‐ versus artificial intelligence‐delivered roleplay tasks for assessing interactional competence: An applied conversation analytic study (Masaki Eguchi, Kotaro Takizawa, Mao Saeki, Fuma Kurata, Shungo Suzuki, Yoichi Matsuyama, Yasuyo Sawaki, 2025, TESOL Quarterly)
- The mediative role of learning materials: Raising L2 learners’ awareness of silence and conversational repair during L2 interaction (Seiko Harumi, 2023, Journal of Silence Studies in Education)
- Category-activity puzzles as resources for humor in L2 classrooms (Nimet Çopur, 2025, HUMOR)
- From Conversation to Interaction: A Pedagogical Exploration of Applying Conversation Analysis in EFL Classrooms (Xiao Han, 2024, Teaching English as a Second or Foreign Language--TESL-EJ)
- Interactional Practices and Normative Expectations in EFL Classrooms: A Conversation Analysis Approach to Turn-Taking (Abdessatar Azennoud, Achraf Guaad, Khawla Lamghari, 2025, Journal of Humanities and Social Sciences Studies)
- Repairing problems of acceptability in pre-task interaction (David Shimamoto, 2025, Classroom Discourse)
- Learner-initiated grammar explanations in L2 French classroom interaction (Loanne Janin, 2025, European Journal of Applied Linguistics)
- Discourse markers in L2 learners' responses to teacher‐generated compliments during classroom interaction (Mostafa Morady Moghaddam, 2023, Foreign Language Annals)
- Extending repair in peer interaction: A conversation analytic study (Miam Chen, Shelly Xueting Ye, 2022, Frontiers in Psychology)
Pedagogical effectiveness and feedback evaluation of AI chatbot-assisted language learning (SLA)
These studies focus on empirical research into the actual effectiveness of AI tools in improving students' spoken fluency, pronunciation, grammar, vocabulary acquisition, and listening comprehension. They analyze the differences between AI-provided immediate corrective feedback (Corrective Feedback) and teacher feedback, as well as AI's role in strengthening learner autonomy and reducing language anxiety.
- Optimizing ESL Learners’ Speech Act Performance: The Role of AI-Powered Chatbots in Pragmatic Competence Development (Gohar Rahman, B. Mudhsh, Mohammad Almutairi, Marouan Kouki, 2025, Theory and Practice in Language Studies)
- AI Conversational Agents for Corporate Language Learning: Enhancing Engagement and Retention (Fernando Salvetti, Barbara Bertagni, Ianna Contardo, 2025, Int. J. Adv. Corp. Learn.)
- Personalized language learning with an LLM chatbot: effects of immediate vs. delayed corrective feedback (Alireza M. Kamelabad, Beatrice Turano, Mattias Lundin, Gabriel Skantze, 2026, Frontiers in Education)
- Impact of AI gamification on EFL learning outcomes and nonlinear dynamic motivation: Comparing adaptive learning paths, conversational agents, and storytelling (Liu Liu, 2024, Education and Information Technologies)
- Improving Primary School Students' Oral Reading Fluency Through Voice Chatbot-Based AI (Mohamed Ali Nagy Elmaadaway, M. El-Naggar, Mohamed Radwan Ibrahim Abouhashesh, 2025, J. Comput. Assist. Learn.)
- Is Artificial Intelligence in Education an Object or a Subject? Evidence from a Story Completion Exercise on Learner-AI Interactions (G. Veletsianos, Shandell Houlden, Nicole Johnson, 2024, TechTrends)
- The role of psycholinguistics for language learning in teaching based on formulaic sequence use and oral fluency (Yue Yu, 2022, Frontiers in Psychology)
- Learning between Human-made and AI-generated Content: A Multimodal Discourse Analysis of Selected EFL Educational Reels (M. M. Ali, Mamdouh M. Elaskalany, 2025, مجلة البحث العلمي في الآداب)
- The AI chatbot interaction for semantic learning: A collaborative note-taking approach with EFL students (Mei-Rong Alice Chen, 2024, Language Learning & Technology)
- Effects of learner uptake following automatic corrective recast from Artificial Intelligence chatbots on the learning of English caused-motion construction (Rakhun Kim, 2024, Language Learning & Technology)
- The role of AI-powered chatbots in enhancing second language acquisition: An empirical investigation of conversational AI assistants (Jordan R. Taeza, 2025, Edelweiss Applied Science and Technology)
- Integrating chatbot technology into English language learning to enhance student engagement and interactive communication skills (Jing Zhang, 2025, Journal of Computational Methods in Sciences and Engineering)
- Enhancing English Speaking Skills in Romanian 8th Grade Classrooms through Conversational AI: The Case of Character.AI (Alice Siretean, 2025, Interconnected Learning and Teaching International Journal for Foreign Languages)
- AI Tools For Speaking Fluency And Pronunciation: Effectiveness And Limitations (Babayeva Komila Rishatovna, 2025, European International Journal of Philological Sciences)
- Integrating Conversational AI Tools to Enhance Pronunciation, Listening Skills, and Learner Autonomy in Second Language Education (Donny Adiatamana Ginting, Shahzadi Hina, 2025, Global Education : International Journal of Educational Sciences and Languages)
- A Case Study on Middle School Students' Learning Experience in Free English Conversation with Generative AI Chatbots (C. Lee, Namin Shin, 2025, Korean Association For Learner-Centered Curriculum And Instruction)
- Voice-Based Chatbots for English Speaking Practice in Multilingual Low-Resource Indian Schools: A Multi-Stakeholder Study (Sneha Shashidhara, Vivienne Bihe Chi, Abhay P Singh, Lyle Ungar, S. Guntuku, 2026, ArXiv)
- Improving Learning Efficacy on Duolingo via Generative AI and the Learner Feedback Loop (Natalie Glance, 2025, Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2)
- LEARNENGLISH TEENS: AN AI-MEDIATED TEACHING SPEAKING SKILL IN THE EFL CLASSROOM (Ida Royani, 2025, Esteem Journal of English Education Study Programme)
Dynamic behavioral patterns and sequential pathway analysis of student-AI collaborative interaction
These studies use learning analytics techniques (such as epistemic network analysis (ENA) and conversation pathway analysis) to identify macro-level behavioral patterns in student-AI interaction. Foci include how students initiate feedback requests, how self-regulated learning (SRL) ability is displayed, how linguistic synchrony affects performance, and how interaction strategies evolve dynamically over time.
- Analysing Conversation Pathways with a Chatbot Tutor to Enhance Self-Regulation in Higher Education (Ludmila Martins, M. Fernández-Ferrer, Eloi Puertas, 2024, Education Sciences)
- Talking in Sync: How Linguistic Synchrony Shapes Teacher-Student Conversation in English as a Second Language Tutoring Environment (A. P. Aguinalde, Jinnie Shin, 2025, Proceedings of the 15th International Learning Analytics and Knowledge Conference)
- The Process of Undergraduates' Collaboration With a Generative Artificial Intelligence Chatbot: Insights From Conversation Content and Epistemic Network Analysis (Weipeng Shen, Xiao-Fan Lin, Jiachun Liu, Xinxian Liang, Ruiqing Chen, Xiaoyu Lai, Xinwen Zheng, 2025, J. Comput. Assist. Learn.)
- AI Chatbot Use in Higher Education: A Life-Course Perspective on Student Engagement and Cognitive Learning Outcomes (Muh. Nurfajri Syam, Muh Nurul Ainal Hakim, Della Fadhilatunisa, Saipul Abbas, 2026, Artificial Intelligence in Lifelong and Life-Course Education)
- Leveraging Process-Action Epistemic Network Analysis to Illuminate Student Self-Regulated Learning with a Socratic Chatbot (Joel Weijia Lai, Wei Qiu, Muang Thway, Lei Zhang, Nurabidah Binti Jamil, Chit Lin Su, S. S. Ng, Fun Siong Lim, 2025, J. Learn. Anal.)
- ChEDDAR: Student-ChatGPT Dialogue in EFL Writing Education (Jieun Han, Haneul Yoo, Junho Myung, Minsun Kim, T. Lee, So-Yeon Ahn, Alice H. Oh, 2023, ArXiv)
- From Prompt to Polished: Exploring Student–Chatbot Interactions for Academic Writing Assistance (Maya Usher, Meital Amzalag, 2025, Education Sciences)
- Students' Feedback Requests and Interactions with the SCRIPT Chatbot: Do They Get What They Ask For? (Andreas Scholl, Natalie Kiesler, 2025, ArXiv)
- Reading with a Chatbot - The added value of Generative AI as a resource in mediating learners in Dynamic Assessment of L2 English reading (D. Leontjev, M. E. Poehner, Ari Huhta, Pirjo Pollari, 2025, Studies in Language Assessment)
- A Comparative Study of Teacher Feedback and Chatbot Feedback on Second Language Learners’ Pragmalinguistic and Sociopragmatic Competences (Z. F. Ajabshir, 2024, International Journal of Human–Computer Interaction)
- The Impact of Different Conversational Generative AI Chatbots on EFL Learners: an Analysis of Willingness to Communicate, Foreign Language Speaking Anxiety, and Self-perceived Communicative Competence (Chenghao Wang, Bin Zou, Yiran Du, Zixun Wang, 2024, System)
- ARCHIE: Exploring Language Learner Behaviors in LLM Chatbot-Supported Active Reading Log Data with Epistemic Network Analysis (Steve Woollaston, B. Flanagan, Patrick Ocheja, Yuko Toyokawa, Hiroaki Ogata, 2025, Proceedings of the 15th International Learning Analytics and Knowledge Conference)
- How Students' Self-Regulated Learning Abilities Influence Intents and Engagement Goals in Chatbot-Assisted Writing (Dongyub Lee, Sabine Lee, Sidney S Fels, Kyoungwon Seo, 2025, Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems)
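To make the ENA approach used by several of the studies above more concrete, the sketch below builds a code co-occurrence count from a coded conversation log. The codes ("Comprehension", "Generation", "Evaluation"), the sample turns, and the `cooccurrence` helper are all invented for illustration; real ENA additionally normalizes these counts per unit of analysis and projects them into a low-dimensional space (typically via SVD).

```python
# Toy sketch of the co-occurrence step behind epistemic network analysis (ENA).
# Codes and conversation data are invented; real ENA also normalizes counts
# and applies dimensional reduction across units of analysis.
from itertools import combinations
from collections import Counter

# Each turn in a coded student-chatbot conversation is tagged with a set of codes.
coded_turns = [
    {"Comprehension"},
    {"Comprehension", "Evaluation"},
    {"Generation"},
    {"Generation", "Evaluation"},
]

def cooccurrence(turns, window=2):
    """Count code pairs that co-occur within a sliding window of turns."""
    counts = Counter()
    for i in range(len(turns)):
        # Pool the codes of the current turn and its recent context.
        stanza = set().union(*turns[max(0, i - window + 1): i + 1])
        for a, b in combinations(sorted(stanza), 2):
            counts[(a, b)] += 1
    return counts

print(cooccurrence(coded_turns))
```

The resulting pair counts are what an ENA tool would render as edge weights in the student's epistemic network.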
Architecture design, dataset construction, and evaluation frameworks for educational dialogue systems
This part of the literature addresses the technical foundations and methodology, including multi-role agents built for educational scenarios, multimodal interaction systems, knowledge tracing techniques, and dedicated discourse annotation frameworks and evaluation tools (e.g., FlexEval). The goal is to provide standardized datasets and systematic evaluation criteria for AI applications in education.
- InterviewBot: Real-Time End-to-End Dialogue System to Interview Students for College Admission (Zihao Wang, Jinho D. Choi, 2023, ArXiv)
- CollaClassroom: An AI-Augmented Collaborative Learning Platform with LLM Support in the Context of Bangladeshi University Students (Salman Sayeed, Bijoy Ahmed Saiem, Al-Amin Sany, Sadia Sharmin, A. Islam, 2025, ArXiv)
- Towards a Multimodal Document-grounded Conversational AI System for Education (Karan Taneja, Anjali Singh, Ashok K. Goel, 2025, No journal)
- AI Chatbots as Multi-Role Pedagogical Agents: Transforming Engagement in CS Education (C. Cao, Zijian Ding, Jionghao Lin, F. Hopfgartner, 2023, ArXiv)
- Exploring Knowledge Tracing in Tutor-Student Dialogues using LLMs (Alexander Scarlatos, Ryan S. Baker, Andrew S. Lan, 2024, Proceedings of the 15th International Learning Analytics and Knowledge Conference)
- Mobile-based artificial intelligence chatbot for self-regulated learning in a hybrid flipped classroom (Insook Han, Hyangeun Ji, Seoyeon Jin, Koun Choi, 2025, Journal of Computing in Higher Education)
- FlexEval: a customizable tool for chatbot performance evaluation and dialogue analysis (S. Christie, Baptiste Moreau-Pernet, Yu Tian, John Whitmer, 2024, No journal)
- Predicting Learning Styles in a Conversational Intelligent Tutoring System (A. Latham, Keeley A. Crockett, D. Mclean, B. Edmonds, 2010, No journal)
- La dimensione formativa dell’interazione con le chatbot AI. Per una pedagogia dialogica digitale [The formative dimension of interaction with AI chatbots: Towards a digital dialogic pedagogy] (Maria Rita Mancaniello, Francesco Lavanga, 2024, Studi sulla Formazione/Open Journal of Education)
- A Systematic Approach to Evaluate the Use of Chatbots in Educational Contexts: Learning Gains, Engagements and Perceptions (Wei Qiu, Chit Lin Su, Nurabidah Binti Jamil, Maung Thway, S. S. Ng, Lei Zhang, Fun Siong Lim, Joel Weijia Lai, 2025, Comput.)
- From Words to Wisdom: Discourse Annotation and Baseline Models for Student Dialogue Understanding (Farjana Sultana Mim, Shuchin Aeron, Eric L. Miller, Kristen Wendell, 2025, ArXiv)
- Early insights into SLA with chatGPT: Navigating CS teachers and student perspectives in an opinion-based exploration (C. Boudia, Krismadinata, 2024, Edelweiss Applied Science and Technology)
- Sahar Dataset: a Validated Dialogue Based Dataset For a Child-Centric, Empathetic and Knowledge-Driven Chatbot (Hadi Al Khansa, Ahmad Mustapha, Mariette Awad, 2025, Proceedings of the AAAI Symposium Series)
- A proposed methodology for investigating student-chatbot interaction patterns in giving peer feedback (M. P. Lin, Daniel H. Chang, Philip H. Winne, 2024, Educational technology research and development)
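The "knowledge tracing" mentioned in this group can be illustrated with the classic Bayesian Knowledge Tracing (BKT) update, sketched below. This is a generic textbook formulation, not the method of any specific paper listed above, and the parameter values (slip, guess, learn probabilities) are invented for the example.

```python
# Minimal Bayesian Knowledge Tracing (BKT) update: a sketch of the
# knowledge-tracing idea; parameter values are illustrative only.
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.3):
    """Return P(skill known) after observing one answer, then apply learning."""
    if correct:
        # Correct answer: either knew it (and didn't slip) or guessed.
        num = p_know * (1 - p_slip)
        den = num + (1 - p_know) * p_guess
    else:
        # Wrong answer: either knew it but slipped, or didn't know and didn't guess.
        num = p_know * p_slip
        den = num + (1 - p_know) * (1 - p_guess)
    posterior = num / den
    # Transition: an unknown skill may be learned after the opportunity.
    return posterior + (1 - posterior) * p_learn

# Trace one student's mastery estimate over three observed answers.
p = 0.5
for answer in [True, True, False]:
    p = bkt_update(p, answer)
```

Each correct observation raises the mastery estimate and each error lowers it, which is the core mechanism dialogue-based knowledge tracing systems build on.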
Socio-emotional, power-relational, and critical discourse perspectives on AI interaction
This group examines human-machine interaction from a sociocultural perspective, exploring AI's influence on classroom power structures, the reshaping of learner identity, and the perceived usefulness of AI systems. The research also covers sentiment analysis and the construction of empathy, and reflects on AI's positioning as "mentor" or "examiner" in education.
- EFL Student's Experiences with AI Chatbots: A Critical Discourse Analysis (Bayanuddin Munir, U. Sulaiman, M. Rasyid, Ahmad Afiif, 2025, Langkawi: Journal of The Association for Arabic and English)
- Development and validation of the perceived interactivity of learner-AI interaction scale (Feifei Wang, Alan C. K. Cheung, Ching-sing Chai, Jin Liu, 2024, Education and Information Technologies)
- Development of adaptive and emotionally intelligent educational assistants based on conversational AI (Rommel Gutierrez, W. Villegas-Ch., Jaime Govea, 2025, Frontiers in Computer Science)
- The effect of chatbot-supported instruction on nursing students' history-taking questioning skills and stress level: A randomized controlled study (Sıdıka Kestel, Afra Çalık, Mustafa Kuş, 2025, Journal of professional nursing : official journal of the American Association of Colleges of Nursing)
- Mentor or Examiner? A Critical Discourse Analysis of AI-Generated Feedback in EFL Writing Education (Liang Ting, Charanjit Kaur Swaran Singh, Tee Tze Kiong, Farkhodjon Rakhimjonov, Tarsame Singh Masa Singh, 2025, International Journal of Academic Research in Progressive Education and Development)
- Stereotypes and prejudices in the Italian L2 class. A conversation analysis of their emergence in teachers' talk (Nicola Nasi, L. Caronia, 2023, EDUCATION SCIENCES AND SOCIETY)
- Idiographic self-regulated affordance uptake in AI-mediated language learning (Q. Nguyen, Dung Thi Hue Doan, 2025, Cogent Education)
- A study of students’ perception of character AI in practicing English speaking fluency (Maya Farhanna Napitupulu, Ahmad Amin Dalimunte, 2025, Celtic : A Journal of Culture, English Language Teaching, Literature and Linguistics)
- Generative conversational AI: Active practices for fostering students with mild intellectual disabilities to improve English communication skills (M. Elkot, Eltaieb Youssif, Omer Elsheikh Hago Elmahdi, Mohammed Abdalgane, Rabea Ali, 2025, Contemporary Educational Technology)
Benchmark studies of peer and teacher-student interaction in digital and collaborative environments
These studies provide interactional baselines from traditional classrooms and video-mediated environments. Although not centered on AI, their analyses of peer feedback, small-group collaborative dynamics, and nonverbal behavior (e.g., gesture) offer a theoretical reference for understanding the nature of "interaction" and help contrast human-machine with human-human interaction.
- Collaborative Accomplishment of L2 Peer Feedback Interaction: Pursuing Uptake Through Accounting and Depersonalizing (Kübra Ekşi, Nilüfer Can Daşkın, 2025, International Journal of Applied Linguistics)
- Follow-up contributions for collaboratively accomplishing peer feedback in video-mediated L2 interactions (Kübra Ekşi, Nilüfer Can Daşkın, 2025, Applied Linguistics Review)
- Analyzing L2 Classroom Dynamics: A Conversation Analysis Perspective (Shuangchang Wen, 2023, English Language Teaching and Linguistics Studies)
- Spontaneous Gestures in L2 Naturalistic Spontaneous Interaction: Effects of Language Proficiency (Hiroki Hanamoto, 2023, Anglica Wratislaviensia)
- Implications of materials use in a shop encounter roleplay in Finnish as an L2 classroom interaction – a comparison of two types of task materials (Katriina Rantala, 2025, Classroom Discourse)
- Group-based assessments and L2 interactional competence: test-takers’ practices for re-aligning to the assessment task (Michael Stephenson, Christopher Leyland, 2025, Language and Education)
- Task-induced development of hinting behaviors in online task-oriented L2 interaction (Ufuk Balaman, 2018, Language Learning & Technology)
- SPEAK-BOT AND GROUP DYNAMICS: EXPLORING COLLABORATIVE INTERACTION QUALITY IN AI-ASSISTED SPEAKING PEDAGOGY (Veni Nella Syahputri, Nyak Mutia Ismail, Cut Nabilla Kesha, 2025, Getsempena English Education Journal)
- Learner-initiated self-selection as a next speaker in a technology-mediated L2 learning environment: A multimodal conversation analytic perspective (Simin Ren, P. Seedhouse, 2026, System)
The final grouping constructs a complete research map, from micro to macro and from the technical to the social. It first grounds itself in the methodological foundations of conversation analysis (CA), closely examining the micro-level linguistic features of human-machine interaction; it then verifies AI's effectiveness for language acquisition through empirical studies; next, it uses learning analytics to explore the dynamic pathways of interaction; it also covers the engineering perspective of system design and evaluation; finally, it extends to sociocultural and critical discourse analysis, using traditional classroom interaction as a comparative baseline. Together, this framework covers the frontier directions of research on student-AI chatbot interaction.
85 relevant publications in total.
Generative artificial intelligence (GenAI) chatbots are having a transformative impact on higher education. Current research requires more comprehensive evaluations of the collaborative learning fostered between students and GenAI chatbots. However, existing articles have rarely explored the dynamic process of student–AI collaboration in higher education. This study aims to analyse and visualise the changes in the process of undergraduates' collaboration with a GenAI chatbot. The interaction patterns of the collaboration were explored from the perspective of social constructivist learning theory. The differences between student‐AI interaction patterns at 5 time points (after 5 lessons) were further compared to show the dynamic collaboration process. A 9‐week course was implemented for 40 Chinese undergraduates, who completed 5 rounds of collaboration with a GenAI chatbot named ERNIE Bot. Employing a designed coding scheme, a total of 6180 codes were collected from the conversation content of each round. Based on the interval data, content analysis and epistemic network analysis (ENA) were conducted. First, undergraduates gradually became more active and targeted in their collaboration with the GenAI chatbot. Second, the focal points of their collaboration changed from “Comprehension” (the first–third lessons) to “Generation” (the third–fifth lessons), along with different interaction patterns. Notably, the interaction patterns changed more rapidly and prominently during the “Comprehension” phase than the “Generation” phase. The findings contribute to understanding the social constructivist learning process within student‐AI collaboration in higher education. Practical recommendations for students and educators are offered as well.
A chatbot is artificial intelligence software that converses with a user in natural language. It can be instrumental in mitigating teaching workloads by coaching students or answering their inquiries. To understand student-chatbot interactions, this study was designed to optimize the student learning experience and instructional design. We developed a chatbot that supplemented disciplinary writing instruction to enhance peer reviewers’ feedback on draft essays. With 23 participants from a lower-division post-secondary education course, we examined the characteristics of student-chatbot interactions. Our analysis revealed that students were often overconfident about their learning and comprehension. Drawing on these findings, we propose a new methodology to identify where improvements can be made in the conversation patterns of educational chatbots. These guidelines include analyzing interaction-pattern logs to progressively redesign chatbot scripts that improve discussions and optimize learning. The methodology provides valuable insights for designing more effective instructional chatbots that enhance student learning experiences and engagement through improved peer feedback.
LLM-based chatbots such as ChatGPT have given technologies that traditionally operated on the back end a user-friendly, conversational interface. Their rapid adoption among students has prompted universities worldwide to issue guidelines and re-examine existing practices. Supplementing prior research based on discourse analysis and self-reported measures (e.g., surveys and interviews), we propose an approach for analyzing naturally occurring student–chatbot interactions, rooted in conversation analysis of chats (n = 503) donated by eight current Danish university students. The analysis identifies conversational patterns across three main types of activities and examines how agency is negotiated across the structural dimensions of signification, domination, and legitimation. Despite methodological limitations related to the sample, this study offers a promising path toward understanding how human–machine relations are recursively shaped in dialogue.
Spoken English proficiency is a powerful driver of economic mobility for low-income Indian youth, yet opportunities for spoken practice remain scarce in schools. We investigate the deployment of a voice-based chatbot for English conversation practice across four low-resource schools in Delhi. Through a six-day field study combining observations and interviews, we captured the perspectives of students, teachers, and principals. Findings confirm high demand across all groups, with notable gains in student speaking confidence. Our multi-stakeholder analysis surfaced a tension in long-term adoption vision: students favored open-ended conversational practice, while administrators emphasized curriculum-aligned assessment. We offer design recommendations for voice-enabled chatbots in low-resource multilingual contexts, highlighting the need for more intelligible speech output for non-native learners, one-tap interactions with simplified interfaces, and actionable analytics for educators. Beyond language learning, our findings inform the co-design of future AI-based educational technologies that are socially sustainable within the complex ecosystem of low-resource schools.
The integration of generative artificial intelligence (GenAI) in higher education has opened new avenues for enhancing academic writing through student–chatbot interactions. While initial research has explored this potential, deeper insights into the nature of these interactions are needed. This study characterizes graduate students’ interactions with AI chatbots for academic writing, focusing on the types of assistance they sought and their communication style and tone patterns. To achieve this, individual online sessions were conducted with 43 graduate students, and their chatbot interactions were analyzed using qualitative and quantitative methods. The analysis identified seven distinct types of assistance sought by students. The most frequent requests involved content generation and expansion, followed by source integration and verification, and then concept clarification and definitions. Students also sought chatbot support for writing consultation, text refinement and formatting, and, less frequently, rephrasing and modifying content and translation assistance. The most frequent communication style was “requesting,” marked by direct appeals for assistance, followed by “questioning” and “declarative” styles. In terms of communication tone, “neutral” and “praising” appeals dominated the interactions, reflecting engagement and appreciation for chatbot responses, while “reprimanding” tones were relatively low. These findings highlight the need for tailored chatbot interventions that encourage students to seek AI assistance for a broader and more in-depth range of writing tasks.
The growing use of generative AI (GenAI) has sparked discussions regarding integrating these tools into educational settings to enrich the learning experience of teachers and students. Self-regulated learning (SRL) research is pivotal in addressing this inquiry. One prevalent manifestation of GenAI is the large-language model (LLM) chatbot, enabling users to seek information and assistance. This paper aims to showcase how data on student interaction with a chatbot can be used in learning analytics to gain insights into SRL. This is achieved by adapting existing SRL frameworks to comprehend 34 students’ interaction with an educational Socratic chatbot for a statistics class at the introductory undergraduate level. Chatbot conversations from students are categorized into learning actions and processes using the framework’s process-action library. Thereafter, we analyze this data through ordered epistemic network analysis, furnishing valuable insights into how different students interact with the chatbot. Our findings reveal that higher-scoring students engage more frequently in reflective and evaluative activities, while lower-scoring students focus on searching for answers. Furthermore, students should shift from structured problem-solving, such as solving classroom questions, to questioning fundamental concepts with the chatbot and soliciting more examples to improve their learning gains.
In an era where digital tools are becoming central to education, chatbots offer a unique opportunity to facilitate language learning through interactive dialogue. This study investigates the integration of chatbot technology into English language learning and its impact on enhancing student engagement and interactive communication skills. The research involved a random selection of participants from a university English course, aiming to assess how chatbot interactions can bolster communication skills. Data was collected through pre- and post-implementation assessments, focusing on students’ speaking proficiency across various tasks. The analysis employed advanced analytical models such as t-tests to determine improvements in fluency, pronunciation, intonation, and stress patterns. A mixed-methods approach was utilized, incorporating surveys to gather students’ perceptions of chatbot usage in their learning process. This procedure involved implementing chatbot interactions during the course, allowing students to practice speaking and receive immediate feedback. Results indicated significant improvements in students’ overall speaking abilities, particularly in fluency and intonation. While no notable differences were observed in pronunciation between proficiency levels, substantial advancements were found in interactive speaking tasks. The findings highlight the potential of chatbot technology to improve student engagement and communication skills in English language learning. This research underscores the necessity for further exploration into innovative educational tools to support language acquisition in diverse learning environments.
This study explores the interactional practices and normative expectations of teachers and students in an English as a Foreign Language (EFL) classroom, with a focus on turn-taking and conversational dynamics. Addressing a gap in understanding how institutional norms shape classroom interactions, the research employs Conversation Analysis (CA) as its methodological framework, emphasizing the systematic organization of talk-in-interaction. Data were collected from two recorded classroom sessions, including one conducted at the American Language Center in Rabat, Morocco, and another sourced from a publicly available YouTube video. The transcriptions, adhering to Jefferson’s (1988) system, were analyzed to uncover patterns of turn-taking, repair initiators, and backchanneling in teacher-student exchanges. The findings reveal that teachers use strategies such as other-initiated self-repair, scaffolding, and missing units to guide student contributions while managing conversational flow. Additionally, students demonstrated clear expectations for feedback, often signaled through transition relevance places. These practices underline the collaborative nature of EFL classroom interactions and the critical role of teachers in fostering language learning. The study highlights the pedagogical value of interactional competence and offers insights for improving teacher training and classroom engagement strategies.
With the increasing integration of technology in education, chatbots and e-readers have emerged as promising tools for enhancing language learning experiences. This study investigates how students engage with digital texts and a purpose-built chatbot designed to promote active reading for EFL students. We analysed student interactions and compared high-proficiency and low-proficiency English learners. Results indicate that while all students perceived the chatbot as easy to use, useful, and enjoyable, significant behavioural differences emerged between proficiency groups. High-proficiency students exhibited more frequent interactions with the chatbot, engaged in more active reading strategies like backtracking, and demonstrated fewer help-seeking behaviours. Epistemic Network Analysis revealed distinct co-occurrence patterns, highlighting the stronger connection between navigation and review behaviours in the high-proficiency group. These findings underscore the potential of chatbot-assisted language learning and emphasise the importance of incorporating active reading strategies for improved comprehension.
Chatbots can have a significant positive impact on learning. There is a growing interest in their application in teaching and learning. The self-regulation of learning is fundamental for the development of lifelong learning skills, and for this reason, education should contribute to its development. In this sense, the potential of chatbot technologies for supporting students to self-regulate their learning activity has already been pointed out. The objective of this work is to explore university students’ interactions with [EDUguia] chatbot to understand whether there are patterns of use linked to phases of self-regulated learning and academic task completion. This study presents an analysis of conversation pathways with a chatbot tutor to enhance self-regulation skills in higher education. Some relevant findings on the length, duration, and endpoints of the conversations are shared. In addition, patterns in these pathways and users’ interactions with the tool are analysed. Some findings are relevant to the analysis of the link between design and user experience, but they can also be related to implementation decisions. The findings presented could contribute to the work of other educators, designers, and developers interested in developing a tool addressing this goal.
This study investigates interactions between AI chatbots and EFL learners and their effect on spoken fluency and communicative competence, focusing on micro-level features of discourse such as turn-taking management, discourse-marker use, hesitation phenomena, and repair mechanisms. Thirty first- and second-year undergraduates (CEFR B1–B2) at Odesa I. I. Mechnikov National University completed two counterbalanced 20-minute speaking sessions: a voice-based chatbot conversation and a matched human–human peer task. Audio data were transcribed and annotated for turn-entry timing, discourse markers, hesitation, and repair sequences. Fluency indices included speech rate, articulation rate, mean length of run, pause profiles, and repair density. The findings indicate that chatbot interaction yielded modest but reliable improvements in speech rate and mean length of run, alongside a reduction in very long pauses, while articulation rate remained essentially unchanged. Self-repair density increased, with most repairs being self-initiated and quickly resolved, often triggered by lexical access challenges or automatic-speech-recognition misrecognitions; these sequences supported conversational progressivity rather than disrupting flow. Gains were most evident in information-gap and problem-solving tasks, whereas opinion-exchange tasks showed weaker effects due to limited backchanneling and occasional system latency. Learners perceived chatbot practice as useful, less anxiety-provoking than peer interaction, and supportive of willingness to communicate, while noting limitations in discourse naturalness and prosodic feedback. The significance of the study lies in linking interactional micro-features to fluency outcomes, offering empirical evidence for AI-mediated speaking practice design. Pedagogically, results support blended implementation: chatbots for warm-up, rehearsal, and decision-focused tasks, complemented by human interaction to develop pragmatic nuance and prosody.
This study explores students’ perceptions of Character AI as a tool for improving English speaking fluency in an Indonesian EFL context. As AI-powered conversational platforms become more prominent in education, Character AI stands out for offering unscripted, persona-driven dialogues that simulate real-life interactions. Employing a qualitative phenomenological approach, the research involved ten 10th-grade students who had used Character AI for speaking practice. Data were collected through semi-structured interviews, observations, and student transcript documentation, then analyzed using thematic analysis. Findings reveal three major benefits: improved speaking fluency, increased learner confidence, and greater flexibility in practice. Students reported reduced hesitation, better sentence construction, and enhanced vocabulary through consistent AI interaction. They also experienced lower anxiety and greater willingness to speak, citing the judgment-free environment provided by AI as crucial to building confidence. However, several challenges were noted, including AI’s limited contextual understanding, inaccurate speech recognition, and lack of real-time grammar correction. Technical issues, such as accent misinterpretation, occasionally disrupted conversations. Despite these limitations, students perceived Character AI as a valuable supplementary tool for speaking practice, particularly in situations where human interaction is limited. The study concludes that Character AI has strong potential to enhance speaking fluency and learner autonomy, but its effectiveness would benefit from integration with teacher guidance and complementary language support. These findings contribute to the growing body of research on AI in language learning and offer practical implications for its use in Indonesian classrooms.
Research on AI-mediated language learning has often emphasized aggregate measures of performance and engagement, yet little is known about how learners individually perceive and enact affordances in sustained interaction with conversational agents. This study addressed this gap through an idiographic multiple-case analysis of four Vietnamese adults who engaged in eight sessions of self-directed English practice with ChatGPT. Interaction logs and stimulated recall interviews were analyzed using a five-dimensional affordance framework consisting of perceptibility, valence, intentionality, compositionality, and normativity to trace how uptake unfolded across time. The findings show that each learner developed a distinct trajectory. Some orchestrated affordances proactively and in compositional ways, while others constrained uptake through cautious or normative orientations and shifts in affect, often triggering turning points in how action potentials were noticed and realized. These results demonstrate that affordances are not static features of AI systems but emergent relations that acquire significance only in action and ecologically. The study advances affordance theory in language education by showing that variability is not incidental but constitutive of learning ecologies, and it suggests that effective AI-mediated pedagogy requires preparing learners to notice, revalue, and creatively combine affordances in ways that support autonomy and sustained engagement.
Objectives This study explored the pedagogical potential of generative, voice-enabled, conversational, and goal-oriented chatbots for English teaching and learning by analyzing the free conversation experiences between a generative AI chatbot and middle school students. Methods Video-recorded free conversations between a generative English-speaking chatbot developed by Company Y and three middle school students were transcribed and analyzed. Based on the research questions, both qualitative and quantitative analyses were conducted, including text analysis, readability assessment, lexical analysis, token count, and sentiment analysis. Results The findings indicated that the generative AI-based chatbot outperformed traditional rule-based chatbots in terms of encouraging active verbal output, providing personalized conversations, offering adaptive support, facilitating emotional interactions, and enhancing learner motivation. Quantitative analysis particularly supported the chatbot’s effectiveness in personalized conversation and emotional interaction. Conclusions The results suggest that generative AI-powered chatbots can serve as valuable tools to supplement one-on-one tutoring in English speaking instruction.
As generative artificial intelligence (GenAI) chatbots gain traction in educational settings, a growing number of studies explore their potential for personalized, scalable learning. However, methodological fragmentation has limited the comparability and generalizability of findings across the field. This study proposes a unified, learning analytics–driven framework for evaluating the impact of GenAI chatbots on student learning. Grounded in the collection, analysis, and interpretation of diverse learner data, the framework integrates assessment outcomes, conversational interactions, engagement metrics, and student feedback. We demonstrate its application through a multi-week, quasi-experimental study using a Socratic-style chatbot designed with pedagogical intent. Using clustering techniques and statistical analysis, we identified patterns in student–chatbot interaction and linked them to changes in learning outcomes. This framework provides researchers and educators with a replicable structure for evaluating GenAI interventions and advancing coherence in learning analytics–based educational research.
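The abstract above links clusters of student–chatbot interaction patterns to learning outcomes. A minimal sketch of that clustering step, assuming two invented interaction features (turns per session, mean prompt length in words) and plain k-means; the paper states only that "clustering techniques" were applied, so the feature choice, values, and algorithm here are illustrative:

```python
# Sketch: grouping learners by interaction features before relating the
# clusters to outcome changes. Features, data, and the choice of k-means
# are assumptions for illustration only.
import math

def kmeans(points, k, iters=50):
    centers = [points[i] for i in range(k)]  # deterministic init for reproducibility
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: math.dist(p, centers[j]))
            groups[nearest].append(p)
        centers = [
            tuple(sum(v) / len(g) for v in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups

# (turns per session, mean prompt length) for six hypothetical students
students = [(5, 20), (6, 22), (30, 80), (28, 75), (7, 18), (31, 90)]
centers, groups = kmeans(students, k=2)
print(sorted(len(g) for g in groups))  # → [3, 3]: a low- and a high-intensity cluster
```

Once cluster membership is fixed, outcome measures (e.g., pre/post assessment deltas) can be compared across clusters with standard statistical tests, which is the linkage the framework describes.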
No abstract available
This study investigates the integration of conversational AI tools to enhance pronunciation, listening comprehension, and learner autonomy in second language education. An experimental design was implemented, involving pre-tests, post-tests, questionnaires, and interviews to evaluate learners’ progress after engaging in AI-driven conversation practice. The results indicate significant improvements in pronunciation and listening comprehension, with learners reporting higher confidence, motivation, and independence in managing their own learning. The discussion interprets these outcomes through the lens of second language acquisition theory, emphasizing AI’s role as an accessible conversational partner and its contribution to learner autonomy and lifelong learning. A comparison with previous studies reveals that this research not only confirms the linguistic benefits of AI-assisted learning but also highlights its transformative potential in fostering self-directed learning. The study concludes that conversational AI can serve as an effective complement to traditional instruction, offering both linguistic and motivational advantages for learners in the digital age.
No abstract available
No abstract available
This study explores the role of AI-powered chatbots in enhancing second language acquisition (SLA), focusing on speaking proficiency, learner engagement, and confidence. A mixed-methods, quasi-experimental design was employed involving 60 intermediate ESL learners divided into a chatbot-assisted experimental group and a control group using traditional practice. Over six weeks, the experimental group engaged in structured interactions with a conversational AI chatbot offering real-time feedback. Pre- and post-tests, engagement surveys, and interviews were used for data collection. Findings revealed that the chatbot group showed significantly higher gains in speaking proficiency and greater improvements in willingness to communicate and self-confidence. Qualitative feedback highlighted increased practice, reduced anxiety, and high learner motivation, though limitations such as repetitive responses and limited cultural understanding were noted. The study concludes that AI chatbots can serve as effective supplemental tools in SLA, especially for enhancing oral skills and learner autonomy. Practical implications suggest integrating chatbots into language curricula for additional speaking practice, particularly in contexts with limited teacher availability. Educators are advised to blend chatbot use with guided instruction and monitor chatbot feedback quality to ensure pedagogical alignment.
Multimedia learning using text and images has been shown to improve learning outcomes compared to text-only instruction. But conversational AI systems in education predominantly rely on text-based interactions while multimodal conversations for multimedia learning remain unexplored. Moreover, deploying conversational AI in learning contexts requires grounding in reliable sources and verifiability to create trust. We present MuDoC, a Multimodal Document-grounded Conversational AI system based on GPT-4o, that leverages both text and visuals from documents to generate responses interleaved with text and images. Its interface allows verification of AI generated content through seamless navigation to the source. We compare MuDoC to a text-only system to explore differences in learner engagement, trust in AI system, and their performance on problem-solving tasks. Our findings indicate that both visuals and verifiability of content enhance learner engagement and foster trust; however, no significant impact in performance was observed. We draw upon theories from cognitive and learning sciences to interpret the findings and derive implications, and outline future directions for the development of multimodal conversational AI systems in education.
For Romanian 8th graders, large class sizes and limited instruction time often curtail opportunities to practice speaking, hindering attainment of national curriculum targets (Limba modernă 1, clasele V-VIII) for skills such as storytelling, dialogue, making suggestions, and asking questions. This article examines the integration of Character.AI, a conversational AI platform, to provide supplemental interactive speaking practice aligned with these curricular objectives outside the classroom. Through AI-driven persona-based dialogues and role-play scenarios, students engage in unscripted conversations that mirror 8th grade speaking tasks, effectively extending functional language use beyond crowded classroom settings. Key outcomes include improved oral fluency, reduced speaking anxiety, and greater learner autonomy, as the AI’s patience and non-judgmental stance create a safe environment for repeated practice and risk-taking in language use. Moreover, the platform’s voice-based capabilities enable spontaneous spoken interaction, closely resembling real-life conversations and further reinforcing learners’ speaking proficiency. While the AI occasionally misinterprets context or lacks explicit error correction, these limitations are manageable with appropriate teacher guidance. Overall, the findings underscore the pedagogical value of Character.AI as a complementary tool that aligns with Romania’s English curriculum and substantially enhances students’ speaking opportunities, helping them meet oral language objectives despite classroom constraints.
e-REAL Labs are at the forefront of language education innovation, integrating AI-driven conversational agents (avatars) to transform the learning experience. These avatars serve as interactive partners in a cooperative learning framework, engaging students in dynamic, real-life dialogues tailored to their proficiency levels. By fostering an inclusive and adaptive environment, they encourage active participation and facilitate group-based language activities, enhancing collaboration and communication. The AI avatars are pivotal in guiding learners through interactive problem-solving tasks, compelling them to negotiate meaning, resolve misunderstandings, and build linguistic competence through adaptive feedback. This immersive approach accelerates language acquisition and cultivates essential social skills such as cultural awareness, critical thinking, and teamwork. Through scenario-based interactions that demand cooperation, e-REAL Labs’ AI-powered methodology ensures that every learner contributes to group success, creating a rich and engaging language-learning experience.
One of the most challenging aspects of learning a foreign language is learning to converse in that language. While large language models (LLMs) have made it feasible to build generative AI chatbots that simulate conversation, creating an AI-powered language tutor that is both pedagogically effective and engaging for learners requires solving a host of additional problems. At Duolingo, we have developed an AI conversational tutor embedded in our platform, featuring a character named Lily, who helps users practice real-world conversations in their target language. In building this system, we tackled multiple challenges: adapting dialogue to each learner's proficiency level, sustaining personalized and coherent interactions across sessions, maintaining consistent character-driven personality, and designing a structure that supports both guided and learner-initiated topics. Our solution integrates a three-party conversational architecture, a persistent memory mechanism to retain prior interactions, and real-time conversation evaluation to dynamically adjust to the learner's input. This talk will highlight how generative AI, when coupled with rigorous feedback loops and thoughtful design, can significantly enhance the language learning experience.
No abstract available
Utilizing artificial intelligence (AI) technology in educational institutions for students with mild intellectual disabilities offers promising avenues for enhancing this population’s learning outcomes and skill development. This study aims to investigate the effect of using generative conversational AI to improve English communication skills among students with mild intellectual disabilities. The study involved twelve students diagnosed with mild intellectual disabilities, divided equally into two groups. Six students engaged in guided conversations with AI, while the other six participated in free conversations with AI. These participants were randomly chosen from educational institutions specializing in intellectual disability education and mainstream schools. The results indicate that guided conversations significantly enhance English communication skills among participants. Additionally, the study highlights the development gains from engaging in guided conversations by AI applications. Statistical analysis reveals notable differences in the effect of guided versus free conversational approaches, with guided conversations yielding superior outcomes. This underscores the importance of structured guidance for comprehension and participation in different English communication skills among students with mild intellectual disabilities. Moreover, the study recommends the integration of AI tools in education to support students with disabilities, emphasizing the need for tailored AI applications to cater to diverse learning needs.
No abstract available
No abstract available
Although increasingly sophisticated in cognitive adaptability, current educational virtual assistants lack effective integration of real-time emotional analysis mechanisms. Most existing systems focus exclusively on static cognitive adaptation or incorporate superficial emotional responses, without dynamically modifying pedagogical strategies in response to detected emotional states. This structural limitation reduces the potential for generating personalized, empathetic, and sustainable learning experiences, particularly in complex domains such as critical reading comprehension. To address this gap, this study proposes and evaluates an educational assistant based on conversational artificial intelligence, which integrates natural language processing, real-time emotional analysis, and dynamic cognitive adaptation. The system was implemented in a controlled experimental setting with university students over a period of two weeks, utilizing a Moodle-based virtual learning platform. The evaluation methodology combines quantitative and qualitative techniques, including pre- and post-tests to assess academic performance, sentiment analysis of chat conversations to track emotional evolution, structured surveys to measure user perception, and semi-structured interviews to collect in-depth, experiential feedback. All interactions were logged for semantic and affective analysis. The architecture, organized using microservices, enables real-time semantic analysis of student messages, emotional inference, and adaptive adjustment of feedback strategies at the cognitive, emotional, and metacognitive levels. The results demonstrate a significant improvement in academic performance, with an average increase of 32.5% in correct answers from the pre-test to the post-test, particularly in inference and critical analysis skills. In parallel, the error correction rate during the sessions increased from 60 to 84%, while engagement levels and emotional perceptions showed progressive improvement. Integrating cognitive and emotional adaptation mechanisms with a rigorous multimodal evaluation process positions this assistant as an innovative advance in emotionally intelligent educational technologies.
While existing literature highlights the affective and technical affordances of chatbots, limited attention has been given to their discursive and structural impact on classroom power relations and learner identity. To address this gap, this qualitative study draws upon four theoretical frameworks: Digital Empowerment (Passey, 2014), Communicative Competence (Hymes, 1972), Social Constructivism (Vygotsky, 1978), and Critical Discourse Analysis (van Dijk, 1998). Data were collected through semi-structured interviews with twelve EFL students who regularly engaged with AI chatbots as interactive speaking partners. Employing thematic and discourse analysis, the study identifies three significant findings. 1) Chatbots foster digital autonomy and enhance learner confidence by providing emotional safety, immediate feedback, and opportunities for self-regulated learning; 2) Students acknowledge that chatbot interactions often lack adequate sociolinguistic nuance and contextual sensitivity, limiting their effectiveness in fully authentic communication scenarios; and 3) The integration of chatbots into language practice represents a pedagogical shift that encourages active learner engagement and disrupts traditional teacher-centered classroom authority. Collectively, these findings indicate that chatbots transcend their roles as mere technological tools, actively influencing learners' sociocognitive experiences and identities. This research contributes theoretically by applying Critical Discourse Analysis to AI-mediated learning contexts and practically by guiding educators and chatbot developers toward creating inclusive, empowering, and learner-centered digital tools.
This study examines the effectiveness of teacher feedback versus chatbot feedback on English as a foreign language (EFL) learners’ pragmalinguistic and sociopragmatic performances. Eighty-seven intermediate-level university learners were randomly assigned to teacher feedback (TF), chatbot feedback (CF), and control (CO) groups. During a 5-week treatment, the instructor delivered metapragmatic instruction on requests in each treatment session, followed by the participants completing three discourse completion tests (DCTs). Upon completing each DCT task, based on their group assignment, each group received feedback from either the teacher or a chatbot on their pragmalinguistic inaccuracies and sociopragmatic deviations. Analysis of pretest and post-test scores using t-tests and ANCOVA indicated improvements of both experimental groups. Although there was no significant difference in sociopragmatic gains between the groups, the CF group yielded better pragmalinguistic gains. The qualitative data from semi-structured interviews revealed students’ perceptions of affordances and limitations of chatbots in learning L2 pragmatics. Pedagogical implications for second language (L2) pragmatic instruction were discussed.
This study investigates how the SPEAK-BOT framework shapes group dynamics and collaborative interaction quality in EFL speaking pedagogy. In doing so, it addresses the underexplored social dimension of language learning. Specifically, it examines how chatbot-generated prompts, when embedded in a pedagogical framework, influence turn-taking, elaboration, responsiveness, and cohesion during group discussions. A qualitative research design was employed with 20 third-semester English education students at Universitas Teuku Umar, Indonesia. The students were organized into four small groups and engaged in structured speaking tasks that required consulting chatbot prompts as discussion starters. The instruments used in this research were an audio recorder for data collection, a thematic coding framework for discourse analysis, and a rubric-based scoring sheet to evaluate participants’ performance. The data were analyzed using discourse analysis and rubric-based scoring supported by descriptive statistics. The findings revealed a clear variation across groups. One group achieved very high interaction quality, marked by equal participation, deep elaboration, and strong cohesion. Two groups performed moderately, each showing strengths in some dimensions but gaps in others. One group demonstrated weak collaboration, relying heavily on chatbot output and producing fragmented discussions. The results suggest that the SPEAK-BOT framework has the potential to foster richer collaboration when learners use AI critically, but risks weakening interaction when prompts are adopted passively. The study contributes by reframing AI not as a substitute for peer dialogue but as a pedagogical mediator that can strengthen collaborative speaking pedagogy.
No abstract available
No abstract available
No abstract available
Linguistic synchrony, or alignment, has been shown to be critical for student learning, particularly for L2 students (second language learners), whose patterns of synchrony often differ from fluent speakers due to proficiency constraints. While many studies have explored various dimensions of synchrony in global language tutoring contexts, there is a gap in understanding how linguistic synchrony evolves dynamically over the course of a tutoring session and how tutors’ pedagogical strategies influence this process. This study incorporates three dimensions of synchrony—lexical, syntactic, and semantic—along with tutors’ dialogue acts to evaluate their association with student performance using multivariate time-series analysis. Results indicate that lower-performing L2 students tend to lexically align with their tutor more consistently in the long term and with higher intensity in the short term. In contrast, higher-performing students demonstrate greater alignment with the tutor in syntactic and semantic dimensions. Furthermore, the dialogue acts of eliciting, scaffolding, and enquiry were found to play the strongest roles in influencing synchrony and impacting learning outcomes.
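The lexical dimension of synchrony described in the abstract above is often operationalized as word overlap between a "prime" turn and the turn that responds to it. A toy version of such a score, assuming raw token overlap (real studies typically add lemmatization, stopword handling, and chance-baseline correction):

```python
# Sketch: a simple lexical-alignment score between adjacent tutor/student
# turns, in the spirit of the lexical-synchrony dimension. The overlap
# formula and example dialogue are illustrative assumptions, not the
# paper's actual measure.

def lexical_alignment(prime, target):
    """Proportion of the target turn's word types also used in the prime turn."""
    p = set(prime.lower().split())
    t = set(target.lower().split())
    return len(p & t) / len(t) if t else 0.0

dialogue = [
    ("tutor",   "could you describe the picture in the story"),
    ("student", "the picture shows a story about a dog"),
]
score = lexical_alignment(dialogue[0][1], dialogue[1][1])
print(round(score, 3))  # 3 of the student's 7 word types echo the tutor
```

Tracking such scores turn by turn yields the time series to which the study's multivariate analysis can then be applied, alongside parallel syntactic and semantic measures.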
Identifying discourse features in student conversations is quite important for educational researchers to recognize the curricular and pedagogical variables that cause students to engage in constructing knowledge rather than merely completing tasks. The manual analysis of student conversations to identify these discourse features is time-consuming and labor-intensive, which limits the scale and scope of studies. Leveraging natural language processing (NLP) techniques can facilitate the automatic detection of these discourse features, offering educational researchers scalable and data-driven insights. However, existing studies in NLP that focus on discourse in dialogue rarely address educational data. In this work, we address this gap by introducing an annotated educational dialogue dataset of student conversations featuring knowledge construction and task production discourse. We also establish baseline models for automatically predicting these discourse properties for each turn of talk within conversations, using pre-trained large language models GPT-3.5 and Llama-3.1. Experimental results indicate that these state-of-the-art models perform suboptimally on this task, indicating the potential for future research.
We present the InterviewBot, which dynamically integrates conversation history and customized topics into a coherent embedding space to conduct 10 min hybrid-domain (open and closed) conversations with foreign students applying to U.S. colleges to assess their academic and cultural readiness. To build a neural-based end-to-end dialogue model, 7361 audio recordings of human-to-human interviews are automatically transcribed, where 440 are manually corrected for finetuning and evaluation. To overcome the input/output size limit of a transformer-based encoder–decoder model, two new methods are proposed, context attention and topic storing, allowing the model to make relevant and consistent interactions. Our final model is tested both statistically by comparing its responses to the interview data and dynamically by inviting professional interviewers and various students to interact with it in real-time, finding it highly satisfactory in fluency and context awareness.
The integration of generative AI in education is expanding, yet empirical analyses of large-scale, real-world interactions between students and AI systems still remain limited. In this study, we present ChEDDAR, ChatGPT&EFL Learner's Dialogue Dataset As Revising an essay, which is collected from a semester-long longitudinal experiment involving 212 college students enrolled in English as a Foreign Language (EFL) writing courses. The students were asked to revise their essays through dialogues with ChatGPT. ChEDDAR includes a conversation log, utterance-level essay edit history, self-rated satisfaction, and students' intent, in addition to session-level pre-and-post surveys documenting their objectives and overall experiences. We analyze students' usage patterns and perceptions regarding generative AI with respect to their intent and satisfaction. As a foundational step, we establish baseline results for two pivotal tasks in task-oriented dialogue systems within educational contexts: intent detection and satisfaction estimation. We finally suggest further research to refine the integration of generative AI into education settings, outlining potential scenarios utilizing ChEDDAR. ChEDDAR is publicly available at https://github.com/zeunie/ChEDDAR.
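To make the intent-detection task mentioned above concrete, here is a trivial keyword-rule baseline over student utterances. The intent labels and keywords are invented for illustration; ChEDDAR defines its own label set, and the paper's actual baselines use large language models rather than rules:

```python
# Sketch: a minimal rule-based baseline for intent detection on student
# utterances. Labels and keywords are hypothetical examples, not the
# dataset's real taxonomy.

RULES = {
    "request_feedback": ("feedback", "check", "review"),
    "ask_grammar": ("grammar", "tense", "article"),
    "ask_paraphrase": ("paraphrase", "rephrase", "reword"),
}

def detect_intent(utterance):
    """Return the first intent whose keywords appear in the utterance."""
    text = utterance.lower()
    for intent, keywords in RULES.items():
        if any(k in text for k in keywords):
            return intent
    return "other"

print(detect_intent("Can you check my thesis statement?"))  # request_feedback
print(detect_intent("How do I rephrase this sentence?"))    # ask_paraphrase
```

Such a baseline is mainly useful as a floor against which learned classifiers (including the LLM-based ones the paper reports) can be compared.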
No abstract available
This study investigates the nature of co‐construction in roleplays conducted with human versus AI interlocutors for assessing interactional competence (IC) in L2 English. Seventy‐five university students in Japan completed roleplay tasks with both human tutors and an AI agent. The AI agent is a multimodal dialog system integrated with a large language model (LLM), designed to allow synchronous interaction with the participant through autonomous turn‐taking. Using conversation analysis, 24 interactions were analyzed to investigate how participants managed preference organization, sequence expansion, and turn‐taking. The analysis revealed that the AI‐delivered roleplays elicited some IC‐relevant practices and that participants treated the roleplay as a co‐constructed interaction, responding contingently to the AI's contributions. While the data suggested both human and AI interlocutors maintained mutual understanding, striking differences in turn‐taking practices were observed, including more frequent overlaps and inter‐turn gaps in the AI‐delivered condition. The study concludes that LLM‐integrated multimodal dialog systems, by producing recognizable verbal actions and multimodal signals, have the potential to effectively elicit co‐constructed interactional performances relevant to IC assessment.
No abstract available
This study explores the impact of an innovative approach that combines artificial intelligence (AI) chatbot support with collaborative note-taking (CNT) in the comprehension of semantic terms among English as a Foreign Language (EFL) learners. Given the significance of semantics in English language learning, traditional didactic methods often present challenges for EFL learners. The proposed AI chatbot-supported approach aims to foster learner interaction, while the CNT strategy focuses on enhancing knowledge retention and engagement with learning materials. Conducted as a quasi-experimental pre-test-post-test design, the study involved 60 English Language and Literature majors from a non-English-speaking area enrolled at a private university. Participants were divided into the AI chatbot-supported and CNT (AI-CNT) group and the conventional CNT (cCNT) group. Results indicated that the AI-CNT group outperformed the cCNT group across various dimensions of semantic learning outcomes, including performance, achievement, self-efficacy, metacognition, and anxiety reduction. This study highlights the potential of integrating AI chatbot support and the CNT strategy to significantly enhance the EFL semantic learning experience. The personalized and interaction-based linguistic practices, enriched with feedback and emotional support, offer a promising avenue for advancing language learning outcomes in the digital age.
Purpose - The increasing use of artificial intelligence (AI) chatbots in higher education has reshaped how students engage with learning activities and develop cognitive skills. From a life-course education perspective, higher education represents a critical stage in early adulthood where learning experiences may influence long-term learning habits and readiness for lifelong learning. However, empirical studies integrating chatbot usage intensity, AI effectiveness, and student engagement within a single explanatory model remain limited, particularly in developing country contexts. This study examines the effects of AI chatbot usage intensity and perceived AI effectiveness on students’ cognitive learning outcomes, with student engagement positioned as a mediating mechanism. Design/methods/approach - A quantitative cross-sectional survey was conducted involving 88 undergraduate students who had experience using AI chatbots for academic purposes. Data were collected using a validated questionnaire and analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM) to test direct and indirect relationships among the constructs. Findings - The results indicate that both chatbot usage intensity and AI effectiveness have significant positive effects on cognitive learning outcomes. These variables also significantly enhance student engagement, which in turn positively influences cognitive learning outcomes. Mediation analysis reveals that student engagement significantly mediates the relationship between AI effectiveness and cognitive learning outcomes, but not between chatbot usage intensity and cognitive learning outcomes, highlighting the dominant role of interaction quality over frequency of use. Research implications/limitations - The findings underscore the importance of designing AI-supported learning environments that prioritize pedagogical effectiveness and meaningful engagement rather than mere intensity of use. The cross-sectional design and reliance on self-reported data limit causal inference and generalizability. Originality/value - This study contributes to artificial intelligence in education research by integrating engagement as a mediating mechanism within a life-course framework, offering insights into how AI chatbot use during early adulthood may support sustainable cognitive development and lifelong learning readiness.
Artificial intelligence (AI) has made substantial progress in language recognition. Proficiency in spoken English reading is a prerequisite for fluency in written English. However, research on its use, especially for non‐native speakers, is lacking despite increased usage. This study aimed to enhance the oral reading fluency of fourth graders using the AI voice chatbot ‘Alexa’ and to examine its effects on their oral reading fluency skills. Ninety students were evenly divided into experimental and control groups. We developed, reviewed, and administered pre‐ and post‐tests and interviews. Students' reading comprehension and oral reading fluency are enhanced when they use Alexa in the classroom. Many students turn to Alexa to help them with reading and English grammar. This study offers suggestions for future research.
BACKGROUND Comprehensive history-taking is crucial for patient assessment, prioritisation of care, and planning of care. While direct instruction methods effectively explain history-taking processes and components, they provide insufficient opportunities for practice, necessitating the implementation of supplementary teaching strategies. OBJECTIVE This study aimed to examine the effects of AI chatbot-supported history-taking training on nursing students' questioning skills and clinical stress levels. METHODS This randomized controlled study with pretest-posttest was conducted with 82 first-year nursing students. The students were randomly assigned to either an intervention (n = 41) or a control (n = 41) group. The intervention group underwent history-taking training using the traditional teaching method (theoretical training, watching a video, and clinical practice) plus an artificial intelligence (AI) chatbot; the students in the control group were trained only with the traditional method. RESULTS The intervention group demonstrated significantly superior performance in two specific components: history of present illness and review of systems (p < 0.05). Clinical stress levels showed mixed results, with significant differences in the Challenge and Benefit subscales but no difference in overall stress scores between groups. CONCLUSION Chatbot-based history-taking instruction is efficacious in improving students' questioning skills in specific components of history-taking, but not in reducing clinical stress.
This study is part of the DD-Lang project (see Leontjev et al., 2024) aimed at enhancing Finnish upper secondary school students’ reading in English and developing their understanding of their reading processes by bringing together two approaches that support language learning: diagnostic assessment (DiagA) and dynamic assessment (DA). The project designed online reading exercises for English based on retired Matriculation Examination items that implement graduated support (DA-based mediation) for learners who struggle to complete the reading tasks. The tasks also include a chatbot that the learners can query when taking the exercises. The study reported here explores how this AI-powered chatbot might complement the standardised mediation accompanying the reading tasks. In the study, five students completed the online reading tasks, received mediation and interacted with the chatbot when needed. Their experiences were then discussed in a group session with their teacher and a researcher. Findings show that the students’ reactions to standardised mediation were varied: while some thought it changed the way they read, others considered it repetitive and quite general. Although limited in scale, this research also suggests that integrating an AI-based chatbot into DA can enhance learners’ reading comprehension processes and inform classroom practices.
Artificial intelligence, particularly large language models (LLMs), has had a significant impact in many fields, including chatbots and virtual assistants. With the popularity of ChatGPT, the trend of human-AI collaboration through LLM-based chatbots is growing, reaching an ever-expanding audience. A key group that requires special attention is children. A chatbot designed for children should be both knowledgeable and empathetic. While chatbots are essentially fine-tuned versions of LLMs, fine-tuning these models for this specific purpose presents a challenge due to the lack of readily available datasets that address both scientific queries and empathetic situations. This data shortage can be addressed by using generative AI techniques to create synthetic dataset samples. As such, we propose in this paper the use of ChatGPT prompting to generate the Sahar Dataset, a multi-turn student-centric chatbot interaction dataset that supports both STEAM and empathy-related dialogues. Our results show that the Sahar dataset is readable by 5th grade students according to the Flesch-Kincaid Grade score, while other popular datasets like Alpaca require a 9th grade reading level. Moreover, we obtained IRB approval for human evaluations, and the results show that 90 percent of the dataset's STEAM content is factual, and the empathetic dialogues lead to valid solutions to the child's problem 90 percent of the time.
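The readability comparison above rests on the Flesch-Kincaid Grade Level, a fixed formula over average sentence length and syllables per word: 0.39 x (words/sentences) + 11.8 x (syllables/words) - 15.59. A minimal sketch follows, using a naive vowel-run syllable counter for illustration (production tools rely on dictionary-based or hyphenation-based counters, so exact scores will differ):

```python
import re

def count_syllables(word):
    # Naive heuristic: each run of vowels approximates one syllable (illustrative only)
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch-Kincaid Grade Level formula
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

print(round(flesch_kincaid_grade("The cat sat on the mat. It was happy."), 2))
```

Lower scores mean text is accessible at an earlier grade level, which is how the 5th-grade versus 9th-grade comparison between datasets is made.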
Digital dialogic pedagogy embodies an extension of the traditional dialogic, in which digital technologies and, in particular, AI chatbots take a central role in promoting educational interaction and fostering personalized learning. Question-driven interaction, the foundation of dialogic pedagogy, turns out to be a crucial tool for developing critical thinking and divergent thinking, as well as for stimulating individual autonomy. The interaction between humans and machines evolves from a simple exchange based on preset commands to a deeper dialogue in which AI chatbots are able to stimulate reflection, guide users in formulating questions and personalize feedback based on their individual needs. The concept of digital dialogic pedagogy highlights how artificial intelligence can improve the quality and accessibility of educational dialogue, offering new generative opportunities in diverse learning contexts. Certainly, the educational and scientific worlds are experiencing the value of AI in developing the knowledge and skills of the younger generation, but the integration of AI chatbots into educational practice not only fosters the personalization of learning but also promotes a more inclusive approach, capable of actively engaging students and learners regardless of their skills or backgrounds.
Recent advances in large language models (LLMs) have led to the development of artificial intelligence (AI)-powered tutoring chatbots, showing promise in providing broad access to high-quality personalized education. Existing works have studied how to make LLMs follow tutoring principles, but have not studied broader uses of LLMs for supporting tutoring. Up until now, tracing student knowledge and analyzing misconceptions has been difficult and time-consuming to implement for open-ended dialogue tutoring. In this work, we investigate whether LLMs can be supportive of this task: we first use LLM prompting methods to identify the knowledge components/skills involved in each dialogue turn, i.e., a tutor utterance posing a task or a student utterance that responds to it. We also evaluate whether the student responds correctly to the tutor and verify the LLM’s accuracy using human expert annotations. We then apply a range of knowledge tracing (KT) methods on the resulting labeled data to track student knowledge levels over an entire dialogue. We conduct experiments on two tutoring dialogue datasets, and show that a novel yet simple LLM-based method, LLMKT, significantly outperforms existing KT methods in predicting student response correctness in dialogues. We perform extensive qualitative analyses to highlight the challenges in dialogue KT and outline multiple avenues for future work.
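Knowledge tracing, as applied above to per-turn correctness labels, can be illustrated with classic Bayesian Knowledge Tracing, a standard KT baseline (not the paper's LLMKT method): after each response, the mastery estimate is updated by Bayes' rule and then a learning-transition step. A minimal sketch with hypothetical parameter values:

```python
def bkt_update(p_learn, correct, p_transit=0.1, p_guess=0.2, p_slip=0.1):
    """One Bayesian Knowledge Tracing step: Bayes posterior given the response,
    then a chance p_transit of learning the skill before the next turn."""
    if correct:
        num = p_learn * (1 - p_slip)                  # knew it and did not slip
        den = num + (1 - p_learn) * p_guess           # ... or guessed correctly
    else:
        num = p_learn * p_slip                        # knew it but slipped
        den = num + (1 - p_learn) * (1 - p_guess)     # ... or genuinely did not know
    posterior = num / den
    return posterior + (1 - posterior) * p_transit

p = 0.3  # prior probability the student has mastered the skill
for correct in [True, True, False, True]:  # per-turn correctness labels
    p = bkt_update(p, correct)
print(f"estimated mastery after 4 turns: {p:.3f}")
```

Each correct turn raises the mastery estimate and each error lowers it, which is the same tracking task the labeled dialogue-turn data above feeds into more sophisticated KT models.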
CollaClassroom is an AI-enhanced platform that embeds large language models (LLMs) into both individual and group study panels to support real-time collaboration. We evaluate CollaClassroom with Bangladeshi university students (N = 12) through a small-group study session and a pre-post survey. Participants have substantial prior experience with collaborative learning and LLMs and express strong receptivity to LLM-assisted study (92% agree/strongly agree). Usability ratings are positive, including high learnability (67% "easy"), strong reliability (83% "reliable"), and low frustration (83% "not at all"). Correlational analyses show that participants who perceive the LLM as supporting equal participation also view it as a meaningful contributor to discussions (r = 0.86). Moreover, their pre-use expectations of LLM value align with post-use assessments (r = 0.61). These findings suggest that LLMs can enhance engagement and perceived learning when designed to promote equitable turn-taking and transparency across individual and shared spaces. The paper contributes an empirically grounded account of AI-mediated collaboration in a Global South higher-education context, with design implications for fairness-aware orchestration of human-AI teamwork.
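The r values reported above are Pearson correlations between paired survey ratings. As a minimal sketch with hypothetical 5-point Likert data (the study's actual items and responses are not reproduced here):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation: covariance of the pairs divided by the product of
    the standard deviations (here via unnormalized sums, which cancel)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical ratings from 12 participants (1-5 Likert scale)
equal_participation = [4, 5, 3, 4, 5, 2, 4, 5, 3, 4, 5, 4]
meaningful_contrib  = [4, 5, 3, 4, 4, 2, 5, 5, 3, 4, 5, 4]
print(round(pearson_r(equal_participation, meaningful_contrib), 2))
```

Values near 1 indicate that participants who rated one construct highly tended to rate the other highly as well, which is how the reported r = 0.86 link between equal participation and meaningful contribution should be read.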
Technological advances in teaching English have been shown to support engaging media and authentic learning materials in language classes. This study investigates the implementation of internet-based media in an intermediate speaking class using LearnEnglish Teens on the British Council Online Course. Using a qualitative descriptive research design, the integrated classroom activities were observed and documented in field notes. The results reveal that LearnEnglish Teens fosters a dynamic class atmosphere in which students and lecturers can integrate fragmented language skills into an integrative proficiency of language use through staged classroom practice. The negotiated activities driven by the lecturer are also described in detail at each stage, including regulating turn-taking, repetition, impulsive correction, assertive treatment, unlocking creativity, equipping speaking organization, and generating useful expressions in specific, individualized practices.
This mixed-methods study investigates the effectiveness of AI-powered chatbots in enhancing the pragmatic competence of Pakistani undergraduate ESL learners, focusing specifically on the production of contextually appropriate speech acts: requests, apologies, and refusals. A total of 50 students, ranging from A2 to B2 proficiency levels (CEFR), participated in a four-week intervention involving scripted and open-ended interactions with a Google Dialogflow chatbot designed to simulate real-life conversational scenarios. Quantitative results from pre- and post-intervention assessments, based on validated pragmatic tasks, revealed significant improvement in speech act appropriateness (p < 0.001), with mean scores increasing from 58.4 to 76.2. Requests showed the highest performance gain (33.7%), followed by apologies (29.5%) and refusals (24.6%). Qualitative data from post-intervention questionnaires and interviews with selected participants uncovered three key themes: increased engagement and motivation, improved speaking confidence, and heightened awareness of polite and context-sensitive language use. Students highlighted the chatbot’s non-judgmental, adaptive nature as instrumental in reducing language anxiety and promoting risk-taking in communication. While some limitations were noted, particularly regarding the chatbot’s ability to handle context-dependent cultural variations, the findings support the integration of AI chatbots into blended ESL instruction. This study contributes to the growing body of research advocating the use of intelligent conversational agents to foster socio-pragmatic competence, especially in underrepresented educational contexts.
Artificial intelligence (AI) is changing how we teach speaking in a second language (L2) by making pronunciation feedback and interactive speaking practice available outside the classroom. Two main types of tools are in use today: automatic speech recognition (ASR) systems that turn speech into text and give corrective signals, and conversational agents (such as chatbots and speech-enabled assistants) that mimic conversation and keep learners talking. Research increasingly indicates that these tools can facilitate quantifiable improvements in specific facets of pronunciation and speaking performance, especially when learners are provided with frequent practice opportunities and when AI-mediated feedback is augmented by peer or teacher support. Classroom studies show that, compared with traditional teaching, ASR-supported practice can improve students' pronunciation and speaking test scores in some contexts, and can also boost students' confidence and willingness to speak. Significant challenges remain, however. ASR does not always rate accented speech in ways that match human perception, and its feedback can vary with the speaker, the task, and the phonological target. Conversational agents can increase the quantity of interaction, but they may not reflect the complexity of real-life language and may favor speech that is easy for machines to understand over speech that is easy for people to understand. Emerging syntheses also underscore ethical risks (privacy, data retention, bias) and pedagogical risks (overreliance, diminished learner autonomy if feedback is not explicitly taught). This article examines research trends, suggests a pragmatic approach for assessing AI tools for speaking fluency and pronunciation, and delineates the circumstances in which advantages are most likely to translate to actual communicative competence.
Students' Feedback Requests and Interactions with the SCRIPT Chatbot: Do They Get What They Ask For?
Building on prior research on Generative AI (GenAI) and related tools for programming education, we developed SCRIPT, a chatbot based on ChatGPT-4o-mini, to support novice learners. SCRIPT allows for open-ended interactions and structured guidance through predefined prompts. We evaluated the tool via an experiment with 136 students from an introductory programming course at a large German university and analyzed how students interacted with SCRIPT while solving programming tasks, with a focus on their feedback preferences. The results reveal that students' feedback requests seem to follow a specific sequence. Moreover, the chatbot responses aligned well with students' requested feedback types (in 75% of cases), and the chatbot adhered to the system prompt constraints. These insights inform the design of GenAI-based learning support systems and highlight challenges in balancing guidance and flexibility in AI-assisted tools.
This study investigated the instructional effects of learner uptake following automatic corrective recasts from artificial intelligence (AI) chatbots on the learning of the English caused-motion construction. Sixty-nine novice-level EFL learners in a Korean high school were recruited. Results from the elicited writing tasks (EWT) revealed statistically significant gains in both immediate and delayed posttests for the production of the English caused-motion construction by experimental group participants. The relationship between learner uptake from AI chatbots’ corrective recasts and learning of the construction was also analyzed: learners’ successful repair following the chatbots’ corrective recasts was positively correlated with learning gains in the two EWT posttests. The study concludes by highlighting the significance of noticeability in AI chatbots’ corrective feedback for foreign language learning.
Peer interaction constitutes a focal site for understanding learning orientations and autonomous learning behaviors. Based on 10 h of video-recorded data collected from small-size conversation-for-learning classes, this study, through the lens of Conversation Analysis, analyzes instances in which L2 learners spontaneously exploit learning opportunities from on-task public talk and make them relevant for private learning in sequential private peer interaction. The analysis of extended negotiation-for-meaning practices in peer interaction displays how L2 learners orient to public repair for learning opportunities in an immediate manner and, in so doing, how different participation frameworks are utilized to maximize their learning outcomes. As these extended repair practices are entirely managed by learners themselves, they yield both efficient and inefficient learning outcomes. Findings reveal that learners frequently resort to their peers to recycle the focal trouble words for learning opportunities, shifting their participating role from onlooking audience to active learners. By reporting the rather under-researched post-repair negotiation-for-meaning sequence in peer interactions, the study highlights the connection between on-task classroom activities and private learning, contributing to understanding private learning behaviors in the language classroom and learning as a co-constructed activity locally situated in peer interaction.
This study investigates the use of Artificial Intelligence (AI)-powered, multi-role chatbots as a means to enhance learning experiences and foster engagement in computer science education. Leveraging a design-based research approach, we develop, implement, and evaluate a novel learning environment enriched with four distinct chatbot roles: Instructor Bot, Peer Bot, Career Advising Bot, and Emotional Supporter Bot. These roles, designed around the tenets of Self-Determination Theory, cater to the three innate psychological needs of learners - competence, autonomy, and relatedness. Additionally, the system embraces an inquiry-based learning paradigm, encouraging students to ask questions, seek solutions, and explore their curiosities. We test this system in a higher education context over a period of one month with 200 participating students, comparing outcomes with conditions involving a human tutor and a single chatbot. Our research utilizes a mixed-methods approach, encompassing quantitative measures such as chat log sequence analysis, and qualitative methods including surveys and focus group interviews. By integrating cutting-edge Natural Language Processing techniques such as topic modelling and sentiment analysis, we offer an in-depth understanding of the system's impact on learner engagement, motivation, and inquiry-based learning. This study, through its rigorous design and innovative approach, provides significant insights into the potential of AI-empowered, multi-role chatbots in reshaping the landscape of computer science education and fostering an engaging, supportive, and motivating learning environment.
Psycholinguistics has provided numerous theories that explain how a person acquires a language and produces and perceives both spoken and written language, including theories of proceduralization. Learners of English as a foreign language (hereafter referred to as EFL learners) often find it difficult to achieve oral fluency, a key construct closely related to the mental state or even mental health of learners. According to previous research, this problem could be addressed by the mastery of formulaic sequences, since the employment of formulaic sequences can often promote oral fluency in the long run, reflected in the positive relationship between formulaic sequence use and oral fluency. However, there are also findings contradicting the abovementioned ones, without adequate explanations. This study aims to explore the roles of formulaic sequences in oral fluency, taking into account the relationship between formulaic sequence use and oral fluency. This study investigated 120 spoken narratives produced by Chinese EFL learners, using both quantitative and qualitative methods, combined with artificial intelligence techniques. Results of canonical correlation analysis showed that the frequency of formulaic sequences was significantly related to speed fluency (r = 0.563, p < 0.001) and breakdown fluency (r = 0.360, p = 0.001), while the variety of formulaic sequences was significantly related to repair fluency (r = 0.292, p = 0.035). Case studies further demonstrated that formulaic sequences could contribute to oral fluency development by promoting speed and reducing pausing when retrieved holistically, but they sometimes lost processing advantages when retrieved and processed in a word-by-word manner. 
The inappropriate use of formulaic sequences also neutralized the facilitative effects of formulaic sequences on repair fluency and could mirror speakers’ occasional tendency to sacrifice repair fluency for the improvement of speed and breakdown fluency when using formulaic sequences. Pedagogical implications were provided accordingly to promote sustainable oral fluency development through the use of formulaic sequences.
In task-based language teaching methodology, one way to familiarise learners with the upcoming task is through question-and-answer sequences that target task-relevant topics. In such instances, learners’ prior experiences and knowledge states may come into conflict with expectations held by educators, and their responses may require elaboration. Using multimodal conversation analysis, this study examines how these problems of acceptability arise in pre-task interaction at an experiential language-learning institution. A primary resource for signalling acceptability trouble is to seek confirmation of some previously stated information. Because this method masks the problem as one of hearing or understanding, the learner is left to diagnose the problematic nature of the response, occasionally resulting in a misconstrual. If so, a secondary resource used by educators is to employ post-expansive questions that locate a specific issue within the learner’s response. This unambiguously exposes the interactional trouble and works to pursue an alternative response. Resembling a natural ordering of multiple repair initiators from weak to strong, this study demonstrates the tendency for post-expansive questions to be deployed after confirmation checks fail to solicit a satisfactory account. Analysis of such sequences provides insight into the role of repair practices in promoting learner participation in pre-task interaction.
Situated L2 pronunciation instruction during small-group robot-assisted language learning activities
Chatbots and other conversational agents based on speech recognition and processing technologies have been gaining ground in the field of language education. Although previous research has shown that automatic recognition of second language (L2) speech is difficult, little attention has been paid to how L2 teachers and learners interact with such technology when used as an interactional participant in classroom settings. Addressing this gap, this article provides a qualitative analysis of interactional practices of unplanned and situated pronunciation instruction as a teacher and 10- to 13-year-old young learners of L2 English complete robot-assisted language learning (RALL) activities in a primary school English-as-a-foreign-language (EFL) context in Finland. Drawing on 14 hours of video recordings, we use multimodal conversation analysis (CA) to analyse extended repair sequences that involve interactional problems related to word recognition by a social robot. Through a sequential analysis of selected data extracts, we show how the teacher and learners correct these problems by establishing a corrective focus for providing instruction on and modifying learners’ word-level pronunciation, such as the quality of individual sounds or word stress. From the teacher’s perspective, this consists of drawing learners’ attention to pronunciation details by highlighting sounds in learners’ talk and the robot’s talk, using embodied conduct, and modelling a target-like word pronunciation. Our findings shed light on the interactional organisation of RALL activities and some of the real-life consequences of limitations in speech recognition technologies for L2 teaching and learning interactions with conversational agents. 
The work conducted by the teacher to convert interactional troubles into meaningful learning opportunities suggests that human agency is needed to optimally guide and mediate language learning interactions with conversational agents based on artificial intelligence (AI) and automatic speech recognition (ASR), as these agents are less capable of showing the kind of interactional and instructional adaptation that is part of human–human interaction.
This study examines the interactional competence (IC) and intercultural communicative competence (ICC) of beginner Chinese L2 learners through interactional engagement. Through conversation analysis (CA) of oral assessment transcripts and video recordings of two high-scoring (HS) and two low-scoring (LS) students, the study identifies key differences. HS students collaborate more effectively with the interlocutor, completing conversational sequences and employing a variety of repair strategies. LS students, by contrast, display more delayed or misaligned next actions and rely more on other-initiated repairs. Both groups experience overlaps, but HS students use them for collaborative engagement, while LS students’ overlaps more often result from delayed turn-taking. Clarification sequences appear only among HS students. ICC-related behaviors emerge when participants orient to culturally relevant practices, such as address terms or role-appropriate responses. These findings demonstrate that communicative competence extends beyond grammar and pronunciation. Turn and sequence management reflect participants’ moment-by-moment orientations to both interactional and intercultural contingencies.
Interactional competence (IC) is crucial for L2 learners to communicate effectively across diverse contexts (Roever [2022], Teaching and testing second language pragmatics and interaction: A practical guide). However, providing opportunities to practice interactive oral skills is time and resource intensive, and assessing IC is challenging due to the need for diverse contexts and interlocutors to capture the dynamic nature of communication. Spoken dialogue systems (SDS) can act as interlocutors, enabling learners to demonstrate linguistic skills, but often result in rigid, transactional exchanges (Timpe‐Laughlin et al. [2024], Computer Assisted Language Learning, 37, 149–178). Large language models (LLMs), such as ChatGPT, which are purportedly capable of generating human‐like responses (Kostka & Toncelli [2023], TESL‐EJ, 27(3), 1–19), may offer greater flexibility, enabling interactive conversations that could effectively elicit phenomena of IC. In this study, we explored interactional phenomena in the oral output of 50 participants who engaged in the same role‐play task with an SDS and an LLM interlocutor, respectively. Participants' interactions were audio‐recorded and analyzed for interactional features, including openings and closings, repairs, small talk, and recipient design. Findings revealed that while both systems elicited IC phenomena, participants engaged in significantly more repair sequences and recipient design in SDS interactions, likely due to the system's more constrained processing capabilities. By contrast, the LLM played a more active role in carrying the conversation, making inferences about participant intent while leaving fewer opportunities for participants to show their IC. These differences highlight distinct interactional affordances of each system. We discuss the affordances and limitations of both systems for practicing and assessing oral IC skills, suggesting a hybrid approach to move ahead.
Phenomenon: I examine simulation-based communication skills training as a practice of metadiscourse (or talk about talk) on three levels: (1) the conceptualization of communication as a skill; (2) the use of simulation-based approaches for teaching and assessing communication skills; and (3) the purposes of communication-skills training, specifically as it relates to outcomes of skilled communication. Within each, I explicate the following tensions: (1) communication as an individual skill vs. communication as a distributed dynamic; (2) communication as a process of information exchange vs. communication as mutual accountability; (3) communication for institutional outcomes vs. communication for multiple purposes. Approach: I use discourse-analytic approaches to reflexively analyze communication-skills training practices. My data are from a communication-skills practice exam for third-year medical students with simulated patients. The purpose of my analysis is to illustrate the metadiscursive tensions as they occur via (1) question-and-answer sequences; (2) repairs; and (3) orientations to institutional protocols. Findings: Through my analysis, I illustrate the affordances and constraints of these metadiscursive tensions. (1) Communication as an individual skill affords concrete and systematic frameworks for teaching and assessment, while communication as a distributed dynamic emphasizes the joint nature of talk and patient-centeredness. Additionally, simulation is a distinct genre of communication, specifically in how simulated patients communicate differently than actual patients, which can limit their utility for individual assessment. (2) Communication skills and communication-skills teaching embody the paradigm of cause and effect, which is in tension with communication as a process of mutual accountability. 
Conceptualizing communication skills and communication-skills learning as interventions in the possession of knowledge/skills affords claims of effectiveness but at the risk of essentializing students and patients as data points. (3) The institutional purposes of communication-skills training are often associated with positive outcomes for patients and providers but such findings often oversimplify the multifunctionality of talk, namely who we show ourselves to be through communication. Insights: To draw on the affordances of metadiscursive practices, I suggest incorporating video-based reflexive dialogues as addendums to simulation-based learning sessions. In video-based reflexive dialogues, medical students and simulated patients watch their simulated consultations together and discuss mutual goals, what communication strategies worked toward those goals, and what else talk accomplished. Retooling communication-skills teaching and learning to promote reflexivity as a “meta-skill” provides learners and practitioners the resources to reflect on and act in unison with patients toward mutual goals of health and well-being.
Establishing recipiency, an indispensable ingredient and manifestation of sustaining intersubjectivity, involves the continuous monitoring of an ongoing turn in an interaction. The present study describes how interactants attending a freshman common course in an Ethiopian university elicit and display recipiency in divergent L2 contexts exhibiting designedly incomplete utterances (DIUs). Naturally occurring video-recorded classroom interactions of the purposively selected interactants were analyzed within the Conversation Analytic framework to show how interactants elicit and display recipiency. By deploying reactive tokens, incipient speakers negotiate their rights to shape and reshape the trajectories of an ongoing turn, thereby displaying recipiency. This contributes to a better understanding of how interactures, in this case the establishment of intersubjectivity and L2 contexts, interplay and unfold in moments of DIUs. Also, viewing interactants as incipient speakers, and thereby articulating turns with recipients in view, is a condition for sustaining intersubjectivity through active engagement; this requires an unwavering belief in recipients’ stake in the interactional exchange. Practically, attending to recipients' states across the different trajectories of interactional development, especially in moments of divergent L2 contexts that exhibit DIUs, would be illuminating: the floor-holding speaker's use of resources to elicit and display recipiency, attuned to incipient speakers' levels of recipiency, would enhance the possibilities for intersubjectivity.
Within the communicative language teaching approach, current instructional materials often lack explicit guidance or fail to provide L2 learners with a wide range of resources in the target language. Conversation analysis (CA), which focuses on authentic talk, has been proposed as a potential resource for language classrooms. This study examines the effectiveness of using CA as a pedagogical approach in an EFL classroom and its impact on learners’ attitudes toward English language learning. The study engaged eight adult learners in a structured program, encompassing a pre-test, a 4-week explicit CA-informed instruction, and a post-test. After four weeks, learners demonstrated progress in their knowledge and skills of interaction, different aspects of English speaking, interactional competence, and confidence in speaking English. Additionally, the CA-informed instruction positively influenced learners’ attitudes toward English language learning and their appreciation of interactional features. The results strongly suggest that language teachers should consider incorporating CA insights into their teaching practices to enhance both linguistic and attitudinal outcomes.
This paper explores the reflexive relationship between pedagogy and interaction in second language (L2) classrooms, employing Seedhouse’s (2004) model and Conversation Analysis (CA) methodology. The analysis covers three contextual dimensions: form-and-accuracy, meaning-and-fluency, and task-oriented contexts. The study reveals the dynamic interplay between pedagogical focus and interactional organization, showcasing how participants negotiate linguistic forms, repair sequences, and engage in turn-taking to accomplish pedagogical goals. Despite critiques emphasizing the emic perspective’s limitations, the paper underscores CA’s contribution to understanding language use in teaching, offering pedagogical insights for language educators.
Although L2 learners are often encouraged to provide feedback on each other's performance in paired/group interaction tasks, how they jointly engage in feedback talk in ways conducive to establishing shared understanding of institutionally preferred actions is largely unknown. Using multimodal conversation analysis, this study examines real‐time peer feedback interactions in a synchronous video‐mediated study group and uncovers the ways L2 learners expand on each other's feedback contributions to collaboratively accomplish peer feedback in and through interaction. The analysis explicates (a) accounting as a justifying device and (b) depersonalizing as a mitigating device. The findings show that the participants, in follow‐up feedback turns, attend to the local interactional circumstances created by peer response, tailor their feedback to the institutional goals of the focal setting, contribute to intersubjectivity, and pursue the feedback recipient's agreement and strong display of uptake. The analysis brings insights into the construct of L2 Interactional Competence (IC) necessary for following up on peer feedback turns. The study discusses the practical implications of the focal phenomenon for oral assessment preparation classes.
Abstract Adopting a conversation analysis approach to classroom interaction, this study investigates how learners spontaneously provide grammar explanations without prior solicitation from the teacher. While previous research has primarily focused on teachers’ explanatory practices, this article analyses learner-initiated explanations and teacher responses to these explanations, in order to provide a more comprehensive overview of explanatory practices in the classroom. The study draws on 50 hours of video-recorded interactions in a beginner-level second language (L2) French classroom with eight adult learners in Switzerland. The analysis shows that learners deploy a range of multimodal resources to deliver their explanations, including syntax, prosody and embodied actions. In particular, they formulate topic-related assertions, thereby displaying linguistic expertise and challenging knowledge asymmetries in the classroom. These findings contribute to a deeper understanding of learner initiatives in the L2 classroom and shed light on the specific dynamics of participation in language classes for beginner-level adult learners.
ABSTRACT This study uses multimodal conversation analysis to examine the influence of learning materials on a second language classroom task interaction. The data comes from tasks where beginner-level adult learners practice shop encounters in a roleplay. Two datasets are used, in which learners use two different learning materials in an otherwise similar task: 1) pictures of items to be bought on separate cards and 2) several pictures of items on one sheet of paper. The focus is on sequences where a learner in the buyer role expresses their intention to buy an item (‘buying turns’) and the actions that follow. The cards support the task as a concrete and joint activity, where a buying turn is frequently ascribed as a request and a card transfer is a visible resource to progress. With the sheet of paper, the task is progressed more verbally, usually by talking about prices or announcing more items to be bought. However, the non-transferability of an item can decrease the response relevance of buying turns and cause uncertainty about what is expected in the task. The findings contribute to the understanding of the use of learning materials in interaction, especially their implications as physical objects.
The paper explores teachers' interactional uses of stereotypes and prejudices in the Italian L2 classroom. Drawing from video-ethnographic research in a voluntary association, this study adopts a discursive approach to stereotypes and prejudices, analyzing their pragmatic uses during classroom activities. Even though previous literature has mostly argued against these social devices, the analysis illustrates that teachers make use of stereotypes and prejudices to pursue their local aims in the classroom. Specifically, teachers mobilize stereotyped talk to achieve specific social and didactic aims (e.g., to explain a lexical item or to prompt laughter). In the discussion, we critically consider the risks and opportunities of this kind of practice and advance a few implications for teachers' professional practice, arguing for the relevance of video-based teacher training.
Using multimodal conversation analysis, this study examines how assessments function as interactional resources for managing second language (L2) discussion topics in conversation for learning (CFL) contexts. Drawing on nine hours of video-recorded discussions, we analyse how students initiate and expand topics through assessments directed at either the primary speaker or third parties. Our guiding research question is: How do first-position assessments in CFL discussions shape participation, topic progression, and the management of interactional contingencies? The analysis reveals that assessments directed at the primary speaker, whether positive or negative, prompt elaboration or justification, leading to extended participation. In contrast, assessments of third parties produce different interactional outcomes: positive assessments foster shared alignment without necessitating further elaboration, while negative assessments invoke moral accountability, prompting participants to justify or defend the assessed third party. Overall, this analysis highlights students’ collaborative efforts in managing assessments to create opportunities to practice L2. Furthermore, the assessment trajectories reflect the participants’ concern with managing social relationships. This study advances research on assessment-in-interaction and CFL while providing valuable insights for designing L2 speaking tasks that foster more dynamic and participatory discussions.
This article examines how second language (L2) interactional competence is manifested in students’ use of “and”‐prefaced turns when doing meaning‐focused oral tasks in pairs and small groups. Drawing on video recordings from English‐as‐a‐foreign‐language upper‐secondary classes recorded in Czechia and Finland, 86 sequences involving “and”‐prefaced turns were scrutinized using multimodal conversation analysis, focusing on language, gaze, and material resources. The findings suggest that by producing “and”‐prefaced turns, students orient to task progression. These turns have two functions: task managerial and contribution to the emerging task answer. By using task‐managerial “and”‐prefaced turns, the current speaker invites another student to participate, while in “and”‐prefaced contributions to the task answer, a participant adds to, generalizes, or modifies the previous task answer. The analysis shows that students mobilized their L2 interactional competence in producing “and”‐prefaced turns in close coordination with embodied resources and with respect to the spatio‐material surroundings and the nature of the task. These findings contribute to the multimodal reconceptualization of the grammar–body interface and research on turn‐initial particles within L2 interactional competence.
This study explores discourse markers (DMs) as they occur with compliment responses (CRs) in classroom interactions among Iranian learners of English as a foreign language (EFL). Using the tenets of conversation analysis, this paper draws on data from teacher–student interactions in several private language institutes in Iran. After audio-recording and transcription of the compliment–response exchanges, 148 DMs were identified within the responses. These sequences were analyzed to find out how DMs are combined with four distinct CR types: accept, mitigate, reject, and request interpretation. DMs were also identified and categorized based on their frequency of occurrence and semantic features to allow comparison with previous findings. The results of this study revealed that Iranian EFL learners resorted to a limited number of DMs in responses to teacher‐generated compliments, with “linking” DMs being the most favored type. Moreover, some DMs accompanied a specific CR type, helping the complimentee form an intended illocutionary force. It was also observed that a DM or a combination of these markers can stand alone as a legitimate and functional response to compliments, which further reveals that DMs can contribute to both semantic and pragmatic meaning. These findings clearly suggest that explicit teaching of DMs in English language classes should be taken into consideration, as these linguistic elements can provide learners with important tools to convey their intended meaning more smoothly and effectively.
Gestures produced by language learners have a positive impact on interactions; however, few studies have examined natural conversation data with attention to learners’ spoken language proficiency level. This study investigates gesture use among learners of English as a second language with varying language proficiency levels (beginner, intermediate, and advanced) to determine whether gesture use and type (e.g., iconic, deictic, metaphoric, and beat gestures) differ by language proficiency level. This study examined 17 video-recorded dyadic interactions in English consisting of mixed-level and same-level pairs. Quantitative analysis followed by a data-driven approach demonstrated that more advanced learners employed gestures with speech more frequently than other groups. During interactions, iconic gestures were used more often by the beginner group, while deictic gestures were employed more by the advanced group. Moreover, the function of the gestures produced by each group during the interactions appeared to be qualitatively varied. These results indicate that gesture use and type may relate to learners’ language proficiency levels. This study has revealed significant differences in gesture use among learners of English as a second language with varying language proficiency levels, providing insights into learners’ cognitive processes during verbal communication.
No abstract available
Extensive research on next speaker selection in L2 classrooms has predominantly examined teacher-initiated nominations (e.g., Mortensen, 2008; Lauzon & Berger, 2015) or student self-selection under teacher coordination (Waring, 2011). This study shifts the focus to how L2 Chinese learners accomplish learner-initiated self-selection in a real-world, technology-mediated environment without teacher presence or institutional scaffolding. Building on Sacks et al. (1974), we reconceptualise learner-initiated self-selection as an interactional trajectory – a sequentially and multimodally achieved process, rather than a competitive act of floor-taking. Using Multimodal Conversation Analysis (CA), we examine interactions in the Chinese Digital Kitchen (CDK), a task-based language learning environment where 72 beginner-to-advanced L2 Chinese learners cooked authentic recipes using the Linguacuisine App (Seedhouse et al., 2019). The app provided video, audio, image, and text instructions, but learners received minimal guidance and no teacher support. Analysis of the cooking sessions identifies four recurrent trajectories of learner-initiated self-selection: knowledge-display, sequential-organisation, technology-mediated opportunity, and embodied. These trajectories are not mutually exclusive but form overlapping pathways through which learners coordinate turns, manage task progression, and negotiate epistemic and procedural alignment. Theoretically, this study contributes to CA-for-SLA by reframing self-selection as a distributed, multimodal accomplishment shaped by technological and material affordances rather than institutional regulation. It extends CA-for-SLA into non-institutional, real-world environments, showing how learners mobilise verbal, embodied, and digital resources to self-organise participation and task completion. 
These findings offer portable analytic categories for examining learner-initiated interaction in informal, teacher-absent, technology-mediated L2 tasks, and inform the design of multimodal, learner-directed learning environments.
Technology-mediated task settings are rich interactional domains in which second language (L2) learners manage a multitude of interactional resources for task accomplishment. The affordances of these settings have been repeatedly addressed in computer-assisted language learning (CALL) literature mainly based on theory-informed task design principles oriented to the elicitation of structured learning outcomes. However, such focus on design and outcome has left unexplored the great diversity of emergent interactional resources that learners deploy in situ. With this in mind, and using conversation analysis (CA) as the research methodology, this study sets out to describe the task engagement processes of L2 learners who collaboratively engage in online tasks. A close look into screen-recorded interactions of geographically dispersed participants shows that they orient to numerous context-specific interactional resources, which also locates a process-oriented interactional development site for further examination. To this end, the study presents a longitudinal conversation analytic treatment of a focal participant’s context-specific interactional behaviors. The findings explicate the emergence and diversification of interactional resources, thus evidencing task-induced development of L2 interactional competence (IC). By providing participant-oriented, situated, qualitative insights into interactional development in and through online task-oriented L2 interactions, the study contributes to CALL, task design, and L2 IC based on methodological underpinnings of CA.
This conceptual article explores the role of pedagogical mediation in raising Japanese English as a foreign language (EFL) learners’ awareness of cross-culturally diverse roles of silence and conversational repair strategies during turn-taking in second language (L2) interaction, as seen from an interactional perspective. This study delves into the nexus of scholarly and pedagogical perspectives, accommodating Japanese EFL learners’ interactional needs to self-mediate their own silence as an interactional resource by using repair strategies in L2 interaction. It specifically examines the pedagogical approaches reflected in English language teaching (ELT) materials designed for Japanese EFL learners, aiming to raise awareness of the multi-faceted use of silence and repair as part of cross-culturally invisible turn-taking practices from three perspectives: (1) pedagogical approaches involving silence in L2 interaction in scholarly articles, (2) learning materials produced specifically for Japanese EFL learners, and (3) Japanese EFL learners’ perspectives on Conversation Analysis-informed learning resources identified in empirical studies. Drawing on this analysis, this study aims to deepen our understanding of current practices and bridge the gap between theory and practice to facilitate L2 learners’ interactional repertoires through material development informed by a holistic perspective.
Abstract A micro-level analysis of second language (L2) peer feedback interactions specifically aimed at improving interactional abilities is lacking. Drawing on multimodal Conversation Analysis to examine 20 h of screen-recorded interactions of L2 learners in a video-mediated study group setting, this study demonstrates that in the collaborative accomplishment of L2 feedback in talk-in-interaction, peers’ follow-up contributions expand others’ feedback turns and open up space for further sequences of talk simultaneously. The follow-up contributions are realized through four interactional practices: (1) advising, (2) reformulating, (3) counterclaiming and (4) clarification-seeking. It is through such follow-up contributions that L2 learners change speakership, build turns contingent on previous contributions, perform diverse social actions, from resisting to clarifying, display their understanding and contribute to the ongoing feedback talk. We argue that being able to produce follow-up contributions is a crucial part of one’s L2 Interactional Competence (IC) and becomes a valuable interactional practice in securing intersubjectivity among the participants. The findings inform L2 language pedagogies about increasing learners’ sensitivity to the intricacies of dialogic and collaborative feedback talk from a micro-analytic perspective.
The emergence of Large Language Models (LLMs) has opened new possibilities for language learning through conversational interaction with chatbots. Yet, little empirical evidence exists on how students experience such interactions and how corrective feedback should be provided. Research suggests that immediate corrective feedback is generally more effective than delayed feedback. Nevertheless, learners' perception of this effectiveness and their preferences for feedback timing, particularly in the domain of Computer-Assisted Language Learning (CALL), remain underexplored. This study investigates the feasibility of providing immediate feedback and examines the impact of feedback timing on user experience and grammar learning gains in English. An in-the-wild experiment was conducted with 66 L2 English learners, who integrated chatbot sessions into their English course as an extracurricular activity over one semester. Participants were randomly assigned to two groups receiving feedback either during or after the conversation. Findings reveal no significant difference in learning gains, but immediate feedback enhanced user experience, leading to overall positive perceptions of the chatbot. Additionally, we explore users' perceptions of the chatbot's social role and personality, offering a roadmap for future enhancements. These results provide valuable insights into the potential of LLMs and chatbots for language learning.
Abstract Categories are inference-rich and do implicative work, storing a great deal of knowledge that members of a society have about that society. Drawing on Membership Categorization Analysis and sequential analysis from Conversation Analysis, this study explores participants’ categorial orientations in talk-in-interaction that is produced and/or treated as humorous in Second Language (L2) classrooms. More specifically, this study presents an in-depth analysis of the way category-activity puzzles, which display incongruous combinations of membership categories and category-bound activities, are formed and made relevant and consequential in talk that is produced and/or treated as humorous. In doing so, it unpacks how participants invoke, negotiate, and deal with category-activity puzzles as resources for producing and/or treating utterances as humorous in L2 classrooms. The analysis also illustrates the way participants use their understanding of common-sense knowledge and category memberships as a resource in managing and negotiating incongruities created through category puzzles, which are treated as humorous. As such, this study contributes to the growing body of MCA and humor studies in classrooms and advances our understanding of the categorial orientations of participants in L2 classrooms.
Abstract Although interaction-based research has investigated various second language (L2) testing formats, less attention has been given to group-based assessments compared to the more widely studied oral proficiency interview (OPI) and paired L2 assessment formats. The current study draws upon around 7 h of video-recorded group-based assessments at a university-affiliated higher education (HE) institution in the UK. During these assessments, test-takers are required to use English to engage in negotiations in which they discuss various options and decide which – as a group – to choose. However, in the midst of the L2 talk, candidates may veer from test-relevant activities, which can have a negative impact on their grades. As such, practices for re-aligning to test-relevant activities are highly important for ensuring that test-takers are assessed in the best light. This study adds to Conversation Analysis research on L2 Interactional Competence by revealing the methods test-takers use to re-align to test-relevant activities and re-orient to the institutionality of the testing interaction. This study contributes to Interactional Competence research in L2 testing settings and discusses the ways in which our findings can inform learner and assessor teaching and training materials.
This paper examines how students’ self-regulated learning (SRL) abilities influence their intents and engagement goals during chatbot-assisted argumentative writing. Through a systematic analysis of 229 conversation logs from 40 students, 15 distinct intents were identified and categorized into five engagement goals. The findings reveal that students with higher SRL abilities engage more actively with chatbots, particularly through the “Discuss” engagement goal, which involves seeking feedback and alternative perspectives. While the “Find” goal was the most frequently used across participants, its frequency also correlated positively with SRL abilities, highlighting a preference for active engagement. In contrast, the passive “Piggyback” goal was consistently employed regardless of SRL abilities, suggesting strategic use for convenience or cognitive load reduction. These results offer insights into the interplay between SRL abilities and chatbot interactions, providing a foundation for designing educational tools that foster active engagement and support diverse learning strategies.
This study explores the integration of ChatGPT into Social Learning Analytics (SLA) to support programming education among computer science students at Mustapha Stambouli University, Mascara, Algeria. Utilizing a mixed-methods approach, the research combines quantitative surveys with qualitative analysis of one example of recorded interactions and of interviews to examine the effectiveness, challenges, and perceptions of ChatGPT’s use in programming tasks across Arabic, French, and English. The study involved 57 students and five teachers, providing a comprehensive view of ChatGPT’s impact on learning experiences, engagement patterns, and programming performance. Results indicate that ChatGPT is frequently used as a supplementary tool, especially for programming-related queries, debugging, and last-minute assistance before deadlines. The tool’s adaptability to students’ needs, combined with its ease of use, enhances its perceived value in supporting independent learning. However, the limitations of the free version—such as restricted access, slower response times, and occasional inaccuracies—were frequently cited as barriers to consistent, effective use. Teachers acknowledged ChatGPT’s role in easing instructional burdens but emphasized the need for critical oversight to prevent over-reliance on AI-generated content. Ethical concerns regarding data privacy, academic integrity, and the quality of AI feedback were highlighted as key issues requiring attention. Interestingly, a significant portion of students expressed the belief that AI, including ChatGPT, could potentially replace human programmers in the near future, reflecting both optimism and concern about the evolving role of AI in the field. Despite this, educators maintained that while ChatGPT can augment programming education, human intuition, creativity, and contextual understanding remain irreplaceable.
The study concludes that ChatGPT’s integration into SLA offers substantial opportunities to enhance educational support and enrich data on student learning behaviors. However, addressing accessibility issues, enhancing multilingual support, and mitigating ethical challenges are critical for maximizing the tool’s effectiveness. The findings underscore the importance of a balanced approach that leverages AI’s strengths while maintaining the essential role of human expertise in education and programming.
The final grouping constructs a complete research map spanning the micro to the macro and the technical to the social. The body of work first grounds itself in the methodological foundations of conversation analysis (CA), closely examining the micro-level linguistic features of human-machine interaction; second, it verifies the effectiveness of AI in language acquisition through empirical studies; next, it uses learning analytics techniques to explore the dynamic trajectories of interaction; it also covers the engineering perspective of system design and evaluation; finally, it extends to sociocultural and critical discourse analysis, taking traditional classroom interaction as a comparative baseline. Together, this framework comprehensively covers the frontier directions of research on student-AI chatbot interaction.