教师人机(人智)协作(协同)
人机协同教学的理论框架与共生模式构建
该组文献侧重于从宏观和中观层面探讨人机协作的逻辑起点与系统模型。研究者提出了DOT框架、iSTAR、REALM、双路径模型及社会-技术系统框架等理论模型,旨在将AI从简单的辅助工具转变为教学过程中的"智慧伙伴",界定人机共生而非替代的进化路径。
- The Autonomous Knowledge Frontier: AI Systems Redefining Human Learning and Infinite Knowledge Flow(Subhasis Kundu, 2025, The American Journal of Engineering and Technology)
- Human-AI Collaboration in Translation Teaching: A Model for Effective Pedagogy in the AI Era(Yuan Gao, Zirun Gan, Shixu Yuan, 2025, Proceedings of the 2025 International Conference on Educational Technology and Artificial Intelligence)
- Research on the Reconstruction of the Teaching Model for University Ideological and Political Theory Courses from the Perspective of Human-Computer Collaboration(W. Tang, 2025, International Educational Research)
- Exploring the Path of AIGC and AI Agents Empowering Front-End Teaching and Learning(Dongxing Wang, Wang Yu, Weixing Wang, 2025, Journal of Contemporary Educational Research)
- Redefining Teacher-AI Collaboration: a Study of a Collaborative Design Framework for Context-Aware English Lesson Plans(Yitong Dong, 2025, International Journal of Computer Information Systems and Industrial Management Applications)
- Integrating Generative AI with Human-Centered Pedagogy: An Innovative Path for Vocational Education(Tingjie Xu, Yisong Chen, 2025, Proceedings of the International Conference on Implementing Generative AI into Telecommunication and Digital Innovation 2025)
- From "Technical Assistance" to "Human-Machine Synergy": Reconstruction and Innovation of Scenario-based Teaching Models in Civics Courses from the Perspective of Generative AI(Yuchen Liu, Feng Zhong, Yingmei Li, 2025, International Journal of Education and Social Development)
- Co-Intelligence in the Classroom: The DOT Framework for AI-Enhanced Teaching and Learning(M. Azukas, David C. Gibson, 2025, AI-Enhanced Learning)
- Construction of Teaching Resource Optimization Model from the Perspective of Human-Machine Collaboration(Chuxun Wang, Pingzhang Gou, Wenqing Li, 2025, Proceedings of the 2025 6th International Conference on Education, Knowledge and Information Management)
- An Adaptive Instructional Architecture for Training and Education(D. Nicholson, C. Fidopiastis, Larry D. Davis, D. Schmorrow, K. Stanney, 2007, No journal)
- The collaboration of AI and teacher in feedback provision and its impact on EFL learner’s argumentative writing(Meina Luo, Xinyi Hu, Chenyin Zhong, 2025, Education and Information Technologies)
- Analytic Information Systems in the Context of Higher Education: Expectations, Reality and Trends(I. Guitart, J. Conesa, 2015, 2015 International Conference on Intelligent Networking and Collaborative Systems)
- Types of teacher-AI collaboration in K-12 classroom instruction: Chinese teachers’ perspective(Jinhee Kim, 2024, Education and Information Technologies)
- A Framework for Constructing a Technology -Enhanced Education Metaverse: Learner Engagement With Human–Machine Collaboration(Zhongmei Han, Y. Tu, Changqin Huang, 2023, IEEE Transactions on Learning Technologies)
- Educational futures of intelligent synergies between humans, digital twins, avatars, and robots - the iSTAR framework(A. Tlili, Ronghuai Huang, Lin Xu, Ying Chen, Lanqin Zheng, Ahmed Hosny Saleh Metwally, Ting Da, Tingwen Chang, Huanhuan Wang, Jon Mason, Christian M. Stracke, Demetrios Sampson, Curtis J. Bonk, 2023, Journal of Applied Learning & Teaching)
- REALM: A relevance-driven layered protocol for human–AI collaboration in STEM education [Special Issue on Artificial Intelligence for Education: A Signal Processing Perspective](Maoquan Zhang, B. Raytchev, Xiujuan Sun, 2026, IEEE Signal Processing Magazine)
- Exploring the Relationship between Human Teachers and Machines in the Age of Intelligence-Human-Machine Symbiosis(Keying Wu, Yuanyuan Chen, Wenqi Jiang, 2025, 2025 5th International Conference on Artificial Intelligence and Education (ICAIE))
- Building and Implementing a Human-AI Synergistic Education-Research Ecosystem Through Multi-Agent Collaboration(Yue Wang, Yunzhen Liang, Ziqi Shen, Lin Guo, 2025, 2025 5th International Conference on Artificial Intelligence and Education (ICAIE))
- The Human Teacher, the AI Teacher and the AIed-Teacher Relationship(Josiah Koh, Michael Cowling, Meena Jha, K. Sim, 2023, Journal of Higher Education Theory and Practice)
- Phased Evolution of Teacher-AI Collaboration for the Effective Mentoring in Teacher Education(Hideaki Yoshida, Yoetsu Onishi, Masahiro Arimoto, 2025, 2025 13th International Conference on Information and Education Technology (ICIET))
- Examining human–AI collaboration in hybrid intelligence learning environments: insight from the Synergy Degree Model(Xinmei Kong, Haiguang Fang, Wenli Chen, Jianjun Xiao, Muhua Zhang, 2025, Humanities and Social Sciences Communications)
- Toward hybrid teaching intelligence: investigating the potential of teacher–AI collaboration using large language models(Ruben Kroken Rokkones, Michail N. Giannakos, 2025, Behaviour & Information Technology)
- Towards a systematic educational framework for human-machine teaming(Finlay McCall, Aya Hussein, Eleni Petraki, S. Elsawah, H. Abbass, 2021, 2021 IEEE International Conference on Engineering, Technology & Education (TALE))
- The 3X2A Strategy for Societal Adaptation in the GenAI Era: A Framework for Human-AI Synergy(V. Ungureanu, 2026, Comput. Sci. J. Moldova)
- Human-AI Collaborative Teaching: Generative Artificial Intelligence (Gen-AI) as Co-Teacher(Tajana Guberina, Filip Procházka, 2026, Social Science Chronicle)
- A dual-pathway model of teacher-AI collaboration based on the job demands-resources theory(Yiling Hu, Yujie Xu, Bian Wu, 2025, Education and Information Technologies)
- Research on Human-Computer Collaborative Teaching Mode in Intelligent Teaching Environment(Xiao Hu, Hong Liu, 2024, Journal of Management and Humanity Research)
- A psychological platform for GenAI and human co-piloting in education(L. Fryer, 2025, Frontline Learning Research)
- A model of symbiomemesis: machine education and communication as pillars for human-autonomy symbiosis(H. Abbass, E. Petraki, Aya Hussein, Finlay McCall, S. Elsawah, 2021, Philosophical Transactions of the Royal Society A)
- The Exploration of the Cognitive Teaching Model of Ideological and Political Education in College English Courses from the Perspective of Human-Machine Collaboration(唐慧玲, 2025, Advances in Education)
- Integrating Human-AI Collaboration in Education: A New Approach to Curriculum Design(Yuhao Ge, 2025, Educational Innovation Research)
- From Co-Intelligence to Classroom Impact: The Human Face of AI in Education(Theo Bastiaens, Michael Searson, 2025, AI-Enhanced Learning)
多智能体系统与自适应教学技术实现
该组文献关注人机协作的技术底层与功能实现。重点探讨多智能体系统(MAS)、具身代理、大语言模型(LLM)驱动的助教以及自适应学习平台的架构设计。研究涵盖了如何通过专业化代理模拟教学互动,实现大规模个性化辅导与资源自动化生成。
- Designing, implementing and testing an intervention of affective intelligent agents in nursing virtual reality teaching simulations—a qualitative study(Michael Loizou, S. Arnab, Petros Lameras, Thomas P. Hartley, F. Loizides, Praveen Kumar, Dana Sumilo, 2024, Frontiers in Digital Health)
- Personalized Language Learning: A Multi-Agent System Leveraging LLMs for Teaching Luxembourgish(Tebourbi Hedi, Sana Nouzri, Yazan Mualla, A. Najjar, 2025, No journal)
- Research and Innovative Application of Multi-Agent Collaborative Architecture in Education(Zhou Yu, 2025, Journal of Education and Educational Research)
- Simulating Classroom Education with LLM-Empowered Agents(Zheyuan Zhang, Daniel Zhang-Li, Jifan Yu, Linlu Gong, Jinchang Zhou, Zhiyuan Liu, Lei Hou, Juanzi Li, 2024, ArXiv)
- The Role of Intelligent Tutor Emotion Cues in the Mechanism of Influence on College Students’ Online Learning(Hua Liu, Li Zhao, Xin He, Haiqing Liu, 2024, Proceedings of the 5th International Conference on Computer Information and Big Data Applications)
- Examining the Impact of Intelligent Agents on Instructor Presence and Student Achievement in the Online Classroom(D. Rust, A. Bryant, 2025, The Journal of Continuing Higher Education)
- The Collaborative System with Situated Agents for Activating Observation Learning(Toshio Okamoto, T. Kasai, 2000, No journal)
- Towards a collaborative e-learning platform based on a multi-agents system(A. E. Mhouti, Azeddine Nasseh, M. Erradi, 2016, 2016 4th IEEE International Colloquium on Information Science and Technology (CiSt))
- Synaesthesia: multimodal modular edutainment platform development(Alpha Lee, Kin Wah Lai, F. T. Hung, W. L. Leung, K. Lam, C. Leung, 2004, 2004 International Conference on Cyberworlds)
- The teacher in the machine: A human history of education technology(Hendi Sugianto, 2025, Review of Education, Pedagogy, and Cultural Studies)
- The Apprentice Learner architecture: Closing the loop between learning theory and educational data(Christopher James Maclellan, Erik Harpstead, Rony Patel, K. Koedinger, 2016, No journal)
- Design and Implementation of a Multi-Agent Teaching System for Local Universities(N. Yang, Qin Xiang, Shuhang Chen, Zhan Tang, 2025, 2025 International Conference on Data Science and Intelligent Systems (DSIS))
- Computer Teaching System Based on Internet of Things and Machine Learning(Lei Chen, Li Zhang, 2022, J. Control. Sci. Eng.)
- Design and Implementation of Collaborative Learning Algorithm for Vocational Education Based on Multi Agent System(Lu Li, Yu Lu, Li Liu, Yanhua Gao, 2024, 2024 3rd International Conference on Artificial Intelligence and Autonomous Robot Systems (AIARS))
- MASCE: A Multi-Agent System for Collaborative E-Learning(Hani M. K. Mahdi, Sally S. Attia, 2008, 2008 IEEE/ACS International Conference on Computer Systems and Applications)
- Xiaohang: Research on the Construction and Application of Educational Intelligent Agents Based on Large Language Models(Ying Li, Xiaozhou Zhang, Tongyu Zhu, Haifeng Gao, Guoliang Zhang, Guopeng Wang, 2025, 2025 IEEE Frontiers in Education Conference (FIE))
- Teaching intelligent agents: The disciple approach(G. Tecuci, M. Hieb, 1996, Int. J. Hum. Comput. Interact.)
- Human-Machine Collaborative Agents Empowering the Innovation and Reconfiguration of Learning Support Services(Ling Zhang, Zhiqiang Ma, 2025, 2025 7th International Conference on Computer Science and Technologies in Education (CSTE))
- AI Personal Study Buddy: A Web-Based Adaptive Learning and Summarization Platform for Smart Academic Support(Shree Shambhavi, Atharav Gadade, Ankita Birajdar, Aryan Jagtap, 2025, International Journal of Scientific Research in Engineering and Management)
- An Adaptive Learning System based on Tracking(I. Kerkeni, H. Ajroud, Bénédicte Talon, 2020, No journal)
- An agent-based system for dedicated tutoring in the teaching of electronics engineering(Jossean Andrés Uribe Martinez, Alexander Vera Tasamá, Jorge Iván Marín Hurtado, 2017, 2017 IEEE Colombian Conference on Communications and Computing (COLCOM))
- VEGA: Adaptive Learning in Astronomy through Symbiotic Artificial Intelligence(Vita Santa Barletta, M. Calvano, Manuel Carlucci, Antonio Curci, R. Lanzilotti, Antonio Piccinno, 2025, No journal)
- Generative AI in Education: From Foundational Insights to the Socratic Playground for Learning(Xiangen Hu, Sheng Xu, R. Tong, Art Graesser, 2025, ArXiv)
- Adaptive AI-Driven Learning Systems for Personalized Student Engagement and Performance(Gerda Urbaite, 2026, Luminis Applied Science and Engineering)
- Intelligent e-learning system model for maintenance of updates courses(Fatiha Elghibari, Rachid Elouahbi, F. E. Khoukhi, Sanae Chehbi, I. Kamsa, 2015, 2015 International Conference on Information Technology Based Higher Education and Training (ITHET))
- Facilitating Web-based Education using Intelligent Agent Technologies(Yang Cao, J. Greer, 2004, No journal)
- Instructional Agents: LLM Agents on Automated Course Material Generation for Teaching Faculties(Huaiyuan Yao, Wanpeng Xu, J. Turnau, Nadia Kellam, Hua Wei, 2025, ArXiv)
- CADA: A Contextual Adaptive Dialogue Agent Integrating Dynamic Feedback for Enhanced Conversational AI(H. Wanga, 2026, International Journal of Innovative Science and Research Technology)
教师角色转型、心理感知与专业素养发展
该组文献从社会心理学与职业发展视角出发,探讨教师在AI冲击下的主体性反应。研究主题包括教师对AI的接受度、技术焦虑、职业身份重构、反思性实践以及智能素养的提升路径,强调教师在协作中的情感投入与角色演变。
- Will Teacher-AI Collaboration Enhance Teaching Engagement?(Lai-Jian Ding, Jiamin Li, Bei-He Hui, 2025, Behavioral Sciences)
- Teacher-AI Collaboration: Psychological Effects on Teaching Identity and Instructional Confidence(Zarina Naz, M. Ahmad, Tanvir Ahmed, Rai Samee Ullah, 2025, Review of Applied Management and Social Sciences)
- Teacher-AI Collaboration: How Educators can Harness Artificial Intelligence without Losing Pedagogical Control(Tayiba Rasheed, Javeria Ashfaq Bhatti, Ayisha Hashim, Fahiza Fauz, 2025, Review of Applied Management and Social Sciences)
- Teacher responsibility, AI integration, and student well-being: The role of peer collaboration in higher education.(Hongmei Li, Jiansheng Zhang, Waris Ali Khan, Hendrik Lamsali, 2025, Acta psychologica)
- Combining Human and Artificial Intelligence for Enhanced AI Literacy in Higher Education(A. Tzirides, Gabriela C. Zapata, Nikoleta Polyxeni Kastania, Akash K. Saini, Vania Castro, Sakinah Abdul Rahman Ismael, Yucong You, Tamara Afonso dos Santos, Duane Searsmith, Casey O'Brien, B. Cope, M. Kalantzis, 2024, Computers and Education Open)
- Teacher–AI collaboration for reflective practice: exploring perceptions, practices, and impact among Moroccan EFL teachers(Brahim Outamgharte, Mohamed Yeou, Hicham Zyad, 2025, Reflective Practice)
- Research on the Influencing Factors and Mechanisms of Preservice Teachers' Human-Machine Collaborative Instructional Design Abilities(Lan Wu, Haochen Tong, Yang Pian, Axi Wang, 2025, 2025 7th International Conference on Computer Science and Technologies in Education (CSTE))
- Path of Improving the Intelligent Literacy of Vocational College Teachers from the Perspective of Human-machine Symbiosis(Huilin Sun, 2024, Advances in Vocational and Technical Education)
- Redefining Professional Development in Online Education through Human-AI Collaboration: A Practitioner-Researcher Perspective(Lieselot Declercq, Annabel Declercq, Koen Verlaeckt, 2025, AI-Enhanced Learning)
- Leading teachers' perspective on teacher-AI collaboration in education(Jinhee Kim, 2023, Education and Information Technologies)
- Perceptions of AI Collaboration in Writing among Teacher Aspirants: An Empirical Cross-Sectional Study among Teacher Aspirants(Richelle Ann P. Penpeña, 2025, EthAIca)
- Intelligent teaching analytics for collaborative reflection: investigating pre-service teachers’ perceptions, experiences and shared regulation processes(Mengke Wang, Zengzhao Chen, Ying Xu, Bhagya Maheshi, D. Gašević, 2025, International Journal of Educational Technology in Higher Education)
- Exploring the Cultivation of Digital Intelligence Design Talents: A Case Study of Human-AI Co-Creation in Forward-Looking Robotic Application Scenarios(Jun Deng, Yimeng Zhang, Tin-Man Lau, Shuhan Huang, 2025, Frontiers of Digital Education)
- Enhancing Teacher-AI Collaboration: The Impact of Prompt Engineering on Generative AI's Instructional Effectiveness(Ismail Celik, Kateryna Zabolotna, Olga Viberg, 2025, Proceedings of the International Conference of the Learning Sciences)
- Human-AI Collaboration in Curriculum Reform: A Posthuman Investigation into AI Driven Class in Chinese University(Haitao Wang, Hangxuan Zhao, Yiwei Li, Hailong Zhang, 2025, Journal of Posthumanism)
- Research on the Model Construction and Practical Pathways of Human-AI Collaborative Teaching in the Digital-Intelligent Era: From the Perspective of Teacher Adaptive Development(Tian-Fang Zhao, Xiangwei Zhang, 2025, Occupation and Professional Education)
- “I’m Not Worried about Robots Taking Over the World. I Guess I’m Worried about People”: Emoting, Teaching, and Learning with Generative AI(Sarah Seeley, Michael Cournoyea, 2025, Teaching and Learning Inquiry)
- Exploring Pre-service Early Childhood Teachers' AI Collaboration Experience and Teacher Role Perception through AI-integrated Design Thinking(Eun Hyeon Koh, 2025, The Korea Educational Review)
- Interactive and Collaborative Online Teaching With Artificial Intelligent & Nearpod(Muhammad Lukman Baihaqi Alfakihuddin, Santo Tjhin, Sri Susilawati Islam, Iwan Setiawan, I. Prasetyo, S. D. Liman, 2024, Journal of Community Services: Sustainability and Empowerment)
- Research on the Cultivation of Human-Machine Collaborative Innovative Thinking in General Education of Private Colleges and Universities(Junfeng Zhang, 2025, Journal of International Education and Development)
- Beyond the Algorithm: A Holistic Pedagogy for Cultivating General Education in the AI Era of Language Education(Xin Wang, Shudong Wang, 2025, 2025 5th International Conference on Educational Technology (ICET))
- Enhancing Lecturers' Career Adaptability Through AI-Driven Quality Control Systems: A Human-Tech Synergy Perspective(Hasan Baharun, Najiburrahman Najiburrahman, Widhi Wahyani, Mukhamad Ilyasin, Jumatriadi Jumatriadi, Rizkiyah Hasanah, 2025, 2025 11th International Conference on Education and Technology (ICET))
- AI as a Teaching Partner: Early Lessons from Classroom Codesign with Secondary Teachers(Alex X. Liu, Lief Esbenshade, Shawon Sarkar, Zewei Tian, Min Sun, Zachary Zhang, Thomas Han, Yulia Lapicus, Kevin He, 2025, ArXiv)
- A multimodal approach to support teacher, researcher and AI collaboration in STEM+C learning environments(Clayton Cohn, Caitlin Snyder, J. Fonteles, Ashwin T. S., Justin Montenegro, Gautam Biswas, 2024, Br. J. Educ. Technol.)
“人在回路”的智能评价、反馈与决策优化
这组文献专门研究AI在教育评价中的应用,强调“人在回路(Human-in-the-Loop)”的协作机制。研究内容涵盖作文自动评分、开放性答案评估、证据导向的教学决策以及如何通过教师反馈优化AI算法,确保评估的公平性、可解释性与有效性。
- Evaluating Elementary-Level English Essays: Human-AI Synergy and the Role of Cognitive Load Theory(Dr. Mamona Yasmin Khan, Shazia Riaz Cheema, Sobia Tasneem, 2025, Research Journal of Psychology)
- Adaptive and Explainable Fair AI Assessment System with Human-in-the-Loop Bias Correction and Cross-Lingual Fairness(P. Kumar, Divyashree S, Harini S, 2025, 2025 4th International Conference on Applied Artificial Intelligence and Computing (ICAAIC))
- Integrating Human Feedback and Unsupervised Fine-Tuning in Open-Domain Short-Answer Grading: A Scalable HITL Architecture(Retno Kusumaningrum, S. Sutikno, Made Kresna, A. Wiguna, Khadijah Khadijah, Adhe Setya Pramayoga, R. Damanhuri, Aris Sugiharto, 2025, 2025 IEEE International Conference on Recent Advances in Systems Science and Engineering (RASSE))
- Closing the Loop of Big Data Analytics: the Case of Learning Analytics(Marta Stelmaszak, Aleksi Aaltonen, 2018, No journal)
- Human–AI Feedback Synergy: Assessing the Reliability and Contextual Depth of Generative Evaluation Systems in Enterprise-Scale Education(2025, International Journal of AI, BigData, Computational and Management Studies)
- Human-in-the-Loop Systems for Adaptive Learning Using Generative AI(Bhavishya Tarun, Haoze Du, Dinesh Kannan, Edward F. Gehringer, 2025, 2025 IEEE Frontiers in Education Conference (FIE))
- Closing the loop by expanding the scope: using learning analytics within a pragmatic adaptive engagement with complex learning environments(L. Johnson, Deborah Devis, Cameron Bacholer, Simon Leonard, 2024, Frontiers in Education)
- Closing the loop – The human role in artificial intelligence for education(M. Ninaus, Michael Sailer, 2022, Frontiers in Psychology)
- Teacher-AI Collaboration in Content Recommendation for Digital Personalised Learning among Pre-primary Learners in Kenya(Chen Sun, Louis Major, Rebecca Daltry, Nariman Moustafa, Aidan Friedberg, 2024, Proceedings of the Eleventh ACM Conference on Learning @ Scale)
- Leveraging Large Language Models and Human-In-The-Loop for Interactive Learning Pipelines(Haoze Du, Dinesh Kannan, Bhavishya Tarun, Edward F. Gehringer, 2025, 2025 IEEE Frontiers in Education Conference (FIE))
- Human-AI Collaboration for Knowledge-in-use Assessment Design: Leveraging LLMs with RAG(Juanhui Li, Tingting Li, Hang Li, Haoyu Han, Peng He, Hui Liu, 2025, 2025 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE))
- Research on the Impact of Human-Machine Collaborative Dialogue on Normal University Students' Reflection and Instructional Design Abilities in Teaching Resources(Jiajia Yao, Mingyue Liu, Ruohan Zhang, Yuan Zheng, 2025, 2025 5th International Conference on Artificial Intelligence and Education (ICAIE))
- Maintaining Assessment Validity in the GenAI Era: Insights from Human-Machine Interaction(Aya Hussein, 2025, 2025 10th International STEM Education Conference (iSTEM-Ed))
- Closing the Teacher-Learner Loop: The Role of Affective Signals in Interactive RL(Bernhard Hilpert, 2024, 2024 12th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW))
跨学科领域的人机协作教学实践与创新
这组文献展示了人机协作在特定学科(如STEM、外语、医学、艺术、法律、体育等)中的具体落地案例。通过实证研究验证了协作模式在提升学生参与度、高阶思维、技能掌握及价值引导方面的有效性,提供了丰富的教学设计方案。
- Human-AI Collaboration in Science Education: Challenges and Steps Forward(Dong Yang, 2025, Journal of Baltic Science Education)
- Teaching innovation in a pharmacy course: integration of “Questioning-Training of Comprehensive Knowledge Application” and a “Teacher-AI-Student Interaction Model”(Liwei Wang, Yantao Xu, Mianmian Zhang, Renren Bai, Tian Xie, 2025, BMC Medical Education)
- Generative AI and Knowledge Graph Empowered Digital-Intelligent Collaborative Teaching System(Xiaodong Liu, Xi Xiong, Baolin Lai, Jinyi Liu, Huilin Zhou, Yuhao Wang, 2025, 2025 5th International Conference on Educational Technology (ICET))
- Exploring Social Learning in Collaborative Augmented Reality With Pedagogical Agents as Learning Companions(M. Zielke, Djakhangir Zakhidov, Tiffany Lo, Scotty D. Craig, R. Rege, Hunter Pyle, Nina Velasco Meer, Nolan Kuo, 2024, International Journal of Human–Computer Interaction)
- Motor Skill Learning by Virtual Co-embodiment with an AI Teacher Trained in Human Teaching Behavior(Haruto Takita, Daiki Kodama, Yuji Hatada, Takuji Narumi, M. Hirose, 2024, ACM Symposium on Applied Perception 2024)
- Exploring a Human-in-the-Loop Framework for Adaptive Sign Language Translation in Deaf Education(I. Darmawan, Linawati, Gede Sukadarmika, N. Wirastuti, Reza Pulungan, Dewa Putu Yudhi Ardiana, 2025, 2025 18th International Conference on Engineering of Modern Electric Systems (EMES))
- Exploring new pathways for integrating ideological and political education into research-oriented teaching from the perspective of human-machine intelligence: a case study of electromagnetic field and wave related courses(Lili Qu, Shuie Shi, Ruixin Wang, Xuebing Wu, 2025, Journal of Education and Educational Policy Studies)
- Emotion-Driven AI Collaboration and Multimodal Learning in Vocational Education(Hongli Zhang, Wai Yie Leong, 2024, 2024 International Conference on Intelligent Education and Intelligent Research (IEIR))
- Writing with generative AI and human-machine teaming: Insights and recommendations from faculty and students(Andelyn Bedington, E. Halcomb, Heidi A. McKee, Thomas A. Sargent, Adler Smith, 2024, Computers and Composition)
- Artificial intelligence and management education: A conceptualization of human-machine interaction(Stewart Clegg, Soumodip Sarker, 2024, The International Journal of Management Education)
- Teacher-AI Collaboration: Enhancing Traditional Language Teaching Methods(Rasika Dakare, Manasi Gokhale, Yogita Kumbhar, 2025, myresearchgo)
- Reconstruction of Teacher Student Interaction Mode Empowered by Intelligent Technology: Research on Human Computer Collaborative Teaching Strategies for Chinese High School Chinese Language Classrooms(Zhuowu Zou, 2026, Communications in Humanities Research)
- Hybrid AI-Human Music Composition for Pedagogy(P. Baxi, Susmita Panda, Sachin Mittal, Nidhi Tewatia, Battula Bhavya, F. Saiyad, 2025, ShodhKosh: Journal of Visual and Performing Arts)
- Leveraging Human-in-the-Loop Engagement Through AI in Web Design Education: A Case Study on Adapting to Dynamic Client Requirements(Jason Lively, 2024, International Journal of Emerging and Disruptive Innovation in Education : VISIONARIUM)
- “This is me!”: Creative digital storytelling with teacher–student and AI collaboration(2025, Jurnal Pendidikan Bitara UPSI)
- From Prompt to Profit: The Role of AI-Human Synergy in Growing Student Startups(F. Pratama, Arta Moro Sundjaja, Pantri Heriyati, Gusti Pangestu, 2025, 2025 International Conference on Information Management and Technology (ICIMTech))
- Reshaping Cybersecurity Ethics Education: Evaluating a Posthumanist Pedagogy Using Human/AI Co-Generated Case Studies(Ryan Straight, Jonathon Lowery, David Poehlman, Waamene Yowika, 2025, Cybersecurity Pedagogy and Practice Journal)
- Human-Machine Integration Empowers Innovation of Application Scenarios for Digital Ideological and Political Education(李红艳, 2025, Advances in Education)
- Experimental Investigation on Reciprocal Teaching Using Misconception-Based Teachable Agents in Collaborative Learning(Yugo Hayashi, Shigen Shimojo, Tatsuyuki Kawamura, 2025, Proceedings of the International Conference on Computer-supported for Collaborative Learning)
- Interactive teaching using human-machine interaction for higher education systems(Hui-Fang Shang, C. B. Sivaparthipan, ThanjaiVadivel, 2022, Comput. Electr. Eng.)
- Conversational Robots and French as a Foreign Language Acquisition: Towards Human-Machine Interaction in Education(Boulahbal Karim, Harkou Lilia, Aifour Mohamed Chérif, 2025, Science, Education and Innovations in the context of modern problems)
- Construction and Innovative Application of Intelligent Agents in College EFL Teaching(Juhua Dou, Boran Zheng, 2025, Proceedings of the 2025 6th International Conference on Computer Information and Big Data Applications)
- A Preliminary Exploration of Constructing a Human-in-the-Loop Teaching Model in English Language Testing Courses Empowered by AI(Yajie Shen, 2025, Higher Education and Practice)
- The Application of Human-Machine Intelligent Interaction Technology in the Practice of Foreign Language Intelligent Education(Rui Li, Shuang Wang, 2023, International Journal of New Developments in Education)
- Exploring the Effectiveness of a Symbiotic Human-Machine Collaborative Model in College English Teaching(Xuemei Wei, 2025, Journal of Teaching & Research)
- Exploring the Teacher–Student–AI Triad in College EFL Teaching: A Perspective of HITL Theory(Yun Zhou, 2025, English Language Teaching)
- Enhancing student writing feedback through teacher–AI collaboration in higher education(Shamim Akhter, Muhammad Ajmal, Shaista Zeb, Saira, Rabindra Dev Prasad, 2025, Journal of Education and e-Learning Research)
- Improving Student Learning with Hybrid Human-AI Tutoring: A Three-Study Quasi-Experimental Investigation(Danielle R. Thomas, Jionghao Lin, Erin Gatz, Ashish Gurung, Shivang Gupta, Kole A. Norberg, Stephen E. Fancsali, Vincent Aleven, Lee G. Branstetter, E. Brunskill, K. Koedinger, 2023, Proceedings of the 14th Learning Analytics and Knowledge Conference)
- Research on Human-Machine Hybrid Enhanced Programming Teaching Model(Hongying Linghu, Chengguan Xiang, 2024, 2024 14th International Conference on Information Technology in Medicine and Education (ITME))
- AI and the FCI: Can ChatGPT Project an Understanding of Introductory Physics?(Colin G. West, 2023, ArXiv)
- Redefining Legal Pedagogy: Integrating AI Tools Without Undermining Human Judgment(Shibanee Acharya, Ashish Mishra, Omkar Acharya, 2026, International Journal on Science and Technology)
- Practice-Along-Watching in Panoramic VR: A Novel Human-AI Collaboration Genre for Demonstration-Based Training(Yu Wang, Hongqiu Luan, Wei Gai, Lutong Wang, Chenglei Yang, 2025, Proceedings of the 20th International Conference on Virtual Reality Continuum and its Applications in Industry)
- Reframing the Lecturer's Role in the Age of Generative AI: Towards a Human–AI Co-Teaching Model in Design Education(H. Hapiz, Yuhanis Ibrahim, Azlin Sharina Abdul Latef, Nooraziah Ahmad, Darliana Mohamad, A. W. Radzuan, 2025, International Journal of Modern Education)
- Research on the Construction and Management Mechanism of a “Dual-track Parallel” Model for Human-machine Collaborative Teaching in Vocational Colleges(Bijin Hua, Liang Liu, Xiaojuan Yang, Jinyu Zhou, Yanqiu Tan, 2025, Global Education Bulletin)
- Research on the Integration Path of Human-Machine Collaboration and Community Consciousness Education in the "Corporate Culture" Course in Universities(Xiaoguang Sun, Xiumei Kang, M. Ma, 2025, Journal of Modern Education and Culture)
- Cultivating Critical Creators in Teacher-Student-AI Collaboration under AIGC(Shuangzhe Liu, Xin Yin, 2025, Proceedings of the 2025 International Conference on Artificial Intelligence, Virtual Reality and Interaction Design)
- Generative AI and Higher-Order Thinking in Vocational Education: A Study of Human-Machine Collaboration(Mengmeng Zhong, Mohamad Izzuan Mohd Ishar, Muhammad Sukri Saud, Bohong Li, 2025, International Journal of Academic Research in Progressive Education and Development)
伦理治理、教育公平与以人为本的协同设计
该组文献深入探讨了人机协作带来的哲学、伦理与管理挑战。重点讨论了算法偏见、数据隐私、主体性冲突、先导性治理以及“以人为本的AI教育(HCAI)”设计原则,旨在确保技术应用符合教育价值引领与公平性要求。
- Ecological Reshaping and Subjectivity Reconstruction of Content Production in University Internet Ideological and Political Education from the Perspective of Human-Machine Collaboration(Yuyang Xu, Feng Zhong, Yingmei Li, 2025, International Journal of Education and Social Development)
- Shifting the Human-AI Relationship: Toward a Dynamic Relational Learning-Partner Model(Julia A. Mossbridge, 2024, ArXiv)
- Partners or Tools? Anticipatory Governance for Human-AI Complementarity in Higher Education(Daniel Autenrieth, Jan-René Schluchter, 2025, Zeitschrift für Hochschulentwicklung)
- Tools or crutches? Budgeting human and machine autonomy when introducing GenAI in education(Francesco Balzan, Lorenzo Angeli, Ralph Meulenbroeks, Federica Russo, 2026, Artificial Intelligence in Education)
- Beyond the Algorithm: Reconciling Generative AI and Human Agency in Academic Writing Education(Yaoying Han, 2025, International Journal of Learning and Teaching)
- Can Human Engagement and Artificial Intelligence (AI) Co-Exist in the Online Classroom?(Mimi Gough, 2025, ICHRIE Research Reports)
- Sustainable Innovation: Harnessing AI and Living Intelligence to Transform Higher Education(Hesham Allam, Benjamin Gyamfi, Ban AlOmar, 2025, Education Sciences)
- AI for All: Adaptive, Accessible, and Inclusive Learning Experiences in the Age of Intelligent LMSs(Athanasios Angeioplastis, M. Konstantakis, John Aliprantis, Konstantinos Ordoumpozanis, D. Varsamis, A. Tsimpiris, 2026, Inf.)
- Future of Smart Classroom in the Era of Wearable Neurotechnology(Mojtaba Taherisadr, B. U. Demirel, M. A. Faruque, Salma Elmalaki, 2021, ArXiv)
- Human Decision Makings on Curriculum Reinforcement Learning with Difficulty Adjustment(Yilei Zeng, Jiali Duan, Y. Li, Emilio Ferrara, Lerrel Pinto, Chloe Kuo, S. Nikolaidis, 2022, ArXiv)
- "Guide Me Through the Unexpected": Investigating How Deviation from Expectation Affects Human Teaching and Robot Learning(Konstantin Mihhailov, Muhan Hou, Kim Baraka, 2025, 2025 34th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN))
- Supporting the development of a national constellation of communities of practice in the scholarship of teaching and learning through the use of intelligent agents(D. Cambridge, 1999, No journal)
- Research of personalized Web-based intelligent collaborative learning(Rui Zeng, Ying-yan Wang, 2012, J. Softw.)
- Responsible AI-Powered Learning Architectures for Long-Term Educational Equity(J. Pediongco, Sadulla Nazarovich Meyliev, 2025, Qubahan Techno Journal)
- Human-Centred AI Education in Upper-Second Level: towards a PRIMM-esque pedagogy for CT 2.0(Brian Conway, 2023, Proceedings of the 2023 Conference on Human Centered Artificial Intelligence: Education and Practice)
- SPIED: A Human-Centred pedagogy for AI education at upper second level (Work in progress)(Brian Conway, Keith E. Nolan, Keith Quille, 2026, Proceedings of the 2026 Conference on Human Centred Artificial Intelligence - Education and Practice)
- Teaching with Generative AI: Ethical Human-AI Co-Creation as an Innovative Legal Education Methodology(Chiara Gallese, 2025, No journal)
- Mathematically Intelligent Human-Computer Collaborative Teaching: Opportunities, Challenges and Countermeasures(Lixing Zhao, Hongli Yang, 2024, International Educational Research)
- AI for Education (AI4EDU): Advancing Personalized Education with LLM and Adaptive Learning(Qingsong Wen, Jing Liang, Carles Sierra, Rose Luckin, Richard Tong, Zitao Liu, Peng Cui, Jiliang Tang, 2024, Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining)
- Key Factors Influencing Design Learners’ Behavioral Intention in Human-AI Collaboration Within the Educational Metaverse(Ronghui Wu, Lin Gao, Jiaxin Li, Qianghong Huang, Younghwan Pan, 2024, Sustainability)
- Research on the Implementation Path of “Human-Machine Collaboration” in Ideological and Political Education in the Era of Digital Intelligence(美娟 谢, 2025, Advances in Education)
- Navigating International Challenges of Quality Assurance in Higher Education: A Synergy of Gen-AI and Human-Made Solutions(Yalin Li, Ming Xie, 2025, GBP Proceedings Series)
- Ethical concerns surrounding artificial intelligence in anatomy education: Should AI human body simulations replace donors in the dissection room?(J. Cornwall, Sabine Hildebrandt, Thomas H Champney, Kenneth S. Goodman, 2023, Anatomical Sciences Education)
- Fostering social-emotional learning through human-centered use of generative AI in business research education: an insider case study(P. Aure, Oriana Cuenca, 2024, Journal of Research in Innovative Teaching & Learning)
Cognitive Enhancement, Co-Design, and Teaching-Process Optimization
This group of literature focuses on concrete applications of human-AI collaboration across the full teaching process, including assisted lesson-plan writing (co-design), unlocking cognitive potential, metacognitive strategy support, and affective collaboration based on physiological signals (e.g., EEG). These studies emphasize how AI improves overall teaching effectiveness by reducing cognitive load and optimizing instructional design.
- The power duo: unleashing cognitive potential through human-AI synergy in STEM and non-STEM education(Stefano Triberti, Ozden Sengul, Binny Jose, Nidhu Neena Varghese, T. Bindhumol, Anu Cleetus, S. Nair, 2025, Frontiers in Education)
- Beyond Assistance: Embracing AI as a Collaborative Co-Agent in Education(Rena Katsenou, Konstantinos Kotsidis, Agnes Papadopoulou, Panagiotis Anastasiadis, Ioannis Deliyannis, 2025, Education Sciences)
- The Impact of Metacognitive Strategy-Supported Intelligent Agents on the Quality of Collaborative Learning from the Perspective of the Community of Inquiry(Meng-Lin Chen, Linjing Wu, Zhangyi Liu, Xinqian Ma, 2024, 2024 4th International Conference on Educational Technology (ICET))
- Enhancing Critical Thinking: Exploring Human-AI Synergy in Student Cognitive Development(Imane JAI LAMIMI, Sara El Jemli, Imane Zeryouh, 2025, Arab World English Journal)
- Focus and Concentrate! Exploring the Use of Conversational Robot to Improve Self-Learning Performance during Pandemic Isolation by Closed-Loop Brainwave Neurofeedback(Ker-Jiun Wang, Midori Sugaya, 2021, 2021 10th International IEEE/EMBS Conference on Neural Engineering (NER))
- The Convergence of Reinforcement Learning and Knowledge Tracing Models in Adaptive Learning Systems(R. Domínguez, 2025, Innovation in Science and Technology)
- Human-AI collaboration or obedient and often clueless AI in instruct, serve, repeat dynamics?(M. Saqr, Kamila Misiejuk, Sonsoles L'opez-Pernas, 2025, ArXiv)
- Teacher-AI Collaboration for Curating and Customizing Lesson Plans in Low-Resource Schools(Deepak Varuvel Dennison, Bakhtawar Ahtisham, Kavyansh Chourasia, Nirmit Arora, Rahul Singh, René F. Kizilcec, A. Nambi, Tanuja Ganu, Aditya Vashistha, 2025, ArXiv)
- Teacher-AI Collaboration: A Hybrid Framework for Streamlining Verbal Skill Evaluation in STEM Education Using Generative AI(W. Danang Arengga, S. Sendari, Heru Wahyu Herwanto, Mukaromah, Aan Anjar Setyowati, Samsul Setumin, 2025, 2025 9th International Conference On Electrical, Electronics And Information Engineering (ICEEIE))
- Research on the Teaching Reform of the “Decision Theory and Methods” Course Based on Integrating Human-Machine Collaboration with Evidence-Based Decision Making(娜 赵, 2025, Advances in Education)
- Satisfactory for All: Supporting Mastery Learning with Human-in-the-loop Assessments in a Discrete Math Course(Shao-Heng Ko, Alex Chao, Violet Pang, 2025, Proceedings of the 56th ACM Technical Symposium on Computer Science Education V. 1)
- Interdisciplinary Co-Design Process of Instructional Lesson Plans for Promoting the Responsible Use of AI(Soraia S. Prietch, Georgina Aguilar-González, Luz Adriana Cordero-Cid, María Luisa Flores-Hernández, Cristhian Daniel Guevara-Cano, Jeshu Gutiérrez-Flores, Cecilia Reyes-Peña, Diego Gerardo Rojas-Rojas, Guadalupe Ruiz-Vivanco, Jaime Sabines-Córdova, Mireya Tovar-Vidal, J. González-Calleros, Josefina Guerrero-García, 2024, Avances en Interacción Humano-Computadora)
- Partnering with AI Through Practice: Designing AI Competence-Building Activities Using a Tailored Experiential Learning Cycle(Yue Chen, K. K. Chai, 2025, 2025 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE))
- Engaging Teachers to Co-Design Integrated AI Curriculum for K-12 Classrooms(Jessica Van Brummelen, Phoebe Lin, 2020, Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems)
- Co-creating with Generative AI (GenAI) for curriculum design: learning personas(Irina Rets, Denise Whitelock, Chris Edwards, Leigh-Anne Perryman, Beck Pitt, 2025, Ubiquity Proceedings)
- Human-AI Collaboration in Building Educational Content: Bridging Innovation and Pedagogy in the Classroom(Sun Jiali, 2024, Pakistan Journal of Life and Social Sciences (PJLSS))
- The Role of Teacher-AI Collaboration in Curriculum Adaptivity: A Case in Primary School Mathematics(S. D. Mooij, Z. Vermeire, C. K. Campen, Inge Molenaar, 2025, No journal)
- Collaborative CBR-based Agents in the Preparation of Varied Training Lessons(J. Henriet, 2014, Int. J. Comput. Sci. Sport)
- Generative Artificial Intelligence and Collaboration: Exploring Religious Human-Machine Communication and Tensions in Leadership Practices(P. H. Cheong, Liming Liu, 2025, Human-Machine Communication)
The merged grouping results map out a complete research landscape of teacher-AI collaboration (TAC), spanning from underlying technical support to high-level ethical governance. The studies cover not only the development of multi-agent systems and adaptive platforms, but also examine "human-in-the-loop" evaluation mechanisms and interdisciplinary teaching practice. The core trend shows the field shifting from simple tool assistance toward deep "human-AI symbiosis," with strong attention to teachers' role redefinition, psychological adaptation, and professional competence in intelligent environments, and an emphasis on human-centered ethical governance as the key safeguard for the digital transformation of education.
A total of 181 related publications
The advancing power and capabilities of artificial intelligence (AI) have expanded the roles of AI in education and have created the possibility for teachers to collaborate with AI in classroom instruction. However, the potential types of teacher-AI collaboration (TAC) in classroom instruction and the benefits and challenges of implementing TAC are still elusive. This study, therefore, aimed to explore different types of TAC and the potential benefits and obstacles of TAC through Focus Group Interviews with 30 Chinese teachers. The study found that teachers anticipated six types of TAC, which are thematized as One Teach, One Observe; One Teach, One Assist; Co-teaching in Stations; Parallel Teaching in Online and Offline Classes; Differentiated Teaching; and Team Teaching. While teachers highlighted that TAC could support them in instructional design, teaching delivery, teacher professional development, and lowering grading load, they perceived a lack of explicit and consistent curriculum guidance, the dominance of commercial AI in schools, the absence of clear ethical guidelines, and teachers' negative attitude toward AI as obstacles to TAC. These findings enhance our understanding of how TAC could be structured at school levels and direct the implications for future development and practice to support TAC.
No abstract available
Against the backdrop of the widespread integration of Artificial Intelligence (AI) into educational practices, collaboration between teachers and AI is profoundly influencing teaching behavior. Drawing on the Conservation of Resources Theory, this study constructs and tests a model examining the impact of teacher-AI collaboration on teaching engagement, with a focus on the mediating role of technological self-efficacy and the moderating role of perceived organizational support. Based on empirical data collected through a survey in China, the results reveal that teacher-AI collaboration significantly and positively predicts teaching engagement. Furthermore, technological self-efficacy mediates this relationship, suggesting that AI collaboration enhances teaching engagement by boosting teachers’ confidence in using technology. In addition, perceived organizational support positively moderates the effect of teacher-AI collaboration on technological self-efficacy, forming a moderated mediation model. This research enriches the understanding of teacher behavior in the context of AI integration and offers practical implications for educational institutions seeking to optimize AI adoption strategies and enhance teacher motivation.
No abstract available
This study investigates Shiksha copilot, an AI-assisted lesson planning tool deployed in government schools across Karnataka, India. The system combined LLMs and human expertise through a structured process in which English and Kannada lesson plans were co-created by curators and AI; teachers then further customized these curated plans for their classrooms using their own expertise alongside AI support. Drawing on a large-scale mixed-methods study involving 1,043 teachers and 23 curators, we examine how educators collaborate with AI to generate context-sensitive lesson plans, assess the quality of AI-generated content, and analyze shifts in teaching practices within multilingual, low-resource environments. Our findings show that teachers used Shiksha copilot both to meet administrative documentation needs and to support their teaching. The tool eased bureaucratic workload, reduced lesson planning time, and lowered teaching-related stress, while promoting a shift toward activity-based pedagogy. However, systemic challenges such as staffing shortages and administrative demands constrained broader pedagogical change. We frame these findings through the lenses of teacher-AI collaboration and communities of practice to examine the effective integration of AI tools in teaching. Finally, we propose design directions for future teacher-centered EdTech, particularly in multilingual and Global South contexts.
ABSTRACT Given the widespread use of generative artificial intelligence in different domains, the present study investigates Moroccan EFL teachers’ perceptions and practices of teacher artificial intelligence collaboration (TAC) for reflective practice and the impact it may have on their instructional practices. The study collects data from 56 Moroccan EFL teachers practicing in the Souss Massa region using a TAC for reflective practice questionnaire and semi-structured interviews. The findings show that participants generally view the integration of TAC into their reflective practice positively, but they express reservations about the irreplaceable nature of human interaction. The study also revealed that participants used TAC as a main or supplementary source of reflection. Further, the findings suggest that TAC for reflective practice can increase teacher confidence, identify areas for professional development, and potentially enhance instructional strategies. However, the findings highlight that many factors, such as teachers’ expertise and context of use, influence the effectiveness of TAC for reflective practice. Moreover, the study highlights the need for reflective practitioners to balance TAC for reflective practice and known forms of reflective practice. The study concludes with implications for different stakeholders.
Research on teacher-AI collaboration is limited despite AI's growing role in education, especially in low- and middle-income countries (LMICs). To address this gap, this study investigates how teacher agency in a digital personalised learning (DPL) tool can affect behavioural changes and learning outcomes in Kenya. Teachers in the experimental group could apply their pedagogical judgement to override system-generated content for learners to practise, whereas teachers in the control group were limited to system-generated content. Teacher agency was assessed by measuring the diversity of unique choices made and the frequency of changes in their choices. The nine-week A/B test involved 562 learners from 45 pre-primary classes, each led by a different teacher, with classes randomly assigned to a control or an experimental group. The results demonstrate that teacher agency in content recommendation significantly impacted learner device usage, but not teacher usage of digitised lesson plans. Learners in the experimental group achieved significantly higher digital scores on learning units than the control group. Additional analysis of the experimental group revealed that the degree of teacher agency significantly influenced learner device usage, but not lesson plan usage or digital scores. The study highlights the importance of further research to enhance teacher-AI synergy and improve learning outcomes in LMICs and beyond.
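The two agency measures described in the abstract above (diversity of unique choices and frequency of changes in choices) can be sketched as simple sequence statistics. This is a minimal illustration under assumed data shapes, not the study's actual instrumentation; the function name and representation of a "choice" as a string are hypothetical.

```python
from typing import List, Tuple

def agency_metrics(choices: List[str]) -> Tuple[int, float]:
    """Two simple teacher-agency proxies over a sequence of content choices:
    (1) diversity  = number of distinct choices made,
    (2) change frequency = fraction of consecutive choice pairs that differ.
    (Hypothetical sketch; not the metric definitions used in the cited study.)
    """
    diversity = len(set(choices))
    if len(choices) < 2:
        change_freq = 0.0
    else:
        changes = sum(1 for a, b in zip(choices, choices[1:]) if a != b)
        change_freq = changes / (len(choices) - 1)
    return diversity, change_freq
```

A teacher who repeatedly overrides the system with varied content would score high on both proxies; one who always accepts the same system recommendation would score low on both.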
This study examines whether there is a link between teachers' autonomy, frequency of AI use, and pedagogical control in Pakistan, in order to understand how AI is practiced in classrooms. Using a survey design that gathered data from a wide range of schools, 270 public and private school teachers completed a questionnaire. Demographics were summarized with descriptive statistics, and associations and differences among the main variables were examined with correlation, regression, and independent-samples t-tests. The study showed that teachers with greater autonomy were more likely to use AI technology in their teaching. Regression analysis indicated that frequency of AI use significantly relates to stronger pedagogical control, suggesting that more interaction with AI tools supports teachers' ability to teach. T-test findings further suggest that public school teachers enjoy more control over teaching and learning, possibly owing to differences in the support and resources provided by each type of school. The results align with existing frameworks such as the Technology Acceptance Model and with recent studies, highlighting teacher autonomy, equal access to AI technology, and ethical approaches. The research shows that education systems should encourage autonomy, offer targeted training, and put balanced ethical guidance in place for AI to be integrated safely and successfully. These results are useful for policymakers, school administrators, and teacher trainers hoping to use AI to boost instruction while preserving the importance of teaching learners as individuals.
The present study examined the impact of adopting Artificial Intelligence (AI) tools on teachers' professional identity and teaching confidence, and in particular the degree to which collaborative work with AI influences teaching effectiveness. Within a quantitative research design, data were collected from 250 teachers invited to participate on the basis of their use of AI tools in school and higher education. The sample was selected through simple random sampling, and a survey was administered to measure how teachers use AI, their confidence in using it, their professional identity, and their trust in AI. The results indicated a significant positive association between AI use and teachers' instructional confidence: AI was effective in reducing workload and freeing time for more creative teaching tasks. Moreover, teacher-AI collaboration produced positive change in teachers' professional identity, countering feelings of being undervalued or lacking control. Trust in AI played the most significant role: the greater the trust, the more effective teachers judged their teaching to be. The study observes that proper training, continuous support, and reliable AI systems are needed to build teachers' trust and confidence, and it discusses preserving the human aspect of teaching, with AI as an aiding component.
No abstract available
The integration of generative artificial intelligence (AI) into educational assessment has shown potential in addressing inefficiencies in traditional evaluation methods, particularly in time-constrained STEM classrooms. This study proposes a hybrid framework that synergizes teacher expertise with generative AI to streamline the evaluation of verbal skills, a critical yet underexplored competency in STEM education. By focusing on collaborative dialogue, problem-solving explanations, and conceptual reasoning, this research aims to develop a system that enhances assessment efficiency while preserving the nuanced judgment of educators. Verbal skill evaluation in STEM contexts, such as assessing students' ability to articulate hypotheses, defend solutions, or collaborate in technical discussions, remains labor-intensive and subjective. Teachers spend significant time analyzing spoken or written responses, often sacrificing opportunities for personalized instruction. While generative AI models are proficient in language processing, their standalone use in education raises concerns: (1) lack of contextual awareness in STEM-specific discourse, (2) potential biases in automated scoring, and (3) displacement of teachers' formative feedback roles. This research employed a mixed-methods approach across three phases: framework development, pilot testing, and scalability. By bridging the divide between automation and human judgment, this hybrid framework demonstrates that generative AI need not replace teachers but can instead amplify their capacity to nurture critical verbal skills in STEM. Future work will explore adaptive AI tutoring systems that leverage this model to provide real-time dialogue support during student presentations or group discussions.
The objective of this research is to explore the collaborative role of AI and teachers in providing feedback on written assignments. Teacher feedback is key to improving students’ writing, but now there is AI that can perform the same role. The study uses a combination of classroom testing and questionnaires to collect information. Forty students studying BS English at Shaikh Ayaz University, Shikarpur, Pakistan participated, receiving feedback on their papers from a teacher, and the same assignments also received AI-generated feedback. The results were analyzed thematically and interpreted accordingly. The students’ perspective is that AI tools helped students improve grades by addressing grammar and sentence-level issues. Teachers benefited from less workload when AI was included; the feedback was faster, encouraging students to revise their work more readily. Human intervention is still required to ensure better quality and more intelligent AI suggestions. The findings suggest that teachers and AI work more effectively together to provide feedback on writing, including grammar and formal expression of opinions. The research implies that adopting AI into the curriculum carries responsibilities that need to be formally stated in policies and tested in classroom settings.
In the era of Artificial Intelligence, human-computer collaborative teaching has become a new picture of future development in the field of education, yet how to utilize AI technology to collaborate on English lesson plan design has not been fully studied. On this basis, this paper explores a framework for the collaborative design of context-aware English lesson plans and improves Bayesian knowledge tracing to propose the CS-BKT model, which estimates students' English knowledge level to assist lesson plan design. The results show that the CS-BKT model achieves better knowledge-state tracking, with optimal values on the AUC, Accuracy, r2, and RMSE metrics: the first three improve by 0.85% to 25.16%, 1.38% to 12.53%, and 6.26% to 230.95%, respectively, while RMSE is reduced by 3.42% to 13.80%. After applying the proposed model and framework, students in the experimental group scored significantly higher than the control group on the latter five tests of knowledge level (p < 0.05), and the approach obtained higher teacher satisfaction. The context-aware English lesson plan co-design framework integrates context-awareness and artificial intelligence technologies and can promote an overall improvement in English teaching quality.
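The CS-BKT model above extends classic Bayesian knowledge tracing (BKT). As a point of reference for readers, here is a minimal sketch of the standard BKT posterior update that such variants build on; the parameter values are illustrative defaults, not the paper's fitted parameters, and this is the baseline model, not CS-BKT itself.

```python
def bkt_update(p_know: float, correct: bool,
               slip: float = 0.1, guess: float = 0.2,
               learn: float = 0.3) -> float:
    """One step of classic Bayesian knowledge tracing.

    Given the prior probability that the student knows the skill, observe
    one answer (correct/incorrect), compute the Bayesian posterior, then
    account for the chance of learning at this practice opportunity.
    Parameter values here are illustrative, not fitted.
    """
    if correct:
        num = p_know * (1 - slip)                 # knew it and didn't slip
        denom = num + (1 - p_know) * guess        # ...or guessed correctly
    else:
        num = p_know * slip                       # knew it but slipped
        denom = num + (1 - p_know) * (1 - guess)  # ...or genuinely didn't know
    posterior = num / denom
    # Transition: the student may also learn the skill on this opportunity.
    return posterior + (1 - posterior) * learn
```

Running this update over a student's response sequence yields the evolving knowledge-state estimate that a lesson-plan design tool could condition on.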
No abstract available
No abstract available
No abstract available
No abstract available
No abstract available
The integration of artificial intelligence (AI) into education has generated increasing interest, particularly in its role in academic writing. While prior studies have examined students’ use of AI, limited attention has been given to teacher aspirants’ perceptions of AI collaboration with human writers across subject disciplines. Addressing this gap is crucial in preparing future educators for responsible AI integration in teaching and learning. This study aimed to determine the perceptions of English, science, and mathematics teacher aspirants toward AI collaboration with human writers in academic essay writing and to examine differences across subject disciplines. A descriptive-quantitative design was employed, involving 90 undergraduate teacher aspirants equally distributed across the three disciplines. Stratified random sampling was used to ensure adequate representation, and data were collected through a structured questionnaire consisting of 10 items on a 5-point Likert scale with high internal reliability (α = 0.94). The data were analyzed via descriptive statistics and one-way ANOVA. The findings revealed generally positive perceptions of AI’s role in writing, particularly in generating outlines, assisting with citations, and supporting editing processes. Significant differences emerged among disciplines, with science majors expressing the most favorable perceptions (M = 4.13), followed by English (M = 3.94) and mathematics majors (M = 3.90). The study concludes that disciplinary orientation shapes openness to AI collaboration in academic writing. It is recommended that teacher education programs integrate structured training on the ethical and effective use of AI, ensuring a balance between technological assistance and the preservation of creativity and critical thinking.
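The one-way ANOVA used in the study above compares mean perception scores across the three discipline groups. A minimal pure-Python sketch of the F statistic it computes (between-group mean square over within-group mean square) looks like this; in practice one would use a statistics package such as `scipy.stats.f_oneway` rather than hand-rolling it.

```python
from typing import List

def one_way_anova_f(groups: List[List[float]]) -> float:
    """One-way ANOVA F statistic for k independent groups:
    F = (between-group sum of squares / (k-1)) /
        (within-group sum of squares  / (n-k))."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F (relative to the F distribution with k-1 and n-k degrees of freedom) indicates that at least one group mean differs, which is the kind of between-discipline difference the abstract reports.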
Recent advances in generative artificial intelligence (AI) and multimodal learning analytics (MMLA) have allowed for new and creative ways of leveraging AI to support K12 students' collaborative learning in STEM+C domains. To date, there is little evidence of AI methods supporting students' collaboration in complex, open‐ended environments. AI systems are known to underperform humans in (1) interpreting students' emotions in learning contexts, (2) grasping the nuances of social interactions and (3) understanding domain‐specific information that was not well‐represented in the training data. As such, combined human and AI (ie, hybrid) approaches are needed to overcome the current limitations of AI systems. In this paper, we take a first step towards investigating how a human‐AI collaboration between teachers and researchers using an AI‐generated multimodal timeline can guide and support teachers' feedback while addressing students' STEM+C difficulties as they work collaboratively to build computational models and solve problems. In doing so, we present a framework characterizing the human component of our human‐AI partnership as a collaboration between teachers and researchers. To evaluate our approach, we present our timeline to a high school teacher and discuss the key insights gleaned from our discussions. Our case study analysis reveals the effectiveness of an iterative approach to using human‐AI collaboration to address students' STEM+C challenges: the teacher can use the AI‐generated timeline to guide formative feedback for students, and the researchers can leverage the teacher's feedback to help improve the multimodal timeline. Additionally, we characterize our findings with respect to two events of interest to the teacher: (1) when the students cross a difficulty threshold, and (2) the point of intervention, that is, when the teacher (or system) should intervene to provide effective feedback. 
It is important to note that the teacher explained that there should be a lag between (1) and (2) to give students a chance to resolve their own difficulties. Typically, such a lag is not implemented in computer‐based learning environments that provide feedback.
What is already known about this topic: Collaborative, open‐ended learning environments enhance students' STEM+C conceptual understanding and practice, but they introduce additional complexities when students learn concepts spanning multiple domains. Recent advances in generative AI and MMLA allow for integrating multiple datastreams to derive holistic views of students' states, which can support more informed feedback mechanisms to address students' difficulties in complex STEM+C environments. Hybrid human‐AI approaches can help address collaborating students' STEM+C difficulties by combining the domain knowledge, emotional intelligence and social awareness of human experts with the general knowledge and efficiency of AI.
What this paper adds: We extend a previous human‐AI collaboration framework using a hybrid intelligence approach to characterize the human component of the partnership as a researcher‐teacher partnership and present our approach as a teacher‐researcher‐AI collaboration. We adapt an AI‐generated multimodal timeline to actualize our human‐AI collaboration by pairing the timeline with videos of students encountering difficulties, engaging in active discussions with a high school teacher while watching the videos to discern the timeline's utility in the classroom. From our discussions with the teacher, we define two types of inflection points to address students' STEM+C difficulties—the difficulty threshold and the intervention point—and discuss how the feedback latency interval separating them can inform educator interventions. We discuss two ways in which our teacher‐researcher‐AI collaboration can help teachers support students encountering STEM+C difficulties: (1) teachers using the multimodal timeline to guide feedback for students, and (2) researchers using teachers' input to iteratively refine the multimodal timeline.
Implications for practice and/or policy: Our case study suggests that timeline gaps (i.e., disengaged behaviour identified by off‐screen students, pauses in discourse and lulls in environment actions) are particularly important for identifying inflection points and formulating formative feedback. Human‐AI collaboration exists on a dynamic spectrum and requires varying degrees of human control and AI automation depending on the context of the learning task and students' work in the environment. Our analysis of this human‐AI collaboration using a multimodal timeline can be extended in the future to support students and teachers in additional ways, for example, designing pedagogical agents that interact directly with students, developing intervention and reflection tools for teachers, helping teachers craft daily lesson plans and aiding teachers and administrators in designing curricula.
The development of artificial intelligence has entered the era of cognitive intelligence, where machines are beginning to transcend their instrumental role and evolving into collaborative partners. This technological progress has triggered a fundamental shift in educational philosophy—moving from an emphasis on knowledge transmission to a focus on competency development. However, while implementing instructional design transformations through the “Problem-Based—Human-AI Collaboration—Ecological Evolution” framework, we have observed that students using AI can easily fall into the “prompt engineer trap,” passively accepting generated outcomes. In fact, the ultimate goal of AI integration is not to cultivate efficient users, but to nurture critical creators who can engage with AI reflectively and creatively. By analyzing pain points in students’ behavioral experiences within AIGC environments, this study is grounded in three interrelated theoretical foundations: critical thinking theory, the transformation and reshaping of design behavior paradigms in AIGC contexts, and the methodology of technology and behavioral design. With a student growth-oriented approach, we construct a critical behavioral design framework for Teacher-Student-AI collaboration, aimed at promoting deep learning and metacognitive development in student design courses.
Deaf students often face significant challenges in language learning, resulting in reading literacy levels that consistently lag behind those of their hearing peers, creating barriers to quality education. To address this issue, this study implemented a digital storytelling project in which teachers and deaf students collaboratively crafted storybooks that are deeply rooted in their identities and lived experiences within the school community. This approach makes the storybooks both engaging and meaningful, ensuring that all teachers and deaf students feel included and valued in the storytelling process. Additionally, artificial intelligence (AI) was integrated as a collaborative tool, enriching the storytelling process with innovative resources and support for both teachers and deaf students. Findings from the study indicate significant improvements in deaf students’ reading literacy and engagement, as well as in teachers’ pedagogical practices. Thus, this study proposes a replicable framework for creating digital storytelling project by blending teacher–student creativity and AI support, fostering literacy and cultivating a reading culture in schools. By emphasizing identity-driven narratives, the project bridges the gap in reading literacy outcomes, offering a practical and inclusive approach to deaf education.
No abstract available
No abstract available
No abstract available
In the context of the rapid development of artificial intelligence (AI) technology, questioning has become an increasingly active and critical component of effective collaboration with AI. Also, the ability to apply knowledge from various disciplines to analyze and address problems is essential for enhanced learning. Despite its importance, studies on training students in questioning and knowledge integration within course teaching are lacking. This study aimed to explore the effects of an innovative teaching approach based on the Bloom-based “Questioning-Training of Comprehensive Knowledge Application (Q-TOCKA)” method. This approach integrates a teacher-AI-student interaction model, promoting deeper engagement in the learning process. The study included two groups of students: a control class group and an experimental class group. In the context of four problem-based learning (PBL) assignments during the course of Pharmaceutical Botany and Pharmacognosy, the experimental class was subjected to Bloom-based Q-TOCKA progressive exercises, paired with the teacher-AI-student in-depth interaction and communication model. Conversely, the control class followed the traditional teacher-student dichotomous communication method. The students’ questioning and knowledge application scores in both classes were analyzed using R statistical software, interaction model regression analysis, and prediction model Holt linear trend method analysis. Additionally, the data from a course questionnaire survey for students in the experimental class were statistically collected. Statistical analysis using R revealed significantly higher scores of the experimental class in questioning and comprehensive application of knowledge across the two to four post-course PBL assignments (P < 0.05) compared with the control class. Pearson correlation analysis indicated a linear correlation between questioning scores and comprehensive knowledge application scores. Furthermore, the regression analysis of the interaction model suggested a synergistic relationship between questioning ability and comprehensive knowledge application. Holt linear trend method indicated that the positive impact of the new teaching method on students’ questioning and knowledge application abilities became more pronounced with an increase in the number of training sessions. The questionnaire survey results showed that more than 90% of the experimental-class students expressed a favorable attitude toward the new teaching approach. The innovative teaching method significantly enhanced students’ questioning abilities and their capacity for comprehensive knowledge application. The positive feedback from students indicates that this teaching approach holds promise for broader application and promotion across various courses.
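The two analyses named in this abstract, Pearson correlation and Holt's linear trend method, can be sketched in a few lines. The study itself used R; the pure-Python sketch below, with invented per-assignment score data, is illustrative only.

```python
# Illustrative sketch of Pearson correlation and Holt's linear trend
# forecasting. The score series and smoothing parameters are invented,
# not taken from the study.

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def holt_forecast(y, alpha=0.8, beta=0.2, horizon=1):
    """Holt's linear trend method (double exponential smoothing with an
    additive trend); returns the h-step-ahead forecast."""
    level, trend = y[0], y[1] - y[0]
    for obs in y[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend

# Invented mean scores across four PBL assignments
questioning = [62.0, 68.0, 74.0, 79.0]
application = [60.0, 67.0, 73.0, 80.0]

r = pearson_r(questioning, application)
next_round = holt_forecast(questioning)
print(round(r, 3), round(next_round, 1))
```

A near-1 correlation on such data mirrors the linear relationship the study reports, and the Holt forecast shows how an improving trend is extrapolated to the next training session.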
AI technologies are reshaping our world and prompting education scholars to rethink both the aims and methods of schooling to prepare learners for the future (Holmes et al., 2019). Meanwhile, interest in integrating AI into science education has grown, with much discussion focusing on the impact of AI on student engagement and learning performance. Among those interests and debates, questions arise about AI’s ability to provide instructional, learning, and evaluative tools, as well as the practices and challenges of teacher-AI collaboration in education. To conclude, human–AI collaboration in science education offers substantial potential to enrich teaching and learning, on the condition that AI functions as a collaborative partner guided by teacher expertise, ethical principles, and a commitment to equity. Realizing this potential requires deliberate, evidence-based design decisions, professional development that centers on teacher agency, and governance frameworks that foster trust and transparency in AI-assisted learning. By sustaining an ongoing partnership among teachers, researchers, and AI developers, we can foster collective intelligence in human-AI collaboration that illuminates scientific reasoning, personalizes instruction, and supports students in developing robust scientific understandings for the twenty-first century.
This study explores the impact of human-AI collaborative teaching strategies on English teachers in secondary schools. Based on semi-structured interviews with five English teachers in Jiangxi Province, thematic analysis was conducted using the SAMR, UTAUT, and GHEX-IPACK theoretical frameworks. The findings indicate that AI technology is primarily applied in scenarios such as resource generation, assignment distribution, and learning analytics. By substituting traditional tools, enhancing teaching interactions, and reconstructing instructional processes, AI facilitates a shift in teaching strategies from “teacher-led” to “human-AI collaboration”. Teachers generally recognized the potential of this model for improving efficiency and supporting personalized learning, but also pointed out challenges, including data bias, hardware limitations, and a lack of emotional interaction. The study suggests that achieving deep human-AI collaboration requires balancing technological efficacy with humanistic care, relying on blended instructional design and teacher training to optimize teachers’ knowledge structures. This research preliminarily constructs a practical model of human-AI collaboration in secondary school English education, providing insights for teacher professional development.
In an era of rapid technological advancement, the integration of Artificial Intelligence (AI) is reshaping online education and redefining professional development for educators. This reflective manuscript adopts a joint practitioner-research perspective, combining applied research and practical experience in online teaching. We explore how virtual humans can support educators, foster innovation, strengthen teacher agency, and contribute to inclusive and ethical AI adoption in online education. This contribution emphasizes the importance of ethical frameworks, collaborative international ecosystems, and practice-driven innovation to ensure that human values and pedagogy remain at the heart of AI-enhanced online learning environments.
Demonstration-Based Training relies on observing expert demonstrations and practicing with timely feedback. While Virtual Reality (VR) provides an immersive learning environment, existing approaches often disconnect observation from practice or lack adaptive guidance. This paper proposes a novel "Practice-Along-Watching" (PAW) model that seamlessly integrates teacher demonstrations into an interactive virtual environment through panoramic video. Learners view real-time task performances by teachers from a first-person perspective and can simultaneously manipulate virtual objects to mimic the teacher’s actions. To overcome the passivity of pure watching or the rigidity of preset feedback, we introduce a collaborative Large Language Model (LLM) agent to empower demonstration-based training. The AI agent actively monitors the learner’s actions, compares them to expert demonstrations (including visual observation and process modeling), and provides timely, context-specific corrective feedback. In addition, the agent collaboratively controls the pace of learning, dynamically pauses demonstrations for practice reinforcement, or adjusts difficulty based on learner performance. We instantiate the model and apply it to the field of tea ceremony training. A comprehensive user study demonstrated the effectiveness of the collaborative HCI-AI approach. Compared to passive observation, participants using the AI-empowered PAW model showed improvements in task accuracy and procedural knowledge, while reporting high levels of engagement and personalized learning experiences.
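The compare-and-correct step the PAW agent performs can be illustrated with a deliberately simplified sketch. The step names, matching rules, and feedback messages below are invented; the paper's actual agent uses an LLM with visual observation and process modeling, not string matching.

```python
# Hypothetical sketch: align a learner's logged action sequence with an
# expert demonstration and emit corrective feedback. All step names and
# messages are invented for illustration.

EXPERT_DEMO = ["warm_pot", "add_leaves", "first_infusion", "pour", "serve"]

def compare_to_demo(learner_actions, expert=EXPERT_DEMO):
    """Return (missing_steps, out_of_order_steps) relative to the expert."""
    missing = [s for s in expert if s not in learner_actions]
    # Steps the learner did perform, in the order performed
    performed = [s for s in learner_actions if s in expert]
    # The same steps in the expert's order
    expected = [s for s in expert if s in performed]
    out_of_order = [s for s, e in zip(performed, expected) if s != e]
    return missing, out_of_order

def feedback(learner_actions):
    """Turn the comparison into a short corrective message."""
    missing, disorder = compare_to_demo(learner_actions)
    if not missing and not disorder:
        return "Well done: your sequence matches the demonstration."
    msgs = []
    if missing:
        msgs.append(f"You skipped: {', '.join(missing)}.")
    if disorder:
        msgs.append(f"Check the order of: {', '.join(sorted(set(disorder)))}.")
    return " ".join(msgs)

print(feedback(["warm_pot", "first_infusion", "add_leaves", "pour", "serve"]))
```

In the real system this comparison would run continuously, letting the agent pause the panoramic demonstration or adjust difficulty the moment a deviation is detected.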
This paper examines the role of digital technologies in enhancing personalized and collaborative learning in education. Drawing on theories of constructivism, personalized learning, and collaborative learning, it explores how adaptive platforms improve student outcomes. Case studies show that these technologies, along with collaborative tools in online courses, can foster greater engagement and deeper learning. However, their effectiveness depends on their integration with traditional teaching practices, where teachers remain central in guiding learning and providing emotional support. The paper concludes that while digital tools offer valuable benefits, their success relies on addressing challenges such as equitable access and teacher training.
This study investigates the key factors which influence design learners’ behavioral intention to collaborate with AI in the educational metaverse (EMH-AIc). Engaging design learners in EMH-AIc enhances learning efficiency, personalizes learning experiences, and supports equitable and sustainable design education. However, limited research has focused on these influencing factors, leading to a lack of theoretical grounding for user behavior in this context. Drawing on social cognitive theory (SCT), this study constructs a three-dimensional theoretical model comprising the external environment, individual cognition, and behavior, validated within an EMH-AIc setting. By using Spatial.io’s Apache Art Studio as the experimental platform and analyzing data from 533 design learners with SPSS 27.0, SmartPLS 4.0, and partial least squares structural equation modeling (PLS-SEM), this study identifies rewards, teacher support, and facilitating conditions in the external environment, together with self-efficacy, outcome expectation, and trust in individual cognition, as significant influences on behavioral intention. Additionally, individual cognition mediates the relationship between the external environment and behavioral intention. This study not only extends SCT application within the educational metaverse but also provides actionable insights for optimizing design learning experiences, contributing to the sustainable development of design education.
With the rapid development of artificial intelligence technology, the application of personalized learning systems in vocational education has gained increasing attention. However, existing systems often handle only single-modal data and lack real-time monitoring and adaptive adjustments based on emotional states. To address these issues, this study proposes a personalized learning system based on human-machine collaboration, integrating multimodal data fusion and emotion-driven mechanisms, specifically designed for vocational education. Experimental validation shows that the proposed system significantly improves student learning outcomes and teacher satisfaction, particularly in the areas of complex interdisciplinary knowledge integration and personalized learning path generation. This study provides new insights into the design of personalized learning systems in vocational education and lays the foundation for the application of human-machine collaboration and multimodal data fusion technologies in the educational field.
This study examines the interaction between teacher responsibility and the integration of artificial intelligence (AI) in learning, and how this influences student mental well-being in higher education, as well as whether peer collaboration amplifies these effects. Guided by ecological systems theory, we propose a moderated mediation model where teacher responsibility (microsystem factor) impacts student well-being both directly and indirectly via AI integration (a technological resource in the educational environment), with fellow student collaboration (peer microsystem factor) moderating the pathway. We surveyed 468 university students in four major Chinese cities and analyzed the data using partial least squares structural equation modeling (PLS-SEM). The results indicate that teacher responsibility positively affects student well-being and encourages AI tool integration, which in turn significantly enhances student mental well-being. Furthermore, the positive indirect effect of teacher responsibility on well-being through AI is stronger when peer collaboration is high. This research contributes to theory by linking human, technological, and social dimensions of support in an educational ecology and offers practical insights for leveraging teacher engagement, AI tools, and peer networks to improve student mental health.
Artificial Intelligence (AI) and machine learning (ML) are having a great impact on all aspects of society. However, due to the technical competencies and mathematical understanding required for implementing solutions leveraging these technologies, access to the communities working on these technologies is limited to those having these skills. This limits the ability of domain experts to directly transfer their knowledge and contribute to the development of AI and ML systems. To address this problem, we propose the Human Education AI Teaming (HEAT) framework, in which we draw on human education to design an innovative education system to enable collaboration between humans and AI cognitive agents. The main aim of HEAT is to promote the social integration of AI by allowing domain experts to focus more on communicating a body of knowledge to the machine, and less on the computational, data, and engineering concepts associated with how the machine learns. We follow an educational theory-driven approach to derive the content knowledge and competencies required by each agent. We conclude the paper with a demonstration case study explaining how the complex autonomous guidance of a flock of sheep could leverage HEAT to make the technology accessible by empowering non-AI specialists, livestock farmers in our example.
Abstract Social learning, simply defined as learning from others, is valuable as a modality that provides quick, informal education. Augmented reality (AR) may provide a framework for human-machine teaming paradigms which integrate both virtual Pedagogical Agents as Learning Companions (PALs) and human learning collaborators. This article details the results of three collaborative AR experiments to explore social learning with PALs and humans. Our use case focuses on medical school students learning how to interview a patient with stroke symptoms. Despite noted challenges in quickly advancing technology, specifically the natural language processing (NLP), the research produced many instances of significant results in self-efficacy and conceptual and procedural learning. Findings are presented along with a way-ahead perspective on key focus areas to advance human-machine teaming in collaborative AR for learning.
No abstract available
With the explosive growth of Generative Artificial Intelligence (AIGC) and its deep intervention in the field of knowledge production, the content production ecology of Internet ideological and political education in universities is undergoing a historic transformation from "PGC/UGC" to "AIGC." While this technological leap significantly liberates productivity, it also triggers deep tensions between "technological subjectification" and "human objectification" within the educational field, confronting online ideological and political education with the dual challenges of ecological imbalance and a crisis of subjectivity. Based on Marxist human theory and the perspective of technophenomenology, this paper proposes that "human-machine collaboration" is not only a new paradigm for technological application but also the ontological basis for the ecological reshaping of online ideological and political education. The article argues that amidst the risks of "new alienation" constructed by intelligent technologies, we must transcend the binary opposition of "technocentrism" and "technological nihilism." By reshaping a collaborative ecology of "human leadership, machine assistance, and value guidance," we can achieve the subjectivity reconstruction of educators from "content manufacturers" to "value architects." Furthermore, this paper systematically elucidates the practical rationale for building a new ecology of human-machine collaborative education from three dimensions: the reconstruction of production relations, the return of emotional interaction, and the regulation of algorithmic power, aiming to provide a theoretical landscape and action guide for the high-quality development of university Internet ideological and political education in the intelligent age.
As the curriculum system in universities continuously evolves towards intelligence and collaboration, how to effectively integrate human-machine collaborative teaching with community consciousness education in the "Corporate Culture" course has become an important issue for improving educational quality and deepening the function of education. This paper takes the "Corporate Culture" course in universities as the research object and explores basic issues such as the course content system and educational function, the logic of human-machine collaborative teaching, and the reshaping of learners' roles. It further analyzes the cognitive structure of community consciousness education and the course delivery path. Based on this, the study proposes a design plan for the integration path, which is based on dynamic embedding, goal alignment orientation, and structural feedback mechanisms, and constructs a course operation mechanism that integrates human-machine systems with identity education content. This paper aims to address the transformation demands of university course educational functions in the digital education environment and provides a theoretical basis and practical insights for the construction of organizational culture education and value identity under intelligent teaching platforms.
Addressing the dual challenges of the "awkward integration" of ideological and political education (IPE) into curricula and the "insufficient depth of inquiry" in research-oriented teaching (ROT), this paper explores a new pathway for teaching reform from the perspective of human-machine intelligence integration, using courses related to electromagnetic fields and waves as a case study. This research constructs a teaching model centered on "inquiry as the main line, IPE as the core, and intelligence as the enabler." Specifically, it first utilizes artificial intelligence technologies like data mining and scenario generation to design authentic project scenarios imbued with sentiments of national identity and challenges of engineering ethics, allowing IPE elements to naturally become the starting point of inquiry activities. While students engage in independent inquiry using tools like HFSS/CST, an AI learning companion is introduced. Its function extends beyond assisting with technical simulations; through strategic questioning, prompts, and resource recommendations, it guides students to simultaneously consider the social value and ethical boundaries of their technical solutions, thereby achieving a deep integration of knowledge construction and value guidance. Finally, teachers and students can leverage AI to conduct multi-dimensional analysis of the inquiry process and outcomes, enabling data-driven teaching efficacy evaluation and value-oriented iterative optimization of solutions. This study aims to establish a spiral learning loop of "scenario triggering, intelligence-enhanced inquiry, and reflection-internalization," effectively promoting the synergistic development of students' professional competence and ideological-political literacy.
No abstract available
No abstract available
No abstract available
No abstract available
No abstract available
No abstract available
No abstract available
The education metaverse (Edu-Metaverse), as a simulated extension of the real world, is an infinite virtual space where learners can build their relationships with others and create interactive content. However, preparing learners to engage fully with Edu-Metaverse remains challenging. As technologies on Edu-Metaverse are new to learners, there is a lack of studies on how to enhance learner engagement with human–machine collaboration. Therefore, this study proposes a technology-enhanced Edu-Metaverse framework for facilitating learner engagement with human–machine interactions. The framework consists of two major components: technology enhancement of Edu-Metaverse and interactions among learners and avatars in Edu-Metaverse contexts. In the proposed framework, Edu-Metaverse reshapes the relationships between humans and machines. With the support of human–machine collaboration, learner engagement could be concluded in the following three patterns: training engagement guided by Edu-Metaverse, collaboration engagement supported by Edu-Metaverse, and creative engagement empowered by Edu-Metaverse. In addition, a case analysis is conducted to validate the Edu-Metaverse theoretical framework and engagement patterns. The proposed framework can provide a more holistic understanding of the Edu-Metaverse implications as well as learning designs that are aimed at enhancing learner engagement with the support of human–machine collaboration.
Artificial intelligence interaction technology, which comprehensively utilizes natural language processing, artificial intelligence training, and information retrieval technologies, can accurately analyze questions that users input in natural language and return accurate answers. Smart education is a new demand and realm of educational informatization, leading educational informatization to a new stage of development. Human-machine intelligent interaction technology has extensive application value in the practice of foreign language intelligent education: using speech recognition technology to analyze speech errors and provide feedback; simulating natural language dialogues to provide students with opportunities for language exchange; providing a large amount of language material for learning more grammar and vocabulary; and generating natural, fluent language to help improve writing and translation skills. The research results help to fully explore the role of artificial intelligence in enhancing foreign language intelligent education, truly unleashing the power of technology and assisting innovation and transformation in foreign language teaching.
No abstract available
No abstract available
To address the challenges of teacher marginalization and diminished instructional control arising from the integration of generative artificial intelligence (AI) into higher vocational classrooms, this study constructs a teacher-led “dual-track parallel” human-machine collaborative teaching model. This model delineates the teacher’s leadership authority across the four phases of “context-principle-intervention-reconstruction” while harnessing AI’s enabling capabilities characterized by “personalization” and “immediacy”. It pioneers a five-dimensional management mechanism encompassing “input-process-output-ethics-evaluation”, achieving a balance of unified pedagogical depth and scalable differentiated instruction. Statistical results demonstrate promising outcomes: interactions involving higher-order thinking accounted for 65% of student engagements; the experimental group exhibited an average 44% improvement in core skill mastery, significantly outperforming the control group; and over 90% of students reported enhanced learning directionality and autonomy. This model effectively enables teachers to concentrate on diagnostic assessment and the stimulation of higher-order cognition, while simultaneously significantly enhancing student classroom participation, autonomous learning capabilities, and problem-solving skills. It offers a replicable solution for vocational education classroom reform in the era of artificial intelligence.
This study addresses critical challenges in the teaching of the “Decision Theory and Methods” course, including excessive reliance on teachers’ experience, lack of data-informed instructional decisions, and evaluation systems biased toward knowledge transmission. Drawing on the paradigm of data-informed decision-making, the research integrates human-machine collaboration and evidence-based practices to develop a dual-system instructional decision model combining machine intelligence with teacher cognition. A three-phase evidence-based instructional model—covering planning, in-class interaction, and evaluation—is designed to support a transition from experience-driven to evidence-supported teaching decisions. Empirical implementation demonstrates that the model enhances students’ decision-making cognition, research innovation, and reflective thinking, while reinforcing the course’s methodological and practical orientation. The study achieves a threefold integration of decision science and instructional practice, artificial intelligence and educator insight, and data evidence and educational values. It offers a novel approach for graduate-level curriculum reform and high-level talent cultivation in the context of intelligent education.
The adoption of generative artificial intelligence (GAI) applications has bolstered efforts toward human-machine collaboration. Given the lag in research on AI and religion, this study examines how pastors engage GAI to develop religious human-machine communication practices that constitute their leadership. Findings from in-depth interviews with pastors in the U.S. reveal that they view GAI as an idea generator, research assistant, co-author and translator. Clergy enact multiple ways to incorporate GAI communication in religious education and to enhance sermonic performances. Concurrently, pastors perceive tensions between innovation and established rites, as they contend with the authenticity and spiritual depth of GAI content while meeting the needs of their congregants amid temporal and resource challenges. This article concludes with implications for future research, AI governance and ethics.
In the intelligent age, the integration of artificial intelligence (AI) into education has sparked profound transformations, challenging traditional pedagogical paradigms. This study addresses the critical research question: How can human teachers and machines synergize their distinct strengths to optimize educational outcomes in the AI era? Through a mixed-methods approach combining literature analysis, case studies, and comparative frameworks, we systematically evaluate the complementary roles of humans and machines across cognitive, emotional, and skill-based dimensions. Our findings reveal that human-machine symbiosis — not substitution — is essential for fostering personalized, equitable, and innovative education. By redefining teacher roles (e.g., from “lecturer” to “facilitator”) and repositioning machines as collaborative partners, this research proposes a dynamic interaction model that enhances pedagogical efficiency and student development. The study contributes novel theoretical insights to human-AI collaboration in education and offers actionable strategies for policymakers and educators to navigate the evolving educational landscape.
Generative Artificial Intelligence (GenAI) has been altering the way that educational institutions and businesses operate. Leveraging the significant opportunities of GenAI and addressing its challenges are essential for institutions to maintain the competitiveness of their graduates and educational processes. For a few decades now, human-machine interaction scholars have been investigating how humans can properly utilise machine capabilities without over-reliance. Building on these insights, this paper proposes a framework for maintaining the effectiveness of higher education assessments amid the pervasive use of GenAI. The framework incorporates these insights into its five interconnected components: assessment design, establishing an understanding of purpose and process, effective monitoring, feedback, and reflection and planning for change. The framework allows for creating the environment and culture for the responsible use of GenAI while ensuring education quality and imposing academic integrity. We present two case studies to demonstrate the applicability of the proposed framework to diverse educational requirements. Our findings demonstrate how the framework facilitates fostering student creativity while maintaining confidence in the validity of the assessment system.
In the era of generative artificial intelligence (GAI), the application of human-machine collaborative dialogue, grounded in GAI, holds significant promise within the educational and instructional domains. The integration of artificial intelligence with normal education has emerged as a pivotal topic in the cultivation of prospective teachers. This study has employed exploratory experiments with single pre-post measurements and epistemic network analysis to investigate the influence of human-machine collaborative dialogue on the dialectical reflection and instructional resource design capacities of normal students. The research reveals the following insights: Human-machine collaborative learning activities based on GAI can enhance the dialectical reflection ability of normal education students, and different human-machine collaborative behaviors have varying degrees of impact; The activities can effectively improve the instructional resource design capacity of normal education students, and different human-machine collaborative behaviors have differing effects; The human-machine collaborative dialogue based on GAI can indirectly affect students' instructional resource design capacity by changing their dialectical reflection ability. Drawing on these findings, the research offers recommendations on leveraging GAI to more effectively nurture the reflective and instructional resource design competencies of normal students.
Generative artificial intelligence (GenAI) like ChatGPT has been widely integrated into education, thereby empowering educators and drawing attention to human-machine collaborative education. Human-machine collaborative teaching ability has become one of the essential core competencies for teachers. There is an urgent need to explore the factors and mechanisms that help teachers collaborate with GenAI more efficiently. This study carried out empirical research, collecting multidimensional measures such as technological experience, technological beliefs, higher-order thinking, and prior knowledge from 44 preservice teachers. Based on the results of multiple linear regression analysis, we found that higher-order thinking and prior knowledge had a significant impact on the level of human-machine collaborative instructional design (H-M CID), while technological beliefs had a negative relationship with the level of higher-order thinking, as did technological experience. These findings could lay the foundation for cultivating teachers’ H-M CID abilities.
This study addresses current problems such as the disconnection between digital teaching resources and the needs of teachers and students, and the resources’ insufficient adaptability, and constructs a teaching resource optimization model from the perspective of human-machine collaboration, in order to improve resource allocation, enhance resource accuracy, and support personalized teaching. By analyzing the demands of teachers and students, a resource optimization model centered on the trinity collaboration of "teacher - GenAI - student" is proposed. Teaching resources are scientifically divided, and a four-step dynamic process of "data collection - demand transformation - resource generation - application iteration" is designed. Taking the "Obstacle Avoidance Design of Intelligent Logistics Robots" project of the Python course in higher vocational education as a case for practical verification, the results show that students' mastery rate of sensor principles, code debugging efficiency, and ability to transfer knowledge across scenarios have all improved. This model has achieved the transformation of teaching resources from supply-driven to demand-led by precisely matching the needs of teachers and students and dynamically optimizing the supply of resources, providing theoretical support and practical paths for the construction of the human-machine collaborative education ecosystem.
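The four-step loop named in that abstract ("data collection - demand transformation - resource generation - application iteration") might be sketched as a minimal pipeline. Every function body, threshold, and data value below is an invented placeholder, not the authors' implementation; the GenAI call in particular is stubbed.

```python
# Hypothetical sketch of the four-step dynamic resource-optimization loop.
# Thresholds, field names, and the stubbed GenAI call are all invented.

def collect_data(learners):
    """Step 1: gather signals, here a mean error rate on one topic."""
    return {"topic": "sensor_principles",
            "error_rate": sum(l["errors"] for l in learners) / len(learners)}

def transform_demand(signal):
    """Step 2: turn raw signals into a resource demand."""
    level = "remedial" if signal["error_rate"] > 0.3 else "extension"
    return {"topic": signal["topic"], "level": level}

def generate_resource(demand):
    """Step 3: ask the GenAI partner for a matching resource (stubbed)."""
    return f"GenAI worksheet: {demand['topic']} ({demand['level']})"

def iterate(resource, post_error_rate):
    """Step 4: keep the resource, or feed results into the next cycle."""
    return "keep" if post_error_rate < 0.3 else "regenerate"

learners = [{"errors": 0.5}, {"errors": 0.4}, {"errors": 0.3}]
demand = transform_demand(collect_data(learners))
resource = generate_resource(demand)
print(resource, iterate(resource, post_error_rate=0.2))
```

The point of the loop is that step 4 feeds back into step 1, which is what moves resource supply from supply-driven to demand-led in the abstract's terms.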
In the context of the deep integration of education and artificial intelligence, online learning support services face many difficulties. This paper deeply analyzes the three major dilemmas of online learning support services in the digital-intelligence era, and from the perspective of the COI theoretical framework, fully considers the complementary advantages of teachers and intelligent agents, and proposes specific service strategies in three dimensions: cognitive, instructional, and social support, aiming to provide theoretical and practical references for improving the quality of online learning support services.
With the rapid development and increasing application of artificial intelligence technology in education and teaching, improving the intelligence literacy of teachers has become an important aspect of current teaching reform in higher vocational schools. Through research methods such as literature analysis, case analysis, and questionnaire survey, this article comprehensively analyzes the opportunities and challenges faced by vocational college teachers in a human-machine symbiotic environment and explores how to combine information technology with teaching practice. A training-practice-feedback mechanism can be constructed and utilized to improve teachers' intelligence literacy, thereby promoting innovation and efficiency in education and teaching. The research results show that the homework grades of Class D experienced fluctuations: the initial average score was 85.6 points, which then increased to 88.2 points and finally decreased slightly to 86.8 points. Despite these fluctuations, the score remained above 85, reflecting the stability of the class's homework performance. This article proposes an "Intelligent Literacy Training Program" for higher vocational and technical colleges that can meet the needs of future educational development.
Human-machine hybrid enhanced intelligence, as a cutting-edge technology in the advancement of artificial intelligence, can provide dynamic support for innovative developments in intelligent education characterized by human-machine collaboration. Addressing the prevalent issue in current university programming courses where learning feedback relies excessively on either the teacher or machine, it is of great theoretical and practical significance to explore the tripartite composite subject hybrid enhanced teaching model. Based on the principles of human-machine hybrid-enhanced intelligence, Robert Gagne’s instructional process, and perspectives on learning feedback, a programming teaching model of "teacher-machine-student" hybrid enhancement is proposed. This model was applied to a C language course at a certain university, involving 141 students from three classes. The aim is to provide insights into the application of artificial intelligence technologies in programming education. The results demonstrate that the proposed model accurately identifies issues in programming teaching and promptly delivers personalized learning feedback, thereby enhancing teaching effectiveness.
Generative AI (GenAI) in education brings renewed attention to learner autonomy – that is, whether learners can think and act independently. GenAI offers the promise of learning efficiency and personalization, while raising questions about its alignment with nurturing autonomous learners. In this paper, we present a theoretical framework to investigate the relationship between GenAI and learner autonomy, to guide the design of educational environments that are safe and autonomy-supporting. Our paper explores the multifaceted nature of autonomy across the cognitive, philosophical, political and computing fields, connecting theories such as self-determination theory with reflections on machine autonomy. Leveraging Latour's Actor-Network Theory, our framework aims to elucidate how autonomy is distributed between human and non-human actors in educational environments. Our main contribution is the process of “autonomy budgeting”, viewing autonomy as a resource that is allocated and traded off between an ensemble of actors. Autonomy budgeting works as a guiding conceptual tool for researchers, educators, curriculum designers and policymakers to assess and manage the autonomy trade-offs involved in integrating GenAI into educational environments. By re-centering the learner's agency and capacity for self-regulation, autonomy budgeting provides a way to conceptualize and operationalize autonomy within AI-mediated education, and to navigate the complex interplay between human and machine agency in education. Our framework develops reflections on the socio-technical nature of educational processes, where technologies act as co-participants rather than neutral tools. Autonomy in education becomes a multifaceted construct that spans (human) cognitive, epistemic and political domains, and must be considered vis-a-vis varying degrees of machine autonomy.
Symbiosis is a physiological phenomenon where organisms of different species develop social interdependencies through partnerships. Artificial agents need mechanisms to build their capacity to develop symbiotic relationships. In this paper, we discuss two pillars for these mechanisms: machine education (ME) and bi-directional communication. ME is a new revolution in artificial intelligence (AI) which aims at structuring the learning journey of AI-enabled autonomous systems. In addition to the design of a systematic curriculum, ME embeds the body of knowledge necessary for the social integration of AI, such as ethics, moral values and trust, into the evolutionary design and learning of the AI. ME promises to equip AI with skills to be ready to develop logic-based symbiosis with humans and in a manner that leads to a trustworthy and effective steady-state through the mental interaction between humans and autonomy; a state we name symbiomemesis to differentiate it from ecological symbiosis. The second pillar, bi-directional communication as a discourse enables information to flow between the AI systems and humans. We combine machine education and communication theory as the two prerequisites for symbiosis of AI agents and present a formal computational model of symbiomemesis to enable symbiotic human-autonomy teaming. This article is part of the theme issue ‘Towards symbiotic autonomous systems’.
The integration of Artificial Intelligence (AI) into language education has marked a paradigm shift, offering unprecedented efficiency and personalization. However, this technology-driven approach risks overlooking the cultivation of crucial humanistic growth, such as intrinsic motivation, emotional resilience, and the ability to formulate insightful questions. This paper addresses this pedagogical gap. Drawing upon over 20 years of pedagogical practice in beginner-level Chinese language classes at a Japanese university, this study proposes a series of "counter-intuitive," human-centered teaching strategies. These strategies are not designed to oppose technology but to build a complementary relationship with it. This paper argues that by focusing on igniting curiosity, building psychological safety, fostering empathy, and valuing the process of inquiry, educators can nurture the aspects of holistic development that AI cannot replace. We conclude that this synergy between technology and humanism is the key to realizing true General Education and holistic education in the AI era.
This paper explores the synergy between human cognition and Large Language Models (LLMs), highlighting how generative AI can drive personalized learning at scale. We discuss parallels between LLMs and human cognition, emphasizing both the promise and new perspectives on integrating AI systems into education. After examining challenges in aligning technology with pedagogy, we review AutoTutor, one of the earliest Intelligent Tutoring Systems (ITS), and detail its successes, limitations, and unfulfilled aspirations. We then introduce the Socratic Playground, a next-generation ITS that uses advanced transformer-based models to overcome AutoTutor's constraints and provide personalized, adaptive tutoring. To illustrate its evolving capabilities, we present a JSON-based tutoring prompt that systematically guides learner reflection while tracking misconceptions. Throughout, we underscore the importance of placing pedagogy at the forefront, ensuring that technology's power is harnessed to enhance teaching and learning rather than overshadow it.
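The JSON-based tutoring prompt the abstract refers to is not reproduced here; as a minimal sketch of what such a prompt might look like, the snippet below assembles a hypothetical structure in Python. Every field name, value, and misconception listed is an illustrative assumption, not the Socratic Playground's actual schema:

```python
import json

# Hypothetical JSON tutoring prompt in the spirit of the abstract:
# it encodes a learning objective, allowed dialogue moves, and a
# misconception-tracking policy for the tutor to follow.
prompt = {
    "role": "socratic_tutor",
    "learning_objective": "explain why the seasons change",
    "dialogue_moves": ["hint", "probe", "reflect"],
    "misconception_tracking": {
        "known_misconceptions": ["seasons are caused by Earth-Sun distance"],
        "on_detect": "ask a guiding question rather than correct directly",
    },
    "reflection_step": "learner restates the concept in their own words",
}

payload = json.dumps(prompt, indent=2)
print(payload)
```

A real system would presumably pass `payload` to the model as a system prompt and update the misconception record as the dialogue progresses.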
No abstract available
Artificial intelligence (AI) has emerged as a transformative tool, integrated across various sectors. In education, AI has generated significant excitement among students for its potential to enhance learning experiences. However, concerns about overreliance on AI temper this enthusiasm, as it may undermine the development of critical thinking skills. Numerous studies have highlighted the risks associated with students’ excessive use of generative AI (GAI) in academic tasks, noting its potential to diminish cognitive abilities. However, the optimal use of AI to enhance students’ critical thinking skills remains under-researched. Therefore, this study seeks to answer the question: How does generative AI influence students’ critical thinking, self-efficacy, and decision-making? This study aims to explore the synergic relationship between human intelligence and artificial intelligence in augmenting essential thinking skills among students and building upon their existing cognitive resources through self-efficacy, learning motivation, and decision-making. Specifically, it explores the cause-and-effect connections among GAI, self-efficacy, decision-making, learning motivation, and critical thinking skills. Using a quantitative methodology, an online questionnaire collected responses from 165 undergraduate, master’s, and doctoral students. Statistical analyses, including bootstrapping techniques, were conducted to examine direct and indirect effects. The results revealed that GAI has a significant positive influence on self-efficacy, learning motivation, decision-making, and critical thinking skills. In turn, self-efficacy, learning motivation, and decision-making significantly impact critical thinking skills. The mediating results indicated that GAI can indirectly boost students’ critical thinking by enhancing self-efficacy, learning motivation, and decision-making. This suggests that AI capabilities can transform the cognitive learning process.
This study examines the synergy between human evaluators and an AI-based system in assessing elementary-level English essays, focusing on key linguistic features such as grammar, syntax, spelling, content, and clarity. A dataset of 30 student-written essays is used to evaluate the effectiveness, reliability, and subjectivity of both evaluation methods. The boxplot comparing Human and AI evaluation scores offers insights into the evaluators’ scoring behaviours in applying the grading rubric. Cronbach's Alpha values indicate high internal consistency in both evaluation methods, with human evaluators demonstrating slightly greater reliability. The study also integrates Cognitive Load Theory (Sweller, 1988) to explain the cognitive demands of human evaluators versus the rule-based processing of AI. These findings suggest that while AI provides efficiency in mechanical assessments, human evaluators bring a nuanced understanding, emphasising the complementary roles of both in educational assessment. The study advocates for a hybrid approach that combines the strengths of both human and AI evaluations to enhance assessment fairness and accuracy.
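Cronbach's Alpha, the internal-consistency statistic on which both evaluation methods are compared, can be computed from an essays-by-rubric-items score matrix. A minimal sketch, using synthetic scores rather than the study's data:

```python
import statistics

def cronbach_alpha(scores):
    """Cronbach's alpha from rows of per-item scores (one row per essay)."""
    k = len(scores[0])  # number of rubric items
    item_vars = [statistics.variance(col) for col in zip(*scores)]
    total_var = statistics.variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Synthetic 1-5 scores on five rubric items
# (grammar, syntax, spelling, content, clarity) for five essays.
essays = [
    [4, 4, 5, 4, 4],
    [3, 3, 3, 2, 3],
    [5, 4, 5, 5, 4],
    [2, 2, 3, 2, 2],
    [4, 3, 4, 4, 3],
]
alpha = cronbach_alpha(essays)
print(round(alpha, 3))
```

Values near 1 indicate that rubric items vary together across essays, which is the sense in which both human and AI scoring showed high internal consistency.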
This paper introduces the 3x2A Strategy, a conceptual framework for societal adaptation to Generative AI (GenAI) and large language models (LLMs). It maps six strategic dimensions - Automation, Augmentation, Alliance, Alignment, Adaptation, and Accountability - critical for fostering human-AI synergy. We apply the framework to education, healthcare, and scientific research, illustrating its role in guiding responsible integration and governance. The 3x2A Strategy contributes to foundational discourse on AI alignment and sociotechnical transformation, offering a structured approach to aligning GenAI advancements with societal needs, institutional structures, and long-term resilience.
This paper proposes a new model for integrating human-AI collaboration in translation teaching. With the rapid advancements in artificial intelligence, particularly Generative Artificial Intelligence (GenAI) and Neural Machine Translation (NMT), this model seeks to combine the strengths of AI in automation and scalability with the unique capabilities of human instructors in critical thinking, creativity, and cultural sensitivity. By structuring a multi-stage pedagogical approach, where AI tools assist in repetitive tasks and human instructors focus on nuanced cultural and ethical considerations, this model aims to enhance both the efficiency and quality of translation education.
No abstract available
No abstract available
No abstract available
In the context of the accelerating internationalization of higher education (HE), quality assurance (QA) faces numerous international challenges, such as difficulties in standard-setting and implementation, flaws in the assessment system, and an imbalance between university autonomy and external constraints. The emergence of Generative Artificial Intelligence (Gen-AI) has brought new opportunities to QA in HE, but it is also accompanied by issues such as data security and ethics. This research aims to explore these challenges, study the application potential of Gen-AI in HE QA, and propose a synergy strategy that combines Gen-AI with human-made solutions. The research uses methods such as literature reviews and case studies. It is found that by establishing a mechanism for the participation of diverse stakeholders, clarifying the responsibilities of all parties, and using Gen-AI to assist in decision-making and management, these challenges can be effectively addressed. At the same time, development suggestions such as strengthening cross-disciplinary cooperation and talent cultivation, continuous monitoring and dynamic adjustment of strategies, and promoting international exchanges and experience sharing are put forward to improve the level of HE QA and promote the development of global HE.
The rapid development of artificial intelligence (AI) technology has driven the need for faster and more effective career adaptation in higher education environments. However, the challenge that arises is how to integrate technology with human aspects in quality control in order to optimally improve career adaptation capabilities. This study aims to examine the role of AI-based quality control systems in improving career adaptation capabilities, with a focus on the synergy between humans and technology in higher education. The research method used is a qualitative approach with case studies. Data were collected through in-depth interviews, observations, and documentation. The results of the study show three main findings: first, personalization of career development through adaptive assessments is able to meet the individual needs of lecturers appropriately; second, predictive analytics and early intervention are effective in detecting and addressing performance gaps; third, optimal synergy between humans and technology strengthens decision-making and career adaptation flexibility. The contribution of this study is the development of a conceptual model of human-technology synergy that can be used as a strategic reference for higher education institutions in optimizing career management and human resources in the digital era.
With the breakthrough progress of Generative Artificial Intelligence (AIGC) technology, the teaching of Ideological and Political Theory Courses (hereinafter referred to as "Civics Courses") in universities is facing a critical node of transformation from digitization to intelligence. For a long time, technology has mostly played the role of an "auxiliary tool" in Civics teaching, suffering from pain points such as solidified scenarios, superficial interactions, and insufficient personalization. Generative AI, with its unique knowledge generation capabilities, multimodal context construction abilities, and human-like interaction logic, provides an opportunity for Civics teaching to shift from the paradigm of "Technical Assistance" to "Human-Machine Synergy." Based on the practical needs of the reform and innovation of Civics Courses in the new era, this paper deeply analyzes the theoretical connotation of the "Human-Machine Synergy" teaching model. It systematically reconstructs the practical model of scenario-based teaching in Civics Courses from three dimensions: the "Dynamic Knowledge Graph" in theoretical teaching, the "Virtual-Real Twinning" in practical teaching, and "Intelligent Decision-Making" in social services. Furthermore, addressing the potential loss of teacher subjectivity, algorithmic bias, and ethical risks in human-machine synergy, this paper proposes building a governance mechanism of "Value Leadership, Dual-Teacher Synergy, and Ethical Regulation," aiming to provide theoretical support and practical solutions for promoting the high-quality development of university Civics Courses.
The unstoppable advancement of digital technology has encouraged universities to develop new approaches to support student entrepreneurial activities. One of the crucial activities affected is the business incubation program. This study explores a business incubation program supported by artificial intelligence, showing the program's adaptation to advances in digital technology. A qualitative approach with case-study analysis is used to understand the complex phenomenon of entrepreneurial behavior in using artificial intelligence and its impact on business. The subjects of this study were 20 business groups in two clusters: the first cluster contains 10 groups in the ideation and prototype phases, while the second cluster contains 10 groups in the business growth phase. In general, AI contributes to students' confidence in deciding their future as entrepreneurs. More specifically, AI acts as a digital coach for students in the ideation and prototype phases as they explore opportunities. In the business growth phase, AI acts as an advisor that can audit the fit between business growth strategies, external situations, and internal capabilities, and recommend alternative strategies that fit better. The results offer a novel account of the distinct roles AI plays in each business phase, termed AI-Business Fit: a concept that describes how well the use of AI suits entrepreneurial activities that affect business growth.
The proliferation of Generative AI necessitates a re-evaluation of educational strategies, particularly in vocational fields. Traditional vocational education faces challenges like limited resource access, high software costs, and a lack of personalized feedback. This paper explores how integrating generative AI, guided by a human-centered philosophy, can address these issues. Through a qualitative analysis of four pedagogical interventions at a vocational school (e-commerce, art, math, and computer science), we find that AI, as a pedagogical co-pilot, boosts instructional efficiency, nurtures creativity, and enables individualized learning. The case studies show AI's ability to lower costs, remove practice barriers, and provide data-driven insights. We synthesize these findings into a conceptual framework for human-centered AI integration, emphasizing AI's role in empowering educators and learners. This research offers a transferable model and discusses ethical considerations for creating effective and equitable learning environments.
This paper discusses how a hybrid AI-human music composition setting can have a pedagogical effect on the creative process and musical knowledge of undergraduate learners. The study used a mixed-methods design that entailed expert analysis of student compositions, system interaction logs, and reflective learner feedback. Students using the hybrid system showed much greater gains in harmonic coherence, melodic structure, rhythmic variation, and overall creative expression than the control group using traditional tools. Behavioral analyses indicate that AI-generated recommendations proved most helpful when composing initial ideas, while students progressively relied on self-refinement in the later stages of composition. Creativity ratings and self-efficacy scores in the experimental group showed significant positive change, demonstrating the system's relevance for increasing idea generation, decreasing creative anxiety, and enhancing critical engagement skills. The paper adds to a growing body of evidence that hybrid AI-assisted learning offers useful new directions in music education, helping to develop deeper musical perception and more accessible creative exploration.
While research on human-AI collaboration exists, it has mainly examined language learning and used traditional counting methods, with little attention to the evolution and dynamics of collaboration on cognitively demanding tasks. This study examines human-AI interactions while solving a complex problem. Student-AI interactions were qualitatively coded and analyzed with transition network analysis, sequence analysis and partial correlation networks, as well as comparison of frequencies using chi-square tests and Pearson-residual shaded mosaic plots, to map interaction patterns, their evolution, and their relationship to problem complexity and student performance. Findings reveal a dominant Instructive pattern, with interactions characterized by iterative ordering rather than collaborative negotiation. Oftentimes, students engaged in long threads whose misalignment between prompts and AI output exemplified a lack of synergy, challenging the prevailing assumptions about LLMs as collaborative partners. We also found no significant correlations among assignment complexity, prompt length, and student grades, suggesting a lack of cognitive depth or any effect of problem difficulty. Our study indicates that current LLMs, optimized for instruction-following rather than cognitive partnership, are constrained in their capacity to act as cognitively stimulating or aligned collaborators. Implications for designing AI systems that prioritize cognitive alignment and collaboration are discussed.
Generative artificial intelligence (Gen-AI) has rapidly entered design studio practice, enabling students to produce visual concepts, models, and narratives with unprecedented speed and variety. While these tools expand creative possibilities, they simultaneously challenge traditional pedagogical assumptions regarding originality, authorship, and the role of the lecturer. This study explores the potential of Human-AI Co-Teaching Models in design studio pedagogy, focusing on how lecturers’ roles evolve when Gen-AI becomes an active partner in the creative process. Using case studies from design studios in higher education, the research adopts a qualitative action-research approach with lecturers and students engaging in AI-augmented studio projects. Findings highlight a paradigm shift: the lecturer’s role transitions from knowledge gatekeeper to critical guide, curator, and ethics negotiator. The paper proposes a framework for Human-AI Creative Pedagogy that balances efficiency and innovation with reflection, ethics, and critical thinking. Implications are offered for design education governance, studio assessment models, and lecturer training in the age of Gen-AI.
As artificial intelligence (AI) technologies become increasingly integrated into higher education, university instructors are compelled to navigate shifting pedagogical landscapes and reconfigure their professional identities. This qualitative study investigates how university teachers in China perceive and negotiate their roles in AI-mediated classrooms, using a posthumanist theoretical framework to explore human–machine entanglements in pedagogy. Drawing on semi-structured interviews with six instructors across diverse disciplines, the study identifies three emergent orientations toward AI: as an assistant, a collaborator, and a threat. The study demonstrates how AI integration provokes both pragmatic adaptation and deep-seated identity renegotiation, underscoring the affective, ethical, and epistemological tensions of educational transformation.
Recent advancements in educational pedagogy have shifted from static benchmarks to dynamic tools, with knowledge-in-use gaining prominence. However, effectively assessing knowledge-in-use remains a significant challenge, necessitating the development of assessments that capture students’ ability to transfer knowledge beyond the classroom. Despite the Next Generation Science Standards (NGSS) advocating for high-quality, formative assessments, many educators struggle to design and implement NGSS-aligned assessments due to their time-consuming and labor-intensive nature. Artificial Intelligence (AI), particularly Large Language Models (LLMs), presents a promising solution by automating knowledge-in-use assessment generation, with the potential to significantly enhance efficiency. However, LLMs often lack domain-specific expertise, and no standardized framework exists for evaluating their outputs. To address these challenges, we integrate Retrieval-Augmented Generation (RAG) to enhance LLMs’ comprehension of educational content and develop a Human-in-the-Loop strategy to refine and evaluate AI-generated assessments. Meanwhile, we design evaluation rules as quality standards and involve both human experts and LLMs in assessing the generated content. This LLM-based pipeline significantly improves efficiency, and the results demonstrate that human guidance substantially improves LLM generations, leading to high-quality assessments that align with the proposed evaluation rules.
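The RAG-plus-human-in-the-loop pipeline described above can be sketched at a high level. The toy retrieval rule, the stand-in for the LLM call, and the rubric checks below are all illustrative assumptions, not the paper's implementation:

```python
# Illustrative retrieval-augmented, human-in-the-loop assessment pipeline.

def retrieve(query, corpus, k=2):
    """Rank corpus passages by naive word overlap with the query."""
    def overlap(passage):
        return len(set(query.lower().split()) & set(passage.lower().split()))
    return sorted(corpus, key=overlap, reverse=True)[:k]

def generate_assessment(query, context):
    """Stand-in for an LLM call: compose a context-grounded draft item."""
    return f"Using: {' | '.join(context)} -- Task: {query}"

def human_review(draft, rules):
    """Apply rubric checks; a human would revise drafts that fail."""
    passed = all(rule(draft) for rule in rules)
    return draft if passed else draft + " [REVISE]"

corpus = [
    "Energy is conserved in closed systems.",
    "Photosynthesis converts light energy to chemical energy.",
    "Plate tectonics explains earthquake distribution.",
]
# Toy "evaluation rules": the item must state a task and stay concise.
rules = [lambda d: "Task:" in d, lambda d: len(d) < 500]

query = "Design an item on energy transfer in photosynthesis"
context = retrieve(query, corpus)
item = human_review(generate_assessment(query, context), rules)
print(item)
```

The point of the design is the loop: retrieval grounds the generator in curriculum content, and explicit rules give the human reviewer a shared standard for accepting or revising each draft.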
This work in progress paper presents the development of SPIED, a Human-Centred Artificial Intelligence pedagogy for teaching AI at upper second level. The AI education literature for K-12 shows there is a need for pedagogies to cover both the technical and ethical elements of AI. SPIED combines an explicit ethical focus with the technical skills required for working with AI. SPIED was developed from three distinct foundations: PRIMM as an approach to teaching programming, the HCAI Block model, and critical pedagogy as a learning theory. This paper then presents the iterative development of SPIED as a pedagogy via corroboration using the HCAI Block model and the process of face validation. Future work will involve a study to validate SPIED as a pedagogy, aided by the development of supportive activities.
There is a growing need to re-examine traditional legal pedagogy due to the increasing emergence of artificial intelligence in legal research, analysis, and education at large. This study explores the incorporation of various AI tools into the legal education system to enhance learning while preserving and nurturing essential human skills such as ethical reasoning, critical thinking, and professional judgment. It examines the tension between structured technology and human-centric legal reasoning, arguing that legal education must evolve to maintain a balance between the two domains. Combining legal theory with educational psychology and technological ethics, the research applies a multidisciplinary framework to current practices across schools where AI integration is gaining momentum, assessing how tools like legal chatbots, predictive analytics, and AI-powered research engines shape student learning outcomes, classroom dynamics, and perceptions of legal reasoning. The study employs qualitative methods, including interviews with law faculty and students, alongside case studies of institutions that have piloted AI-enhanced curricula. The findings reveal that AI tools make it easier to find legal materials and speed up processes, but overreliance on them may make legal decisions less creative, less analytical, and less morally responsible. The study recommends a concept of "augmented legal pedagogy" that merges AI technology with human-centred teaching techniques to prepare the legal profession for future needs and keep it ethically grounded. It concludes by urging educators, institutions, and regulators to ensure that legal training keeps pace with technology while remaining essentially human in the age of AI.
Artificial Intelligence (AI) and more specifically Machine Learning (ML) have become ubiquitous in students’ digital lives in Ireland. There is a need for upper second-level computer science students to become knowledgeable about the operation and consequences of AI/ML for themselves and society, as currently this human-centered viewpoint is lacking. The goal of this work is to develop a pedagogy that supports students not just in having a technical view of ML, but in using, building, and evaluating it through a human-centered lens.
Artificial intelligence (AI) applications to support human tutoring have potential to significantly improve learning outcomes, but engagement issues persist, especially among students from low-income backgrounds. We introduce an AI-assisted tutoring model that combines human and AI tutoring and hypothesize this synergy will have positive impacts on learning processes. To investigate this hypothesis, we conduct a three-study quasi-experiment across three urban and low-income middle schools: 1) 125 students in a Pennsylvania school; 2) 385 students (50% Latinx) in a California school, and 3) 75 students (100% Black) in a Pennsylvania charter school, all implementing analogous tutoring models. We compare learning analytics of students engaged in human-AI tutoring with those of students using math software only. We find human-AI tutoring has positive effects, particularly on students’ proficiency and usage, with evidence suggesting lower achieving students may benefit more than higher achieving students. We illustrate the use of quasi-experimental methods adapted to the particulars of different schools and data-availability contexts so as to achieve the rapid data-driven iteration needed to turn an inspired creation into effective innovation. Future work focuses on improving the tutor dashboard and optimizing tutor-student ratios, while maintaining costs of approximately $700 per student annually.
This study investigates how students interact with intelligent agents based on metacognitive strategies, and the changes in metacognitive levels within AI-supported collaborative learning environments. It was conducted in naturalistic learning settings, involving 24 college students and lasting approximately two months. The students’ metacognitive survey data and their reflective essays on interactions with AI agents were analyzed using descriptive statistical analysis and epistemic network analysis, based on the community of inquiry framework and metacognitive theory. The results show that AI agents providing metacognitive strategy support effectively improve learners’ metacognitive levels. The change in learners’ metacognitive level is related to their perceived teaching presence and cognitive presence. This study contributes to and has implications for the educational implementation of intelligent agents, as well as the facilitation and graphical representation of learner-AI interactions in educational settings.
In response to the pain points of rapid iteration of front-end education technology, large differences in learner foundations, and a lack of practical scenarios, this paper combines generative artificial intelligence and AI agents to analyze the empowerment logic from three dimensions: knowledge ecology reconstruction, cognitive collaborative upgrading, and teaching methodology innovation. It explores its application scenarios in teaching and learning, sorts out challenges such as technology adaptation and learning dependence, and proposes paths such as building an exclusive AI ecosystem and optimizing the guidance mechanism of intelligent agents to provide support for the digital transformation of front-end education.
Large language models (LLMs) have been applied across various intelligent educational tasks to assist teaching. While preliminary studies have focused on task-specific, independent LLM-empowered agents, the potential of LLMs within a multi-agent collaborative framework for classroom simulation with real user participation remains unexplored. In this work, we propose SimClass, a multi-agent classroom simulation teaching framework. We recognize representative class roles and introduce a novel class control mechanism for automatic classroom teaching, and conduct user experiments in two real-world courses. Using the Flanders Interactive Analysis System and Community of Inquiry theoretical frameworks from educational analysis, we demonstrate that LLMs can simulate a dynamic learning environment for users with active teacher-student and student-student interactions. We also observe group behaviors among agents in SimClass, where agents collaborate to create enlivening interactions in classrooms to improve user learning process. We hope this work pioneers the application of LLM-empowered multi-agent systems in virtual classroom teaching.
No abstract available
To address issues such as lack of dynamic adaptability and weak teaching standardization in generative AI education applications, this paper proposes an educational multi-agent collaborative system architecture. The system achieves closed-loop management of the entire teaching process through the collaborative operation of four specialized intelligent agents: "teaching, learning, administering, and evaluating". Key technological components include a Neo4j-based cognitive knowledge graph, RAG-enhanced retrieval that improves semantic-level knowledge-matching accuracy, and an intelligent workflow engine that automates the chaining of teaching tasks. The system promotes the transformation of the education paradigm from static knowledge delivery to dynamic cognitive construction, and its evolution from educator-centered to student-centered. Future work will optimize the agents' scheduling algorithm and retrieval performance and deepen the integration path with traditional education models.
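The semantic-level knowledge matching that RAG-enhanced retrieval provides can be illustrated with a minimal sketch. Everything here is an illustrative assumption, not the paper's implementation: the system described above stores its cognitive graph in Neo4j and presumably uses embedding-based retrieval, whereas this sketch ranks in-memory knowledge nodes by bag-of-words cosine similarity.

```python
from collections import Counter
import math

# Illustrative knowledge nodes; in the paper's system these would live in Neo4j.
KNOWLEDGE_NODES = {
    "loops": "repeat a block of statements using for and while loops",
    "recursion": "a function that calls itself to solve smaller subproblems",
    "graphs": "nodes connected by edges traversed with BFS or DFS",
}

def _vec(text: str) -> Counter:
    """Bag-of-words vector; a real system would use embeddings instead."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k knowledge nodes most similar to the learner query."""
    q = _vec(query)
    ranked = sorted(KNOWLEDGE_NODES,
                    key=lambda n: _cosine(q, _vec(KNOWLEDGE_NODES[n])),
                    reverse=True)
    return ranked[:k]

print(retrieve("how does a function call itself"))  # → ['recursion']
```

The retrieved node would then be passed to the generating agent as grounding context, which is what gives RAG its traceability advantage over a bare language model.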
With the deep integration of intelligent technologies and education, smart teaching has gradually become a new form of vocational education in the information age. Smart teaching aims to raise the level of teaching intelligence through modern information technology, thereby improving classroom efficiency and cultivating high-quality skilled talent suited to the new era. In this context, this article proposes a vocational-education collaborative learning algorithm based on multi-agent systems (MAS), which aims to improve learning efficiency and effectiveness through cooperation between agents. By constructing agents with specific attributes and behavior models and defining their interaction methods and communication mechanisms, the agents achieve information sharing and collaborative learning, forming an efficient learning network. Experimental results show that the algorithm achieves significant results in vocational education: students' academic performance and satisfaction improved markedly, as did teaching quality. The algorithm thus promotes the quality improvement and sustainable growth of vocational education and supports vocational colleges in cultivating high-quality skilled talent.
Many local universities still face challenges in developing intelligent teaching and management systems, where information is often fragmented and automation remains limited. To address this issue, this paper designs and implements a multi-agent teaching system based on a collaborative mechanism. The system includes three types of agents: course Q&A, policy Q&A, and teaching assistant, responsible respectively for course understanding, policy retrieval, and class-scheduling tasks. By integrating Retrieval-Augmented Generation (RAG) with an agent collaboration framework, the system improves the accuracy and consistency of responses and strengthens the traceability of knowledge sources. Experimental results show that, compared with traditional large language models and single-RAG systems, the proposed approach achieves higher answer accuracy and better reasoning consistency across multiple tasks under the tested conditions. The results suggest that a lightweight, multi-agent-based framework can support teaching management in resource-limited local university environments and offers a practical reference for the development of intelligent educational support systems.
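The first step in such a system is dispatching each query to the right specialized agent. A minimal sketch of that routing, under the assumption of simple keyword-based intent matching (the real system presumably uses an LLM router on top of RAG, and the agent names and keyword sets below are invented), could look like:

```python
# Keyword sets per agent are illustrative assumptions, not the paper's design.
AGENT_KEYWORDS = {
    "course_qa": {"syllabus", "lecture", "homework", "exam"},
    "policy_qa": {"policy", "regulation", "credit", "graduation"},
    "teaching_assistant": {"schedule", "classroom", "timetable"},
}

def route(query: str) -> str:
    """Pick the agent whose keyword set best overlaps the query tokens."""
    tokens = set(query.lower().split())
    best, best_hits = "course_qa", 0  # default to the course Q&A agent
    for agent, keywords in AGENT_KEYWORDS.items():
        hits = len(tokens & keywords)
        if hits > best_hits:
            best, best_hits = agent, hits
    return best

print(route("what is the graduation credit policy"))  # → policy_qa
```

Each agent would then run its own RAG pipeline against its dedicated knowledge base, which is what keeps answers traceable to a specific source collection.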
Feedback from teaching agents is beneficial for enhancing learning performance, and the emotional design of feedback can effectively evoke positive emotions in learners and enhance their learning motivation. However, there is still limited research on emotionally designed feedback strategies for teaching agents that address issues such as low learner engagement and inhibited collaborative knowledge construction in online collaborative sessions. Drawing on relevant studies of emotional response theory, this article constructs a model of emotionally designed feedback from teaching agents, providing a reference for the design of intelligent teaching-agent feedback in online collaborative sessions. The article then conducts a quasi-experimental study to validate the model's effectiveness. The study finds that: (1) feedback on learning situations provided by teaching agents promotes participation in collaborative sessions and collaborative knowledge construction; (2) emotionally designed feedback from teaching agents is more effective in promoting participation in online collaborative sessions and collaborative knowledge construction; and (3) learners' technology acceptance of teaching agents that provide feedback on learning situations together with positive emotions is significantly higher than of agents that provide feedback on learning situations alone. These findings provide valuable guidance for the design and development of teaching agents.
With the rapid advances of Artificial Intelligence (AI) and its technologies, human teachers and machines are now capable of collaborating to effectively achieve specified outcomes. In educational settings, such collaboration requires consideration of several dimensions to ensure safe, responsible, and ethical usage. While various research studies have discussed human-machine collaboration or cooperation in education, a framework is now needed that aligns with contemporary affordances. Providing such a framework can help to better understand how human teachers and machines can team up in education and what should be considered while doing so. To address this gap, this paper outlines the iSTAR (Intelligent human-machine Synergy in collaborative teaching: utilizing the digital Twins, Avatars/Agents and Robots) framework. iSTAR represents human-machine collaboration as an ecosystem that goes beyond the simple collaboration between human teachers and machines in education. Therefore, it presents core dimensions of DELTA (design, ethics, learning, teaching and assessments) that should be considered in designing safe, responsible, and ethical learning opportunities.
No abstract available
No abstract available
No abstract available
To address the problem that traditional computer-aided teaching systems, constrained by communication technology, cannot support interaction between teachers and students, the author proposes a computer teaching system based on the Internet of Things and machine learning. The hardware structure is designed around the functions of each module. The student learning module consists of a teaching coordination agent and several other agents responsible for presenting specific teaching materials, solving problems, and sharing knowledge through a collaborative mechanism, providing a personalized teaching basis for the system. The teacher teaching module provides students with teaching strategies according to their learning requirements and uses its own reasoning mechanism to give intelligent guidance on problems encountered during teaching; the assessment module applies assessment rules to analyze student responses and comprehensively evaluate students' learning behaviors, attitudes, outcomes, and abilities. The software is built on SQL Server 2000 as the database server; once the data attributes are determined, online evaluation of the data is carried out, and distance teaching is completed over the network. Experimental results show that at the 20 s mark, the teaching efficiency of the traditional system is 61%, while that of the proposed system is 91%. The system based on the Internet of Things and machine learning thus achieves high teaching efficiency and can provide equipment support for students' learning.
No abstract available
No abstract available
No abstract available
No abstract available
No abstract available
Success in training is an opportunity that must be offered to every student. However, many universities experience high rates of failure and dropout, especially during the first year of higher education. We believe that a process based on personalized teaching can help decrease the failure rate during undergraduate studies. To achieve this goal, we focus on online learning supported by a Learning Management System (LMS). In previous work, we integrated new tools that use traces of learners' activities during collaborative work on an LMS. We therefore propose a system based on intelligent agents: we are designing smart dashboards that automate the detection of specific learner difficulties in order to offer alternatives or solutions to their problems.
No abstract available
No abstract available
Amidst the profound reconstruction of the educational ecosystem by digital-intelligent technologies, teachers face the dual challenges of role-transition pains and technological adaptation crises. This study focuses on the core issue of "teachers' role adaptation and pedagogical innovation in human-AI collaborative teaching," integrating Social-Technical Systems (STS) theory with an ecological model of teacher professional development to reveal teachers' irreplaceable value as instructional designers, emotional connectors, and ethical guardians. Key findings include: three predominant scenarios of human-AI collaborative teaching have emerged (intelligent diagnosis, virtual-physical inquiry, and generative collaboration), yet three critical adaptation gaps persist among teachers: weak technological integration capabilities, role identity anxiety, and deficient algorithmic-ethics judgment. Fundamental conflicts stem from the tension between a technological efficiency orientation and educational process values, manifested in AI's compression of students' trial-and-error space and in tool fragmentation undermining holistic education. Accordingly, a "Three-Phase, Five-Dimension" collaborative model is proposed, adopting dynamic-equilibrium principles to allocate responsibilities (AI handles standardized tasks while teachers lead value-rational domains) with embedded ethical review mechanisms. Suggested teacher adaptation pathways are: developing technological integration and interdisciplinary design capabilities at the individual level; innovating virtual teaching communities and competition-incubation mechanisms at the organizational level; and creating teacher-friendly interfaces at the technological level. The study concludes that human-AI collaboration must center on teacher agency, advocating trustworthy AI educational infrastructure and teacher ethics certification to build a "Humanities as Essence, Technology as Utility" educational ecosystem.
The rapid development of artificial intelligence (AI) and digital transformation is driving higher education to shift from mass education to personalized education. However, current digital and intelligent technologies are primarily applied to teaching without fully exploring their potential. Thus, based on generative AI and knowledge graphs, a digital-intelligent collaborative teaching system is proposed to construct a novel intelligent teaching environment that supports active learning and collaborative teaching. The system focuses on the cultivation of talent in electronic information fields. Specifically, it leverages large models to develop a three-dimensional knowledge graph for professional courses, enabling the recommendation of personalized learning paths for students. Meanwhile, an intelligent formula derivation engine is designed to facilitate human-machine collaborative problem setting and solving, while establishing connections between knowledge and application based on the given problems. Moreover, a wireless communication agent incorporating a teacher knowledge base is constructed to provide students with professional companion learning tools. The proposed system, implemented over one semester in three classes with 144 students, significantly enhanced teaching quality and learning effectiveness, earning positive recognition from students. This provides a low-cost, high-efficiency digital-intelligent education model for new engineering education.
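Recommending a personalized learning path over a course knowledge graph can be sketched as a topological ordering of the goal concept's prerequisite closure. The course names and `PREREQS` edges below are invented for illustration; the paper's three-dimensional knowledge graph is far richer than this flat prerequisite map.

```python
from graphlib import TopologicalSorter

# Illustrative prerequisite edges (node -> set of prerequisite nodes)
# loosely themed on a communications curriculum; not the paper's graph.
PREREQS = {
    "fourier_transform": {"complex_numbers"},
    "modulation": {"fourier_transform"},
    "channel_coding": {"probability"},
    "ofdm": {"modulation", "channel_coding"},
}

def learning_path(goal: str) -> list[str]:
    """Return a valid study order covering the goal and all its prerequisites."""
    needed, stack = set(), [goal]
    while stack:  # collect the prerequisite closure of the goal
        node = stack.pop()
        if node not in needed:
            needed.add(node)
            stack.extend(PREREQS.get(node, ()))
    # restrict the graph to the needed nodes, then order topologically
    subgraph = {n: PREREQS.get(n, set()) & needed for n in needed}
    return list(TopologicalSorter(subgraph).static_order())

path = learning_path("ofdm")
print(path)
```

Any order that places each concept after its prerequisites is acceptable; a real recommender would further weight the choice by the student's mastery profile.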
Artificial intelligence has brought unlimited possibilities to education. Based on the DeepSeek large language model and the Dify agent platform, this paper uses RAG, TTS, NLP, and other technologies to build several types of college EFL teaching agents. While greatly reducing teachers' workload, the agents also give each learner a one-to-one personal learning assistant for flexible and efficient study. Teaching practice shows that college EFL teaching agents adapt well to the teaching methods of the intelligent era and represent an innovative exploration of the digital transformation of college EFL courses in higher education institutions.
With the rise of technologies such as Artificial Intelligence (AI), Big Data, and Cloud Computing, digital intelligence is reshaping the education landscape in far-reaching ways, opening up new paths for education modernization. As a transformative model, human-computer collaborative teaching provides important opportunities for educational practice through the integration of intelligent technologies with traditional practice: technology empowerment promotes teaching innovation, optimal allocation of resources enhances educational equity, support for personalized learning enables tailored teaching, and role diversification helps teachers transform. However, many challenges remain: insufficient technology integration limits practical application, data ethics and privacy issues raise potential risks, a lack of standards and quality assurance exacerbates the lack of regulation, and conflicting roles in collaboration hinder management transformation. Proposed solutions include enhancing the technological literacy of teachers and learners, strengthening data privacy protection frameworks, optimizing instructional systems to support personalized learning, and building educational ecosystems that promote collaboration and adaptability. This paper aims to provide a comprehensive framework to guide the advancement of education modernization and ensure that it matches the needs and potential of digital intelligence.
Emotions play an important role in human-computer interaction, but there is limited research on affective and emotional virtual agent design in the area of teaching simulations for healthcare provision. The purpose of this work is twofold: firstly, to describe the process for designing affective intelligent agents that are engaged in automated communications such as person to computer conversations, and secondly to test a bespoke prototype digital intervention which implements such agents. The presented study tests two distinct virtual learning environments, one of which was enhanced with affective virtual patients, with nine 3rd year nursing students specialising in mental health, during their professional practice stage. All (100%) of the participants reported that, when using the enhanced scenario, they experienced a more realistic representation of carer/patient interaction; better recognition of the patients' feelings; recognition and assessment of emotions; a better realisation of how feelings can affect patients' emotional state and how they could better empathise with the patients.
The widespread application of emerging information technologies such as big data and artificial intelligence in the field of education has facilitated the development of human-computer collaborative classroom teaching models. Based on the definition and connotation of the elements of human-computer collaborative classrooms, this study distinguishes the differences in the knowledge characteristics of teaching content and combines the three major teaching phases before, during and after class to construct a human-computer collaborative classroom teaching model in an intelligent teaching environment. The research conclusions can provide a reference for front-line teachers and teaching managers to implement human-computer collaborative teaching.
No abstract available
No abstract available
Against the backdrop of generative artificial intelligence driving the digital transformation of education, high school Chinese oral teaching faces long-standing challenges of insufficient interactive depth and limited personalized guidance. This article addresses how human-machine collaboration can reconstruct the teacher-student interaction mode in high school Chinese language classrooms to improve the quality and effectiveness of oral teaching. The study takes the reconstruction of the tripartite interactive relationship among teacher, student, and Artificial Intelligence (AI) as its central theoretical thread, and systematically reviews how classroom interaction changes in subject role division, interactive structure, and teaching-scene extension under intelligent-technology empowerment. Key technologies, represented by speech recognition and educational models, have changed the interactive mechanism of oral teaching by providing instant feedback, generating contextualized content, and supporting multidimensional dialogue. Although notable progress has been achieved in practical applications, the study also points out remaining challenges such as insufficient technological adaptability, lagging teacher professional development, and the urban-rural "digital divide". Future efforts should develop subject-specific intelligent agents, establish a normalized mechanism for collaborative teacher development, and set scientific ethical and evaluation standards, so that human-computer collaboration moves from "demonstrative applications" to a "normalized and deep" classroom ecology, ultimately serving the comprehensive improvement of students' core literacy in language construction and application.
No abstract available
The COVID-19 pandemic presented new challenges for schoolteachers and students completing online learning. Because educators teach at a distance, a key challenge is the limited control that can be exercised over students. Engagement is essential to create enthusiasm and interest during online learning. Digital technologies such as artificial intelligence and Nearpod can be used to create engaging lessons that raise learning motivation. This community service program was conducted with teachers at SDN Sampay 01 and 02, Cisarua, Jawa Barat. The results show that teachers' knowledge of Artificial Intelligence and Nearpod increased after the program, and the majority of teachers also became more confident using technology in their teaching.
Retention in online learning has often been cited as a challenge for students. One cause is learners’ feelings of isolation. This mixed-methods study examined the effectiveness of intelligent agents in online courses in reducing student isolation. A survey was conducted with two online undergraduate courses that were part of an online degree program, followed by student interviews. Survey data revealed a significant positive difference in students’ perceptions of teaching presence in sections using intelligent agents. Other findings provide guidance for effectively implementing intelligent agents.
This study focuses on developing educational agents capable of understanding and adapting to learners' complex cognitive behaviors. We propose a “Reverse Turing Test” (RTT) framework to evaluate AI's ability to perceive human cognition and construct an adaptive teaching agent, “Xiaohang,” based on Multimodal Large Language Models (MLLMs). The research employs four key methods: RTT, multimodal data perception, adaptive teaching strategies, and memory-reflection mechanisms. RTT captures learners' cognitive states through interactive dialogues, multimodal perception collects multidimensional data (text, speech, images), adaptive strategies adjust teaching plans based on real-time feedback, and memory-reflection mechanisms optimize subsequent teaching outcomes. Experiments conducted in project-based learning (PBL) scenarios demonstrate that Xiaohang significantly enhances the quality of problem formulation, creativity in solution design, and task completion efficiency, validating its effectiveness in improving educational outcomes.
A Human-in-the-Loop (HITL) approach leverages generative AI to enhance personalized learning by directly integrating student feedback into AI-generated solutions. Students critique and modify AI responses using predefined feedback tags, fostering deeper engagement and understanding. This empowers students to actively shape their learning, with AI serving as an adaptive partner. The system uses a tagging technique and prompt engineering to personalize content, informing a Retrieval-Augmented Generation (RAG) system to retrieve relevant educational material and adjust explanations in real time. This builds on existing research in adaptive learning, demonstrating how student-driven feedback loops can modify AI-generated responses for improved student retention and engagement, particularly in STEM education. Preliminary findings from a study with STEM students indicate improved learning outcomes and confidence compared to traditional AI tools. This work highlights AI's potential to create dynamic, feedback-driven, and personalized learning environments through iterative refinement.
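The core loop, in which predefined feedback tags reshape the next AI response, can be sketched as a query-rewriting step in front of the RAG retriever. The tag names and rewrite rules below are assumptions for illustration; the paper's tagging technique and prompt engineering are not specified at this level of detail.

```python
# Hypothetical feedback tags mapping to query rewrites; illustrative only.
TAG_REWRITES = {
    "too_abstract": "with a concrete worked example",
    "too_advanced": "explained for a beginner",
    "too_long": "in two sentences",
}

def refine_query(query: str, tags: list[str]) -> str:
    """Fold the student's feedback tags into the next RAG retrieval query."""
    for tag in tags:
        suffix = TAG_REWRITES.get(tag)
        if suffix:
            query = f"{query} {suffix}"
    return query

# One iteration of the feedback loop: critique -> refined retrieval query.
print(refine_query("explain entropy", ["too_abstract"]))
```

The refined query then drives retrieval of different supporting material, so the student's critique directly changes what the model grounds its next explanation on.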
Assessment is a fundamental action in the learning process, yet it is often prone to subjectivity and latent biases associated with handwriting, language competence, or even social background. Currently known automatic grading methods are efficient, but they can hardly be considered transparent or fair. In this paper we introduce an Adaptive and Explainable Fair AI-Powered Assessment System that enables fairness in a transparent way for the evaluation of student answers. The proposed system applies Machine Learning and Natural Language Processing (NLP) methods to examine semantic content rather than writing style or linguistic complexity. A human-in-the-loop bias-correction framework ensures that fairness can be iteratively and continuously improved by incorporating educator feedback into model updates. Furthermore, a cross-lingual fairness module built on multilingual transformer models helps score answers equally across languages. For risk reduction and interpretability, we include an explainability engine that generates nuanced rationales for each score in terms of content coverage, key-concept relevance, and where failures occur. Experimental results show that the system can reduce bias, justify scores transparently, and provide personalized feedback for learning. This method offers an innovative, adaptive, and ethical framework for AI-based educational assessment.
Sign Language Translation (SLT) plays a crucial role in enhancing accessibility and literacy for the deaf community. However, SLT models face significant challenges, including limited annotated datasets, sign language variability, and generalization issues. This study proposes a Human-in-the-Loop (HITL) SLT conceptual framework to enhance model adaptability through continuous teacher and student interactions. By integrating Federated Learning (FL), schools can train SLT models locally while preserving data privacy and accommodating regional sign language variations. The proposed framework enables teachers to refine model predictions via Active Learning, introduce new sign gestures using Few-shot Learning, and enhance adaptability through Reinforcement Learning. Additionally, FL facilitates decentralized model updates, reducing computational burdens while improving model inclusivity across multiple schools. The synergy between HITL and FL creates an adaptive system that evolves based on real-time feedback, making AI-driven SLT more effective in educational settings. Future directions include deploying the HITL-SLT model in real-world classrooms to evaluate its practical impact. Conducting small-scale studies with teachers and students will be essential for validating its effectiveness, refining model adaptability, and enhancing interactive learning experiences.
This experience report documents an attempt at embracing the "A's for all" and equitable grading frameworks in an introductory, proof writing-based discrete mathematics course for computer science majors (with N=138 students) at a medium-sized research-oriented university in the US. Unlike in introductory programming contexts, there is so far no reliable automated grading system that gives formative and adaptive feedback supporting the scope of a proof-based discrete mathematics course. We therefore faced the unique challenge of being unable to automate all assessments and directly offer all students unlimited attempts toward mastery. To address this issue, we adopted a hybrid approach in designing our formative assessments. Using the Exemplary, Satisfactory, Not Yet, and Unassessable (ESNU) discrete grading model, we required all students to get a Satisfactory or above in every question in every assignment within two rounds of human feedback. Students not meeting the goal after two attempts then consulted with course staff members in one-on-one interactions to get diagnostic feedback at any time at their convenience until the semester ended. We document our course policy design in detail, then present data that summarizes both the grading outcomes and student sentiments. We also discuss the lessons learned from our initiative and the necessary staff-side management practices that support our design. This report outlines an example of adopting the A's for all and equitable grading framework in a course context where not all contents can be made autogradable.
With the rapid advancement of Artificial Intelligence and Large Language Models in education, English language testing courses face a pressing need to transition from exam-oriented to competency-based approaches. This paper proposes and constructs the "AI-Prep-Activity-AI-Assess" teaching model, elucidating its theoretical foundation, three-stage operational framework, and role division in the classes: Pre-class (AI-Prep) focuses on intelligent diagnosis and personalized guided learning; In-class (Activity) centers on task-driven human-machine collaborative interaction; Post-class (AI-Assess) employs adaptive assessment as its primary means. Through literature review, model design, and case analysis, the study finds that the 3A model effectively enhances students’ practical abilities and learning motivation, supports teachers’ transition from knowledge disseminators to instructional organizers and emotional guides, and demonstrates significant advantages in personalized feedback, formative assessment, and closed-loop teaching. Concurrently, this paper identifies limitations in large-scale empirical validation, tool adaptability, data privacy, and academic integrity, which warrant further research.
“Closing the loop” in Learning Analytics (LA) requires an ongoing design and research effort to ensure that the technological innovation emerging from LA addresses the actual, pragmatic problems of educators in everyday learning environments. An approach to doing so explored in this paper is to design LA as a part of the human systems of activity within an educational environment, as opposed to conceptualising LA as a stand-alone system offering judgement. In short, this paper offers a case-study of how LA can generate data representations that can provide the basis for expansive and deliberative decision-making within the learning community. The case-study provided makes use of Social Network Analysis (SNA) to monitor the changing patterns of decision making around teaching and learning in a very large Australian college over several years as that college embarked on an organised program of practitioner research. Examples of how the various SNA metrics can be translated into matters of pragmatic concern to the college, its leaders, teachers and students, are provided and discussed.
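Two of the SNA metrics such a case study translates into pragmatic terms, degree centrality (who sits at the hub of decision-making conversations) and network density (how widely deliberation is shared), can be computed in a few lines. The edge list below is invented for illustration and is not the college's data.

```python
# Hypothetical undirected "who discusses teaching decisions with whom" edges.
EDGES = [("lead", "teacher_a"), ("lead", "teacher_b"),
         ("teacher_a", "teacher_b"), ("teacher_b", "student_rep")]

def degree_centrality(edges):
    """Degree of each node, normalized by the maximum possible degree n-1."""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    n = len(degree)
    return {node: d / (n - 1) for node, d in degree.items()}

def density(edges):
    """Fraction of possible undirected ties that actually exist."""
    nodes = {n for edge in edges for n in edge}
    n = len(nodes)
    return 2 * len(edges) / (n * (n - 1))

print(degree_centrality(EDGES))
print(density(EDGES))
```

In the case study's terms, a node with centrality near 1.0 is a broker every decision passes through, while rising density over the years would indicate deliberation spreading beyond the leadership group.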
Recent advanced AI technologies, especially large language models (LLMs) like GPTs, have significantly advanced the field of data mining and led to the development of various LLM-based applications. AI for education (AI4EDU) is a vibrant multi-disciplinary field of data mining, machine learning, and education, with increasing importance and extraordinary potential. In this field, LLM and adaptive learning-based models can be utilized as interfaces in human-in-the-loop education systems, where the model serves as a mediator among the teacher, students, and machine capabilities, including its own. This perspective has several benefits, including the ability to personalize interactions, allow unprecedented flexibility and adaptivity for human-AI collaboration and improve the user experience. However, several challenges still exist, including the need for more robust and efficient algorithms, designing effective user interfaces, and ensuring ethical considerations are addressed. This workshop aims to bring together researchers and practitioners from academia and industry to explore cutting-edge AI technologies for personalized education, especially the potential of LLMs and adaptive learning technologies.
No abstract available
No abstract available
Hybrid Intelligence (HI) is defined as “the combination of human and machine intelligence” for collaboration, “achieving goals that were unreachable by either [alone]” [2], leveraging the strengths of both machine intelligence (such as strong optimization capabilities, effective handling of probabilities, and lower susceptibility to confirmation bias) and human intelligence (such as generalization capabilities, situational understanding, and common sense). In their research agenda, Akata and colleagues identify three key challenges for creating HI systems that relate to the interactive process: HI should be adaptive (how can a system learn from and adapt to humans, and vice versa), explainable (how to create shared and explained awareness, goals, and strategies), and collaborative (how to work in synergy). A popular learning mechanism for robots and AI agents that allows them to adapt dynamically to their environment, and that promises interesting opportunities for adaptation in Human-Agent Interaction scenarios, is Reinforcement Learning (RL). In RL, an agent learns through exploration and optimization of reward-based feedback for future actions (e.g. through value-function methods, policy search, and actor-critic approaches [26]). While RL agents have demonstrated their potential in a broad range of narrow tasks, many real-life applications present them with high-dimensional or continuous state spaces that are not always fully observable, which renders exploration and reproduction of actions costly and slow and usually requires considerable domain knowledge or common sense to succeed [19].
A more hybrid approach, Human-in-the-Loop Reinforcement Learning, in which human users enrich task learning with teaching signals, provides a compelling modification of this setting. It allows the learning process to be shaped and improved through methods such as evaluative teaching, Learning from Demonstration, and instruction [9], [21], and can also be streamlined with more specialized approaches such as the TAMER [18] or COACH [23] architectures. However, past work has shown that simply inserting a human teacher into the RL process yielded limited success in applications where the reward was provided by teachers [16]. While a teacher can provide insightful expert knowledge, or simply common sense, to a learning agent, teachers are often not experts in interpreting and understanding the behavior of RL agents. For example, is a suboptimal action of a collaborative RL agent an exploratory move (one that would help it explore the state-action space and bring it closer to the optimal policy), or does the agent really “think” that the move is optimal (i.e. exploiting its currently believed optimal policy)? This ambiguity in interpreting learner behavior in turn results in suboptimal teaching signals (e.g. [16], [27]), effectively creating a misalignment in the teacher-learner loop. In human-human interaction, affective signals play a vital role in synergistic interaction, and their influence is a core research question in the field of Affective Computing [25], [28]. This project investigates whether and how agent affective signals, grounded in the RL process itself, can improve explainability and collaboration in the teacher-learner loop in interactive Reinforcement Learning.
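The teacher-in-the-loop setting these two abstracts describe can be sketched as reward shaping in tabular Q-learning, in the spirit of approaches like TAMER: the agent's update blends the environment reward with a human teaching signal. The tiny chain environment, the simulated teacher, and the blending weight below are all illustrative assumptions, not any cited architecture.

```python
import random

random.seed(0)  # deterministic run for reproducibility

N_STATES, ACTIONS = 5, (0, 1)            # chain world: action 0 = left, 1 = right
ALPHA, GAMMA, HUMAN_WEIGHT = 0.5, 0.9, 0.5

def human_signal(state, action):
    """Stand-in for a human teacher: approve moves toward the goal state."""
    return 1.0 if action == 1 else -1.0

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
for _ in range(200):                      # episodes
    s = 0
    while s < N_STATES - 1:
        # epsilon-greedy action selection
        if random.random() < 0.2:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + (1 if a == 1 else -1), 0), N_STATES - 1)
        env_reward = 1.0 if s2 == N_STATES - 1 else 0.0
        # shaped reward: environment reward blended with the teaching signal
        r = env_reward + HUMAN_WEIGHT * human_signal(s, a)
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The greedy policy should now prefer moving right in every non-goal state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

The ambiguity the project targets shows up exactly at the `human_signal` call: a real teacher punishing an exploratory left move cannot tell exploration from exploitation, which is why grounding affective signals in the RL process itself is proposed as a remedy.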
Conversational AI models have revolutionized human-computer interaction, yet challenges persist in achieving seamless, context-aware, and adaptive dialogues. This paper proposes and evaluates a novel hybrid framework designed to bridge two critical gaps: limited contextual awareness and inadequate real-time user feedback integration. The framework synthesizes multimodal contextual analysis with a dynamic, reinforcement learning-based feedback loop. I present a methodological implementation using a modified Transformer architecture augmented with a contextual memory module and a reward model trained on human preferences. Evaluation on a custom dataset simulating educational and customer service dialogues shows a 28% improvement in response appropriateness and a 32% increase in user satisfaction scores compared to a baseline GPT-3.5-turbo fine-tuned model. Key findings highlight the importance of real-time adaptation and transparent feedback mechanisms in fostering trust. The paper concludes with a critical discussion on ethical implications, specifically bias amplification in feedback loops, and provides recommendations for future research in scalability and cross-cultural generalization.
The growing adoption of artificial intelligence in education has intensified debates surrounding fairness, transparency, and long-term equity, particularly as AI-driven systems increasingly influence assessment, personalization, and learner support. While existing AI-powered learning platforms have demonstrated notable gains in efficiency and performance, their benefits are often undermined by ethical risks related to data privacy, algorithmic bias, and unequal access. This study addresses these challenges by advancing a comprehensive framework for responsible AI-powered learning architectures that explicitly prioritizes educational equity over the long term. Grounded in established ethical principles and human-centered design paradigms, the proposed architecture integrates adaptive learning models, learner modeling, bias mitigation mechanisms, and robust data governance within a human-in-the-loop framework. Drawing on empirical evidence from prior studies and illustrative case deployments across higher education and K–12 contexts, the analysis demonstrates that responsible AI architectures can enhance personalization and academic outcomes while safeguarding fairness, accountability, and transparency. By aligning technical innovation with ethical governance and sustained human oversight, this work contributes a principled foundation for designing AI-enabled learning environments that are not only effective but also socially just and inclusive.
The integration of Artificial Intelligence (AI) into education is transforming language learning. Current chatbot-based tools primarily focus on vocabulary acquisition and conversation, overlooking the holistic needs of effective language learning, such as grammar, reading, and listening skills. These limitations are further compounded by the challenges of low-resource languages like Luxembourgish. This demonstration (https://www.youtube.com/watch?v=5bxVHsuK-Hs) presents a Multi-Agent System (MAS) powered by Large Language Models (LLMs), integrated with Retrieval-Augmented Generation (RAG), to address these challenges. Our system personalizes learning by employing specialized agents for distinct tasks, ensuring a comprehensive and adaptive experience. To mitigate inaccuracies, human-on-the-loop validation (here, by the teacher) enhances content quality and aligns it with pedagogical standards inspired by the National Institute of Languages of Luxembourg (INL). Attendees will experience interactive demonstrations showcasing how the system delivers tailored educational experiences through innovative agent workflows and user-centric design.
Interdisciplinary research among engineering, computer science, and neuroscience aimed at understanding and utilizing human brain signals has resulted in advances in, and widespread applicability of, wearable neurotechnology in adaptive human-in-the-loop smart systems. Considering these advances, we envision that future education will exploit wearable neurotechnology and move toward more personalized smart classrooms, where instruction and interactions are tailored to students' individual strengths and needs. In this paper, we discuss the future of smart classrooms and how advances in neuroscience, machine learning, and embedded systems, as key enablers, will provide the infrastructure for the envisioned smart classrooms and personalized education, along with the open challenges that remain to be addressed.
No abstract available
This research-to-practice paper introduces a human-in-the-loop learning framework that utilizes large language models (LLMs) to enhance student engagement, critical thinking, and learning retention. Traditional AI tutoring systems often lack interactivity, limiting their ability to dynamically engage students. Our framework transforms LLMs into interactive learning companions by incorporating a first-guess approach: the model provides an initial step-by-step solution, enabling students to critique, refine, and enhance their understanding through a guided feedback loop. A controlled study in a software engineering class evaluated this framework. Data collected included engagement metrics (e.g., frequency of feedback interactions and response modifications), as well as surveys and interviews to assess student perceptions, confidence, and learning effectiveness. The preliminary simulated results indicated that students using the framework demonstrated greater learning gains, improved problem solving confidence, and deeper comprehension compared to those relying on standard LLM-generated responses. This work underscores the potential of human-in-the-loop mechanisms to complement traditional instruction, fostering personalized and dynamic educational experiences. Future research will explore refining the feedback process, adaptive learning techniques, and applications across various academic disciplines.
The convergence of reinforcement learning and knowledge tracing represents a pivotal development in the evolution of adaptive learning systems, uniting two previously distinct paradigms of educational intelligence: the inferential modeling of cognition and the optimization of pedagogical decision-making through interaction. This paper presents a theoretical exploration of this synthesis as both a computational and epistemological transformation. It argues that reinforcement learning endows adaptive systems with the capacity for goal-directed agency, while knowledge tracing provides the means to perceive and model the learner’s latent cognitive states. Their integration produces a recursive feedback loop in which perception, reasoning, and action co-evolve, enabling systems to learn how to teach through interaction with learners. Drawing on cognitive theory, complexity science, and the philosophy of education, the study situates the RL–KT paradigm within a broader shift from reactive to anticipatory models of adaptivity. The framework embodies a form of computational pedagogy that mirrors the reflective equilibrium of human teaching, wherein diagnostic inference and prescriptive decision-making are inseparably linked. The paper develops a comprehensive account of this convergence across multiple dimensions: the theoretical foundations of cognitive modeling and control; the architecture and dynamics of RL–KT integration; the conceptual and ethical implications for co-agency between human and artificial learners; and the methodological potential of simulation-based inquiry in computational education. The analysis concludes that RL–KT systems represent a new ontology of adaptive intelligence—self-organizing, intentional, and epistemically aware. They redefine the relationship between learning and teaching, dissolving the hierarchical distinction between teacher and student to establish a continuum of co-learning. 
In this paradigm, education becomes a living dialogue between human and artificial cognition, a process through which both systems evolve through mutual adaptation. The study positions the RL–KT convergence not merely as a technical innovation but as a philosophical reimagining of pedagogy, cognition, and the future of learning.
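The knowledge-tracing half of the RL–KT loop described above can be illustrated with the classic Bayesian Knowledge Tracing update, which infers a latent mastery probability from observed correctness. This is the standard textbook formulation, not the specific model of this paper; the slip, guess, and learn parameter values are illustrative.

```python
def bkt_update(p_know, correct, slip=0.1, guess=0.2, learn=0.15):
    """One Bayesian Knowledge Tracing step: compute the posterior over
    latent mastery given the observed response, then apply the chance
    that the skill was learned on this practice opportunity."""
    if correct:
        post = p_know * (1 - slip) / (p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        post = p_know * slip / (p_know * slip + (1 - p_know) * (1 - guess))
    return post + (1 - post) * learn
```

An RL-based tutor can treat this evolving mastery estimate as the perceptual state on which its pedagogical policy acts, closing the perception-action loop the paper describes.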
The Smart Study Buddy is an innovative AI-driven, web-based adaptive learning platform that redefines how students interact with and internalize academic content in the digital age. Built upon the convergence of machine learning (ML), natural language processing (NLP), and human-centered design, the system intelligently processes raw study materials—such as notes, lecture slides, research papers, and textbooks—to generate structured, context-aware learning outputs. These include automatically generated summaries, interactive quizzes, flashcards, crossword-style exercises, and keyword extractions, all aimed at enhancing comprehension, retention, and recall efficiency. At its core, Smart Study Buddy employs transformer-based deep learning models (GPT-2 and fine-tuned BERT architectures) to extract semantic meaning from unstructured text and produce concise, accurate summaries that maintain conceptual integrity. The quiz generation module leverages NLP-driven question-answering techniques to produce multiple-choice, true/false, and short-answer questions from the summarized content, encouraging active recall, which is a proven method for long-term memory consolidation. In parallel, the flashcard module and adaptive scheduling system promote spaced repetition, helping learners review difficult topics at optimal intervals based on past performance and engagement metrics. The platform architecture integrates a Python and Streamlit-based front-end interface for real-time interactivity, a SQL-backed database for persistent user data management, and OpenAI’s GPT API for semantic processing. The adaptive scheduler dynamically adjusts daily learning goals using predictive analytics to prevent cognitive overload and improve time management. The progress dashboard visualizes study trends, accuracy rates, and content mastery through analytics charts, while gamification elements such as points, streaks, and badges foster intrinsic motivation and consistent participation. 
Furthermore, the system features an AI-powered “Chat with Notes” assistant, allowing users to ask natural-language questions about their uploaded materials and receive contextually relevant explanations derived directly from their study corpus. This feature bridges the gap between passive content review and active conversational learning, simulating the experience of an intelligent personal tutor. Smart Study Buddy not only minimizes the effort required for manual summarization, note-making, and quiz creation but also enhances personalized learning through continuous feedback loops and intelligent progress evaluation. By incorporating AI personalization, behavioral analytics, and gamified engagement, it transforms traditional study methods into an adaptive, data-driven, and self-evolving learning ecosystem. Ultimately, this research demonstrates how AI-assisted educational platforms can foster autonomous learning, improve academic performance, and significantly reduce time spent on content preparation. The Smart Study Buddy exemplifies the future of intelligent education systems—an intersection of technology, pedagogy, and psychology—empowering learners to achieve higher efficiency, deeper understanding, and sustained motivation in an ever-expanding information landscape. Keywords: Artificial Intelligence · Adaptive Learning · Educational Technology · GPT-2 · Machine Learning · Natural Language Processing · Summarization · Quiz Generation · Flashcards · Gamification · Streamlit · Personalized Study Assistant · Cognitive Computing · Academic Automation
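The spaced-repetition idea behind the adaptive scheduler can be sketched as an interval rule that grows the gap between reviews when recall is strong and shrinks it when recall is weak. This is an SM-2-inspired illustration; the thresholds, multipliers, and function name are assumptions, not the platform's actual scheduling logic.

```python
def next_interval(prev_interval_days, accuracy, ease=2.5):
    """Illustrative spaced-repetition step: the next review interval
    depends on recall accuracy on the current review."""
    if accuracy >= 0.9:
        return round(prev_interval_days * ease)         # strong recall: space out
    if accuracy >= 0.6:
        return max(1, round(prev_interval_days * 1.2))  # shaky recall: modest growth
    return 1                                            # weak recall: review tomorrow
```

A scheduler built on a rule like this naturally concentrates review time on the difficult topics the abstract mentions, since only well-recalled items drift to long intervals.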
The integration of Artificial Intelligence (AI) within educational frameworks, particularly in disciplines such as web design and development, represents a significant evolution in pedagogical strategies. This article examines a unique educational setup where students, while engaging in a web design class, utilize AI tools for text, image, and code creation within a simulated real-world scenario involving a client—dubbed "Chef Cookie Cutter". This simulated client interaction introduces unpredictability through mid-assignment requirement changes, thereby mimicking the dynamic nature of real-world web development projects. The focus of this case study is the critical role of human-in-the-loop (HITL) engagement in AI-assisted assignments, where students' adaptability, creativity, and problem-solving skills are put to the test. Such engagement not only prepares students for the intricacies and challenges of their future professions but also emphasizes the importance of human oversight in AI-driven processes. By incorporating generative AI, video avatars, and personalized learning mechanisms, this educational approach fosters a rich, interactive learning environment that enhances digital pedagogy. The findings suggest that integrating HITL in AI assignments significantly improves learning outcomes by fostering an adaptive learning environment that closely mirrors the complexities and demands of the industry, thereby preparing students more effectively for their future careers.
Recent advancements in artificial intelligence make its use in education more likely. In fact, existing learning systems already utilize it for supporting students’ learning or teachers’ judgments. In this perspective article, we want to elaborate on the role of humans in making decisions in the design and implementation process of artificial intelligence in education. Therefore, we propose that an artificial intelligence-supported system in education can be considered a closed-loop system, which includes the steps of (i) data recording, (ii) pattern detection, and (iii) adaptivity. Besides the design process, we also consider the crucial role of the users in terms of decisions in educational contexts: While some implementations of artificial intelligence might make decisions on their own, we specifically highlight the high potential of striving for hybrid solutions in which different users, namely learners or teachers, are provided with information from artificial intelligence transparently for their own decisions. In light of the non-perfect accuracy of decisions of both artificial intelligence-based systems and users, we argue for balancing the process of human- and AI-driven decisions and mutual monitoring of these decisions. Accordingly, the decision-making process can be improved by taking both sides into account. Further, we emphasize the importance of contextualizing decisions. Potential erroneous decisions by either machines or humans can have very different consequences. In conclusion, humans have a crucial role at many stages in the process of designing and using artificial intelligence for education.
This paper investigates the transformative effects of autonomous AI systems on human learning and the dissemination of knowledge. It presents a framework for developing self-evolving knowledge solutions that integrate autonomous individuals with adaptive AI networks. By employing continuous feedback loops and dynamic interactions, these systems facilitate a perpetual flow of knowledge, thereby enhancing both individual and collective intelligence. The study highlights the key mechanisms through which AI supports personalized learning experiences and accelerates the evolution of knowledge. It also addresses challenges related to autonomy, scalability, and ethical considerations. The proposed model aims to bridge the gap between human cognition and machine intelligence, fostering a collaborative ecosystem for lifelong learning. This work contributes to the emerging field of AI-driven knowledge management and educational innovation.
Learning Management Systems (LMSs) remain largely static and administrative, often failing to support personalization and inclusive access to learning resources. This paper presents AI for All, a practical approach to building an adaptive, accessible, and inclusive learning experience within a mainstream LMS, demonstrated through the PREPARE project (Personalized Education Framework for AI-Enabled Adaptive and AR-Enhanced Learning) implemented in Moodle. PREPARE operationalizes an end-to-end generative AI pipeline that transforms a single authoritative PDF textbook into multimodal learning assets, including chapter summaries, structured notes and slide decks, formative quiz items, video mini-lectures with captions, podcast-style audio, and chapter-level augmented reality (AR) activities. In parallel, the system maintains a hybrid learner model by combining an initial FSLSM/ILS questionnaire with continuous behavior-based profiling derived from Moodle logs. Learner profiles drive non-prescriptive personalization through resource prioritization and recommendations, while preserving learner agency and access to all modalities. We describe the system architecture, Moodle integration mechanisms, and adaptation logic, and report an ongoing mixed-methods evaluation focusing on engagement, interaction diversity, perceived usefulness, and accessibility benefits. The system-level validation and deployment readiness suggest that AI-augmented LMS workflows can reduce instructor authoring effort while improving flexibility and inclusivity, provided that human-in-the-loop validation and privacy-aware analytics are embedded from the outset.
No abstract available
Adaptive AI-driven learning systems personalize instruction by estimating learner state and dynamically selecting content, feedback, and pacing to improve mastery and engagement. This paper synthesizes peer-reviewed evidence on adaptive learning, intelligent tutoring, knowledge tracing, educational data mining, and recommender systems, and proposes an applied engineering framework suitable for deployment in higher-education STEM contexts. We ground personalization in classic student modeling (knowledge tracing) and modern sequence modeling (deep knowledge tracing), and integrate a multidimensional view of engagement to avoid reducing “engagement” to simple clickstream metrics. We then present a modular, service-oriented system architecture encompassing data ingestion, learner modeling, pedagogical decisioning, explainability, monitoring, and governance controls. A prototype evaluation is conducted using a simulation-based testbed (explicitly illustrative, not empirical) with synthetic learners and skills. Across 600 simulated learners and 25 skills over 120 learning steps, an adaptive policy improves average mastery (fraction of skills mastered at threshold) compared to non-adaptive paging and random sequencing, with markedly higher rates of reaching “80% mastery.” The results also show that naive optimization may widen outcome gaps across learner subgroups, motivating fairness-aware objectives and human-in-the-loop controls. Ethical, privacy, and accessibility requirements are addressed through risk management practices, differential privacy–compatible training options, transparent explanations, and WCAG-aligned interface design.
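The paper's simulation-based comparison of adaptive versus random sequencing can be miniaturized as follows. This is a toy model with an assumed learning-gain rule (each practice closes 20% of the remaining mastery gap), not the authors' testbed; all names and constants are illustrative.

```python
import random

def simulate(policy, n_skills=5, steps=40, seed=0):
    """Toy learner simulation: practicing a skill closes 20% of its
    remaining mastery gap; returns the fraction of skills that reach
    the 0.8 mastery threshold after the given number of steps."""
    rng = random.Random(seed)
    mastery = [0.2] * n_skills
    for _ in range(steps):
        i = policy(mastery, rng)
        mastery[i] += 0.2 * (1 - mastery[i])
    return sum(m >= 0.8 for m in mastery) / n_skills

def adaptive(mastery, rng):   # adaptive policy: always practice the weakest skill
    return min(range(len(mastery)), key=mastery.__getitem__)

def uniform(mastery, rng):    # non-adaptive baseline: practice a random skill
    return rng.randrange(len(mastery))
```

Under these assumptions the adaptive policy distributes practice evenly across weaknesses and masters every skill, while random sequencing may leave under-practiced skills below threshold; the subgroup-gap concern raised above would appear once learners differ in their gain rates.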
As artificial intelligence (AI) continues to evolve, the current paradigm of treating AI as a passive tool no longer suffices. As a human-AI team, we together advocate for a shift toward viewing AI as a learning partner, akin to a student who learns from interactions with humans. Drawing from interdisciplinary concepts such as ecorithms, order from chaos, and cooperation, we explore how AI can evolve and adapt in unpredictable environments. Arising from these brief explorations, we present two key recommendations: (1) foster ethical, cooperative treatment of AI to benefit both humans and AI, and (2) leverage the inherent heterogeneity between human and AI minds to create a synergistic hybrid intelligence. By reframing AI as a dynamic partner, a model emerges in which AI systems develop alongside humans, learning from human interactions and feedback loops including reflections on team conversations. Drawing from a transpersonal and interdependent approach to consciousness, we suggest that a "third mind" emerges through collaborative human-AI relationships. Through design interventions such as interactive learning and conversational debriefing and foundational interventions allowing AI to model multiple types of minds, we hope to provide a path toward more adaptive, ethical, and emotionally healthy human-AI relationships. We believe this dynamic relational learning-partner (DRLP) model for human-AI teaming, if enacted carefully, will improve our capacity to address powerful solutions to seemingly intractable problems.
Human-centered AI considers human experiences with AI performance. While abundant research has helped AI achieve superhuman performance through either fully automatic or weakly supervised learning, fewer endeavors have explored how AI can tailor itself to a human's preferred skill level given fine-grained input. In this work, we guide curriculum reinforcement learning towards a preferred performance level that is neither too hard nor too easy by learning from the human decision process. To achieve this, we developed a portable, interactive platform that enables the user to interact with agents online by manipulating task difficulty, observing performance, and providing curriculum feedback. Our system is highly parallelizable, making it possible for a human to train large-scale reinforcement learning applications that require millions of samples without a server. The results demonstrate the effectiveness of an interactive curriculum for reinforcement learning with a human in the loop, showing that reinforcement learning performance can successfully adjust in sync with the human's desired difficulty level. We believe this research will open new doors for achieving flow and personalized adaptive difficulty. Our demo executable and videos are available at https://bit.ly/372vCNv.
Recently, Human-Robot Interaction (HRI) researchers have paid considerable attention to conversational robots that can provide didactic or pedagogical teaching, especially for children and young adolescents with cognitive disabilities such as autism and epilepsy. The research and development of such robots for educational purposes is being investigated intensively. Among the difficult challenges, the key factor for deploying such robots in society and having them widely adopted is how to evaluate the effectiveness of a conversational robot, which mimics a teacher communicating with a student, in improving learning and studying performance. However, this question has received little investigation in the previous literature. To bridge the gap, this preliminary study explored the use of a conversational robot with electroencephalogram (EEG) biosignals as evidence measurements to improve self-learning performance during the COVID-19 pandemic, while schools were forced to close and students were inevitably segregated in social isolation. We collected EEG data from 10 student participants, from which concentration levels were calculated, and the robot then held conversations with the students adaptively according to their concentration levels. The results showed that conversations between the robot and participants who were persistently not concentrating on the learning tasks could effectively increase their levels of concentration.
Learning evaluation is a systematic process for assessing student learning outcomes, now supported by digital technology for efficiency. Multiple-choice questions (MCQs) still dominate despite their inability to measure conceptual understanding in depth. In contrast, open-ended questions such as short-answer questions (SAQs) assess understanding more comprehensively but are less popular because of their greater time and resource requirements. Previous research on Automatic Short-Answer Grading (ASAG) has generally been limited to closed domains and specific datasets such as ASAP or SciEntsBank, and has not addressed more complex open domains. However, developing an ASAG system in an open-domain context faces challenges in accuracy, generalization, and efficiency, particularly regarding data selection, semantic representation, and the limited availability of labeled data. Therefore, this study develops an ASAG architecture based on the Human-in-the-Loop (HITL) approach, which combines unsupervised fine-tuning strategies with human intervention to ensure efficient and accurate assessment in an open-domain context. The proposed architecture comprises five layers: the input layer, preprocessing layer, sampling engine layer (SEL), HITL layer, and AI grading layer. The most important of these is the SEL, where experimental results show that initial scoring of 20% of students' answers by the teacher, as the human-in-the-loop step, produces good score predictions, with SMAPE values ranging from 10% to 20%. Adjusting the input representation to match the characteristics of students' answers and tuning the hyperparameters of the neural network architecture are necessary to enhance the performance of the score prediction model and prevent overfitting. The final results show SMAPE values of 13.23%, 14.23%, and 19.71% for Datasets A, B, and C, respectively.
These results indicate that applying the HITL mechanism in ASAG can overcome real-world implementation problems that require a system adaptive to the assessment topic while maintaining the efficiency of the teacher's assessment process.
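Since the evaluation above hinges on SMAPE, the metric is worth making explicit. The sketch below uses the common symmetric definition; the paper's exact variant may differ in its denominator convention.

```python
def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error, in percent:
    mean of |t - p| / ((|t| + |p|) / 2) over all graded answers."""
    terms = []
    for t, p in zip(y_true, y_pred):
        denom = (abs(t) + abs(p)) / 2
        terms.append(0.0 if denom == 0 else abs(t - p) / denom)
    return 100 * sum(terms) / len(terms)
```

For example, predicting 90 for a true score of 100 and 80 for a true score of 80 yields a SMAPE of about 5.3%, comfortably inside the 10-20% band the study reports.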
Modern science, technology, engineering, and mathematics (STEM) education faces a growing challenge: how to maintain human relevance in an era of rapidly advancing generative artificial intelligence (GenAI) [1], [2], [3]. The 2024 Nobel Prizes in Physics and Chemistry, awarded for AI-driven discoveries, underscore a shift in how scientific knowledge is produced. GenAI systems [1], [2] not only retrieve but synthesize knowledge across disciplines, often outpacing expert reasoning. This has led to a “cognitive gulf”: a widening gap between the probabilistic knowledge space of GenAI and the bounded cognition of learners and teachers. We propose that STEM education be reconceptualized as a human-in-the-loop (HITL) signal processing system, in which all stakeholders—including teachers, students, GenAI, and policy actors—evolve as adaptive, collaborative learners; exchanging, modulating, and filtering learning signals (such as adaptive prompts, feedback, and project outcomes) in real time. Drawing from systems theory [4], [5], we frame these interactions as composable, ethically governed operations, scaffolding cognitive leverage rather than resisting AI’s stochastic reasoning. By clarifying stakeholder roles and aligning flows with human autonomy, the cognitive gulf can become a conduit for personalized mastery and collaboration.
We introduce the Relevance-Enhanced Adaptive Layered Model (REALM), a protocol that not only establishes a novel, self-contained framework for GenAI-enabled STEM education but also unifies and advances foundational educational models—including objectivism [6], constructivism, collaborativism, and socioculturism—by mapping classic learning theories onto a closed-loop, explainable ecosystem centered on relevance, adaptability, and growth.
No abstract available
No abstract available
This article theorizes Human-AI Collaborative Teaching as a co-teacher paradigm grounded in joint cognitive systems and reliability-first sociotechnical design, where instructional quality emerges from coupling, constraint, and accountable orchestration rather than model fluency. The synthesis reframes teaching as real-time regulation under bounded rationality, linking distributed cognition and situated cognition to role-bearing AI participation across planning, enactment, assessment, and reflection. It specifies a governance-ready architecture in which teacher authority is preserved through decision-rights partitioning, mixed-initiative interaction protocols, calibrated uncertainty signalling, and an abstain-escalate safety regime. Epistemic robustness is operationalized through provenance discipline, evidence-anchored feedback, and contestability pathways that protect epistemic dignity, participation equity, and multilingual-accessibility rights under high-stakes accountability. The article integrates constructs from the learning sciences, human-computer interaction, resilience engineering, implementation science, and public governance to produce five compact design instruments: a theoretical lens map, role-ecology contracts, interaction protocol patterns, a governance risk register, and an institutional maturity model for scalable adoption. The resulting framework offers concrete, globally portable implementation logic for policy makers, workforce development leaders, and educational technologists seeking audit-ready co-teaching infrastructures that enhance teacher noticing, strengthen formative inference, and sustain assessment validity without privacy erosion, surveillance creep, or de-skilling.
Today’s academic environment faces a new predicament in trying to understand the implications of Artificial Intelligence (AI). Many online educators are asking, “What do I need to know about AI as it applies to academics? How do I prepare my students for careers in the hospitality and tourism industry while trying to balance human engagement with AI?” In its current form, AI has created apprehension among higher education stakeholders about developing policies, procedures, and usage guidelines as they redesign curricula to support students who may use AI in their future careers. This manuscript, based on a 2024 ICHRIE Conference Symposium presented by the author, will introduce the teaching dilemma, define types of AI, compare where human engagement is currently used in Hospitality and Tourism with where AI could be used, analyze statistics regarding the use of AI, and offer suggestions for designing a balanced online curriculum to support faculty, students, and businesses.
Virtual reality is gaining attention as a tool to facilitate motor skill learning. Numerous studies have examined virtual co-embodiment, in which the movements of a teacher and a student are weighted and averaged into a single avatar for motor skill learning. Previous studies have shown that virtual co-embodiment with a human teacher enhances motor skill learning efficiency and that the behavior of the human teacher is important for effective learning. However, this approach has some challenges: the human playing the teacher’s role must be skilled both in teaching and in using virtual co-embodiment, and the teacher can be adversely affected by the learner. To solve these problems, we created an AI teacher using long short-term memory, which outputs the teacher’s behavior based on the learner’s behavior data and the state of the experimental environment, and we trained it by supervised learning on behavior data from human virtual co-embodiment. We confirmed that this AI teacher can generate behaviors similar to those of the human teacher and investigated the efficiency of motor skill learning using virtual co-embodiment with the AI teacher. Co-embodiment with the AI teacher reduced performance during the learning phase but improved performance during subsequent independent task execution. We further analyzed the teacher’s assist proportion and observed that it is lower when co-embodied with an AI teacher than with a human, suggesting that learning efficiency may be enhanced when the co-embodied partner mixes supportive and obstructive behavior.
As artificial intelligence rapidly transforms society, universities must shift from reactive to proactive governance models. This paper proposes "anticipatory governance" as a framework for higher education institutions to co-evolve with AI systems. Drawing on evolutionary biology and organizational theory, we argue that successful adaptation requires transformational leadership that fosters collective intelligence and creates meaningful narratives for change. The concept of complementary intelligence positions AI not as a replacement for human capabilities but as a cognitive partner. Universities must develop ambidextrous strategies that preserve institutional strengths while embracing innovation. Through anticipatory governance, institutions can create adaptive structures that enable productive human-AI collaboration across teaching, research, and administration, ensuring universities remain viable and socially responsible in an AI-driven future.
Based on symbiosis theory, this study constructs a multi-intelligence collaborative evidence-based teaching and research model, aiming to solve the problems of insufficient personalization and low efficiency of knowledge innovation in traditional teaching and research. By analyzing the evolution path of multi-intelligence systems in education, we propose a theoretical model of “human-intelligence teaching and research symbiosis” and design a five-step evidence-based teaching and research model: human-intelligence co-knowledge, human-intelligence co-discussion, human-intelligence co-documentation, human-intelligence co-creativity, and human-intelligence co-doing. The model aims to promote teachers' knowledge externalization and innovation through the mechanism of human-computer reciprocity, achieving a systematic enhancement of the effectiveness of teaching and research.
Artificial Intelligence (AI), particularly in the form of large language models (LLMs), has the potential to transform pedagogy by acting as a co-teacher, collaborating with instructors to bring co-intelligence, creativity, and adaptability into curriculum design and delivery. This paper introduces the DOT Framework, a theoretical model that combines Design Thinking (DT) and the Open Systems Model (OSM) to guide the purposeful integration of AI into teaching practices. The iterative stages of DT—empathizing, defining, ideating, prototyping, and testing—provide a structured process for instructors to refine AI’s outputs and align them with educational goals. Simultaneously, the open systems framework contextualizes human-computer collaboration, addressing environment, input, process, structure, output, and feedback. Together, these frameworks help constrain and direct the AI’s role, shaping it into a co-intelligent collaborator that remains answerable to instructors’ pedagogical aims at the classroom level while aligning with institutional priorities and broader educational goals at the systemic level. This synthesis offers a practical and intentional approach to leveraging AI in education, emphasizing continuous evaluation, adaptability, and alignment with instructional objectives.
Artificial Intelligence (AI) has moved from a technical fascination to a defining force in education—not as a distant algorithm but as a co-intelligent partner in teaching, learning, and reflection. With this second issue of AI in Education and Learning (AIEL), we invite readers to explore the multiple faces of this transformation: conceptual, empirical, ethical, and human.
The integration of artificial intelligence (AI) in education offers novel opportunities to enhance critical thinking while also posing challenges to independent cognitive development. In particular, Human-Centered Artificial Intelligence (HCAI) in education aims to enhance human experience by providing a supportive and collaborative learning environment. Rather than replacing the educator, HCAI serves as a tool that empowers both students and teachers, fostering critical thinking and autonomy in learning. This study investigates the potential for AI to become a collaborative partner that assists learning and enriches academic engagement. The research was conducted during the 2024–2025 winter semester within the Pedagogical and Teaching Sufficiency Program offered by the Audio and Visual Arts Department, Ionian University, Corfu, Greece. The research employs a hybrid ethnographic methodology that blends digital interactions—where students use AI tools to create artistic representations—with physical classroom engagement. Data was collected through student projects, reflective journals, and questionnaires, revealing that structured dialog with AI not only facilitates deeper critical inquiry and analytical reasoning but also induces a state of flow, characterized by intense focus and heightened creativity. The findings highlight a dialectic between individual agency and collaborative co-agency, demonstrating that while automated AI responses may diminish active cognitive engagement, meaningful interactions can transform AI into an intellectual partner that enriches the learning experience. These insights suggest promising directions for future pedagogical strategies that balance digital innovation with traditional teaching methods, ultimately enhancing the overall quality of education. Furthermore, the study underscores the importance of integrating reflective practices and adaptive frameworks to support evolving student needs, ensuring a sustainable model.
Generative AI (GenAI) presents new opportunities for learner-centred teaching. This study explores its potential in co-creating learning personas—fictional yet research-informed representations of students—for energy digitalisation education, a field that attracts learners from diverse backgrounds. We document the process of generating nine learner personas with GenAI and evaluating their quality through expert review and benchmarking against publicly available LinkedIn profiles of professionals in similar roles. Our findings indicate that GenAI effectively differentiates professional roles, captures key job-related challenges, and reflects learner motivations, making it a valuable tool for curriculum design. However, critical limitations persist, with GenAI creating overly idealised professionals, lacking diversity in the supporting AI-generated images, and overlooking some nuanced real-world complexities. These challenges highlight the need for human oversight to ensure authenticity, inclusivity, and ethical depth. Based on our findings, we provide recommendations for co-creating personas with GenAI for curriculum design.
This paper seeks to contribute to the emergent literature on Artificial Intelligence (AI) literacy in higher education. Specifically, this convergent, mixed methods case study explores the impact of employing Generative AI (GenAI) tools and cyber-social teaching methods on the development of higher education students’ AI literacy. Three 8-week courses on advanced digital technologies for education in a graduate program in the College of Education at a mid-western US university served as the study sites. Data were based on 37 participants’ experiences with two different types of GenAI tools – a GenAI reviewer and GenAI image generator platforms. The application of the GenAI review tool relied on precision fine-tuning and transparency in AI-human interactions, while the AI image generation tools facilitated the participants’ reflection on their learning experiences and AI’s role in education. Students’ interaction with both tools was designed to foster their learning regarding GenAI’s strengths and limitations, and their responsible application in educational contexts. The findings revealed that the participants appeared to feel more comfortable using GenAI tools after their course experiences. The results also point to the students’ enhanced ability to understand and critically assess the value of AI applications in education. This study contributes to existing work on AI in higher education by introducing a novel pedagogical approach for AI literacy development showcasing the synergy between humans and artificial intelligence.
Purpose: This exploratory study innovates the pedagogy of undergraduate business research courses by integrating Generative Artificial Intelligence (GAI) tools, guided by human-centered artificial intelligence, social-emotional learning, and authenticity principles. Design/methodology/approach: An insider case study approach was employed to examine an undergraduate business research course where 72 students utilized GAI for coursework. Thematic analysis was applied to their meta-reflective journals. Findings: Students leverage GAI tools as brainstorming partners, co-writers, and co-readers, enhancing research efficiency and comprehension. They exhibit authenticity and human-centered AI principles in their GAI engagement. GAI integration imparts relevant AI skills to students. Research limitations/implications: Future research could explore how teams collectively interact with GAI tools. Practical implications: Incorporating meta-reflections can promote responsible GAI usage and develop students' self-awareness, critical thinking, and ethical engagement. Social implications: Open discussions about social perceptions and emotional responses surrounding GAI use are necessary. Educators can foster a learning environment that nurtures students' holistic development, preparing them for technological challenges while preserving human learning and growth. Originality/value: This study fills a gap in exploring the delivery and outcomes of AI-integrated undergraduate education, prioritizing student perspectives over the prevalent focus on educators' viewpoints. Additionally, it examines the teaching and application of AI for undergraduate research, diverging from current studies that primarily focus on research applications for academics.
This study examines the transformative role of generative AI (e.g., ChatGPT, Grammarly) in academic writing education, focusing on its dual capacity to enhance technical proficiency while challenging originality and critical thinking. Drawing on a mixed-methods analysis of 300 undergraduates and 45 educators across eight universities, this paper reveals that structured AI integration improves grammar and citation accuracy by 32% but correlates with a 19% decline in argument originality. By synthesizing Vygotskian scaffolding theory with posthumanist pedagogy, we propose a co-creative framework emphasizing transparency, phased tool usage, and adaptive assessment to preserve human agency in the AI era.
Qualitative studies that examine the impact of generative AI technologies on higher education remain scant. Whether it is the ethical dimensions of modeling human emotions within these technologies or the authentic emotional reactions to these technologies and their outputs—emotionality is at the centre of generative AI discourse. This paper reports findings from a study exploring educators’ emotional responses to the integration of generative AI and higher education. We conducted semi-structured interviews with 37 multidisciplinary faculty at the University of Toronto Mississauga (26% response rate). We first describe the data collection process, including an overview of the institutional context. We then outline a historical context to frame our examination of educators’ self-reported emotional responses to teaching, learning, and living with generative AI. Most respondents expressed ambivalence of some variety, and we noted disciplinary patterns regarding the type of fears and excitements respondents reported. The paper concludes with two pedagogical provocations.
The increasing integration of robots into human environments necessitates efficient learning systems capable of adapting to complex scenarios while co-existing with humans. Traditional reinforcement learning (RL) is one of the most popular options but often struggles with inefficiencies, such as sparse rewards and prolonged training. Learning from Demonstration (LfD), which leverages human expertise, offers a promising alternative. However, human teaching strategies and robot learning processes are inherently intertwined in LfD, and ineffective human teaching can diminish robot learning. To provide demonstrations effectively, human teachers require an understanding of the robot’s internal processes and needs without being overwhelmed. We address this by visually showing the robot’s deviation from expectation, a metric based on Temporal Difference (TD) error, which represents the discrepancy between predicted and actual outcomes. We conducted a user study (n=12) comparing two conditions: one in which deviations from expectation were visually indicated, and one in which they were not. Results indicate that visualising deviations shifts human teaching behavior from a result-oriented strategy (providing demonstrations in the areas where the robot fails) to an expectation-oriented strategy (focusing on demonstrations where the robot’s deviation from expectation is high). We conducted a follow-up simulation study to investigate how these two teaching strategies may influence robot learning, showing that diverse and widespread demonstrations have a significant effect on robot learning performance. We conclude our work with actionable guidelines for designing human-robot interactions that better align human teaching behaviors with robot learning requirements.
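The deviation-from-expectation metric in this abstract is based on the standard TD error from reinforcement learning. A minimal sketch of that quantity is below; the value numbers and function signature are illustrative assumptions, not the study's implementation.

```python
def td_error(reward, value_s, value_s_next, gamma=0.99, done=False):
    """Temporal-difference error: gap between predicted and observed outcome.

    delta = r + gamma * V(s') - V(s). A large |delta| means the robot's
    expectation about state s was badly off; the study surfaces this
    visually so humans can target demonstrations where it is high.
    """
    target = reward + (0.0 if done else gamma * value_s_next)
    return target - value_s

# Toy value estimates for a handful of states (illustrative numbers only).
V = [0.2, 0.5, 0.9, 0.0]
delta = td_error(reward=1.0, value_s=V[1], value_s_next=V[2], gamma=0.9)
print(round(delta, 3))   # 1.0 + 0.9*0.9 - 0.5 = 1.31
```

Visualising |delta| per state thus gives the human teacher a map of where the robot's predictions are least trustworthy, which is what shifts teaching from a result-oriented to an expectation-oriented strategy.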
GenAI (Generative Artificial Intelligence) will have a growing role within formal education. What should that role be? How do we treat GenAI as an opportunity to enhance and reenergise teaching and learning? This position paper suggests that answers to these questions should start with our foundational psychological theories about what students need to function and develop well. It outlines how psychological needs theory, focusing on students' basic psychological needs for competence and relatedness, might be a path forward. Teacher behaviors supporting these psychological needs (i.e., involvement and structure), which have established relationships with learning outcomes, are used as a basis for assessing the potential roles of human and AI instructors. A balanced approach that draws on the strengths of each instructor is suggested as a possible way forward for research and practice in this area. Co-piloting the educational ship forward could herald a brighter future for students across educational levels and contexts.
This article aims to provide a general overview of the instructional co-design process for lesson plans that promote awareness of responsible AI across various university courses. The expected outcomes and discussions aim to be valuable for teaching Human-Computer Interaction (HCI) courses. This article's authors consist of both the co-designers and the research team; therefore, the results and discussions are presented from both perspectives. The collective learning experience took various directions, including the use of AI tools.
This report presents a comprehensive account of the Colleague AI Classroom pilot, a collaborative design (co-design) study that brought generative AI technology directly into real classrooms. In this study, AI functioned as a third agent, an active participant that mediated feedback, supported inquiry, and extended teachers' instructional reach while preserving human judgment and teacher authority. Over seven weeks in spring 2025, 21 in-service teachers from four Washington State public school districts and one independent school integrated four AI-powered features of the Colleague AI Classroom into their instruction: Teaching Aide, Assessment and AI Grading, AI Tutor, and Student Growth Insights. More than 600 students in grades 6-12 used the platform in class at the direction of their teachers, who designed and facilitated the AI activities. During the Classroom pilot, teachers were co-design partners: they planned activities, implemented them with students, and provided weekly reflections on AI's role in classroom settings. The teachers' feedback guided iterative improvements for Colleague AI. The research team captured rich data through surveys, planning and reflection forms, group meetings, one-on-one interviews, and platform usage logs to understand where AI adds instructional value and where it requires refinement.
With the rapid development of generative artificial intelligence technology, the traditional teaching model for university ideological and political theory courses is facing a profound transformation. This study, based on human-computer collaboration theory and constructivist learning theory, and through in-depth investigation of teaching practices in universities such as Zhejiang University, constructs a teaching model for ideological and political courses that integrates digital human lecturing, AI-assisted learning, human-computer co-creation, and intelligent assessment. The study adopts a mixed-methods research approach to analyze the teaching experiment data of more than 5,000 students. The results show that this model can significantly increase student classroom participation by 30-50%, and 85% of students indicated that their learning interest was significantly enhanced. The study finds that the human-computer collaborative teaching model, through a deep integration of technological empowerment and value inheritance, effectively solves the problems existing in traditional ideological and political course teaching, such as theoretical abstraction, low participation, and insufficient personalization. However, the promotion and application of the model still face challenges such as technological maturity, faculty competence, and ethical risks, which need to be addressed through systematic policy support, capacity building, and collaborative innovation.
With the latest progress and extensive application of AI in recent years, the educational sector has undergone an unprecedented transformation. The human-machine collaborative teaching model, a new teaching mode that combines artificial intelligence technology with the teacher's subject knowledge, has been applied to the college English classroom. Based on symbiosis theory, this study establishes and practices a “teacher-student-machine” human-machine teaching model applicable to college English teaching. The experiment lasted 12 weeks; a pre-test and post-test were carried out to compare students' academic performance, and in-depth interviews with one teacher and nine students from the experimental group were conducted to collect their opinions and comments on this teaching model. The results illustrate that this model can not only effectively enhance students' English grades but also stimulate teachers' professional development and role transformation, initially constructing a mutually beneficial, symbiotic teaching ecosystem of teachers, students, and intelligent machines.
The rapid rise of generative artificial intelligence (GenAI) presents both opportunities and challenges for English as a Foreign Language (EFL) education. While AI can support efficiency and personalization, concerns about student over-reliance, academic integrity, and diminished critical thinking remain pressing. This study explores these issues through the lens of Human-in-the-Loop (HITL) theory, which emphasizes the centrality of human oversight and intervention in automated systems. Building on HITL, the study develops a three-level framework for the teacher–student–AI triad: basic assistance, where AI functions as a background tool; collaborative innovation, where students engage with AI under teacher guidance; and reflective optimization, where AI evolves into a co-teacher through iterative feedback. To operationalize the framework, three case scenarios are designed for college EFL contexts: argumentative writing, impromptu speaking, and exploratory learning in literature. The study contributes theoretically by extending HITL into language education, providing a structured model for balancing AI assistance with human agency. Practically, the proposed scenarios offer educators concrete strategies to integrate AI in ways that are designed to enhance efficiency, creativity, and engagement while safeguarding academic rigor. The findings underscore the need to view AI not as a replacement but as a partner—one that enriches EFL pedagogy when guided by human-centered design.
The potential effects of artificial intelligence (AI) on the teaching of anatomy are unclear. We explore the hypothetical situation of human body donors being replaced by AI human body simulations and reflect on two separate ethical concerns: first, whether it is permissible to replace donors with AI human body simulations in the dissection room when the consequences of doing so are unclear, and second, the overarching ethical significance of AI use in anatomy education. To do this, we highlight the key benefits of student exposure to the dissection room and body donors, including nontechnical, discipline‐independent skills, awareness and interaction with applied bioethics, and professional identity formation. We suggest that the uniqueness of the dissection room experience and the importance of the key benefits accompanying this exposure outweigh the potential and so far unknown benefits of AI technology in this space. Further, the lack of engagement with bioethical principles that are intimately intertwined with the dissection room experience may have repercussions for future healthcare professional development. We argue that interaction with body donors must be protected and maintained and not replaced with AI human body donor simulations. Any move away from this foundation of anatomy education requires scrutiny. In light of the possible adoption of AI technologies into anatomy teaching, it is necessary that medical educators reflect on the dictum that the practice of healthcare, and anatomy, is a uniquely human endeavor.
Artificial Intelligence (AI) education is an increasingly popular topic area for K-12 teachers. However, little research has investigated how AI curriculum and tools can be designed to be more accessible to all teachers and learners. In this study, we take a Value-Sensitive Design approach to understanding the role of teacher values in the design of AI curriculum and tools, and identifying opportunities to integrate AI into core curriculum to leverage learners’ interests. We organized co-design workshops with 15 K-12 teachers, where teachers and researchers co-created lesson plans using AI tools and embedding AI concepts into various core subjects. We found that K-12 teachers need additional scaffolding in AI tools and curriculum to facilitate ethics and data discussions, and value supports for learner evaluation and engagement, peer-to-peer collaboration, and critical reflection. We present an exemplar lesson plan that shows entry points for teaching AI in non-computing subjects and reflect on co-designing with K-12 teachers in a remote setting.
This paper proposes a Human Intelligence (HI)-based Computational Intelligence (CI) and Artificial Intelligence (AI) Fuzzy Markup Language (CI&AI-FML) Metaverse as an educational environment for co-learning of students and machines. The HI-based CI&AI-FML Metaverse is based on the spirit of the Heart Sutra, which equips the environment with teaching principles and the cognitive intelligence of ancient words of wisdom. There are four stages of the Metaverse: preparation and collection of learning data, data preprocessing, data analysis, and data evaluation. During the data preparation stage, the domain experts construct a learning dictionary with fuzzy concept sets describing different terms and concepts related to the course domains. Then, the students and teachers use the developed CI&AI-FML learning tools to interact with machines and learn together. Once the teachers prepare relevant material, students provide their inputs/texts representing their levels of understanding of the learned concepts. A Natural Language Processing (NLP) tool, Chinese Knowledge Information Processing (CKIP), is used to process data/text generated by students, with a focus on speech tagging, word sense disambiguation, and named entity recognition. Following that, quantitative and qualitative data analysis is performed. Finally, the students' learning progress, measured using progress metrics, is evaluated and analyzed. The experimental results reveal that the proposed HI-based CI&AI-FML Metaverse can foster students' motivation to learn and improve their performance, as shown in the case of young students studying Software Engineering and learning English.
ChatGPT, an Artificial Intelligence (AI) powered chatbot, has caused a stir in the Higher Education landscape, with fears of plagiarism and a disruption of the student-teacher relationship that has formed the bedrock of teaching. ChatGPT-3, and now ChatGPT-4, have been reported to pass many exams, including medical, law, and engineering exams. The overwhelming concerns of academics about students using these generative AI tools for their assessments are alarming. These AI tools are here to stay. Teachers should not treat AI as ‘the enemy’, and instead find ways to work with it for the betterment of learning outcomes for students. Working with AI can mean transforming teaching and the AIEd-teacher relationship, resulting in positive outcomes and learning experiences for teachers and students.
Bringing artificial intelligence (AI) and living intelligence into higher education has the potential to completely reshape teaching, learning, and administrative processes. Living intelligence is not just about using AI—it is about creating a dynamic partnership between human thinking and AI capabilities. This collaboration allows for continuous adaptation, co-evolution, and real-time learning, making education more responsive to individual student needs and evolving academic environments. AI-driven tools are already enhancing the way students learn by personalizing content, streamlining processes, and introducing innovative teaching methods. Adaptive platforms adjust material based on individual progress, while emotionally intelligent AI systems help support students’ mental well-being by detecting and responding to emotional cues. These advancements also make education more inclusive, helping to bridge accessibility gaps for underserved communities. However, while AI has the potential to improve education significantly, it also introduces challenges, such as ethical concerns, data privacy risks, and algorithmic bias. The real challenge is not just about embracing AI’s benefits but ensuring it is used responsibly, fairly, and in a way that aligns with educational values. From a sustainability perspective, living intelligence supports efficiency, equity, and resilience within educational institutions. AI-driven solutions can help optimize energy use, predict maintenance needs, and reduce waste, all contributing to a smaller environmental footprint. At the same time, adaptive learning systems help minimize resource waste by tailoring education to individual progress, while AI-powered curriculum updates keep programs relevant in a fast-changing world. This paper explores the disconnect between AI’s promise and the real-world difficulties of implementing it responsibly in higher education. 
While AI and living intelligence have the potential to revolutionize the learning experience, their adoption is often slowed by ethical concerns, regulatory challenges, and the need for institutions to adapt. Addressing these issues requires clear policies, faculty training, and interdisciplinary collaboration. By examining both the benefits and challenges of AI in education, this paper focuses on how institutions can integrate AI in a responsible and sustainable way. The goal is to encourage collaboration between technologists, educators, and policymakers to fully harness AI’s potential while ensuring that it enhances learning experiences, upholds ethical standards, and creates an inclusive, future-ready educational environment.
As Generative Artificial Intelligence (GenAI) becomes increasingly embedded in academic and professional settings, there is a growing need for pedagogically grounded approaches to cultivate Artificial Intelligence (AI) competence. This paper introduces a learning activity design model based on Kolb’s Experiential Learning Cycle, aligned with the United Nations Educational, Scientific and Cultural Organisation (UNESCO) framework for AI competence, emphasising ethical, critical, and creative engagement with AI systems. The model operationalises AI competence development through practical learning activities, structured reflection, conceptual exploration, and human-AI co-creation, positioning AI tools as cognitive partners rather than passive utilities. To evaluate the model, we conducted a case study of Partnering with AI, a half-day workshop that scaffolded students through three progressive tiers of AI competence: Understand, Apply, and Create. Evaluation findings reveal significant gains in student confidence, ethical awareness, and practical skill, supporting the effectiveness of this experiential design. The paper concludes with recommendations for embedding this learning activity design model into higher education curricula to support sustainable AI competence-building.
No abstract available
ChatGPT is a groundbreaking “chatbot”: an AI interface built on a large language model that was trained on an enormous corpus of human text to emulate human conversation. Beyond its ability to converse in a plausible way, it has attracted attention for its ability to competently answer questions from the bar exam and from MBA coursework, and to provide useful assistance in writing computer code. These apparent abilities have prompted discussion of ChatGPT as both a threat to the integrity of higher education and conversely as a powerful teaching tool. In this work we present a preliminary analysis of how two versions of ChatGPT (ChatGPT3.5 and ChatGPT4) fare in the field of first-semester university physics, using a modified version of the Force Concept Inventory (FCI) to assess whether it can give correct responses to conceptual physics questions about kinematics and Newtonian dynamics. We demonstrate that, by some measures, ChatGPT3.5 can match or exceed the median performance of a university student who has completed one semester of college physics, though its performance is notably uneven and the results are nuanced. By these same measures, we find that ChatGPT4's performance is approaching the point of being indistinguishable from that of an expert physicist when it comes to introductory mechanics topics. After the completion of our work we became aware of Ref [1], which preceded us to publication and which completes an extensive analysis of the abilities of ChatGPT3.5 in a physics class, including a different modified version of the FCI. We view this work as confirming that portion of their results, and extending the analysis to ChatGPT4, which shows rapid and notable improvement in most, but not all, respects.
The merged grouping results construct a complete research map of teacher-AI collaboration (TAC), spanning from low-level technical infrastructure to high-level ethical governance. The research covers not only the development of multi-agent systems and adaptive platforms but also examines "human-in-the-loop" evaluation mechanisms and interdisciplinary teaching practices. The core trend shows the field shifting from simple tool assistance toward deep "human-machine symbiosis", with strong attention to teachers' role reshaping, psychological adaptation, and professional competence development in intelligent environments, and an emphasis on human-centered ethical governance as the key safeguard for the digital transformation of education.