Research on Generative AI-Enabled Blended Teaching Models
Foundational Technical Capabilities of Generative AI and the Development of Intelligent Teaching Tools
This group of literature focuses on the technical evolution of generative AI (e.g., GPT-4 and large language models) and its implementation as tools for educational settings. It covers model fine-tuning (LlamaFactory), knowledge graph integration, chained AI interaction, and the development and technical logic of specific teaching-support tools such as automated question generation (MCQGen), virtual mock interviews, and personalized feedback systems.
- Sparks of Artificial General Intelligence: Early experiments with GPT-4(Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Túlio Ribeiro, Yi Zhang, 2023, arXiv (Cornell University))
- Cross-Data Knowledge Graph Construction for LLM-enabled Educational Question-Answering System: A Case Study at HCMUT(Tan-Trung Bui, Tran Thi Mai Oanh, Phuong Mai Nguyen, B. Y. K. HO, Long Nguyen, Thang H. Bui, Tho Quan, 2024, arXiv (Cornell University))
- AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts(Tongshuang Wu, Michael Terry, Carrie J. Cai, 2022, CHI Conference on Human Factors in Computing Systems)
- LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models(Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, 2024, No journal)
- Generative AI and multimodal data for educational feedback: Insights from embodied math learning(Giulia Cosentino, Jacqueline Anton, Kshitij Sharma, Mirko Gelsomini, Michail N. Giannakos, Dor Abrahamson, 2025, British Journal of Educational Technology)
- Enhancing personalized learning: AI-driven identification of learning styles and content modification strategies(Md. Kabin Hasan Kanchon, Mahir Sadman, Kaniz Fatema Nabila, Ramisa Tarannum, Riasat Khan, 2024, International Journal of Cognitive Computing in Engineering)
- AI-Driven Virtual Mock Interview Development(Prabhat Mishra, Arun Kumar Arulappan, In-Ho Ra, Thanga Mariappan L, Gina Rose G, Youngseok Lee, 2024, No journal)
- MCQGen: A Large Language Model-Driven MCQ Generator for Personalized Learning(Ching Nam Hang, Chee Wei Tan, Pei-Duo Yu, 2024, IEEE Access)
- Generative Agents: Interactive Simulacra of Human Behavior(Joon Sung Park, Joseph O’Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, Michael S. Bernstein, 2023, No journal)
- Custom-Trained Large Language Models as Open Educational Resources: An Exploratory Research of a Business Management Educational Chatbot in Croatia and Bosnia and Herzegovina(Nikša Alfirević, Daniela Garbin Praničević, Mirela Mabić, 2024, Sustainability)
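Several of the tools surveyed above (AI Chains, MCQGen) rest on the same pattern: chaining large-language-model prompts so that each step's output becomes the next step's input. A minimal sketch of that pattern, where `fake_llm` is a hypothetical stand-in for any real model API call:

```python
# Minimal sketch of prompt chaining: each step formats a prompt from the
# previous step's output and sends it to the model.
# `fake_llm` is a placeholder, not a real LLM client.

def fake_llm(prompt: str) -> str:
    # A real implementation would call an LLM API here.
    return f"[model answer to: {prompt}]"

def run_chain(steps, initial_input: str, llm=fake_llm) -> str:
    """Run prompt templates in sequence; each consumes the prior output."""
    output = initial_input
    for template in steps:
        output = llm(template.format(input=output))
    return output

# Illustrative three-step chain for quiz generation from a topic.
steps = [
    "List the key ideas in: {input}",
    "Draft three quiz questions covering: {input}",
    "Simplify the wording of: {input}",
]
result = run_chain(steps, "photosynthesis")
```

Decomposing a task this way is what gives the "chain" its transparency: each intermediate output can be inspected or edited before it feeds the next prompt.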
Theoretical Frameworks for Hybrid Augmented Intelligence and Human-AI Collaborative Teaching
This literature examines, from a macro perspective, the mechanisms by which generative AI integrates with education, constructing frameworks that span metaverse-based education, hybrid intelligence, human-AI coevolution, and human-centered educational philosophy. It emphasizes reconstructing learning-science theory for the AI era, such as the digital transformation of self-regulated learning (SRL) and constructivism.
- The metaverse in education: Definition, framework, features, potential applications, challenges, and future research topics(Xinli Zhang, Yuchen Chen, Lailin Hu, Youmei Wang, 2022, Frontiers in Psychology)
- A Multifaceted Vision of the Human-AI Collaboration: A Comprehensive Review(Maite Puerta-Beldarrain, Oihane Gómez–Carmona, Rubén Sánchez, Diego Casado–Mansilla, Diego López–de–Ipiña, Liming Chen, 2025, IEEE Access)
- Learning design to support student-AI collaboration: perspectives of leading teachers for AI in education(Jinhee Kim, Hyun-Kyung Lee, Young Hoan Cho, 2022, Education and Information Technologies)
- AI and personalized learning: bridging the gap with modern educational goals(Kristjan-Julius Laak, Jaan Aru, 2024, arXiv (Cornell University))
- Manifesto in Defence of Human-Centred Education in the Age of Artificial Intelligence(Margarida Roméro, Thomas B. Frøsig, Amanda M. L. Taylor-Beswick, Jari Laru, Bastienne Bernasco, Alex Urmeneta, Oksana Strutynska, Marc-André Girard, 2024, Palgrave studies in creativity and culture)
- Combining human and artificial intelligence for enhanced AI literacy in higher education(Anastasia Olga Tzirides, Gabriela C. Zapata, Nikoleta Polyxeni Kastania, Akash K. Saini, Vania Castro, Sakinah A. Ismael, Yu-ling You, Tamara Afonso dos Santos, Duane Searsmith, Casey O'Brien, Bill Cope, Mary Kalantzis, 2024, Computers and Education Open)
- Hybrid intelligence: Human–AI coevolution and learning(Sanna Järvelä, Guoying Zhao, Andy Nguyen, Haoyu Chen, 2025, British Journal of Educational Technology)
- Hybrid Intelligence in Academic Writing: Examining Self-Regulated Learning Patterns in an AI-Assisted Writing Task(Andy Nguyen, Faith Ilesanmi, Belle Dang, Eija Vuorenmaa, Sanna Järvelä, 2024, Frontiers in artificial intelligence and applications)
- A theoretical and empirical analysis of tensions between learning objects and constructivism(Jan Erik Dahl, Anders I. Mørch, 2025, Education and Information Technologies)
- The Role of AI in Ecohumanistic Education(Attila Kővári, István András, Mónika Rajcsányi-Molnár, 2024, Journal of Ecohumanism)
- Against Artificial Education: Towards an Ethical Framework for Generative Artificial Intelligence (AI) Use in Education(Andrew Swindell, Luke Greeley, Antony Farag, Bailey Verdone, 2024, Online Learning)
Innovative Design of Blended Teaching Models and Evaluation of Learning Outcomes
This group of studies focuses on concrete pedagogical innovations, such as project-based learning (PBL), design thinking, and collaborative learning as applied in blended environments. Through empirical work, it also examines AI's actual impact on student engagement, cognitive load, higher-order thinking skills, and academic performance, with the aim of providing actionable instructional design strategies.
- Research on Blended Teaching Design of International Trade Practice Course Based on AIGC(Qinpei Fan, 2025, OALib)
- Effects of Generative Chatbots in Higher Education(Galina Ilieva, Tania Yankova, Stanislava Klisarova-Belcheva, Angel Dimitrov, Marin Bratkov, Delian Angelov, 2023, Information)
- Enhancing Student Engagement: Harnessing “AIED”’s Power in Hybrid Education—A Review Analysis(Amjad Almusaed, Asaad Almssad, İbrahim Yitmen, Raad Z. Homod, 2023, Education Sciences)
- Creative Learning for Sustainability in a World of AI: Action, Mindset, Values(Danah Henriksen, Punya Mishra, Rachel E. Stern, 2024, Sustainability)
- A multimodal approach to support teacher, researcher and AI collaboration in STEM+C learning environments(Clayton Cohn, C. R. Snyder, Joyce Horn Fonteles, T. S. Ashwin, Justin Montenegro, Gautam Biswas, 2024, British Journal of Educational Technology)
- The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: insights from a meta-analysis(Jin Wang, Wenxiang Fan, 2025, Humanities and Social Sciences Communications)
- ChatGPT: The cognitive effects on learning and memory(Long Bai, Xiangfei Liu, Jiacan Su, 2023, Brain‐X)
- Adaptive Learning Using Artificial Intelligence in e-Learning: A Literature Review(Ilie Gligorea, Marius Cioca, Romana Oancea, Andra-Teodora Gorski, Hortensia Gorski, Paul Tudorache, 2023, Education Sciences)
- The impact of ChatGPT on blended learning: Current trends and future research directions(Ali Alshahrani, 2023, International Journal of Data and Network Science)
- Assigning AI: Seven Approaches for Students, with Prompts(Ethan Mollick, Lilach Mollick, 2023, SSRN Electronic Journal)
- Exploring human and AI collaboration in inclusive STEM teacher training: A synergistic approach based on self-determination theory(Tingting Li, Zehui Zhan, Yu Ji, Tianwen Li, 2025, The Internet and Higher Education)
- Theoretically Rich Design Thinking: Blended Approaches for Educational Technology Workshops(Howard Scott, 2023, International Journal Of Management and Applied Research)
- The Usage of AI in Teaching and Students’ Creativity: The Mediating Role of Learning Engagement and the Moderating Role of AI Literacy(Min Zhou, Song Peng, 2025, Behavioral Sciences)
- Developing Future Computational Thinking in Foundational CS Education: A Case Study From a Liberal Education University in India(Balaji Kalluri, Prajish Prasad, Prakrati Sharma, Divyaansh Chippa, 2024, IEEE Transactions on Education)
- Chat GPT a project based professional learning as an alternative learning to traditional writing: A quick response generator to improve writing skills(Shalini Sharma, Vandana Sharma, Sukhmani Kaur, 2025, Journal of Information and Optimization Sciences)
AIGC Teaching Practice Across Disciplines and Applications in Language Education
This portion of the literature documents concrete deployments of generative AI in specific professional fields, including architecture, medicine, engineering, art (ACG), programming, and translation. It especially highlights innovative practices in language education (ESP/EFL) that use LLMs for oral proficiency development, enhanced reading comprehension, and data-driven learning.
- Enhancing Architectural Education through Artificial Intelligence: A Case Study of an AI-Assisted Architectural Programming and Design Course(Shitao Jin, Huijun Tu, Jiangfeng Li, Yuwei Fang, Qu Zhang, Fan Xu, Kun Liu, Yiquan Lin, 2024, Buildings)
- Spectrogram-Based Deep Learning for Flute Audition Assessment and Intelligent Feedback(Manu Agarwal, Ross Greer, 2023, No journal)
- Evaluating Reading Comprehension Exercises Generated by LLMs: A Showcase of ChatGPT in Education Applications(Changrong Xiao, Sean Xin Xu, Kunpeng Zhang, Yufang Wang, Lei Xia, 2023, No journal)
- Enhancing Computational Thinking in Programming Learning with Generative Artificial Intelligence Tools for college students(Chengzheng Li, Diao Yong-feng, Yijie Ding, 2025, Atlantis Highlights in Social Sciences, Education and Humanities)
- Blending Mixed Reality and Generative AI to Teach Geography: An MR+GenAI Learning Environment(Yupei Duan, Xinhao Xu, Hao He, Shangman Li, Yuanyuan Gu, 2025, Journal of Interactive Learning Research)
- A Diffusion Modeling-Based System for Teaching Dance to Digital Human(Linyan Zhou, Jingyuan Zhao, Jialiang He, 2024, Applied Sciences)
- Student Perceptions of ChatGPT Use in a College Essay Assignment: Implications for Learning, Grading, and Trust in Artificial Intelligence(Chad C. Tossell, Nathan L. Tenhundfeld, Ali Momen, Katrina Cooley, Ewart J. de Visser, 2024, IEEE Transactions on Learning Technologies)
- Beyond Traditional Pathways: Leveraging Generative AI for Dynamic Career Planning in Vocational Education(Jingyi Duan, 2024, International Journal of New Developments in Education)
- The Robots Are Here: Navigating the Generative AI Revolution in Computing Education(James Prather, Paul Denny, Juho Leinonen, Brett A. Becker, Ibrahim Albluwi, Michelle Craig, Hieke Keuning, Natalie Kiesler, Tobias Kohn, Andrew Luxton-Reilly, Stephen MacNeil, Andrew Petersen, Raymond Pettit, Brent N. Reeves, Jaromír Šavelka, 2023, No journal)
- The analysis of generative artificial intelligence technology for innovative thinking and strategies in animation teaching(Yao Xu, Ying Zhong, Weiran Cao, 2025, Scientific Reports)
- The use of generative artificial intelligence in surgical education: a narrative review(L Malleswara Rao, Eric Yang, S.A.R.R.P. Dissanayake, Roberto Cuomo, Ishith Seth, Warren M. Rozen, 2024, Plastic and Aesthetic Research)
- Construction of New Engineering Talent Training Mode from the Perspective of Innovation Ecology—Optimization and Innovation Path of University and Industry Cooperation Mechanism(Dan Wu, Yawen Hu, 2024, Journal of Human Resource Development)
- Exploration and Practice of AIGC Technology in the MOOC Course of 3D Animation Design(M. Zhang, Han Yang, 2024, No journal)
- Teaching AI with games: the impact of generative AI drawing on computational thinking skills(Ting‐Chia Hsu, Tai-Ping Hsu, 2025, Education and Information Technologies)
- Crisis and Responses to Design and Design Education in the AIGC Era(Yi Luo, Le Wang, 2023, No journal)
- Let’s Chat: Integrating Large Language Models into Blended Learning of English for Specific Purposes(Hengbin Yan, 2023, No journal)
- Exploring the Application of ChatGPT to English Teaching in a Malaysia Primary School(Yihan Lou, 2023, Journal of Advanced Research in Education)
- Communicative Approach in Foreign Language Teaching: Advantages and Limitations(Shehla Salmanova, 2025, EuroGlobal Journal of Linguistics and Language Education.)
- Emerging Trends in Oral Proficiency Cultivation: A Dual-Perspective Analysis of Chinese and Global Research via CiteSpace(J J Liu, Wei Hu, 2025, International Journal of Language & Linguistics)
- Data-driven Learning Meets Generative AI: Introducing the Framework of Metacognitive Resource Use(Atsushi Mizumoto, 2023, Applied Corpus Linguistics)
- Capabilities of GPT-4 on Medical Challenge Problems(Harsha Nori, Nicholas King, Scott Mayer McKinney, Dean Carignan, Eric Horvitz, 2023, arXiv (Cornell University))
- The Role of AI in Reshaping Medical Education: Opportunities and Challenges(Majid Ali, 2025, The Clinical Teacher)
- REVOLUTIONIZING TRANSLATOR TRAINING THROUGH HUMAN-AI COLLABORATION: INSIGHTS AND IMPLICATIONS FROM INTEGRATING GPT-4(Hussein Abu-Rayyash, 2023, CURRENT TRENDS IN TRANSLATION TEACHING AND LEARNING E)
- Enhancing Language Learning Through Generative Artificial Intelligence in Blended Learning: An Empirical Study on Productive and Receptive of Informal Digital Learning English(T.H. Lee, Vincent Cho, 2025, Journal of Educational Technology Systems)
- Custom Generative Artificial Intelligence Tutors in Action: An Experimental Evaluation of Prompt Strategies in STEM Education(Rok Gabrovšek, David Rihtaršič, 2025, Sustainability)
- Fostering Continuous Innovation in Creative Education: A Multi-Path Configurational Analysis of Continuous Collaboration with AIGC in Chinese ACG Educational Contexts(Juan Huangfu, Rui Li, Junping Xu, Younghwan Pan, 2024, Sustainability)
Teacher and Student Perceptions, Assessment Reform, and Ethical Governance
This group of literature attends to the "people" and "regulation" dimensions of the educational ecosystem. The studies cover teachers' and students' AI acceptance models (TAM/IMTA), technology anxiety, generational differences, and the use of AI in automated grading and feedback. They also examine ethical challenges such as academic integrity, data privacy, and algorithmic bias, together with governance responses.
- Exploring the potential of artificial intelligence tools in educational measurement and assessment(Valentine Joseph Owan, Kinsgley Bekom Abang, Delight Omoji Idika, Eugene Onor Etta, Bassey Asuquo Bassey, 2023, Eurasia Journal of Mathematics Science and Technology Education)
- Augmenting assessment with AI coding of online student discourse: A question of reliability(Kamila Misiejuk, Rogers Kaliisa, Jennifer Scianna, 2024, Computers and Education: Artificial Intelligence)
- The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and millennial generation teachers?(Cecilia Ka Yuk Chan, Katherine K. W. Lee, 2023, Smart Learning Environments)
- Applications and Challenges of AIGC in Empowering Personalized Learning for University Students(Wang Chuner, 2024, Frontiers in Educational Research)
- Revisiting Integrated Model of Technology Acceptance Among the Generative AI-Powered Foreign Language Speaking Practice: Through the Lens of Positive Psychology and Intrinsic Motivation(Chenghao Wang, Xueyun Li, Bin Zou, 2025, European Journal of Education)
- Generative Artificial Intelligence in Education: From Deceptive to Disruptive.(Marc Alier, Francisco José García‐Peñalvo, Jorge D. Camba, 2024, International Journal of Interactive Multimedia and Artificial Intelligence)
- Education in the Era of Generative Artificial Intelligence (AI): Understanding the Potential Benefits of ChatGPT in Promoting Teaching and Learning(David Baidoo-Anu, Leticia Owusu Ansah, 2023, Journal of AI)
- The impact of Generative AI (GenAI) on practices, policies and research direction in education: a case of ChatGPT and Midjourney(Thomas K. F. Chiu, 2023, Interactive Learning Environments)
- ChatGPT for good? On opportunities and challenges of large language models for education(Enkelejda Kasneci, Kathrin Seßler, Stefan Küchemann, Maria Bannert, Daryna Dementieva, Frank Fischer, Urs Gasser, Georg Groh, Stephan Günnemann, Eyke Hüllermeier, Stephan Krusche, Gitta Kutyniok, Tilman Michaeli, Claudia Nerdel, Jürgen Pfeffer, Oleksandra Poquet, Michael Sailer, Albrecht Schmidt, Tina Seidel, Matthias Stadler, J. Weller, Jochen Kühn, Gjergji Kasneci, 2023, Learning and Individual Differences)
- Empowering teachers' professional development with LLMs: An empirical study of developing teachers' competency for instructional design in blended learning(Xu Wang, Jianwei Niu, Bei Fang, Guangxin Han, HE Ju-hou, 2025, Teaching and Teacher Education)
- Streamlining Educational Assessment: A User-Centric Analysis of an AI-Powered Examination App(Rahil Parikh, Himanshu Nimonkar, R. T. D. Ramesh Gandhi, Tarash Budhrani, Ashwini Dalvi, Irfan Siddavatam, 2023, No journal)
- Does ChatGPT Play a Double-Edged Sword Role in the Field of Higher Education? An In-Depth Exploration of the Factors Affecting Student Performance(Jiangjie Chen, Ziqing Zhuo, Jiacheng Lin, 2023, Sustainability)
- Challenges for higher education in the era of widespread access to generative AI(Krzysztof Walczak, Wojciech Cellary, 2023, Economics and Business Review/The Poznań University of Economics Review)
- ChatGPT Promises and Challenges in Education: Computational and Ethical Perspectives(Amr Adel, Ali Ahsan, Claire Davison, 2024, Education Sciences)
- Exploring perceived sustainable competencies in relation to curricula, generative AI tool usage, and knowledge sharing in blended learning(Muhammad Zaheer Asghar, Zarqa Farooq Hashmi, Pirita Seitamaa‐Hakkarainen, 2025, Scientific Reports)
- The impact of artificial intelligence on learner–instructor interaction in online learning(Kyoungwon Seo, Joice Tang, Ido Roll, Sidney Fels, Dongwook Yoon, 2021, International Journal of Educational Technology in Higher Education)
- Education AI: exploring the impact of artificial intelligence on education in the digital age(Ayush Singh, M. K. Kiriti, Himanshi Singh, Abhishek Shrivastava, 2025, International Journal of Systems Assurance Engineering and Management)
- Challenges and Opportunities of Generative AI for Higher Education as Explained by ChatGPT(Rosario Michel‐Villarreal, Eliseo Luis Vilalta-perdomo, David Ernesto Salinas-Navarro, Ricardo Thierry-Aguilera, Flor Silvestre Gerardou, 2023, Education Sciences)
- Future of education in the era of generative artificial intelligence: Consensus among Chinese scholars on applications of ChatGPT in schools(Ming Liu, Yiling Ren, Lucy Michael Nyagoga, Francis Stonier, Zhongming Wu, Liang Yu, 2023, Future in Educational Research)
- Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy(Yogesh K. Dwivedi, Nir Kshetri, Laurie Hughes, Emma Slade, Anand Jeyaraj, Arpan Kumar Kar, Abdullah M. Baabdullah, Alex Koohang, Vishnupriya Raghavan, Manju Ahuja, Hanaa Albanna, Mousa Ahmad Albashrawi, Adil S. Al-Busaidi, Janarthanan Balakrishnan, Yves Barlette, Sriparna Basu, Indranil Bose, Laurence Brooks, Dimitrios Buhalis, Lemuria Carter, Soumyadeb Chowdhury, Tom Crick, Scott W. Cunningham, Gareth H. Davies, Robert M. Davison, Rahul Dé, Denis Dennehy, Yanqing Duan, Rameshwar Dubey, Rohita Dwivedi, John S. Edwards, Carlos Flavián, Robin Gauld, Varun Grover, Mei‐Chih Hu, Marijn Janssen, Paul Jones, Iris Junglas, Sangeeta Khorana, Sascha Kraus, Kai R. Larsen, Paul Latreille, Sven Laumer, Tegwen Malik, Abbas Mardani, Marcello Mariani, Sunil Mithas, Emmanuel Mogaji, Jeretta Horn Nord, Siobhán O’Connor, Fevzi Okumus, Margherita Pagani, Neeraj Pandey, Savvas Papagiannidis, Ilias O. Pappas, Nishith Pathak, Jan Pries‐Heje, Ramakrishnan Raman, Nripendra P. Rana, Sven‐Volker Rehm, Samuel Ribeiro‐Navarrete, Alexander Richter, Frantz Rowe, Suprateek Sarker, Bernd Carsten Stahl, Manoj Tiwari, Wil van der Aalst, Viswanath Venkatesh, Giampaolo Viglia, Michael Wade, Paul Walton, Jochen Wirtz, Ryan Wright, 2023, International Journal of Information Management)
- Findings from a survey looking at attitudes towards AI and its use in teaching, learning and research(Edward Palmer, Daniel Lee, Matthew Arnold, Dimitra Lekkas, Katrina Plastow, Florian Ploeckl, Amit Kumar Srivastav, Peter Strelan, 2023, ASCILITE Publications)
- A Study on Teachers’ Willingness to Use Generative AI Technology and Its Influencing Factors: Based on an Integrated Model(Haili Lü, Lin He, Hao Yu, Tao Pan, Kefeng Fu, 2024, Sustainability)
- Exploring the role of AI in higher education: a natural language processing analysis of emerging trends and discourses(Nora Gavira Durón, Ana Lorena Jiménez Preciado, 2025, The TQM Journal)
- Students’ use of large language models in engineering education: A case study on technology acceptance, perceptions, efficacy, and detection chances(Margherita Bernabei, Silvia Colabianchi, Andrea Falegnami, Francesco Costantino, 2023, Computers and Education: Artificial Intelligence)
- On the Teaching and Learning in the Information Age of “Big Data + Internet?” — Some Thoughts on the Application of ChatGPT in Teaching(Bowen Zhang, Jinru Mao, 2023, Atlantis Highlights in Social Sciences, Education and Humanities)
The merged groups form a complete research loop running from the "technical foundation" through "theoretical frameworks" to "teaching practice" and "governance and assessment". The report covers five key dimensions of generative AI-enabled blended teaching: 1) the technical capabilities of large models such as GPT-4 and the development of intelligent tools; 2) the theoretical grounding of human-AI collaboration and hybrid augmented intelligence; 3) innovative blended instructional design and its evaluated effects on cognition and learning outcomes; 4) in-depth application cases across disciplines (especially language, medicine, and the arts); 5) teachers' and students' technology acceptance, assessment reform, and the challenges of ethical governance. Together this provides systematic literature support for studying how generative AI drives the digital transformation of education.
A total of 89 relevant references.
Learning technologies often do not meet university requirements for learner engagement via interactivity and real-time feedback. In addition to the challenge of providing personalized learning experiences for students, these technologies can increase the workload of instructors due to the maintenance and updates required to keep the courses up-to-date. Intelligent chatbots based on generative artificial intelligence (AI) technology can help overcome these disadvantages by transforming pedagogical activities and guiding both students and instructors interactively. In this study, we explore and compare the main characteristics of existing educational chatbots. Then, we propose a new theoretical framework for blended learning with intelligent chatbot integration, enabling students to interact online and instructors to create and manage their courses using generative AI tools. The advantages of the proposed framework are as follows: (1) it provides a comprehensive understanding of the transformative potential of AI chatbots in education and facilitates their effective implementation; (2) it offers a holistic methodology to enhance the overall educational experience; and (3) it unifies the applications of intelligent chatbots in teaching–learning activities within universities.
Designing sustainable and scalable educational systems is a challenge. Artificial Intelligence (AI) offers promising solutions to enhance the effectiveness and sustainability of blended learning systems. This research paper focuses on the integration of the Chat Generative Pre-trained Transformer (ChatGPT), with a blended learning system. The objectives of this study are to investigate the potential of AI techniques in enhancing the sustainability of educational systems, explore the use of ChatGPT to personalize the learning experience and improve engagement, and propose a model for sustainable learning that incorporates AI. The study aims to contribute to the body of knowledge on AI applications for sustainable education, identify best practices for integrating AI in education, and provide insights for policymakers and educators on the benefits of AI in education delivery. The study emphasizes the significance of AI in sustainable education by addressing personalized learning and educational accessibility. By automating administrative tasks and optimizing content delivery, AI can enhance educational accessibility and promote inclusive and equitable education. The study’s findings highlight the potential benefits of integrating AI chatbots like ChatGPT into education. Such benefits include promoting student engagement, motivation, and self-directed learning through immediate feedback and assistance. The research provides valuable guidance for educators, policymakers, and instructional designers who seek to effectively leverage AI technology in education. In conclusion, the study recommends directions for future research in order to maximize the benefits of integrating ChatGPT into learning systems. Positive results have been observed, including improved learning outcomes, enhanced student engagement, and personalized learning experiences. 
Through advancing the utilization of AI tools like ChatGPT, blended learning systems can be made more sustainable, efficient, and accessible for learners worldwide.
This study examines “VirtualGeo”, a Mixed Reality and Generative AI platform designed to enhance U.S. geography knowledge among international students. By integrating immersive technologies, VirtualGeo allows students to engage with spatial content within an interactive digital landscape. Using a mixed-methods approach, the study evaluated the platform’s impact through pre- and post-tests measuring geographic knowledge, map-drawing skills, and cognitive load, complemented by qualitative interviews to capture students’ learning experiences. Findings show significant gains in international students’ geographic understanding and underscore VirtualGeo’s potential to support adaptive and student-centered learning. Implications suggest that VirtualGeo is a scalable tool to teach geography and other spatial disciplines, which highlights new possibilities for research, policy, and practice in digital education.
In the rapidly advancing era of educational technology, customized learning materials have the potential to enhance individuals’ learning capacities. This research endeavors to devise an effective method for detecting a learner’s preferred learning style and subsequently adapting the learning content to align with that style, utilizing artificial intelligence (AI) techniques. Our investigation finds that analyzing learners’ web tracking logs for activity classification and categorizing individual responses for feedback classification are highly effective methods for identifying a learner’s learning styles, such as visual, auditory, and kinesthetic. A custom dataset has been constructed in this research comprising approximately 506 samples and 22 features utilizing the Moodle learning management system (LMS), successfully categorizing students into their respective learning styles. Furthermore, decision tree, random forest, support vector machine (SVM), logistic regression, XGBoost, blending ensemble, and convolutional neural network (CNN) algorithms with corresponding optimized hyperparameters and synthetic minority oversampling technique (SMOTE) have been applied for learning behavior classification. The blending ensemble technique with the XGBoost meta-learning model accomplished the best performance for learning style detection with an accuracy of 97.56%. Next, the text content of the electronic documents is modified by employing different natural language processing (NLP) techniques, including named entity recognition of spaCy, knowledge graph, generative pre-trained transformer 3 (GPT-3), and text-to-text transfer transformer (T5) model, to accommodate diverse learning styles. Various approaches, such as color coding, audio scripts, mind maps, flashcards, etc., are implemented to adapt the content effectively for the detected categories of learners.
The spaCy NLP-based named entity recognition (NER) model demonstrates a 94.16% F1 score and 0.92 exact match ratio for color coding text generation of ten electronic documents comprising 790 distinct individual words. These modifications aim to cater to the unique preferences of learners, fostering a more personalized and engaging educational experience. To the best of our knowledge, this is the first time an integrated learning style detection and content modification system has been developed in this work utilizing efficient AI techniques and a private dataset.
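The blending ensemble described above can be illustrated in miniature: fit base learners on one split of the data, record their predictions on a held-out split, and train a meta-learner on those predictions. The sketch below is a deliberately simplified, dependency-free stand-in (threshold rules and a lookup-table meta-learner) for the scikit-learn/XGBoost pipeline the study actually used; all names and data are illustrative:

```python
from collections import Counter

def make_threshold_learner(feature_idx):
    """Base learner: predict 1 when one feature exceeds its training mean."""
    def fit(rows, labels):
        mean = sum(r[feature_idx] for r in rows) / len(rows)
        return lambda r: 1 if r[feature_idx] > mean else 0
    return fit

def blend_fit(rows, labels, base_fits, holdout_frac=0.5):
    """Blending: base models fit on one split, meta-model on holdout predictions."""
    split = int(len(rows) * (1 - holdout_frac))
    bases = [f(rows[:split], labels[:split]) for f in base_fits]
    # Meta-learner: majority label observed in the holdout for each pattern of
    # base-model predictions (a lookup table stands in for XGBoost here).
    votes = {}
    for r, y in zip(rows[split:], labels[split:]):
        votes.setdefault(tuple(b(r) for b in bases), []).append(y)
    meta = {k: Counter(v).most_common(1)[0][0] for k, v in votes.items()}
    default = Counter(labels).most_common(1)[0][0]
    return lambda r: meta.get(tuple(b(r) for b in bases), default)

# Toy data: (visual_score, auditory_score) -> learning-style label 0 or 1.
rows = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)] * 4
labels = [0, 0, 1, 1] * 4
model = blend_fit(rows, labels,
                  [make_threshold_learner(0), make_threshold_learner(1)])
```

In the real pipeline the base learners would be trained classifiers (SVM, random forest, CNN, etc.) and the meta-learner an XGBoost model fit on their holdout predictions, but the data flow is the same.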
In an era marked by unprecedented global challenges, including environmental degradation, social inequalities, and the rapid evolution of technology, the need for innovative educational approaches is critical. This conceptual paper explores the intersection of sustainability, creativity, and technology for education, focusing on artificial intelligence (AI) as an example. We propose a framework that synthesizes sustainability principles and creative pedagogies, detailing its components to guide the integration of AI into sustainability education. The paper illustrates how blending creative pedagogies with the notion of sustainability as a frame of mind offers a framework that allows teachers to support creative learning and problem solving, with and through technology. Using the example of AI technology, we illustrate the potential benefits and inherent challenges of integrating new technologies into education. Generative AI is a cogent example, as it presents unique opportunities for personalizing learning and engaging students in creative problem solving around sustainability issues. However, it also introduces significant environmental and ethical concerns to navigate. Exploring the balance between technological innovation and sustainability imperatives, this paper outlines a framework for incorporating technology into education that promotes environmental care with creative exploration. Through a synthesis of sustainability principles and creative pedagogies, we highlight the benefits and challenges of using AI in education, offering strategic insights to leverage technology for a sustainable and just future.
This paper explores the intersection of data-driven learning (DDL) and generative AI (GenAI), represented by technologies like ChatGPT, in the realm of language learning and teaching. It presents two complementary perspectives on how to integrate these approaches. The first viewpoint advocates for a blended methodology that synergizes DDL and GenAI, capitalizing on their complementary strengths while offsetting their individual limitations. The second introduces the Metacognitive Resource Use (MRU) framework, a novel paradigm that positions DDL within an expansive ecosystem of language resources, which also includes GenAI tools. Anchored in the foundational principles of metacognition, the MRU framework centers on two pivotal dimensions: metacognitive knowledge and metacognitive regulation. The paper proposes pedagogical recommendations designed to enable learners to strategically utilize a wide range of language resources, from corpora to GenAI technologies, guided by their self-awareness, the specifics of the task, and relevant strategies. The paper concludes by highlighting promising avenues for future research, notably the empirical assessment of both the integrated DDL-GenAI approach and the MRU framework.
The arrival of Generative Artificial Intelligence (AI) is fundamentally different from prior technologies used in educational settings. Educators and researchers of online, blended, and in-person learning are still coming to grips with possible applications of AI in the learning experience with existing technologies; let alone understanding the potential consequences that future developments in AI will produce. Despite potential risks, AI may revolutionize previous models of teaching and learning and perhaps create opportunities to realize progressive educational goals. Given the longstanding tradition of philosophy to examine questions surrounding ethics, ontology, technology, and education, the purpose of this critical reflection paper is to draw from prominent philosophers across these disciplines to address the question: how can AI be employed in future educational contexts in a humanizing and ethical manner? Drawing from the work of Gunther Anders, Michel Foucault, Paolo Freire, Benjamin Bloom, and Hannah Arendt, we propose a framework for assessing the use and ethics of AI in modern education contexts regarding human versus AI generated textual and multimodal content, and the broader political, social, and cultural implications. We conclude with applied examples of the framework and implications for future research and practice.
Currently, many generative Artificial Intelligence (AI) tools are being integrated into the educational technology landscape for instructors. Our paper examines the potential and challenges of using Large Language Models (LLMs) to code student-generated content in online discussions based on intended learning outcomes and how instructors could use this to assess the intended and enacted learning design. If instructors were to rely on LLMs as a means of assessment, the reliability of these models to code the data accurately is crucial. Employing a diverse set of LLMs from the GPT family and prompting techniques on an asynchronous online discussion dataset from a blended-learning bachelor-level course, our research examines the reliability of AI-supported coding in educational research. Findings reveal that while AI-supported coding demonstrates efficiency, achieving substantial, moderate agreement with human coding for specific nuanced and context-dependent codes is challenging. Moreover, the high cost, token limits, and the advanced necessary skills needed to write API scripts might limit the usability of AI-driven coding. Finally, implementation would require specific parameterization techniques based on the class and may not be feasible for widespread implementation. Our study underscores the importance of transparency in AI coding methodologies and the need for a hybrid approach that integrates human judgment to ensure data accuracy and interpretability. In addition, it contributes to the knowledge base about the reliability of LLMs to code real, small datasets using complex codes that are common in the instructor's practice and explores the potential and challenges of using these models for assessment purposes.
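Reliability between human and LLM coders in studies of this kind is usually reported with chance-corrected agreement statistics such as Cohen's kappa. A minimal, dependency-free implementation; the discourse codes and data below are hypothetical, not taken from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two coders of the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c]
                   for c in set(rater_a) | set(rater_b)) / (n * n)
    if expected == 1:  # both coders used a single identical label throughout
        return 1.0
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned to ten discussion posts by a human and an LLM.
human = ["claim", "claim", "evidence", "question", "claim",
         "evidence", "question", "claim", "evidence", "claim"]
llm   = ["claim", "evidence", "evidence", "question", "claim",
         "evidence", "claim", "claim", "evidence", "claim"]
kappa = cohens_kappa(human, llm)
```

Because kappa discounts agreement expected by chance, it is a stricter yardstick than raw percent agreement, which is why nuanced, context-dependent codes can show only moderate kappa despite seemingly high overlap.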
This paper investigates the transformative impact of generative artificial intelligence (AI) on vocational education career planning, transitioning from traditional methodologies to personalized, dynamic strategies. By leveraging Natural Language Processing (NLP) and Machine Learning (ML), it delves into how generative AI can provide tailored career guidance, adaptive learning pathways, and labor market insights, underpinned by constructivist learning theory and career development models. The study's methodology blends theoretical analysis with practical implementation, focusing on strategic planning, stakeholder engagement, technology customization, and ethical considerations. It discusses the implications for educators, students, and institutions, emphasizing the necessity for continuous adaptation and innovation in the face of technological advancements. Additionally, the paper identifies future research avenues, including the long-term impact of AI on employment outcomes, its scalability across vocational disciplines, and ethical challenges, advocating for the strategic employment of generative AI to align vocational education more closely with the evolving job market and enhance students' readiness for future careers.
Artificial Intelligence (AI) is having a dramatic and accelerating impact on Technology Enhanced Learning (TEL) in Higher Education. Popenici and Kerr (2017) observed an emergence of the use of AI in Higher Education (HE) and pinpointed challenges for institutions and students, including issues of academic integrity, privacy, and “the possibility of a dystopian future” (p. 11). Potential benefits of AI in HE include creating learning communities through chatbots (Studente & Ellis, 2020), automated grading, individualized learning strategies, and improved plagiarism detection (Owoc et al., 2019). It is unclear how often, and in what manner, students are engaging with AI during their learning and in creating submissions for assessment tasks, and whether this engagement is creating unrealistic outcomes. It is also unclear how educators are engaging with AI during their teaching and curriculum/assessment design, and how this may be impacting the learning outcomes of their cohorts. This research study was conducted to investigate the perceived immediate and long-term implications of staff and student engagement with AI for learning and teaching within the University of Adelaide. The design of the research study is underpinned by a blended approach combining Situational Ethics and Planned Behavior Theory to understand the ethical considerations, behavioral activities, and future intentions of staff and students regarding the use of AI. Situational Ethics provides a framework for examining the contextual nature of ethical decision-making regarding AI (Boddington, 2017; Memarian & Doleck, 2023). Planned Behavior Theory provides an understanding of individuals' motivation and rationalization to engage with AI (Wang et al., 2022).
By employing a mixed qualitative and quantitative design, collecting data via online surveys, the study's findings shed light on the ethical challenges and attitudes associated with AI implementation in higher education and provide insights into the factors that influence staff and students' individual intentions to engage with AI technologies in Learning and Teaching. Participants from all faculties, across a wide diversity of student cohorts and staff, responded to the surveys. Initial findings reveal that educators suspect greater student use of AI than the data demonstrate. The most frequent use of AI by students is for checking grammar, and this is more prominent in the international student cohort. Students trust their human educators more than AI for course content and feedback on assessments. Educators are comfortable using AI but also feel they need greater support and training. The majority of students (70%, n=126) are not concerned about the implications of using Generative AI in higher education regarding issues related to privacy, bias, ethics, or discrimination. However, demonstrating an active concern in this field, the most common use of AI by university staff is to test its capability to complete assignments. These and other findings from the study can provide guidance to staff and students by describing current practices and making recommendations regarding assessment, curriculum design, and Learning and Teaching (L&T) activities.
The introduction of generative artificial intelligence (AI) has revolutionized healthcare and education. These AI systems, trained on vast datasets using advanced machine learning (ML) techniques and large language models (LLMs), can generate text, images, and videos, offering new avenues for enhancing surgical education. Their ability to produce interactive learning resources, procedural guidance, and feedback post-virtual simulations makes them valuable in educating surgical trainees. However, technical challenges such as data quality issues, inaccuracies, and uncertainties around model interpretability remain barriers to widespread adoption. This review explores the integration of generative AI into surgical training, assessing its potential to enhance learning and teaching methodologies. While generative AI has demonstrated promise for improving surgical education, its integration must be approached cautiously, ensuring AI input is balanced with traditional supervision and mentorship from experienced surgeons. Given that generative AI models are not yet suitable as standalone tools, a blended learning approach that integrates AI capabilities with conventional educational strategies should be adopted. The review also addresses limitations and challenges, emphasizing the need for more robust research on different AI models and their applications across various surgical subspecialties. The lack of standardized frameworks and tools to assess the quality of AI outputs in surgical education necessitates rigorous oversight to ensure accuracy and reliability in training settings. By evaluating the current state of generative AI in surgical education, this narrative review highlights the potential for future innovation and research, encouraging ongoing exploration of AI in enhancing surgical education and training.
Personalized learning (PL) aspires to provide an alternative to the one-size-fits-all approach in education. Technology-based PL solutions have shown notable effectiveness in enhancing learning performance. However, their alignment with the broader goals of modern education is inconsistent across technologies and research areas. In this paper, we examine the characteristics of AI-driven PL solutions in light of the goals outlined in the OECD Learning Compass 2030. Our analysis indicates a gap between the objectives of modern education and the technological approach to PL. We identify areas where the AI-based PL solutions could embrace essential elements of contemporary education, such as fostering learner's agency, cognitive engagement, and general competencies. While the PL solutions that narrowly focus on domain-specific knowledge acquisition are instrumental in aiding learning processes, the PL envisioned by educational experts extends beyond simple technological tools and requires a holistic change in the educational system. Finally, we explore the potential of generative AI, such as ChatGPT, and propose a hybrid model that blends artificial intelligence with a collaborative, teacher-facilitated approach to personalized learning.
Inclusive STEM teacher training plays a critical role in shaping the future of STEM teaching practices and improving educational outcomes for all students, particularly those from marginalized and underrepresented backgrounds. This study investigates an inclusive collaborative learning framework for enhancing STEM teaching among student teachers, focusing on interpersonal and human-machine (generative artificial intelligence) collaboration. Employing a Self-Determination Theory (SDT) guided approach, two rounds of exploratory studies were conducted. Study 1 compared the effects of interpersonal collaboration (TSPL: in-Service Teacher-Student Teacher Pair Learning) and human-machine collaboration (CSPL: ChatGPT-Student Teacher Pair Learning). Building on Study 1, Study 2 employed a hybrid inclusive collaborative learning model (iHMCL: integrated Human-Machine Collaborative Learning) with expanded participant demographics, blended course formats, and integrated peer, expert, and AI feedback mechanisms. The two-year iterative empirical research revealed differences in the impact of the three collaborative learning approaches on student teachers' learning. CSPL and iHMCL groups outperformed TSPL in STEM teaching knowledge and cognitive load, while TSPL and iHMCL excelled in STEM teaching ability compared to CSPL. The SDT-based inclusive collaborative learning framework for STEM teacher training proved effective, with notable implications. In the future, the integration of generative artificial intelligence and cross-boundary learning in inclusive STEM teacher education will require educators to redefine their roles, emphasizing emotional support, critical thinking, and creativity, ensuring that AI complements rather than replaces hands-on, reality-based learning.
• This study explores three collaborative methods (TSPL, CSPL, iHMCL) for developing STEM teachers' literacy through an SDT framework.
• Expanded learning through hybrid methods and cross-boundary interaction.
• Comparison of interpersonal and human-machine (AI) collaboration in learning.
• CSPL/iHMCL excel in STEM teaching knowledge and cognitive load; TSPL/iHMCL excel in STEM teaching ability.
This study investigated the role of generative artificial intelligence (AI) tools in facilitating informal digital English learning activities among second language (L2) learners in a Chinese context. It explored how factors like learners’ ideal L2 self-imagination, international posture, and perceptions of information quality influence their engagement with productive (e.g., writing, speaking) and receptive (e.g., reading, listening) informal digital learning activities mediated by generative AI. Drawing from theories of consumption values and self-determination, the research model examined relationships between these variables. The findings suggest that while generative AI holds promise for informal digital language learning by facilitating imagination of an ideal multilingual self, there are opportunities to enhance functionality for L2 contexts. Nurturing an international mindset through interventions may also promote informal learning across genders. Adapting informal digital learning resources based on factors like information quality perceptions could increase engagement. This study provides insights into harnessing generative AI effectively for blended language learning solutions.
While computational thinking (CT) is crucial for modern education, integrating artificial intelligence (AI) into learning poses challenges due to its complexity. Generative AI Drawing (GAID) offers an intuitive method for teaching AI concepts, but barriers such as restrictive access for younger students and limited instructional frameworks hinder its potential. This study proposes combining GAID with game-based learning (GBL) to create an engaging, hands-on approach for teaching CT and AI literacy. A GBL environment was developed where students designed and refined robot-control board game cards. This study involved a total of 56 sixth-grade students from two elementary schools in northern Taiwan. One group, consisting of 28 students using GAID and block-based coding to foster AI literacy, was assigned as the experimental group. The other group, consisting of 28 students who did not employ GAID but relied only on a Google search engine in their learning, was assigned as the control group. The results showed that the experimental group enhanced algorithmic thinking and AI literacy, particularly in the “Create AI” dimension, compared to the control group. However, the control group excelled in CT concept mastery, suggesting that beginners need conventional instruction first to build foundational CT skills. Consequently, a balanced educational approach blending automated tools like GAID with exploratory, project-based activities is recommended to maximize learning outcomes.
ChatGPT is an artificial intelligence chatbot that utilizes advanced natural language processing technologies, including large language models, to produce human‐like responses to user queries spanning a wide range of topics from programming to mathematics. As an emerging generative artificial intelligence (GAI) tool, it presents novel opportunities and challenges to the ongoing digital transformation of education. This article employs a systematic review approach to summarize the viewpoints of Chinese scholars and experts regarding the implementation of GAI in education. The research findings indicate that a majority of Chinese scholars support the cautious integration of GAI into education as it serves as a learning tool that offers personalized educational experiences for students. However, it also raises concerns related to academic integrity and the potential hindrance to students' critical thinking skills. Consequently, a framework called DATS, which outlines an optimization path for future GAI applications in schools, is proposed. The framework takes into account the perspectives of four key stakeholders: developers, administrators, teachers, and students.
No abstract
As a new type of artificial intelligence, ChatGPT is becoming widely used in learning. However, academic consensus regarding its efficacy remains elusive. This study aimed to assess the effectiveness of ChatGPT in improving students’ learning performance, learning perception, and higher-order thinking through a meta-analysis of 51 research studies published between November 2022 and February 2025. The results indicate that ChatGPT has a large positive impact on improving learning performance (g = 0.867) and a moderately positive impact on enhancing learning perception (g = 0.456) and fostering higher-order thinking (g = 0.457). The impact of ChatGPT on learning performance was moderated by type of course (QB = 64.249, P < 0.001), learning model (QB = 76.220, P < 0.001), and duration (QB = 55.998, P < 0.001); its effect on learning perception was moderated by duration (QB = 19.839, P < 0.001); and its influence on the development of higher-order thinking was moderated by type of course (QB = 7.811, P < 0.05) and the role played by ChatGPT (QB = 4.872, P < 0.05). This study suggests that: (1) appropriate learning scaffolds or educational frameworks (e.g., Bloom’s taxonomy) should be provided when using ChatGPT to develop students’ higher-order thinking; (2) the broad use of ChatGPT at various grade levels and in different types of courses should be encouraged to support diverse learning needs; (3) ChatGPT should be actively integrated into different learning modes to enhance student learning, especially in problem-based learning; (4) continuous use of ChatGPT should be ensured to support student learning, with a recommended duration of 4–8 weeks for more stable effects; (5) ChatGPT should be flexibly integrated into teaching as an intelligent tutor, learning partner, and educational tool. 
Finally, due to the limited sample size for learning perception and higher-order thinking, and the moderately positive effect, future studies with expanded scope should further explore how to use ChatGPT more effectively to cultivate students’ learning perception and higher-order thinking.
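The effect sizes reported above (e.g., g = 0.867) are Hedges' g, a bias-corrected standardized mean difference. A short sketch of how one such value is computed from two-group summary statistics (the group means, SDs, and sample sizes below are hypothetical, not drawn from the meta-analysis):

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: Cohen's d computed from a pooled SD, scaled by a
    small-sample bias-correction factor."""
    df = n1 + n2 - 2
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (m1 - m2) / pooled_sd          # Cohen's d
    correction = 1 - 3 / (4 * df - 1)  # bias correction (Hedges, 1981)
    return d * correction

# Hypothetical posttest scores: ChatGPT-assisted group vs. control
g = hedges_g(m1=82.0, sd1=9.0, n1=30, m2=75.0, sd2=10.0, n2=30)
print(round(g, 3))  # 0.726
```

In a meta-analysis such per-study g values are then pooled (typically with inverse-variance weights), and the QB statistics cited above test whether pooled effects differ across moderator subgroups.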
Generative Artificial Intelligence (GenAI) has emerged as a promising technology that can create original content, such as text, images, and sound. The use of GenAI in educational settings is becoming increasingly popular and offers a range of opportunities and challenges. This special issue explores the management and integration of GenAI in educational settings, including the ethical considerations, best practices, and opportunities. The potential of GenAI in education is vast. By using algorithms and data, GenAI can create original content that can be used to augment traditional teaching methods, creating a more interactive and personalized learning experience. In addition, GenAI can be utilized as an assessment tool and for providing feedback to students using generated content. For instance, it can be used to create custom quizzes, generate essay prompts, or even grade essays. The use of GenAI as an assessment tool can reduce the workload of teachers and help students receive prompt feedback on their work. Incorporating GenAI in educational settings also poses challenges related to academic integrity. With the availability of GenAI models, students can use them to study or complete their homework assignments, which can raise concerns about the authenticity and authorship of the delivered work. Therefore, it is important to ensure that academic standards are maintained and the originality of the student's work is preserved. This issue highlights the need for implementing ethical practices in the use of GenAI models and ensuring that the technology is used to support and not replace the student's learning experience.
The recent advancement of pre-trained Large Language Models (LLMs), such as OpenAI's ChatGPT, has led to transformative changes across fields. For example, intelligent systems in the educational sector that leverage the linguistic capabilities of LLMs demonstrate visible potential. Though researchers have recently explored how ChatGPT could assist in student learning, few studies have applied these techniques to real-world classroom settings involving teachers and students. In this study, we implement a reading comprehension exercise generation system that provides high-quality and personalized reading materials for middle school English learners in China. Extensive evaluations of the generated reading passages and corresponding exercise questions, conducted both automatically and manually, demonstrate that the system-generated materials are suitable for students and even surpass the quality of existing human-written ones. By incorporating first-hand feedback and suggestions from experienced educators, this study serves as a meaningful pioneering application of ChatGPT, shedding light on the future design and implementation of LLM-based systems in the educational context.
This study addresses the current lack of research on the effectiveness assessment of Artificial Intelligence (AI) technology in architectural education. Our aim is to evaluate the impact of AI-assisted architectural teaching on student learning. To achieve this, we developed an AI-embedded teaching model. A total of 24 students from different countries participated in this 9-week course, completing a comprehensive analysis of architectural programming and design using AI technologies. This study conducted questionnaire surveys with students at both midterm and final stages of the course, followed by structured interviews after the course completion, to explore the effectiveness and application status of the teaching model. The results indicate that the AI-embedded teaching model positively and effectively influenced student learning. The “innovative capability” and “work efficiency” of AI technologies were identified as key factors affecting the effectiveness of the teaching model. Furthermore, the study revealed a close integration of AI technologies with architectural programming but identified challenges in the uncontrollable expression of architectural design outcomes. Student utilization of AI technologies appeared fragmented, lacking a systematic approach. Lastly, the study provides targeted optimization suggestions based on the current application status of AI technologies among students. This research offers theoretical and practical support for the further integration of AI technologies in architectural education.
The application of generative artificial intelligence in the field of education has been receiving increasing attention, with the performance of the chatbot ChatGPT being particularly prominent. This study aims to explore in depth the performance impact on higher education students utilizing ChatGPT. To this end, we conducted a survey of 448 university students and employed the partial least squares (PLS) method of structural equation modeling for data analysis. The results indicate that all eight hypothetical paths posited in this study were supported, and surprisingly, the hypothesis that technology characteristics have a direct effect on performance impact was supported. Moreover, the study found that overall quality is a crucial factor determining performance impact. Overall quality indirectly affects performance impact through task-technology fit, technology characteristics, and compatibility, among which the mediating effect of compatibility is most significant, followed by technology characteristics. This study offers practical recommendations for students on the proper use of ChatGPT during the learning process and assists developers in enhancing the services of the ChatGPT system.
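The mediation logic described here (overall quality indirectly affecting performance impact through mediators such as task-technology fit) rests on products of path coefficients. A toy sketch on synthetic data, using ordinary least squares as a stand-in for the paper's PLS estimation (all coefficients, variable names, and data are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Synthetic standardized data with a known mediation structure:
# quality -> fit (a = 0.6), fit -> impact (b = 0.5), direct c' = 0.2
quality = rng.standard_normal(n)
fit = 0.6 * quality + 0.8 * rng.standard_normal(n)
impact = 0.2 * quality + 0.5 * fit + 0.7 * rng.standard_normal(n)

def ols(X, y):
    """Least-squares coefficients (no intercept; data are near-centered)."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

a = ols(quality[:, None], fit)[0]                     # quality -> fit
b = ols(np.column_stack([quality, fit]), impact)[1]   # fit -> impact
indirect = a * b
print(round(indirect, 2))  # should land near the true 0.6 * 0.5 = 0.30
```

In the study itself, significance of such indirect effects would be assessed with PLS-specific bootstrapping rather than a single point estimate.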
Human-AI collaboration has evolved into a complex, multidimensional paradigm shaped by research in various domains. Key areas such as human-in-the-loop systems, Interactive Machine Learning (IML), Hybrid Intelligence, and Human-Agent Interaction have significantly contributed to this development. However, these fields often lack cohesion, underscoring the need for a unifying perspective to advance the area. This work addresses this gap by integrating insights from diverse aspects of collaboration to present a holistic approach to fostering effective and adaptive interactions between humans and artificial agents. It emphasizes empowering end-users with greater control and involvement in decision-making processes, thereby enhancing both interactivity and adaptability within intelligent systems. Moving beyond a focus on AI training techniques, this paper presents a broader perspective on incorporating human input into AI decision-making and learning processes, highlighting the importance of flexibility in systems and user engagement. The manuscript proposes a framework encompassing five levels of human integration and examines their relationship with core collaboration aspects, including the system purpose, participant expertise, and system proactivity. By synthesizing current knowledge on human-AI collaboration and outlining essential design principles, this work aims to advance the field and foster interdisciplinary collaboration among researchers, practitioners, and designers.
This paper investigates the integration of ChatGPT into educational environments, focusing on its potential to enhance personalized learning and the ethical concerns it raises. Through a systematic literature review, interest analysis, and case studies, the research scrutinizes the application of ChatGPT in diverse educational contexts, evaluating its impact on teaching and learning practices. The key findings reveal that ChatGPT can significantly enrich education by offering dynamic, personalized learning experiences and real-time feedback, thereby boosting teaching efficiency and learner engagement. However, the study also highlights significant challenges, such as biases in AI algorithms that may distort educational content and the inability of AI to replicate the emotional and interpersonal dynamics of traditional teacher–student interactions. The paper acknowledges the fast-paced evolution of AI technologies, which may render some findings obsolete, underscoring the need for ongoing research to adapt educational strategies accordingly. This study provides a balanced analysis of the opportunities and challenges of ChatGPT in education, emphasizing ethical considerations and offering strategic insights for the responsible integration of AI technologies. These insights are valuable for educators, policymakers, and researchers involved in the digital transformation of education.
AI-generated content (AIGC) is uniquely positioned to drive the digital transformation of professional education in the animation, comic, and game (ACG) industries. However, its collaborative application also faces initial novelty effects and user discontinuance. Existing studies often employ single-variable analytical methods, which struggle to capture the complex mechanisms influencing technology adoption. This study innovatively combines necessary condition analysis (NCA) and fuzzy-set qualitative comparative analysis (fsQCA) and applies them to the field of ACG education. Using this mixed-method approach, it systematically explores the necessary conditions and configurational effects influencing educational users’ continuance intention to adopt AIGC tools for collaborative design learning, aiming to address existing research gaps. A survey of 312 Chinese ACG educational users revealed that no single factor constitutes a necessary condition for their continuance intention to adopt AIGC tools. Additionally, five pathways leading to high adoption intention and three pathways leading to low adoption intention were identified. Notably, the absence or insufficiency of task–technology fit and perceived quality does not hinder ACG educational users’ willingness to actively adopt AIGC tools. This reflects the creativity-driven learning characteristics and the flexible, diverse tool demands of the ACG discipline. The findings provide theoretical and empirical insights to enhance the effective synergy and sustainable development between ACG education and AIGC tools.
As ChatGPT has gained popularity, more academics have looked into its application to English instruction. In this study, interviews with English teachers at a primary school in Malaysia were conducted using a qualitative research approach. The study shows how ChatGPT was utilized by Malaysian instructors in preparing English teaching, namely as an immediate aid in designing or preparing their teaching content, and what effect ChatGPT has on the teachers’ instructional strategies in terms of effectiveness and efficiency.
In the dynamic landscape of contemporary education, the evolution of teaching strategies such as blended learning and flipped classrooms has highlighted the need for efficient and effective generation of multiple-choice questions (MCQs). To address this, we introduce MCQGen, a novel generative artificial intelligence framework designed for the automated creation of MCQs. MCQGen uniquely integrates a large language model (LLM) with retrieval-augmented generation and advanced prompt engineering techniques, drawing from an extensive external knowledge base. This integration significantly enhances the ability of the LLM to produce educationally relevant questions that align with both the goals of educators and the diverse learning needs of students. The framework employs innovative prompt engineering, combining chain-of-thought and self-refine prompting techniques, to enhance the performance of the LLM. This process leads to the generation of questions that are not only contextually relevant and challenging but also reflective of common student misconceptions, contributing effectively to personalized learning experiences and enhancing student engagement and understanding. Our extensive evaluations showcase the effectiveness of MCQGen in producing high-quality MCQs for various educational needs and learning styles. The framework demonstrates its potential to significantly reduce the time and expertise required for MCQ creation, marking its practical utility in modern education. In essence, MCQGen offers an innovative and robust solution for the automated generation of MCQs, enhancing personalized learning in the digital era.
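The prompting loop MCQGen describes (a chain-of-thought draft followed by self-refinement over retrieved context) can be sketched as a pipeline of three model calls. The function below uses a stub in place of a real LLM, and the prompt wording is hypothetical, not the authors' actual prompts:

```python
def generate_mcq(topic, context, llm):
    """MCQGen-style loop (a sketch, not the authors' code): draft an
    MCQ with chain-of-thought prompting over retrieved context, then
    ask the model to critique and refine its own draft."""
    draft = llm(
        f"Context: {context}\n"
        f"Think step by step about what a student must understand about "
        f"'{topic}', then write one multiple-choice question with four "
        f"options; distractors should reflect common misconceptions."
    )
    critique = llm(
        f"Critique this MCQ for clarity and distractor quality:\n{draft}"
    )
    return llm(
        f"Rewrite the MCQ addressing this critique:\n"
        f"MCQ: {draft}\nCritique: {critique}"
    )

# Stub so the pipeline runs without an API; a real system would call a
# hosted LLM and prepend passages fetched by a retrieval (RAG) step.
def stub_llm(prompt):
    return f"[model output for: {prompt[:40]}...]"

print(generate_mcq("photosynthesis",
                   "Plants convert light energy into chemical energy.",
                   stub_llm))
```

Keeping the model behind a plain callable makes the chain testable offline and lets the retrieval step and the hosted model be swapped independently.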
The Communicative Language Teaching (CLT) approach has transformed foreign language education by prioritizing fluency, interaction, and real-world communication over rote memorization and grammar drills. This method promotes student-centered learning, task-based instruction, and the use of authentic materials, making language acquisition more engaging and effective. However, despite its advantages, CLT faces several challenges, including limited emphasis on grammatical accuracy, difficulties in assessment, and resistance in non-native English-speaking contexts. Traditional grammar-based testing often fails to measure communicative competence, highlighting the need for more effective assessment models. Additionally, teacher preparedness and classroom management remain barriers to CLT’s full implementation, especially in large class settings and regions where traditional teaching methods dominate. Future developments in blended learning, adaptive teaching strategies, and AI-driven assessment tools could help bridge the gap between fluency and linguistic accuracy, ensuring that learners develop both communication skills and structural competence. This paper explores the advantages and limitations of CLT, discussing potential solutions for integrating communicative approaches with structured learning methodologies to create a balanced and effective language teaching framework.
Performers of classical music require a blend of technical precision and artistic expression in their output, and this fusion, often referred to as “musicality,” is considered vital to performance quality. This paper introduces an innovative approach that leverages deep learning and large language models (LLMs) to simultaneously evaluate and coach musicians using recorded performances on a variety of performance metrics at varying levels of subjectivity. A case study, centered around flute players performing a challenging excerpt from Ravel’s “Daphnis et Chloé,” demonstrates the proposed model’s capabilities. Feedback is generated by a large language model based on machine-assessed quality, learned from human judgments. The model showcases promise in bridging the gap between technical precision and human expression in classical music performance assessment and provides a foundation for expanding the repertoire of assessed pieces and advancing the integration of AI in classical music education.
This paper is a reflective account that outlines the design of two Continual Professional Development (CPD) workshop sessions based on a blend of theory for design thinking about aspects of curriculum, pedagogy and technology. The theoretical approach blended aspects of design-based research, speculative design, Activity Theory and subtractive change to address issues, barriers and explore opportunities in each workshop example that is presented. The first of these workshops brought university engineering lecturers together to explore the opportunities and barriers for integrating ‘co-creation’ as a pedagogical strategy to subject teaching alongside a new interface into their curriculum. The results show how design thinking exposes limitations and challenges that prevent the realisation of pedagogically rich interventions. The second workshop brought together post-compulsory vocational lecturers to a teacher education workshop and used the same theoretical reference points to inform and antagonise the implications that Large Language Models, such as ChatGPT, present to subject knowledge, curriculum design and modes of assessment. Here these theoretically rich forms are proposed for planning use in learning design and for reshaping curricula, where academics and other professionals supporting teaching and learning may want to introduce new technologies and integrate innovative pedagogical methods or confront new challenges to their work. They may also be used as continual professional development sessions in highly participatory, practical and creative ways that allow for lucid experimentation and to imbue professionals with agency and trust.
No abstract
ChatGPT is revolutionizing the field of higher education by leveraging deep learning models to generate human-like content. However, its integration into academic settings raises concerns regarding academic integrity, plagiarism detection, and the potential impact on critical thinking skills. This article presents a study that adopts a thing ethnography approach to understand ChatGPT’s perspective on the challenges and opportunities it represents for higher education. The research explores the potential benefits and limitations of ChatGPT, as well as mitigation strategies for addressing the identified challenges. Findings emphasize the urgent need for clear policies, guidelines, and frameworks to responsibly integrate ChatGPT in higher education. It also highlights the need for empirical research to understand user experiences and perceptions. The findings provide insights that can guide future research efforts in understanding the implications of ChatGPT and similar Artificial Intelligence (AI) systems in higher education. The study concludes by highlighting the importance of thing ethnography as an innovative approach for engaging with intelligent AI systems and calls for further research to explore best practices and strategies in utilizing Generative AI for educational purposes.
The declaration of the COVID-19 pandemic forced humanity to rethink how we teach and learn. The metaverse, a 3D digital space in which the real world and the virtual world are mixed, has been heralded as a trend of future education with great potential. However, as the metaverse is an emerging field, few existing studies have discussed it from the perspective of education. In this paper, we first introduce the visions of the metaverse, including its origin, definitions, and shared features. Then, the metaverse in education is clearly defined, and a detailed framework of the metaverse in education is proposed, along with in-depth discussions of its features. In addition, four potential applications of the metaverse in education are described with reasons and cases: blended learning, language learning, competence-based education, and inclusive education. Moreover, challenges of the metaverse for educational purposes are also presented. Finally, a range of research topics related to the metaverse in education is proposed for future studies. We hope that, via this research paper, researchers with both computer science and educational technology backgrounds can gain a clear vision of the metaverse in education, and that it provides a stepping stone for future studies. We also hope this paper inspires more researchers interested in this topic to commence their own studies.
Artificial intelligence (AI) researchers have been developing and refining large language models (LLMs) that exhibit remarkable capabilities across a variety of domains and tasks, challenging our understanding of learning and cognition. The latest model developed by OpenAI, GPT-4, was trained using an unprecedented scale of compute and data. In this paper, we report on our investigation of an early version of GPT-4, when it was still in active development by OpenAI. We contend that (this early version of) GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google's PaLM for example) that exhibit more general intelligence than previous AI models. We discuss the rising capabilities and implications of these models. We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4's performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system. In our exploration of GPT-4, we put special emphasis on discovering its limitations, and we discuss the challenges ahead for advancing towards deeper and more comprehensive versions of AGI, including the possible need for pursuing a new paradigm that moves beyond next-word prediction. We conclude with reflections on societal influences of the recent technological leap and future research directions.
Contribution: This article proposes a new theoretical model with the goal of developing future human computational thinking (CT) in foundational computer science (CS) education. The model blends six critical types of thinking, i.e., logical thinking, systems thinking, sustainable thinking, strategic thinking, creative thinking, and responsible thinking, into the design of a first-year undergraduate programming course. The study describes a creative blended pedagogy that embeds the proposed model into the course plan. Background: The emergence of artificially intelligent systems such as large language models from a knowledge-provider perspective, coupled with a gradual change in the post-pandemic outlook of education, challenges the relevance of, and raises concerns about, the future of education. The 21st-century human CT requirements, viz., learning to code (skill) and thinking computationally (competency), will be inadequate in the future. Moreover, there is substantial evidence that most introductory programming courses fail to integrate critical elements like ethics and responsibility into the course. Intended Outcomes: The authors anticipate that experiential learning models such as this one have immense potential to future-proof CS education, as well as to make future software engineers responsible citizens. Application Design: The proposed model blends six types of thinking into the design and activities of the course.
The underlying theoretical basis of these activities revolves around three key principles: 1) experiential learning; 2) self-reflection; and 3) peer learning. Findings: This case study from a liberal educational institution in India qualitatively shows evidence of students developing six critical elements of thinking that shape their future CT ability.
The integration of generative artificial intelligence, particularly large language models, into education presents opportunities for both personalised learning and pedagogical challenges. This study focuses on electrical engineering laboratory education. We developed a configurable prototype of a generative artificial intelligence powered tutoring tool, implemented it in an undergraduate electrical engineering laboratory course, and analysed 208 student–tutoring tool interactions using a mixed-methods approach that combined research team evaluation with learner feedback. The findings show that student prompts were predominantly procedural or factual, with limited conceptual or metacognitive engagement. Structured prompt styles produced clearer and more coherent responses and were rated the highest by students, while approaches aimed at fostering reasoning and reflection were valued mainly by the research team for their pedagogical depth. This contrast highlights a consistent preference–pedagogy gap, indicating the need to embed stronger instructional guidance into artificial intelligence tutoring. To bridge this gap, a promising direction is the development of pedagogically enriched AI tutors that integrate features such as adaptive prompting, hybrid strategy blending, and retrieval-augmented feedback to balance clarity, engagement, and depth. The results provide practical and conceptual value relevant to educators, developers, and researchers interested in artificial intelligence tutors that are both engaging and pedagogically sound. For educators, the study clarifies how students interact with tutors, helping align artificial intelligence use with instructional goals. For developers, it highlights the importance of designing systems that combine usability with educational value. 
For researchers, the findings identify directions for further study on how design choices in artificial intelligence tutoring affect learning processes and pedagogical alignment across STEM contexts. On a broader level, the study contributes to a more transparent, equitable, and sustainable integration of generative AI in education.
Generative artificial intelligence (GenAI) tools have become increasingly accessible and have impacted school education in numerous ways. However, most of the discussion has occurred in higher education. In schools, teachers’ perspectives are crucial for making sense of innovative technologies. Accordingly, this qualitative study aims to investigate how GenAI changes school education from the perspectives of teachers and leaders. It used four domains – learning, teaching, assessment, and administration – as the initial framework, as suggested in a systematic literature review study on AI in education. The participants were 88 school teachers and leaders of different backgrounds. They completed a survey and joined a focus group to share the effects ChatGPT and Midjourney had on school education. Thematic analysis identified four main themes and 12 subthemes. The findings provide three suggestions for practice (the know-it-all attitude, new prerequisite knowledge, and interdisciplinary teaching) and three implications for policy (new assessment, AI education, and professional standards). They also suggest six future research directions for GenAI in education.
In today's rapidly evolving landscape of Artificial Intelligence, large language models (LLMs) have emerged as a vibrant research topic. LLMs find applications in various fields and contribute significantly. Despite their powerful language capabilities, LLMs, like pre-trained language models (PLMs), still face challenges in remembering events, incorporating new information, and addressing domain-specific issues or hallucinations. To overcome these limitations, researchers have proposed Retrieval-Augmented Generation (RAG) techniques, while others have proposed integrating LLMs with Knowledge Graphs (KGs) to provide factual context, thereby improving performance and delivering more accurate feedback to user queries. Education plays a crucial role in human development and progress. With the ongoing technological transformation, traditional education is being replaced by digital or blended education, and educational data in the digital environment is increasing day by day. Data in higher education institutions are diverse, comprising various sources such as unstructured/structured text, relational databases, web/app-based API access, etc. Constructing a Knowledge Graph from these cross-data sources is not a simple task. This article proposes a method for automatically constructing a Knowledge Graph from multiple data sources and discusses some initial applications (experimental trials) of KGs in conjunction with LLMs for question-answering tasks.
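The KG-grounded question-answering pattern described above can be sketched minimally: facts merged from multiple sources sit in one triple store, and the triples relevant to a query are injected into the LLM prompt as factual context. All names and data below are illustrative stand-ins, not the paper's actual HCMUT data or pipeline.

```python
# Knowledge graph as (subject, predicate, object) triples, merged from
# hypothetical structured and unstructured sources (illustrative only).
KG = [
    ("CS101", "taught_by", "Dr. Tran"),
    ("CS101", "prerequisite", "MATH100"),
    ("CS101", "credits", "3"),
    ("MATH100", "taught_by", "Dr. Le"),
]

def retrieve_facts(question: str, kg=KG):
    """Return triples whose subject or object is mentioned in the question."""
    q = question.lower()
    return [t for t in kg if t[0].lower() in q or t[2].lower() in q]

def build_prompt(question: str) -> str:
    """Assemble an augmented prompt: retrieved facts plus the user question."""
    facts = "\n".join(f"{s} {p} {o}" for s, p, o in retrieve_facts(question))
    return f"Answer using only these facts:\n{facts}\n\nQuestion: {question}"

prompt = build_prompt("What are the prerequisites for CS101?")
```

In a real system the string matching would be replaced by entity linking and graph queries, and `prompt` would be sent to an LLM; the sketch only shows where the KG context enters the generation step.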
Since its maiden release into the public domain on November 30, 2022, ChatGPT garnered more than one million subscribers within a week. The generative AI tool took the world by surprise with its sophisticated capacity to carry out remarkably complex tasks. ChatGPT's extraordinary abilities to perform complex tasks within the field of education have caused mixed feelings among educators, as this advancement in AI seems to revolutionize existing educational praxis. This exploratory study synthesizes recent extant literature to offer some potential benefits and drawbacks of ChatGPT in promoting teaching and learning. Benefits of ChatGPT include, but are not limited to, the promotion of personalized and interactive learning and the generation of prompts for formative assessment activities that provide ongoing feedback to inform teaching and learning. The paper also highlights some inherent limitations of ChatGPT, such as generating wrong information, biases in training data that may augment existing biases, and privacy issues. The study offers recommendations on how ChatGPT could be leveraged to maximize teaching and learning. Policy makers, researchers, educators, and technology experts could work together and start conversations on how these evolving generative AI tools could be used safely and constructively to improve education and support students' learning.
Believable proxies of human behavior can empower interactive applications ranging from immersive environments to rehearsal spaces for interpersonal communication to prototyping tools. In this paper, we introduce generative agents: computational software agents that simulate believable human behavior. Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day. To enable generative agents, we describe an architecture that extends a large language model to store a complete record of the agent’s experiences using natural language, synthesize those memories over time into higher-level reflections, and retrieve them dynamically to plan behavior. We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims, where end users can interact with a small town of twenty-five agents using natural language. In an evaluation, these generative agents produce believable individual and emergent social behaviors. For example, starting with only a single user-specified notion that one agent wants to throw a Valentine’s Day party, the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time. We demonstrate through ablation that the components of our agent architecture—observation, planning, and reflection—each contribute critically to the believability of agent behavior. By fusing large language models with computational interactive agents, this work introduces architectural and interaction patterns for enabling believable simulations of human behavior.
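The memory-retrieval step at the core of the generative-agents architecture can be sketched as a weighted scoring of stored natural-language memories by recency, importance, and relevance, with the top-scoring memories surfaced for planning. The decay constant, equal weights, and word-overlap relevance below are illustrative stand-ins for the paper's actual scoring (which uses embedding similarity), not its exact method.

```python
import math

def score(memory, query_words, now, w_rec=1.0, w_imp=1.0, w_rel=1.0):
    """Combine recency, importance, and relevance into one retrieval score."""
    recency = math.exp(-0.1 * (now - memory["time"]))   # decays with age
    importance = memory["importance"] / 10              # agent-assigned, 1-10
    overlap = len(query_words & set(memory["text"].lower().split()))
    relevance = overlap / max(len(query_words), 1)      # crude proxy for embedding similarity
    return w_rec * recency + w_imp * importance + w_rel * relevance

def retrieve(memories, query, now, k=2):
    """Return the k memories most useful for the current situation."""
    q = set(query.lower().split())
    return sorted(memories, key=lambda m: score(m, q, now), reverse=True)[:k]

memories = [
    {"text": "ate breakfast at the cafe", "time": 1, "importance": 2},
    {"text": "planning a valentine party", "time": 5, "importance": 9},
    {"text": "talked about the valentine party with Sam", "time": 8, "importance": 6},
]
top = retrieve(memories, "valentine party invitations", now=10)
```

Retrieved memories would then feed the reflection and planning modules, which the paper's ablation shows each contribute critically to believability.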
This paper explores the contribution of custom-trained Large Language Models (LLMs) to developing Open Educational Resources (OERs) in higher education. Our empirical analysis is based on the case of a custom LLM specialized for teaching business management in higher education. This custom LLM has been conceptualized as a virtual teaching companion, aimed to serve as an OER, and trained using the authors’ licensed educational materials. It has been designed without coding or specialized machine learning tools, using the commercially available ChatGPT Plus tool and a third-party Artificial Intelligence (AI) chatbot delivery service. This new breed of AI tools has the potential for wide implementation, as they can be designed by faculty using only conventional LLM prompting techniques in plain English. This paper focuses on the opportunities for custom-trained LLMs to create OERs and democratize academic teaching and learning. Our approach to AI chatbot evaluation is based on a mixed-mode approach, combining a qualitative analysis of expert opinions with a subsequent (quantitative) student survey. We have collected and analyzed responses from four subject experts and 204 business students at the Faculty of Economics, Business and Tourism Split (Croatia) and Faculty of Economics Mostar (Bosnia and Herzegovina). We used thematic analysis in the qualitative segment of our research. In the quantitative segment, we used statistical methods and the SPSS 25 software package to analyze student responses to the modified BUS-15 questionnaire. Research results show that students positively evaluate the business management learning chatbot and consider it useful and responsive. However, the interviewed experts raised concerns about the adequacy of chatbot answers to complex queries. They suggested that the custom-trained LLM lags behind generic LLMs (such as ChatGPT, Gemini, and others).
These findings suggest that custom LLMs might be useful tools for developing OERs in higher education. However, their training data, conversational capabilities, technical execution, and response speed must be monitored and improved. Since this research presents a novelty in the extant literature on AI in education, it requires further research on custom GPTs in education, including their use in multiple academic disciplines and contexts.
Large language models (LLMs) have demonstrated remarkable capabilities in natural language understanding and generation across various domains, including medicine. We present a comprehensive evaluation of GPT-4, a state-of-the-art LLM, on medical competency examinations and benchmark datasets. GPT-4 is a general-purpose model that is not specialized for medical problems through training or engineered to solve clinical tasks. Our analysis covers two sets of official practice materials for the USMLE, a three-step examination program used to assess clinical competency and grant licensure in the United States. We also evaluate performance on the MultiMedQA suite of benchmark datasets. Beyond measuring model performance, experiments were conducted to investigate the influence of test questions containing both text and images on model performance, probe for memorization of content during training, and study probability calibration, which is of critical importance in high-stakes applications like medicine. Our results show that GPT-4, without any specialized prompt crafting, exceeds the passing score on USMLE by over 20 points and outperforms earlier general-purpose models (GPT-3.5) as well as models specifically fine-tuned on medical knowledge (Med-PaLM, a prompt-tuned version of Flan-PaLM 540B). In addition, GPT-4 is significantly better calibrated than GPT-3.5, demonstrating a much-improved ability to predict the likelihood that its answers are correct. We also explore the behavior of the model qualitatively through a case study that shows the ability of GPT-4 to explain medical reasoning, personalize explanations to students, and interactively craft new counterfactual scenarios around a medical case. Implications of the findings are discussed for potential uses of GPT-4 in medical education, assessment, and clinical practice, with appropriate attention to challenges of accuracy and safety.
Although large language models (LLMs) have demonstrated impressive potential on simple tasks, their breadth of scope, lack of transparency, and insufficient controllability can make them less effective when assisting humans on more complex tasks. In response, we introduce the concept of Chaining LLM steps together, where the output of one step becomes the input for the next, thus aggregating the gains per step. We first define a set of LLM primitive operations useful for Chain construction, then present an interactive system where users can modify these Chains, along with their intermediate results, in a modular way. In a 20-person user study, we found that Chaining not only improved the quality of task outcomes, but also significantly enhanced system transparency, controllability, and sense of collaboration. Additionally, we saw that users developed new ways of interacting with LLMs through Chains: they leveraged sub-tasks to calibrate model expectations, compared and contrasted alternative strategies by observing parallel downstream effects, and debugged unexpected model outputs by “unit-testing” sub-components of a Chain. In two case studies, we further explore how LLM Chains may be used in future applications.
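The Chaining idea described above can be sketched minimally: each step is one focused LLM call, the output of a step becomes the input of the next, and the intermediate results are kept so users can inspect, edit, or "unit-test" sub-components in isolation. The `call_llm` stub below stands in for a real model API, and its two primitive operations are illustrative, not the paper's full primitive set.

```python
def call_llm(instruction: str, text: str) -> str:
    """Stand-in for an LLM call; a real system would query a model here."""
    if instruction == "extract key points":
        return "; ".join(s.strip() for s in text.split(".") if s.strip())
    if instruction == "rewrite as bullet list":
        return "\n".join(f"- {p.strip()}" for p in text.split(";"))
    raise ValueError(f"unknown primitive: {instruction}")

def run_chain(steps, text):
    """Run each primitive in order; keep intermediates for inspection."""
    intermediates = []
    for instruction in steps:
        text = call_llm(instruction, text)
        intermediates.append((instruction, text))
    return text, intermediates

output, trace = run_chain(
    ["extract key points", "rewrite as bullet list"],
    "LLMs are powerful. They lack transparency.",
)
```

Because every `(instruction, result)` pair in `trace` is exposed, a user can rerun or modify a single step without restarting the whole chain, which is the source of the transparency and controllability gains the study reports.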
Recent advancements in artificial intelligence (AI) and specifically generative AI (GenAI) are threatening to fundamentally reshape computing and society. Largely driven by large language models (LLMs), many tools are now able to interpret and generate both natural language instructions and source code. These capabilities have sparked urgent questions in the computing education community around how educators should adapt their pedagogy to address the challenges and to leverage the opportunities presented by this new technology. In this working group report, we undertake a comprehensive exploration of generative AI in the context of computing education and make five significant contributions. First, we provide a detailed review of the literature on LLMs in computing education and synthesise findings from 71 primary articles, nearly 80% of which have been published in the first 8 months of 2023. Second, we report the findings of a survey of computing students and instructors from across 20 countries, capturing prevailing attitudes towards GenAI/LLMs and their use in computing education contexts. Third, to understand how pedagogy is already changing, we offer insights collected from in-depth interviews with 22 computing educators from five continents. Fourth, we use the ACM Code of Ethics to frame a discussion of ethical issues raised by the use of large language models in computing education, and we provide concrete advice for policy makers, educators, and students. Finally, we benchmark the performance of several current GenAI models/tools on various computing education datasets, and highlight the extent to which the capabilities of current models are rapidly improving.
No abstract
This paper studies the efficiency of project-based professional learning in improving writing skills for English language development. The research is based on a selected group of students enrolled in different programs at the university level. The traditional mode of teaching writing skills is compared with a modern, technology-aided tool, i.e., ChatGPT, an OpenAI tool (CHAT-OAI). The targeted group was taught in the traditional way and was then assessed with the latest version of CHAT-OAI. The treatment group was assessed through a mixed-methods approach: a quasi-experimental design with a pre-test and post-test experimental set-up in which the impact of technology tools on professional writing topics is studied, especially in connection with business correspondence needs such as letter writing, email writing, report writing, notices, memorandums, office orders, circulars, agendas, and minutes of meetings. This was followed by open-ended questions to obtain an overview of students' perceptions and feedback on the blended teaching tactics introduced in the classrooms. The data analysis suggests that the technology-aided ChatGPT could replace the conventional mode of language learning. Technology assists students in improving their professional writing skills. The study reveals that students and teachers have a keen interest in improving their current practice of developing professional writing content, as it appears to be of higher quality and more time-saving, further enhancing the confidence of working professionals.
The aim of this paper is to discuss the role and impact of generative artificial intelligence (AI) systems in higher education. The proliferation of AI models such as GPT-4, Open Assistant and DALL-E presents a paradigm shift in information acquisition and learning. This transformation poses substantial challenges for traditional teaching approaches and the role of educators. The paper explores the advantages and potential threats of using generative AI in education and necessary changes in curricula. It further discusses the need to foster digital literacy and the ethical use of AI. The paper’s findings are based on a survey conducted among university students exploring their usage and perception of these AI systems. Finally, recommendations for the use of AI in higher education are offered, which emphasize the need to harness AI's potential while mitigating its risks. This discourse aims at stimulating policy and strategy development to ensure relevant and effective education in the rapidly evolving digital landscape.
Artificial intelligence (AI) is redefining medical education, bringing new dimensions of personalized learning, enhanced visualization and simulation-based clinical training to the forefront. Additionally, AI-powered simulations offer realistic, immersive training opportunities, preparing students for complex clinical situations and fostering interprofessional collaboration skills essential for modern healthcare. However, the integration of AI into medical education presents challenges, particularly around ethical considerations, skill atrophy due to overreliance and the exacerbation of the digital divide among educational institutions. Addressing these challenges demands a balanced approach that includes blended learning models, digital literacy and faculty development to ensure AI serves as a supplement to, rather than a replacement for, core medical competencies. As medical education evolves alongside AI, institutions must prioritize strategies that preserve human-centred skills while advancing technological innovation to prepare future healthcare professionals for an AI-enhanced landscape.
No abstract
As artificial intelligence transforms the landscape of language technologies, advanced natural language processing models like GPT-4 are poised to revolutionize translator training paradigms. This mixed-methods study examined the integration of GPT-4 into translator education to harness its potential while retaining human expertise as the core. Structured translation prompts demonstrated GPT-4’s prowess in technical translations, but the model faces challenges in capturing complex literary and cultural subtleties, necessitating measured integration approaches. Interviews with experts in AI-enabled pedagogy advocated blended learning models judiciously combining GPT-4’s capabilities with immersive human training focused on creativity and cultural awareness. Direct observations of translator trainees showed benefits from GPT-4 usage, like personalized feedback and the need for human collaboration in complex cases. Cross-case analysis revealed variances in aptitude across diverse text genres and subjects, demanding tailored deployment strategies. While recognizing the risks associated with overdependence and taking into account ethical considerations, findings indicate an immense potential for GPT-4 to enrich pedagogy if integrated prudently in a human-centric manner. This underscores a balanced approach harnessing AI to amplify competencies without compromising the irreplaceable human essence underpinning high-quality, ethical translation. Keywords: ChatGPT, Translation, GPT-4, Translator training, Human-AI collaboration
Purpose: Academic institutions, for the most part, discontinued face-to-face classes in favor of adopting and deploying online learning modalities that allowed for immediate participation. The pandemic has hastened the pace of implementation as well as the utilization of and reliance on technology. Artificial intelligence (AI) is important for higher education business continuity. Currently, some institutions are utilizing these resources to strengthen their student recruitment and retention efforts. Others use them to make the classroom more accessible or to construct tailored learning programs. Design/methodology/approach: The rapid spread of the deadly COVID-19 pandemic in early 2020 compelled many countries to enact stringent measures to halt the virus's spread. The pandemic has hastened the adoption of online teaching and remote work technology. While a combination of online and face-to-face learning is the way of the future, it will necessitate additional resources to support program development and delivery, as well as increased collaboration between IT and subject matter experts. Findings: This successful technological integration, which includes a smooth transition from face-to-face training to digital e-courses, provides a variety of benefits, including money saved on travel expenses. Top technological developments will continue to enhance company innovation and efficiency while also improving service efficiency. The top strategic technology trends for this year fall into three categories (human centricity, location independence, and resilient delivery) and are expected to be significant for the next five to ten years. Higher Education Institutions (HEIs) will need to establish a technological ecosystem that is dependable, cloud-based, data-integrated, and learning-focused to compete successfully in this "new normal." After the epidemic, when classes resume on campus, a hybrid approach to virtual learning is likely to become the new normal.
While campuses are unlikely to become entirely virtual, they are also unlikely to remain entirely physical. Originality/value: A blend of actual and virtual classrooms, as well as online learning, is the long-term solution, and strategic decisions made now will be critical in preparing for a post-pandemic world.
The paper discusses the potential of ChatGPT and other AI technologies to diffuse ecohumanism in educational settings. Ecohumanism merges ecological and humanistic values to foster sustainable and ethical interactions between humans and the natural environment. Blended use of AI in education has the potential to bring paradigm changes in learning characteristics, personalizing experiences and enhancing ecological literacy while bridging interdisciplinary collaboration. Using case studies from institutions such as UC Berkeley, MIT, Stanford University, the University of Edinburgh, and the University of Dunaujvaros, it illustrates how AI tools may make learning environments engaging, effective, and ethically responsible. Some of the ethical and practical issues involved in integrating AI are also spelled out, and the paper provides future research directions for maximizing the potential benefits of combining AI with ecohumanistic education.
Preparing students to collaborate with AI remains a challenging goal. As AI technologies are new to K-12 schools, there is a lack of studies that inform how to design learning when AI is introduced as a collaborative learning agent in classrooms. The present study, therefore, aimed to explore teachers' perspectives on what (1) curriculum design, (2) student-AI interaction, and (3) learning environments are required to design student-AI collaboration (SAC) in learning, and (4) how SAC would evolve. Through in-depth interviews with 10 leading Korean teachers in AI in Education (AIED), the study found that teachers perceived capacity and subject-matter knowledge building as the optimal learning goals for SAC. SAC can be facilitated through interdisciplinary learning, authentic problem solving, and creative tasks, in tandem with process-oriented assessment and collaboration performance assessment. While teachers expressed that instruction on AI principles, data literacy, error analysis, AI ethics, and AI experiences in daily life was crucial support, AI needs to offer instructional scaffolding and possess the attributes of a learning mate to enhance student-AI interaction. In addition, teachers highlighted that systematic AIED policy, a flexible school system, a culture of collaborative learning, and a safe-to-fail environment are significant. Teachers further anticipated that students would develop collaboration with AI through three stages: (1) learn about AI, (2) learn from AI, and (3) learn together. These findings can provide a more holistic understanding of AIED and implications for educational policies, educational AI design, and instructional design aimed at enhancing SAC in learning.
The rapid evolution of e-learning platforms, propelled by advancements in artificial intelligence (AI) and machine learning (ML), presents a transformative potential in education. This dynamic landscape necessitates an exploration of AI/ML integration in adaptive learning systems to enhance educational outcomes. This study aims to map the current utilization of AI/ML in e-learning for adaptive learning, elucidating the benefits and challenges of such integration and assessing its impact on student engagement, retention, and performance. A comprehensive literature review was conducted, focusing on articles published from 2010 onwards, to document the integration of AI/ML in e-learning. The review analyzed 63 articles, employing a systematic approach to evaluate the deployment of adaptive learning algorithms and their educational implications. Findings reveal that AI/ML algorithms are instrumental in personalizing learning experiences. These technologies have been shown to optimize learning paths, enhance engagement, and improve academic performance, with some studies reporting increased test scores. The integration of AI/ML in e-learning platforms significantly contributes to the personalization and effectiveness of the educational process. Despite challenges like data privacy and the complexity of AI/ML systems, the results underscore the potential of adaptive learning to revolutionize education by catering to individual learner needs.
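The personalization of learning paths described above can be sketched with a toy selector that keeps a running mastery estimate per topic and recommends the weakest one next. This is a minimal illustration of the adaptive-learning idea only; the exponential-moving-average update rule, the class name and the topic names are invented for the example, not drawn from any study in the review.

```python
# Minimal sketch of an adaptive learning-path selector (illustrative only).

class AdaptiveLearner:
    def __init__(self, topics, alpha=0.3):
        # Start every topic at an uninformed mastery estimate of 0.5.
        self.mastery = {t: 0.5 for t in topics}
        self.alpha = alpha  # how strongly new evidence moves the estimate

    def record(self, topic, correct):
        # Exponential moving average: recent answers weigh more than old ones.
        old = self.mastery[topic]
        self.mastery[topic] = (1 - self.alpha) * old + self.alpha * (1.0 if correct else 0.0)

    def next_topic(self):
        # Personalized path: practice the topic with the lowest mastery next.
        return min(self.mastery, key=self.mastery.get)


learner = AdaptiveLearner(["fractions", "decimals", "percentages"])
learner.record("fractions", correct=True)   # mastery rises to 0.65
learner.record("decimals", correct=False)   # mastery falls to 0.35
print(learner.next_topic())  # → decimals
```

Real systems reviewed in the literature replace the moving average with richer learner models (e.g. knowledge tracing), but the control loop, observe, update, select, is the same.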
Technology breakthroughs are driving a significant transition in the educational sector. Blending technology with conventional teaching has increased the efficiency of educational assessment. Traditional score management frequently relies on laborious human grading, which delays feedback and introduces inefficiencies; human subjectivity and inconsistent grading practices can also bias evaluations. This bias can be reduced by allowing students to submit their answer sheets electronically, to be rigorously analyzed by AI-powered algorithms. The study proposes a system that replaces conventional score management with such automated analysis, marking a fundamental shift in schooling. Overall, the system aims for effectiveness, openness, and collaboration in education and promises to improve the learning environment for both teachers and students.
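The consistency argument above, that explicit criteria applied uniformly remove grader-to-grader variation, can be shown with a deliberately simple rubric scorer. The rubric, the sample answer and the keyword-matching approach are invented placeholders; the system described in the study would use AI models rather than substring matching, but the property illustrated (the same answer always receives the same score) is the one the abstract appeals to.

```python
# Illustrative rubric-driven scorer: deterministic, criterion-by-criterion.

def score_answer(answer: str, rubric: dict[str, int]) -> int:
    """Award points for each rubric criterion found in the answer."""
    text = answer.lower()
    return sum(points for phrase, points in rubric.items() if phrase in text)

# Hypothetical rubric for a biology question.
rubric = {
    "photosynthesis": 2,   # names the process
    "chlorophyll": 1,      # identifies the pigment
    "carbon dioxide": 1,   # lists an input
    "oxygen": 1,           # lists an output
}

answer = "Plants use photosynthesis to turn carbon dioxide into sugar and oxygen."
print(score_answer(answer, rubric))  # → 4, identically on every run and for every grader
```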
No abstract
No abstract
Transformative artificially intelligent tools, such as ChatGPT, designed to generate sophisticated text indistinguishable from that produced by a human, are applicable across a wide range of contexts. The technology presents opportunities as well as, often ethical and legal, challenges, and has the potential for both positive and negative impacts for organisations, society, and individuals. Offering multi-disciplinary insight into some of these, this article brings together 43 contributions from experts in fields such as computer science, marketing, information systems, education, policy, hospitality and tourism, management, publishing, and nursing. The contributors acknowledge ChatGPT’s capabilities to enhance productivity and suggest that it is likely to offer significant gains in the banking, hospitality and tourism, and information technology industries, and enhance business activities, such as management and marketing. Nevertheless, they also consider its limitations, disruptions to practices, threats to privacy and security, and consequences of biases, misuse, and misinformation. However, opinion is split on whether ChatGPT’s use should be restricted or legislated. Drawing on these contributions, the article identifies questions requiring further research across three thematic areas: knowledge, transparency, and ethics; digital transformation of organisations and societies; and teaching, learning, and scholarly research. The avenues for further research include: identifying skills, resources, and capabilities needed to handle generative AI; examining biases of generative AI attributable to training datasets and processes; exploring business and societal contexts best suited for generative AI implementation; determining optimal combinations of human and generative AI for various tasks; identifying ways to assess accuracy of text produced by generative AI; and uncovering the ethical and legal issues in using generative AI across different contexts.
Abstract Recent advances in generative artificial intelligence (AI) and multimodal learning analytics (MMLA) have allowed for new and creative ways of leveraging AI to support K12 students' collaborative learning in STEM+C domains. To date, there is little evidence of AI methods supporting students' collaboration in complex, open‐ended environments. AI systems are known to underperform humans in (1) interpreting students' emotions in learning contexts, (2) grasping the nuances of social interactions and (3) understanding domain‐specific information that was not well‐represented in the training data. As such, combined human and AI (ie, hybrid) approaches are needed to overcome the current limitations of AI systems. In this paper, we take a first step towards investigating how a human‐AI collaboration between teachers and researchers using an AI‐generated multimodal timeline can guide and support teachers' feedback while addressing students' STEM+C difficulties as they work collaboratively to build computational models and solve problems. In doing so, we present a framework characterizing the human component of our human‐AI partnership as a collaboration between teachers and researchers. To evaluate our approach, we present our timeline to a high school teacher and discuss the key insights gleaned from our discussions. Our case study analysis reveals the effectiveness of an iterative approach to using human‐AI collaboration to address students' STEM+C challenges: the teacher can use the AI‐generated timeline to guide formative feedback for students, and the researchers can leverage the teacher's feedback to help improve the multimodal timeline. Additionally, we characterize our findings with respect to two events of interest to the teacher: (1) when the students cross a difficulty threshold, and (2) the point of intervention, that is, when the teacher (or system) should intervene to provide effective feedback.
It is important to note that the teacher explained that there should be a lag between (1) and (2) to give students a chance to resolve their own difficulties. Typically, such a lag is not implemented in computer‐based learning environments that provide feedback. Practitioner notes What is already known about this topic Collaborative, open‐ended learning environments enhance students' STEM+C conceptual understanding and practice, but they introduce additional complexities when students learn concepts spanning multiple domains. Recent advances in generative AI and MMLA allow for integrating multiple datastreams to derive holistic views of students' states, which can support more informed feedback mechanisms to address students' difficulties in complex STEM+C environments. Hybrid human‐AI approaches can help address collaborating students' STEM+C difficulties by combining the domain knowledge, emotional intelligence and social awareness of human experts with the general knowledge and efficiency of AI. What this paper adds We extend a previous human‐AI collaboration framework using a hybrid intelligence approach to characterize the human component of the partnership as a researcher‐teacher partnership and present our approach as a teacher‐researcher‐AI collaboration. We adapt an AI‐generated multimodal timeline to actualize our human‐AI collaboration by pairing the timeline with videos of students encountering difficulties, engaging in active discussions with a high school teacher while watching the videos to discern the timeline's utility in the classroom. From our discussions with the teacher, we define two types of inflection points to address students' STEM+C difficulties, the difficulty threshold and the intervention point, and discuss how the feedback latency interval separating them can inform educator interventions.
We discuss two ways in which our teacher‐researcher‐AI collaboration can help teachers support students encountering STEM+C difficulties: (1) teachers using the multimodal timeline to guide feedback for students, and (2) researchers using teachers' input to iteratively refine the multimodal timeline. Implications for practice and/or policy Our case study suggests that timeline gaps (ie, disengaged behaviour identified by off‐screen students, pauses in discourse and lulls in environment actions) are particularly important for identifying inflection points and formulating formative feedback. Human‐AI collaboration exists on a dynamic spectrum and requires varying degrees of human control and AI automation depending on the context of the learning task and students' work in the environment. Our analysis of this human‐AI collaboration using a multimodal timeline can be extended in the future to support students and teachers in additional ways, for example, designing pedagogical agents that interact directly with students, developing intervention and reflection tools for teachers, helping teachers craft daily lesson plans and aiding teachers and administrators in designing curricula.
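The two inflection points described in this paper, a difficulty-threshold crossing followed, after a feedback-latency lag, by an intervention point, can be sketched as a small detector over a difficulty signal. The function, the threshold value, the lag length and the sample signal are all illustrative assumptions, not the paper's actual method; the sketch only demonstrates the logic of delaying intervention so students first get a chance to recover on their own.

```python
# Sketch: find the intervention point for a per-timestep difficulty signal.

def intervention_point(difficulty, threshold=0.7, lag=3):
    """Return the index at which to intervene, or None.

    Intervene only if difficulty stays at or above `threshold` for `lag`
    consecutive time steps; a dip below the threshold resets the clock,
    modelling the student resolving the difficulty unaided.
    """
    above = 0
    for i, d in enumerate(difficulty):
        above = above + 1 if d >= threshold else 0
        if above == lag:
            return i
    return None  # students resolved the difficulty on their own

# Difficulty rises, dips briefly (self-recovery attempt), then stays high.
signal = [0.2, 0.4, 0.8, 0.5, 0.75, 0.8, 0.9]
print(intervention_point(signal))  # → 6: threshold crossed at t=2, intervention deferred to t=6
```

Note the contrast with typical computer-based learning environments mentioned above: those would intervene at the first crossing (t=2), with no latency interval.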
Artificial intelligence (AI) is becoming increasingly ubiquitous in all areas of life. A concrete example is the urgent need to develop new educational practices in the post-pandemic world, as called for in the OECD 2021 Report on Digital Education (OECD, 2021). Furthermore, it has been estimated that more than half of the tasks associated with almost half of all jobs have been exposed to AI, and at least 19% of tasks associated with more than 80% of jobs have been exposed. As such, the potential of AI in enhancing expert work by supporting decision making, automating routine tasks and fostering innovation to solve complex problems has become increasingly evident. However, AI also introduces significant risks, such as the loss of human expertise, ethical concerns and the potential for overreliance on automated systems (Filippucci et al., 2024). The emerging paradigm of hybrid intelligence (HI) can offer promising solutions that strike a balance between mitigating the negative impacts of AI disruption and supporting learners' and workers' re-skilling and upskilling in a world of growing population and societal challenges (Akata et al., 2020). However, current data-driven AI systems are still too narrow to help humans, lacking in social and emotional intelligence and restricted in their ability to produce realistic and applicable results (Cui & Yasseri, 2024). Humans are unique in that we are capable of creative and flexible thinking—connecting thinking and action to long-term aims, values and purposes—and we can judge activities and purposes from an ethical point of view. While the HI in education research is still in its early stages, this special section introduces some early works, especially highlighting human–AI coevolution and learning, which can be expected to impact education, well-being and quality of life. 
This special section highlights ways to advance multidisciplinary research at the cross-section of the learning sciences and computer sciences to generate future AI-based solutions for various fields such as education. We hope that this special section pushes forward discussions on the role of digital data in multimodal analytics (Cukurova et al., 2020) and AI-based methods in future educational technologies (Raković et al., 2023). We begin by introducing the concept of HI, exploring how human intelligence can be effectively integrated with AI and identifying key research directions necessary to advance this emerging field. We introduce the state of the art in HI research and how this special section will contribute to research and practice. Finally, we discuss future directions in HI research. Hybrid intelligence is an emerging field that seeks to bridge the gap between human intelligence and AI. By combining the strengths of both humans and machines, HI aims to create systems that outperform either humans or machines working independently. In other words, HI aims to combine the strengths of both humans and machines through their coevolutionary processes to collaborate, learn from and reinforce each other (Järvelä, Zhao, et al., 2023). This is a key difference from AI, which is designed to work independently to perform tasks that normally require human intelligence, such as perception and learning (Russell & Norvig, 2010). Despite significant advancements in AI, many systems remain opaque (ie, ‘black box’) models, which complicates collaboration between humans and machines due to a lack of transparency and interpretability (Rosé et al., 2019). This gap creates challenges in trust and effective interaction, particularly in human-centric environments. We argue that the successful development of HI requires fundamentally new solutions to address core AI challenges.
While AI outperforms humans in tasks like pattern recognition and machine learning, it lags in essential human attributes such as emotional intelligence, collaboration, adaptability, responsibility and explainability. These uniquely human qualities are crucial for effective teamwork, ethical decision making and the navigation of complex social interactions, areas where AI still struggles to match human performance. To overcome these limitations, current data-driven AI paradigms cannot be the ultimate solution. Instead, HI aims to integrate AI with uniquely human abilities, fostering a partnership where both systems reinforce one another. Achieving this requires a comprehensive, multidisciplinary approach, incorporating insights from fields like cognitive science and psychology to develop AI systems that are more adaptive, responsible and transparent. These advancements are essential in transforming AI from a mere tool into a trusted partner in real-world applications. To achieve this, not only is technological innovation required, but so is a critical reexamination of how we align machine intelligence with human values, ethics and objectives. While there is a growing consensus about the importance of the HI paradigm, we still lack a robust theoretical and conceptual framework for understanding human intelligence and learning processes that can be effectively augmented by HI. We also lack an understanding of how to facilitate the information exchange and mutual learning between AI and humans with new data-processing methods and computational models. Since HI systems and solutions are currently under development, empirical evidence of HI is scarce. We thus identify several research themes that require greater attention from the global research community and highlight topics addressed by the papers featured in this special section. Decades of research have shown that AI, as an information processor, is superior to humans. 
Currently, there is growing interest in artificial general intelligence (AGI), which aims to match human intelligence across all tasks, and even artificial superintelligence (ASI), which aims to exceed it. These topics have recently been highlighted within academia and the broader public (Ororbia & Friston, 2023). While achieving AGI is a major goal of AI research, it also raises significant philosophical, ethical and technical questions regarding its safety, control and potential impact on society. Furthermore, the idea of ASI raises critical ethical and existential risks, as it could become uncontrollable and surpass human authority, leading to scenarios where humanity may struggle to manage or direct its actions. Nevertheless, intelligence remains a complex phenomenon that encompasses several abilities (Bereiter & Scardamalia, 1993). It includes the ability to learn, understand, reason, make decisions and adapt to new situations. A key human strength lies in our ability to plan, monitor and control our own learning processes (Zimmerman, 1989), coupled with a nuanced and comprehensive understanding of context. This metacognitive capacity, the psychological ability to monitor and regulate one's thoughts and behaviours, is fundamental to effective learning and adaptation. Empirical research has shown that skilful and advanced learners use metacognitive skills to guide their thinking and studying, which makes them agentic learners (Flavell, 1979). However, recent research has shown that if students use OpenAI applications, such as ChatGPT, to automate thinking in their tasks, their agency may be detrimentally affected (Darvishi et al., 2024). AI significantly impacts human decision making and may diminish it, along with cognition (Ahmad et al., 2023), if students do not exercise their own metacognitive monitoring and strategies.
We believe that data-driven AI is often limited in its understanding of context and cause–effect relationships, as AI operates primarily on correlations rather than deductive reasoning. This can result in unintended consequences in decision making and ethical dilemmas. Hence, we need human intelligence to guide and regulate current AI paradigms. Ethically, human oversight is crucial to ensure that AI systems align with societal values, promote fairness and are held accountable for their actions, preventing harmful biases or unethical outcomes (Nikolinakos, 2023). Cognitively, human intelligence is necessary to bridge the gap where AI falls short, particularly in areas requiring common sense reasoning, contextual understanding and adaptability to novel or ambiguous situations that AI systems struggle to handle effectively (Arslan, 2024). Most importantly, humans bring empathy, emotional intelligence, and the capacity to understand and respond to contextualized and complex emotional cues, enabling more meaningful interactions, especially in areas such as health care, education and customer service. By integrating these human qualities into AI systems, we can create more trustworthy, reliable and emotionally intelligent hybrid solutions that not only perform tasks efficiently but also resonate with human needs and values. Cukurova (2024) introduced a vision for HI that builds on the interplay of learning, analytics and artificial intelligence in education. He proposed a multi-dimensional view of AI's role in learning and education, emphasizing the intricate interplay between AI, analytics and learning processes, especially stressing the relationship between human control and automation. Since HI requires effective dynamic interactions between humans and AI, Cukurova's (2024) conceptual framework for human–AI interaction in education is rooted in, and builds upon, the significant differences between human intelligence and artificial information processing.
This framework is extremely useful for HI research, as it recognizes the stages contributing to the development of hybrid human–AI systems: the externalization of human cognition, the internalization of AI models to influence human mental models and the extension of human cognition via tightly coupled human–AI HI systems. There are numerous challenges in conceptualizing and developing HI systems. For example, how do we develop AI systems that work in synergy with humans, how do these systems learn from and adapt to their environments, and how can humans and AI share and explain their goals and strategies to each other (Akata et al., 2020)? The integration of humans and AI also brings up ethical issues, including the need for transparency, accountability and assurance that AI systems align with human values and do not perpetuate biases. In this special section, we highlight two important research themes: data and algorithms assisting HI research, and understanding the core human learning mechanisms needed to adapt and interact within HI systems. Data and algorithms have been widely employed to understand and assess human learning and intelligence (Azevedo & Gašević, 2019; Blikstein & Worsley, 2016; Nguyen et al., 2020). The development of learning analytics has made it possible to integrate data from various channels, such as logs of activity in learning environments and records of social interactions, into insights about the cognitive and emotional processes underlying learning. Integrated, these data support a more comprehensive understanding of complex and dynamic learning processes than any single stream can provide (Raković et al., 2023). Learning analytics also enables advanced algorithms that inform both human learning and decision making (Blikstein & Worsley, 2016), and intelligent systems can use these insights to adapt learning tasks to individual learners.
Metacognition is crucial in fostering self-regulated learning, enabling learners to reflect on their goals and adapt their strategies. To support more effective information exchange and mutual learning between humans and AI, researchers are exploring computational models that combine human contextual understanding with machine processing. Machine learning methods capable of analysing multimodal data, together with advances in areas such as computer vision and emotion recognition, are enabling AI systems to interpret human emotional and cognitive states more effectively, for example by integrating several data streams to reach a richer understanding of the learner. Advances in learning analytics and the learning sciences are likewise allowing AI methods to extract more nuanced insights from educational data; the aim is not only to describe learning but also to augment human intelligence and thereby support the development of HI.
A central question concerns the core human learning mechanisms that need to be understood if people are to adapt and interact within dynamic hybrid learning systems. Self-regulated learning is one such mechanism (Zimmerman, 1989): learners exercise agency by setting goals, monitoring their progress towards them and controlling their learning actions and strategies accordingly, and this cycle of monitoring and control allows them to adapt their learning to achieve their goals. Advancements in the field have made it possible to use multimodal data and advanced analytics to understand complex human learning processes (Azevedo & Gašević, 2019), and such data can be collected and integrated at scales beyond the capacity of human observers. A concern, however, is that these data-driven processes can become detached from theories of human learning. Learning technologies powered by AI have introduced new ways of shaping how learners regulate their own learning, but they also introduce methodological challenges: models trained on limited data channels capture only observable interactions, while cognitive and emotional processes, the interplay between cognition and emotion, and differences among individual learners remain difficult to model, highlighting the need for richer data and more robust models.
HI research is also increasingly focused on the human factors that influence interactions between humans and AI systems, asking how psychological and social factors shape engagement with AI. For example, studies have shown that trust is a crucial determinant of effective human–AI interaction: learners must neither over- nor under-rely on AI systems, and researchers are exploring how qualities like transparency and explainability contribute to calibrated trust. By understanding these human factors, the goal is to create HI systems capable of fostering genuine collaboration between humans and AI. Looking forward, the future of HI research lies in the integration of AI systems with human cognitive abilities. This includes advancements that enable more effective interaction between humans and AI, as well as AI models that can understand and adapt to human emotional and social states. Multidisciplinary collaboration will be essential, combining insights from fields such as cognitive psychology and ethics to create AI systems that are not only intelligent but also responsible and trustworthy.
The papers in this special section discuss key applications of HI in decision making and other cognitive tasks, leveraging the strengths of both humans and machines to achieve outcomes that neither could achieve alone. They draw on fields such as machine learning, the learning sciences, education, learning analytics and computer science, offering a multidisciplinary perspective on AI, and cluster around three themes: AI for assisting humans and machines in understanding each other; AI for generating insights into human learning processes; and human–AI collaboration research. Among the contributions are studies of the metacognitive effects of AI on learning, including the concern that technologies such as ChatGPT may promote learners' over-reliance on automation and weaken metacognitive engagement; work on how generative models can support learning design; research on dynamic scaffolding suggesting that dynamically adapted support benefits learners across both complex and simpler tasks; an intelligent educational system evaluated with learners; an HI approach to improving the quality of feedback in learning environments through AI-based multimodal data analysis, suggesting that integrating AI with human judgement can raise the quality and impact of feedback; a study of how a teacher–researcher–AI collaboration around an AI-generated multimodal timeline can guide formative feedback as students collaboratively build computational models and solve problems, introducing two inflection points of interest, the difficulty threshold and the point of intervention; a metacognitive AI assistant designed to support students' metacognitive skills; and an examination of cognitive processes in digital reading through a hybrid-intelligence approach that combines machine intelligence with human expert annotation to identify behaviours essential to learners' use of critical thinking skills.
Cukurova (2024) observes that humans excel in contextual judgement and complex decision making under uncertainty, whereas AI systems are superior in the speed and scale of well-defined tasks; in the HI paradigm, the synergy of these strengths preserves the unique contributions of both humans and AI. The future of HI research therefore calls for AI systems that are aligned with human abilities, values and societal goals. A priority is the synergy between human intelligence and AI to create learning environments that foster metacognitive skills and understanding. Unlike AI designed to automate tasks, HI emphasizes mutual interaction and learning that enhances human cognitive processes, and this requires new data-processing methods and computational models that facilitate effective information exchange. HI can also transform education through augmented and adaptive environments that respond to learners' emotional and cognitive states.
Finally, the ethical integration of AI in education demands sustained attention. As AI solutions take on more tasks and process increasingly sensitive multimodal data, from behaviour to emotional cues, the importance of ethics grows: over-reliance on automated systems could erode essential human skills such as decision making, so research should prioritize AI systems that preserve human agency and adapt to human needs (Järvelä et al., 2023). By incorporating emotional intelligence and empathy, AI can align more closely with human values. As AI systems become more integrated into learning environments, a major concern is ensuring that human agency is not undermined; HI research should therefore promote AI that augments, rather than replaces, human decision making and emotional engagement.
Abstract This study explores the role of generative AI (GenAI) in providing formative feedback in children's digital learning experiences, specifically in the context of mathematics education. Using multimodal data, the research compares AI‐generated feedback with feedback from human instructors, focusing on its impact on children's learning outcomes. Children engaged with a digital body‐scale number line to learn addition and subtraction of positive and negative integers through embodied interaction. The study followed a between‐group design, with one group receiving feedback from a human instructor and the other from GenAI. Eye‐tracking data and system logs were used to evaluate students' information processing behaviour and cognitive load. The results revealed that while task‐based performance did not differ significantly between conditions, the GenAI feedback condition demonstrated lower cognitive load, and students showed different visual information processing strategies across the two conditions. The findings provide empirical support for the potential of GenAI to complement traditional teaching by providing structured and adaptive feedback that supports efficient learning. The study underscores the importance of hybrid intelligence approaches that integrate human and AI feedback to enhance learning through synergistic feedback. This research offers valuable insights for educators, developers and researchers aiming to design hybrid AI‐human educational environments that promote effective learning outcomes. Practitioner notes What is already known about this topic? Embodied learning approaches have been shown to facilitate deeper cognitive processing by engaging students physically with learning materials, which is especially beneficial in abstract subjects like mathematics. GenAI has the potential to enhance educational experiences through personalized feedback, making it crucial for fostering student understanding and engagement.
Previous research indicates that hybrid intelligence that combines AI with human instructors can contribute to improved educational outcomes. What this paper adds? This study empirically examines the effectiveness of GenAI‐generated feedback when compared to human instructor feedback in the context of a multisensory environment (MSE) for math learning. Findings from system logs and eye‐tracking analysis reveal that GenAI feedback can support learning effectively, particularly in helping students manage their cognitive load. The research uncovers that GenAI and teacher feedback lead to different information processing strategies. These findings provide actionable insights into how feedback modality influences cognitive engagement. Implications for practice and/or policy The integration of GenAI into educational settings presents an opportunity to enhance traditional teaching methods, enabling an adaptive learning environment that leverages the strengths of both AI and human feedback. Future educational practices should explore hybrid models that incorporate both AI and human feedback to create inclusive and effective learning experiences, adapting to the diverse needs of learners. Policymakers should establish guidelines and frameworks to facilitate the ethical and equitable adoption of GenAI technologies for learning. This includes addressing issues of trust, transparency and accessibility to ensure that GenAI systems are effectively supporting, rather than replacing, human instructors.
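The eye-tracking comparison above rests on gaze metrics. As an illustration only (the metric choice and the data below are hypothetical, not the study's), one common cognitive-load proxy is mean fixation duration, which can be computed directly from timestamped fixation logs:

```python
from statistics import mean

def mean_fixation_duration(fixations):
    """Average fixation duration in ms over (start_ms, end_ms) pairs.
    Longer fixations are often read as deeper processing / higher load."""
    return mean(end - start for start, end in fixations)

# Hypothetical fixation logs for two feedback conditions.
human_feedback = [(0, 220), (300, 640), (700, 1050)]
genai_feedback = [(0, 180), (250, 470), (520, 760)]

print(mean_fixation_duration(human_feedback))  # ~303.3 ms
print(mean_fixation_duration(genai_feedback))  # ~213.3 ms
```

Real analyses would aggregate per participant and test the group difference statistically; this only shows the per-condition metric.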
Abstract This manifesto advocates for the thoughtful integration of AI in education, emphasising a human-centred approach amid the rapid evolution of artificial intelligence (AI). The chapter explores the transformative potential of large language models (LLM) and generative AI (GenAI) in education, addressing both opportunities and concerns. While AI accelerates change in education, adapting to students’ diverse learning needs, it also poses challenges to traditional assessment paradigms. The manifesto stresses the importance of empowering teachers and students as decision-makers, highlighting the need for a balanced approach to AI integration. It emphasises human-centricity in AI use, promoting ethical considerations, responsible practices, and regulations. The right to choose and co-create is underscored, giving autonomy to educators and learners in selecting technologies aligned with their philosophies. Additionally, the manifesto introduces the concept of hybrid intelligence (HI), advocating collaboration between human and machine intelligence to enhance educational experiences. The manifesto encourages creative uses of AI in education, envisioning a harmonious partnership where AI and humans co-create transformative knowledge.
The arrival of generative Artificial Intelligence (AI) in educational settings offers a unique opportunity to explore the intersection of human cognitive processes and AI, especially in complex tasks like writing. This study adopts a process-oriented approach to investigate the self-regulated learning (SRL) strategies employed by 21 doctoral and master’s students during a writing task facilitated by generative AI. It aims to identify and analyze the SRL strategies that emerge within the framework of hybrid intelligence, emphasizing the collaboration between human intellect and artificial capabilities. Utilizing a learning analytics methodology, specifically lag sequential analysis (LSA), the research examines process data to reveal the patterns of learners’ interactions with generative AI in writing, shedding light on how learners navigate different SRL strategies. This analysis facilitates an understanding of how learners adaptively manage their writing task with the support of the AI tool. By delineating the SRL strategies in AI-assisted writing, this research provides valuable implications for the design of educational technologies and the development of pedagogical interventions aimed at fostering successful human-AI collaboration in various learning environments.
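The lag sequential analysis the study describes starts from lag-1 transition counts over coded actions. A minimal sketch, with hypothetical SRL codes of my own labelling (the study's actual coding scheme is not given here):

```python
from collections import Counter

def lag1_transitions(sequence):
    """Count lag-1 transitions (a -> b) in a coded action sequence:
    the raw frequencies that LSA then tests for significance
    (e.g. via adjusted residuals / z-scores)."""
    return Counter(zip(sequence, sequence[1:]))

# Hypothetical SRL codes: P = planning, G = generating with the AI,
# E = evaluating AI output, R = revising text.
seq = ["P", "G", "E", "R", "G", "E", "R", "E", "G"]
freq = lag1_transitions(seq)
print(freq[("G", "E")])  # 2 -- generating is twice followed by evaluating
```

A full LSA would compare each observed frequency against its expected value under independence; the counts above are the input to that test.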
Learning Objects (LOs) have long aimed to make digital education scalable and reusable, yet their alignment with constructivist learning remains contested. This study offers a structured comparison of traditional LO design principles and constructivist learning metaphors—acquisition, participation, and knowledge creation—to examine how emerging research directions position themselves within this educational technology landscape. We analyse how emerging research directions—symbolic AI, generative AI, hybrid AI (Retrieval-Augmented Generation), and constructivist-oriented LO research—align with or challenge these learning metaphors. We then explore how these directions influence the relationship between LOs and constructivist pedagogy. Our findings show that while some AI-based approaches reinforce structured, predefined learning, others—and especially constructivist-oriented LO models—support more adaptive, collaborative, and student-centred designs. Empirical findings from teacher interviews reveal that teachers’ conceptions of learning vary by context—often defaulting to transmissive models under technological constraints, but aligning more closely with participation and knowledge creation metaphors when reflecting on pedagogical theory. These combined and somewhat surprising findings underscore the need for LO frameworks that are pedagogically flexible—that is, able to support both structured and open-ended designs, adapt to varying teaching contexts, and empower learners through meaningful engagement.
Under globalization and educational informatization, English oral proficiency is crucial for cross-cultural communication. This study employs a qualitative exploratory research design, drawing on the literature on oral proficiency cultivation models in the CNKI and WOS databases, analyzed through bibliometric visualization and comparative content analysis in CiteSpace. The analysis reveals several key findings: 1) Chinese research has declined recently, while global research continues to grow with broader collaboration. 2) Chinese studies focus on pedagogical models and localized theories, whereas global research emphasizes technology-enabled learning and learner-internal factors. 3) Chinese research aims to solve local teaching problems, while global research integrates cognitive neuroscience and AI technologies. To address the lack of sustained momentum in Chinese research, this study proposes integrating AIGC into teaching, building online-offline learning ecosystems, and conducting longitudinal empirical studies on formative assessment, thereby benefiting educators, researchers, and policy-makers in the field of second language acquisition.
The development of new artificial intelligence-generated content (AIGC) technology creates new opportunities for the digital transformation of education. Teachers’ willingness to adopt AIGC technology for collaborative teaching is key to its successful implementation. This study employs the TAM and TPB to construct a model analyzing teachers’ acceptance of AIGC technology, focusing on the influencing factors and differences across various educational stages. The study finds that teachers’ behavioral intentions to use AIGC technology are primarily influenced by perceived usefulness, perceived ease of use, behavioral attitudes, and perceived behavioral control. Perceived ease of use affects teachers’ willingness both directly and indirectly across different groups. However, perceived behavioral control and behavioral attitudes only directly influence university teachers’ willingness to use AIGC technology, with the impact of behavioral attitudes being stronger than that of perceived behavioral control. The empirical findings of this study promote the rational use of AIGC technology by teachers, providing guidance for encouraging teachers to actively explore the use of information technology in building new forms of digital education.
ABSTRACT Research on the factors influencing the acceptance of GenAI in language learning has expanded widely; however, few studies have focused on the role of language learning emotions. To enhance the effectiveness of GenAI‐powered English‐speaking instruction and the learning experience, this study expands on the Integrated Model of Technology Acceptance (IMTA) by investigating the role of various emotions and willingness to communicate in different contexts as intrinsic motivators for the acceptance of GenAI‐powered conversational chatbots. Using a questionnaire (n = 368) and pre‐ and post‐tests, the study found that EFL learners with higher communicative confidence and greater foreign language learning boredom tend to perceive GenAI chatbots as less useful for developing speaking skills. While GenAI successfully aided them in improving their speaking skills through both theme‐based and free dialogues, learners who are more willing to engage in face‐to‐face interactions with peers and teachers may not find chatbots as productive or engaging as human counterparts. The results suggest that EFL teachers should be aware of the limitations of GenAI and students' individual differences, integrating GenAI into their classrooms in a way that aligns with students' proficiency and preferences to create a harmonious and efficient GenAI‐supported language learning environment. This also underscores the importance of developing teachers' AI competence in the GenAI era.
With the continuous development of information technology, the application of Artificial Intelligence Generated Content (AIGC) in the field of education is becoming increasingly widespread. This study combines the characteristics and needs of the International Trade Practice Course to explore a blended teaching design based on AIGC. The aim is to optimize the theoretical knowledge module, practical operation module, and case analysis module of the teaching content by introducing artificial intelligence technology. A blended teaching design combining online and offline learning is adopted, which includes 1) online self-learning, 2) offline practical operation, 3) AIGC-tool-assisted practice for individual trade links, and 4) classroom discussions and interactions on comprehensive trade links, to enhance students' learning outcomes and satisfaction. Evaluations were conducted during and after the course. The results indicate that this teaching design can significantly improve students' learning interest, self-learning ability, and practical operation ability, providing new ideas and methods for the teaching reform of the International Trade Practice Course.
With the rapid development of Artificial Intelligence (AI) technology, the application of AI in the field of education has gradually become one of the key factors in improving teaching quality and student abilities. Based on the conservation of resources theory, this study explores how the usage of AI in teaching impacts students' creativity, exploring the mediating role of learning engagement and the moderating role of AI literacy. The research finds that the usage of AI in teaching significantly enhances students' creativity, with learning engagement playing a mediating role in this process, thereby promoting creativity improvement. In addition, AI literacy moderates the relationship between the usage of AI in teaching and learning engagement. The results of this study not only expand the application of the conservation of resources theory in the field of education but also emphasize the important role of AI literacy in AI teaching, providing valuable policy suggestions for educational practices.
The introduction of artificial intelligence (AI) has triggered changes in modern dance education. This study investigates the application of diffusion-based modeling and virtual digital humans in dance instruction. Utilizing AI and digital technologies, the proposed system innovatively merges music-driven dance generation with virtual human-based teaching. It achieves this by extracting rhythmic and emotional information from music through audio analysis to generate corresponding dance sequences. The virtual human, functioning as a digital tutor, demonstrates dance movements in real time, enabling students to accurately learn and execute dance postures and rhythms. Analysis of the teaching outcomes, including effectiveness, naturalness, and fluidity, indicates that learning through the digital human results in enhanced user engagement and improved learning outcomes. Additionally, the diversity of dance movements is increased. This system enhances students’ motivation and learning efficacy, offering a novel approach to innovating dance education.
This scholarly article delves into the intricacies of merging artificial intelligence (AI) and generative design within the realm of design and design education. It provides an in-depth analysis of the interpretation of the digital creative industries as outlined in the 14th Five-Year Plan and the N
This work examines the application of Generative Artificial Intelligence (GAI) technology in animation teaching, focusing on its role in enhancing teaching quality and learning efficiency through innovative instructional strategies. Compared to traditional animation teaching methods, GAI technology introduces a novel pedagogical paradigm characterized by adaptive personalized learning pathways, intelligent teaching resource optimization, and immersive interactive learning models. A mixed-methods research approach is adopted, integrating quantitative analysis (experimental data and questionnaire surveys) and qualitative analysis (behavioral observations) to systematically assess the educational effectiveness of GAI technology. The experiment, conducted over 12 weeks, involved 120 students divided into an experimental group and a control group. Data sources included pre- and post-test evaluations, learning feedback surveys, and classroom behavior analysis. The results indicate that, compared to conventional teaching methods, GAI technology significantly enhances learning outcomes, knowledge application abilities, learning motivation, and student satisfaction. The adaptive personalized learning pathway dynamically adjusts content based on students' progress, improving their mastery of foundational knowledge and skill transferability. Intelligent teaching resources automatically generate high-quality animation examples and provide dynamic feedback mechanisms, fostering creative expression and practical efficiency. The immersive interactive learning model effectively increases classroom engagement, teamwork skills, and problem-solving abilities. These findings demonstrate that GAI technology has the potential to transform animation teaching by optimizing the learning experience and advancing intelligent teaching methodologies. 
Beyond offering personalized learning solutions, GAI technology plays a crucial role in cultivating students' creativity, critical thinking, and autonomous learning abilities. This work provides theoretical support and practical guidance for the digital transformation of animation teaching while underscoring the broader applicability of GAI technology in the education sector, offering new directions for the future development of intelligent education.
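Pre-/post-test group comparisons like the 12-week experiment described above are conventionally reported with an effect size. A minimal sketch of Cohen's d with pooled standard deviation, using made-up scores rather than the study's data:

```python
from statistics import mean, stdev
from math import sqrt

def cohens_d(group_a, group_b):
    """Cohen's d with pooled sample standard deviation: a standard
    effect size for experimental-vs-control post-test comparisons."""
    na, nb = len(group_a), len(group_b)
    pooled = sqrt(((na - 1) * stdev(group_a) ** 2 +
                   (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2))
    return (mean(group_a) - mean(group_b)) / pooled

# Hypothetical post-test scores (illustrative, not the study's data).
experimental = [78, 85, 90, 88, 84]
control = [72, 75, 80, 77, 74]
d = cohens_d(experimental, control)  # positive d favours the experimental group
```

Values around 0.2/0.5/0.8 are the usual small/medium/large benchmarks; significance would still need a t-test on the raw scores.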
The interactive integration of big data and Internet technology has accelerated human society's evolution from the industrial age to the information age, making ChatGPT and other artificial intelligence technologies available for teaching and bringing them into the view of teaching reform. The application of ChatGPT in teaching represents progress in educational technology and an update of teaching methods, signalling that a new era has arrived in which artificial intelligence assists the development of human intelligence and its growth through teaching, and pushes educational reform deeper. This will remain a dynamic for a long time to come, and it is an important direction for deepening teaching reform. Simply resisting, crudely prohibiting, or blindly ignoring the application of ChatGPT in teaching is not an appropriate attitude; after all, human growth and progress rest not on received wisdom alone but on bold experimentation, an enterprising spirit, and an open, inclusive outlook. It is therefore of great theoretical and practical significance to discuss teaching and learning in the "big data + Internet" information age and to clarify the application of ChatGPT in teaching along with the many cognitive misunderstandings surrounding it, both for deepening teaching reform and for accelerating the independent cultivation of innovative talent.
This paper takes innovation ecology as its theoretical framework and discusses the construction of talent-training models against the background of new engineering. Taking an application case of generative artificial intelligence as an example, the study introduces enterprise teaching resources, draws on project-based case experience and on the competition format of the "AIGC Innovation and Creativity Competition", and guides students to independently select topics, plan design schemes, and use generative AI technology for assisted design. It presents the overall design process of students applying AIGC and discusses a teaching-reform path driven by artificial intelligence. Taking the optimization and innovation of university-industry cooperation mechanisms as the breakthrough point, the paper discusses how to build a closer and more efficient industry-university cooperation model, so as to promote the innovation and optimization of talent-training modes under the new-engineering background and provide practical experience and inspiration for university talent cultivation.
In the field of digital art and game development, 3D modeling has always been a crucial link. The traditional modeling process is long and demanding, requiring repeated adjustment of details. With the continuous upgrading of computing hardware and algorithmic models, a large number of content-productivity tools have emerged. This article first analyzes the characteristics of PGC, UGC, and AIGC. It then examines the course "3D Animation Design" in vocational colleges, which serves digital cultural and creative fields such as animation and games and requires the production of a large amount of visual content such as 3D models. Finally, the study considers vocational-college students without a professional art background, for whom AIGC opens new possibilities in this process, such as integrating Stable Diffusion into the teaching practice of the 3D Animation Design course.
Personalized learning is the primary learning approach for contemporary college students. The emergence and development of Artificial Intelligence Generated Content (AIGC) technology have provided new opportunities for realizing personalized learning among college students. However, the use of this new technology also poses many challenges to personalized learning. This study explores the current applications and challenges of AIGC in empowering personalized learning for university students through methods of literature review and case analysis. The research finds that AIGC can currently meet the personalized learning needs of college students, but it also faces challenges such as students' excessive dependence on AI and the weakening of teacher-student relationships.
The era of digital intelligence places higher demands on the computational thinking of college students, yet a lack of computational thinking remains common among them. To find widely applicable and effective methods for cultivating college students' computational thinking, the study investigated the effect of artificial intelligence generated content (AIGC) tools in programming education through a 13-week quasi-experimental study. The results show that (1) conventional programming education activities do not significantly enhance college students' computational thinking; (2) AIGC tools in programming education activities significantly enhance college students' computational thinking; and (3) uncontrolled use of AIGC tools is not conducive to improving learners' academic performance. On this basis, strategies and suggestions are proposed for better utilizing generative AI tools to enhance college students' computational thinking ability and academic performance in programming learning.
The issue of supporting teachers' professional development has been a significant focus in applied research related to large language models. The research group developed a blended teachers' professional development program, empowering teachers' professional development of instructional design competency in blended learning with a large language model, and conducted an empirical study at a university in northwest China. Utilising a focus group interview (n = 23) and quantitative analysis, we established that the program enabled targeted training, offered teachers a personalized learning experience, and eventually improved their competency in instructional design for blended learning. The large language model effectively empowered teachers' professional development. • LLMs can be used in diagnosing problems before starting teacher professional development programs. • LLMs could assist to offer teachers personalized and accurate feedback in teacher professional development programs. • LLMs could assist to offer a self-directed learning environment in teacher professional development programs. • Teacher professional development programs empowered by LLMs could develop teachers' competency if designed appropriately.
Artificial intelligence (AI) is transforming various industries, and education is no exception. Rapid advancements in AI technology have become essential for educators and educational assessment professionals to enhance teaching and learning experiences. AI-powered educational assessment tools provide numerous benefits, including improving the accuracy and efficiency of assessments, generating personalized feedback for students, and enabling teachers to adapt their teaching strategies to meet the unique needs of each student. Therefore, AI has the potential to revolutionize the way education is delivered and assessed, ultimately leading to better educational outcomes for students. This paper explores the various applications of AI tools in educational measurement and assessment. Specifically, it discusses the integration of large language AI models in classroom assessment, in specific areas such as test purpose determination and specification, developing, test blueprint, test item generation/development, preparation of test instructions, item assembly/selection, test administration, test scoring, interpretation of test results, test analysis/appraisal, and reporting. It analyses the role of teachers in AI-based assessment and the challenges of using AI-powered tools in educational assessment. Finally, the paper presents strategies to address these challenges and enhance the effectiveness of AI in educational assessment. In conclusion, using AI in educational assessment has benefits and limitations. As such, educators, policymakers, and stakeholders must work together to develop strategies that maximize the benefits of AI in educational assessment while mitigating the associated risks. The application of AI in educational assessment can ultimately transform education, improve learning outcomes, and equip students with the skills needed to succeed in the 21st century.
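Test item generation with an LLM, one of the assessment steps listed above, typically amounts to prompt construction plus parsing of the model's structured reply. The sketch below is illustrative only: the prompt wording, the "Answer: X" convention, and the canned response are assumptions of mine, not a documented tool, and no real model is called.

```python
def build_item_prompt(topic, n_options=4):
    """Assemble an item-generation prompt of the kind an LLM-based
    assessment tool might send (wording is illustrative)."""
    return (
        f"Write one multiple-choice question on '{topic}' with "
        f"{n_options} options labelled A-D. Mark the key as 'Answer: X'."
    )

def parse_key(llm_response):
    """Pull the answer key out of a response following that format."""
    for line in llm_response.splitlines():
        if line.startswith("Answer:"):
            return line.split(":", 1)[1].strip()
    return None  # model did not follow the format

# A canned response standing in for a real model call.
canned = "Q: Which gas do plants absorb?\nA. O2\nB. CO2\nC. N2\nD. H2\nAnswer: B"
print(parse_key(canned))  # B
```

The `None` branch matters in practice: LLM output can drift from the requested format, so a deployed tool needs validation and regeneration logic around this parse step.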
The integration of Artificial Intelligence (AI) into educational technologies marks a significant shift in learning methodologies and operational dynamics within educational institutions. At the forefront is an AI-driven virtual mock interview platform designed to address the high Customer Acquisition Costs (CAC) in the edtech sector, especially for interview preparation services. This initiative harnesses a blend of AI technologies, including ADA 2 for creating context-aware embeddings and Machine Learning (ML), to transform the traditional mock interview process into a dynamic, cost-effective system. Central to the platform is its use of advanced Natural Language Processing (NLP) techniques and GPT-4 Large Language Model (LLM), automating the process of mock interviews and providing personalized feedback, ensuring a preparation journey that meets specific candidate needs and mirrors real interview scenarios. A key evaluation among 100 students from a cohort of 1800 demonstrated a 90% cost reduction for three mock interviews, reducing expenses from ₹3000 to just ₹300 per candidate. This cost efficiency significantly enhances access to quality interview preparation, improving student satisfaction and accessibility. Moreover, the platform provides valuable insights into student performance, setting a new standard in educational technology by offering an effective, personalized interview preparation experience. This project reflects a holistic approach to student development and the critical role of technology in addressing the evolving needs of learners
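The paper does not spell out its retrieval logic, but a platform built on context-aware embeddings plausibly matches bank questions to a candidate profile by cosine similarity over embedding vectors. A toy sketch under that assumption (3-d vectors stand in for real high-dimensional model embeddings):

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def most_relevant(profile_vec, question_bank):
    """Pick the bank question whose (precomputed) embedding is closest
    to the candidate-profile embedding -- the retrieval step such an
    embedding-based platform might use before generating feedback."""
    return max(question_bank, key=lambda q: cosine(profile_vec, q[1]))

# Toy question bank: (question text, embedding).
bank = [("Explain REST.", [0.9, 0.1, 0.0]),
        ("Describe a neural net.", [0.1, 0.9, 0.2])]
profile = [0.2, 0.8, 0.1]  # hypothetical ML-leaning candidate profile
print(most_relevant(profile, bank)[0])  # Describe a neural net.
```

A production system would use a vector index rather than a linear scan, but the similarity criterion is the same.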
Let’s Chat: Integrating Large Language Models into Blended Learning of English for Specific Purposes
English for Specific Purposes (ESP) pedagogy addresses the specific needs of language learners in professional or academic contexts by enhancing their motivation, autonomy and contextual communication skills. Despite recent fruitful approaches to blended ESP pedagogy, challenges such as learner diversity, limited classroom time and teacher resources can make real-world implementation of blended ESP learning costly and challenging. To address these issues, we propose to integrate Large Language Models (LLMs) such as ChatGPT into blended ESP learning to provide personalized learning experiences through a flexible, interactive and engaging interface. We present, as an exploratory prototype, a web-based system that uses the ChatGPT API to support the administration and delivery of customizable blended ESP courses. We discuss the design, implementation and envisioned use cases of the system in a blended learning environment.
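In a ChatGPT-API-backed prototype like the one proposed, much of the personalization lives in the message list assembled per turn. A minimal sketch of how such a system might build it; the system-prompt wording and function name are illustrative assumptions, and no live API call is made:

```python
def esp_messages(profession, learner_level, user_turn, history=()):
    """Build a chat-completion message list of the kind the prototype
    might send to the ChatGPT API (system-prompt wording illustrative)."""
    system = (
        f"You are an English tutor for {profession} professionals. "
        f"Adapt vocabulary and corrections to a {learner_level} learner."
    )
    messages = [{"role": "system", "content": system}]
    messages.extend(history)  # prior turns keep the dialogue coherent
    messages.append({"role": "user", "content": user_turn})
    return messages

msgs = esp_messages("nursing", "B1", "How do I describe a patient's symptoms?")
print(msgs[0]["role"])  # system
```

Keeping the course configuration (profession, CEFR level) in the system message is what lets one generic chat endpoint serve many customizable ESP courses.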
Hybrid learning is a complex combination of face-to-face and online learning. This model combines the use of multimedia materials with traditional classroom work, with virtual learning employed alongside face-to-face methods. This article aims to investigate the use of Artificial Intelligence (AI) to increase student engagement in hybrid learning settings. Educators face ongoing challenges in maintaining students' interest and motivation as online and hybrid education continue to grow in popularity; many educational institutions are adopting this model for its flexibility, student-teacher engagement, and peer-to-peer interaction. AI can help students communicate, collaborate, and receive real-time feedback, all of which are challenges in education. This article examines the advantages and disadvantages of hybrid education and the optimal approaches for incorporating AI in educational settings. The findings suggest that AI can revolutionize hybrid education, as it enhances both student and instructor autonomy while fostering a more engaging and interactive learning environment.
Abstract As the integration of artificial intelligence (AI) into our daily lives and educational environments becomes increasingly prevalent, it is necessary to understand the way in which these technologies impact cognitive functions. AI models such as ChatGPT hold immense promise for advancing the field of education, making it easier than ever for educators to support personalized learning and for students to access information. However, there are risks associated with increased AI engagement; individuals may become over‐reliant on AI, resulting in a reduced capacity for critical thinking, or a decline in memory retention. This article provides a comprehensive survey of these potential impacts, emphasizing the need for the judicious utilization of AI, and advocating for an integration approach that supplements, rather than supplants, human cognitive functions. The paper concludes by encouraging further research into the long‐term cognitive effects of interacting with advanced AI models such as ChatGPT.
No abstract
Purpose: This study aims to analyze the most frequently discussed topics in the scientific discourse on artificial intelligence (AI) in higher education using Natural Language Processing (NLP) techniques.
Design/methodology/approach: This paper analyzes 52 peer-reviewed articles published between 2017 and 2024, utilizing NLP techniques to identify prevalent unigrams, bigrams and trigrams related to AI in higher education.
Findings: The analysis identifies an emerging concern with utilizing AI tools to enhance educational processes, with "Higher education," "artificial intelligence" and "generative AI" becoming ubiquitous terms in use. LLM and ChatGPT represent types of technology that evoke potential for personalized learning and enhanced practice in instruction.
Research limitations/implications: In review studies, samples with a post-secondary educational background usually restrict generalizability to school environments. Future studies can examine the long-term consequences of AI technology in extended academic environments, longitudinal studies and educational environments.
Practical implications: The frequency patterns from our analysis offer essential insights for educators and administrators regarding curriculum development and teaching practices. The high occurrence of terms like "artificial intelligence" (1,193 times) and "higher education" (824 times) highlights the need for incorporating AI literacy into curricula. This integration should include guidelines for responsible AI use and training programs for faculty. The frequent mentions of "teaching learning" (226 times) and "AI education" (319 times) highlight important implications for teaching practices. Educational institutions must establish frameworks that blend traditional methods with AI-enhanced strategies, including assessment plans that consider AI tools while upholding academic integrity. Additionally, institutions should prioritize investment in AI infrastructure and support systems.
Social implications: Our findings highlight important societal implications beyond education. The frequency analysis reveals concerns about educational equity, including disparities in access to AI-enhanced education, digital literacy gaps and economic barriers to adopting AI tools. Addressing these issues is vital to prevent the worsening of social inequalities. Additionally, our results emphasize the need for workforce development. Educational institutions should focus on equipping students with the AI competencies that employers demand and bridging the gap between academic training and industry needs. The policy implications of our findings are equally significant. Our analysis suggests the need for educational policies that address AI integration while establishing clear guidelines for ethical AI use in academic settings. These policies should include standards for AI tool evaluation and implementation to guide institutions' adoption decisions. The economic impact of these developments is also noteworthy, as our results indicate the potential for enhanced workforce preparedness through AI-integrated education, improved educational efficiency through automation and new opportunities for educational technology development.
Originality/value: This study contributes to the field by providing an overview of prominent trends in AI within higher education, discussing the practical application, future research opportunities, and challenges associated with the responsible and effective use of AI in education.
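The n-gram frequency analysis described in the abstract above can be sketched in a few lines of Python. This is a minimal illustration of counting the most frequent unigrams/bigrams/trigrams over a corpus of abstracts, not the authors' actual pipeline; the `top_ngrams` helper, the toy corpus, and the naive whitespace tokenization are assumptions made for the example.

```python
from collections import Counter
from itertools import islice

def ngrams(tokens, n):
    """Yield successive n-grams (as tuples) from a token list."""
    return zip(*(islice(tokens, i, None) for i in range(n)))

def top_ngrams(texts, n, k=3):
    """Count the k most frequent n-grams across a corpus of texts."""
    counts = Counter()
    for text in texts:
        tokens = text.lower().split()  # naive tokenization, for illustration only
        counts.update(ngrams(tokens, n))
    return counts.most_common(k)

# Toy corpus standing in for the 52 reviewed abstracts
corpus = [
    "artificial intelligence in higher education",
    "generative AI transforms higher education teaching",
]
print(top_ngrams(corpus, 2, 2))  # top result: (('higher', 'education'), 2)
```

A real study would add stopword removal, lemmatization, and frequency thresholds, but the core of the reported term counts (e.g. "higher education" appearing 824 times) is exactly this kind of n-gram tally.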
This study examined student experiences before and after an essay writing assignment that required the use of ChatGPT within an undergraduate engineering course. Utilizing a pre-post study design, we gathered data from 24 participants to evaluate ChatGPT's support for both completing and grading an essay assignment, exploring its educational value and impact on the learning process. Our quantitative and thematic analyses uncovered that ChatGPT did not simplify the writing process. Instead, the tool transformed the student learning experience, yielding mixed responses. Participants reported finding ChatGPT valuable for learning, and their comfort with its ethical and benevolent aspects increased post-use. Concerns with ChatGPT included poor accuracy and limited feedback on the confidence of its output. Students preferred instructors to use ChatGPT to help grade their assignments, with appropriate oversight. They did not trust ChatGPT to grade by itself. Student views of ChatGPT evolved from a perceived "cheating tool" to a collaborative resource that requires human oversight and calibrated trust. Implications for writing, education, and trust in AI are discussed.
No abstract
No abstract
Taken together, the merged groupings form a complete research loop running from the "technical foundation" through the "theoretical framework" to "teaching practice" and "governance and evaluation". The report comprehensively covers five key dimensions of generative-AI-enabled blended teaching: 1) exploring the technical capabilities of large models such as GPT-4 and the development of intelligent teaching tools; 2) establishing the theoretical foundations of human-AI collaboration and hybrid augmented intelligence; 3) innovating blended instructional design and evaluating its impact on cognition and learning outcomes; 4) presenting in-depth application cases across disciplines (especially language, medicine, and the arts); and 5) examining teachers' and students' technology acceptance, the transformation of assessment systems, and ethical governance challenges. This provides systematic literature support for studying how generative AI drives the digital transformation of education.