Project-Based Learning (PBL) Challenges in University Programming Courses Supported by iFLYTEK iFlyCode, and AI-Supported Teaching Models
Empirical Studies and Outcome Evaluation of AI-Assisted Programming Instruction
This group of studies focuses on quantitative and qualitative analyses of how AI code assistants affect students' programming skill development, cognitive load, algorithmic thinking, and actual learning efficiency.
- The Influence of Artificial Intelligence Tools on Learning Outcomes in Computer Programming: A Systematic Review and Meta-Analysis(Manal Alanazi, Ben Soh, Halima E. Samra, Alice Li, 2025, Computers)
- The impact of AI-assisted pair programming on student motivation, programming anxiety, collaborative learning, and programming performance: a comparative study with traditional pair programming and individual approaches(Guangrui Fan, Dandan Liu, Rui Zhang, Lihu Pan, 2025, International Journal of STEM Education)
- Enhancing Novice Programming Education through AI-Driven Mentorship and Project-Based Learning(Sai Veda Prakash Masetty, Sreeja Vallamulla, 2025, 2025 IEEE Integrated STEM Education Conference (ISEC))
- The Effect of Generative AI as a Coding Assistant in Deep Learning Practicum on Code Quality and Conceptual Understanding(Nurhidayah, Alimin, Ohfit Rijei, Owentianus Nouvic, Putra Langlang Buana, Putra Rajawijaya, 2025, Information Technology Education Journal)
- Enhancing Programming Proficiency, Evaluating the Impact of AI-powered Code Assistant Tools on Learning Outcomes(Trust Mhlanganiso, Justin Makota, 2025, Oikos: The Zimbabwe Ezekiel Guti University bulletin of Ecology, Science Technology, Agriculture, Food Systems Review and Advancement)
- Evaluating the Impact of Assistive AI Tools on Learning Outcomes and Ethical Considerations in Programming Education(Seong Min Park, Marco Ho, M. Lin, Jeeho Ryoo, 2025, 2025 IEEE Global Engineering Education Conference (EDUCON))
- Potentiality of generative AI tools in higher education: Evaluating ChatGPT's viability as a teaching assistant for introductory programming courses(Zishan Ahmed, Shakib Sadat Shanto, Akinul Islam Jony, 2024, STEM Education)
- From Code Generation to Conceptual Learning: Student Use of LLMs in a Web Programming Course(Hajara-Yasmin Isa, Matthew Weston, Muhammad Rizky Wellyanto, Ishita Karna, Jerry O. Talton, Ranjitha Kumar, 2026, Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems)
- The use of artificial intelligence in teaching programming to students(Damir Bolotbaev, M. Temirov, 2025, Bulletin of the Jusup Balasagyn Kyrgyz National University)
- The Influence of AI-assisted Tools on Engineering Project Outcomes(Alexandru Dinu, 2025, Revista Romaneasca pentru Educatie Multidimensionala)
- EVALUATING LLM-GENERATED FEEDBACK FOR DEBUGGING ASSISTANCE IN CS1(Nimisha Agarwal, Vinayak Gupta, Amey Karkare, 2026, INTED Proceedings)
- Large Language Models (LLMs) in Programming Learning: The Current Research State and Agenda(Qian Fu, Yaning Zhao, Zixi Jia, Yafeng Zheng, 2025, IEEE Transactions on Learning Technologies)
- Use of AI-driven Code Generation Models in Teaching and Learning Programming: a Systematic Literature Review(Doga Cambaz, Xiaoling Zhang, 2024, Proceedings of the 55th ACM Technical Symposium on Computer Science Education V. 1)
AI-Enabled Project-Based Learning (PBL) and Engineering Practice Models
This group focuses on integrating AI into project-driven teaching scenarios, examining how AI-assisted tools can strengthen students' project planning, team collaboration, and engineering practice skills on complex tasks.
- Project-Based Learning Connecting Robotics and Artificial Intelligence(Alexandra Posekany, 2025, Lecture Notes in Networks and Systems)
- The Effects of AI Programming Assistant on University Students’ Algorithmic Thinking and Self-Efficacy(W. Xiao, Xueying Lu, 2026, Journal of Educational Computing Research)
- Artificial Intelligence as a Catalyst: A Case Study on Adaptive Learning in Programming Education(Tero Reunanen, Noora Nieminen, 2024, AHFE International)
- Experiences and Insights Gained from AI-Assisted Programming Instruction in Higher Education(Haoshun Cao, 2025, Proceedings of the 2025 International Conference on AI-enabled Education)
- Toward Artificial Intelligence-Human Paired Programming: A Review of the Educational Applications and Research on Artificial Intelligence Code-Generation Tools(Jiangyue Liu, Siran Li, 2024, Journal of Educational Computing Research)
- Benchmarking of Generative AI Tools in Software Engineering Education: Formative Insights for Curriculum Integration(N. Roy, Oleksandr Horielko, O. Omojokun, 2025, Proceedings of the 2025 ACM Conference on International Computing Education Research V.2)
- Exploring Generative AI for Learning Experiences and Instructional Practices in Software Engineering Education(Tianjia Wang, 2026, Proceedings of the 57th ACM Technical Symposium on Computer Science Education V.2)
- Generative AI in Student Software Development Projects: A User Study on Experiences and Self-Assessment(Uwe M. Borghoff, Mark Minas, Jannis Schopp, 2025, Proceedings of the 6th European Conference on Software Engineering Education)
- A Systemic View of a Software Engineering Education Curriculum: Requirements and Guidelines in the Era of Generative AI(Thomas J. Marlowe, Cyril S. Ku, Joseph R. Laracy, V. Kirova, Katherine G. Herbert, 2026, Journal of Integrated Design and Process Science)
- Development of Educational Projects on the Basis of Technological Platforms with Artificial Intelligence: The Experience of MIPT on the Use of High Vox-Platform(E.V. Blagodarny, A.A. Vedyakhin, A.M. Raygorodsky, 2018, 2018 International Conference on Artificial Intelligence Applications and Innovations (IC-AIAI))
- AI Literacy Through a Project-Based Learning Course(Marco Ho, Carly Orr, Rebecca Jeon, M. Lin, Jeeho Ryoo, 2025, 2025 IEEE Smart World Congress (SWC))
- AI-Assisted Project-Based Learning and Exploration in the Field of Electronic Information Engineering Majors(Li Yu, Ming-Wei Wu, Gang Cen, Yong-Xin Xi, Xin-Yu Yan, Tian Qiu, 2023, Communications in Computer and Information Science)
- Towards Deep Learning: Transforming Project-Based Teaching with an AI-Enhanced Platform for Intelligent Assessment and Collaborative Learning(Zhen Yao, D. Xia, Xiannian Sun, Yang Liu, 2025, Proceedings of the 2025 2nd International Symposium on Artificial Intelligence for Education)
- Toward an AI Knowledge Assistant for Context-Aware Learning Experiences in Software Capstone Project Development(Andrés Neyem, Luis A. González, M. Mendoza, Juan Pablo Sandoval Alcocer, Leonardo Centellas, Carlos Paredes, 2024, IEEE Transactions on Learning Technologies)
- Fostering programming skill and critical thinking through AI-assisted PBL integration(C. B. Omeh, M. A. Ayanwale, L. Mnguni, C. J. Olelewe, 2025, Journal of New Approaches in Educational Research)
Restructuring Teaching Strategies, Ethical Challenges, and Assessment Framework Design
This group emphasizes the macro-level challenges raised by AI integration (such as academic integrity and over-reliance) and explores how revised assessment methods, prompt-engineering instruction, and curriculum framework design can cultivate students' higher-order thinking.
- ChatGPT: Challenges and Benefits in Software Programming for Higher Education(Carlos Alexandre Gouvea da Silva, F. Ramos, R. V. de MORAES, Edson Leonardo dos Santos, 2024, Sustainability)
- Teaching Programming in the Age of AI: Transforming Pedagogy Amidst Code-Generating Technologies(Asad Azemi, 2025, 2025 IEEE Frontiers in Education Conference (FIE))
- A Systematic Literature Review on Large Language Models Applications in Computer Programming Teaching Evaluation Process(A. Pereira, Rafael Ferreira Mello, 2025, IEEE Access)
- Literature Review on the Integration of Generative AI in Programming Education(Jemimah Nathaniel, S. Oyelere, Jarkko Suhonen, Matti Tedre, 2025, International Journal of Artificial Intelligence in Education)
- ARTIFICIAL INTELLIGENCE AS A SUPPORT TOOL IN TEACHING PROGRAMMING TO FUTURE BACHELOR'S STUDENTS OF VOCATIONAL EDUCATION(B. Rozputnia, L. Shevchenko, Volodymyr Umanets, Serhii Yashchuk, Yuliia Sabadosh, 2025, ENVIRONMENT. TECHNOLOGY. RESOURCES. Proceedings of the International Scientific and Practical Conference)
- Investigating the Impact of Code Generation Tools (ChatGPT & Github CoPilot) on Programming Education(Faisal Nizamudeen, Lorenzo Gatti, N. Bouali, Faizan Ahmed, 2024, Proceedings of the 16th International Conference on Computer Supported Education)
- Using Generative Artificial Intelligence Tools in Software Engineering Courses(Soma Datta, 2024, 2024 36th International Conference on Software Engineering Education and Training (CSEE&T))
- Evaluating the Educational Benefits and Risks of AI Coding Assistants Among Novice Programming Students in Sri Lanka(IM Samarappulige, K Ekanayake, NT Jayathilake, 2025, Authorea Preprints)
- Benchmarking AI Tools for Software Engineering Education: Insights into Design, Implementation, and Testing(N. Roy, Oleksandr Horielko, O. Omojokun, 2026, Proceedings of the 57th ACM Technical Symposium on Computer Science Education V.1)
- Programming Is Hard - Or at Least It Used to Be: Educational Opportunities and Challenges of AI Code Generation(Brett A. Becker, Paul Denny, James Finnie-Ansley, Andrew Luxton-Reilly, J. Prather, E. Santos, 2022, Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1)
- Developing Critical Thinking with AI Coding Assistants: An Educational Experience focusing on Testing and Legacy Code(I. Blasquez, 2025, Proceedings of the 30th ACM Conference on Innovation and Technology in Computer Science Education V. 1)
- Integrating Generative AI in Software Engineering Education: Practical Strategies(Yishu Li, J. Keung, Xiaoxue Ma, 2024, 2024 International Symposium on Educational Technology (ISET))
- How ChatGPT Will Change Software Engineering Education(Marian Daun, Jennifer Brings, 2023, Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1)
- A Systematic Framework for Generative AI-Powered Curriculum Development: Integrating Industry Requirements with Agile Learning(Jordan Scott, Gedare Bloom, G. Bailey, 2025, 2025 International Symposium on Networks, Computers and Communications (ISNCC))
- Generative AI in Engineering and Computing Education: A Scoping Review of Empirical Studies and Educational Practices(Jonathan Álvarez Ariza, Milena Benitez Restrepo, Carola Hernández Hernández, 2025, IEEE Access)
- Incorporating Generative AI into Software Development Education(Olga Petrovska, Lee Clift, Faron Moller, Rebecca Pearsall, 2024, Proceedings of the 8th Conference on Computing Education Practice)
- INTEGRATING GENERATIVE AI METHODS IN COMPUTER SCIENCE EDUCATION: PERSPECTIVES, STRATEGIES, AND OUTCOMES(M. Zimmermann, H. Janetzko, Benjamin Haymond, 2024, EDULEARN Proceedings)
- Artificial Intelligence Integration in Programming Education: Implications for Pedagogy and Practice(Venushini Rajendran, R Kanesaraj Ramasamy, 2024, Lecture Notes in Electrical Engineering)
- Enhancing Computer Programming Education: Integrating Advanced AI Tools for Challenging Curriculum Projects(António Jorge Gouveia, Bruno Machado, Cristiano Pendão, 2025, Communications in Computer and Information Science)
- THE ROLE OF AI CODING ASSISTANTS: REVISITING THE NEED FOR LITERATE PROGRAMMING IN COMPUTER AND DATA SCIENCE EDUCATION(Marcus Birkenkrahe, 2024, INTED Proceedings)
- Influence of ChatGPT on Programming Code Generation: A Case Study of the Technical University of Manabí(Daniel Palacios, Lucía Rivadeneira, 2024, Communications in Computer and Information Science)
AI-Driven Software Engineering Assistance and Code Analysis and Optimization
This group examines AI applications on the technical side of software engineering, such as code quality improvement, refactoring, and unit-test generation, emphasizing the supporting role of these tools in professional practice workflows.
- An empirical comparison of AI assisted software refactoring tools(Gull-i Saba, Sohaib Ahmed, Hiba Sania, Talha Ahmed Khan, Hidawati Bte Mohamad Nasir, 2026, Scientific Reports)
- Improving Software Engineering Practices: AI-Driven Adoption of Design Patterns(Vinay Supekar, Rajeshree Khande, 2024, 2024 Second International Conference on Advanced Computing & Communication Technologies (ICACCTech))
- Using LLM-Based Filtering to Develop Reliable Coding Schemes for Rare Debugging Strategies(Aysa Xuemo Fan, Qianhui Liu, Luc Paquette, Juan D. Pinto, 2024, Communications in Computer and Information Science)
- A Survey of LLM-Based Applications in Programming Education: Balancing Automation and Human Oversight(Griffin Pitts, Anurata Prabha Hridi, Arun Balajiee Lekshmi Narayanan, 2025, Proceedings of the Fourth Workshop on Bridging Human-Computer Interaction and Natural Language Processing (HCI+NLP))
- Large Language Models in Computer Science Education: A Systematic Literature Review(Nishat Raihan, Mohammed Latif Siddiq, Joanna C. S. Santos, Marcos Zampieri, 2024, Proceedings of the 56th ACM Technical Symposium on Computer Science Education V. 1)
- Artificial Intelligence and Computer-Supported Collaborative Learning in Programming: A Systematic Mapping Study(Carlos Giovanny Hidalgo Suarez, V. Bucheli-Guerrero, H. Ordóñez-Eraso, 2023, Tecnura)
Surveys of the Research Landscape and Future Trends in Programming Education
Through systematic reviews, this group maps the research areas, evolutionary paths, and frontier directions of AI-assisted programming education, providing a panoramic view for planning future research.
- TEACHING LLM PROGRAMMING IN INDUSTRIALLY FOCUSED COURSES(Thaylor Vieira, Evellim Michele Silva Martins, Nei Junior da Silva Farias, Ricardo da Silva Barboza, Vicente Ferreira de Lucena Jr, 2025, ICERI Proceedings)
- Understanding and Enhancing CS Students’ Interaction Experience with AI Coding Assistant Tools(Xiao Long, Xin Tan, Yinghao Zhu, Jing Jiang, Li Zhang, 2025, ACM Transactions on Software Engineering and Methodology)
- A Systematic Review of Studies on the Use of Generative Artificial Intelligence Tools in Programming Education(Emre Özgül, 2026, Kastamonu Eğitim Dergisi)
- AI chatbots in programming education: Students' use in a scientific computing course and consequences for learning(Suzanne Groothuijsen, A. V. D. Beemt, Joris C. Remmers, L. V. Meeuwen, 2024, Computers and Education: Artificial Intelligence)
- LLM Tools for Programming(Michal Cernanský, Peter Hafner, I. D. Luptáková, 2025, 2025 International Conference on Emerging eLearning Technologies and Applications (ICETA))
- Computer Programming Education in the Age of Generative AI: Insights from Empirical Research(Fitsum Gizachew Deriba, I. Sanusi, Oladele O. Campbell, S. Oyelere, 2024, SSRN Electronic Journal)
- Coding with AI: How Are Tools Like ChatGPT Being Used by Students in Foundational Programming Courses(Aashish Ghimire, John Edwards, 2024, Lecture Notes in Computer Science)
- A Framework for Understanding the Role of Generative AI in Engineering Education: A Literature Review(Prarthona Paul, C. Variawa, 2025, 2025 ASEE Annual Conference & Exposition Proceedings)
- Adoption of AI-coding assistants in programming education: exploring trust and learning motivation through an extended technology acceptance model(Farman Ali, Awais Ahmed, M. Alipour, Hugo Terashima-Marín, 2025, Journal of Computers in Education)
- How AI teaching assistant use affects students’ learning outcomes: An empirical study on the differential effects of questioning strategies(Tao Zhang, Chen Zhao, Qianhui Lv, Xinnan Wang, Ziqi Zhang, 2026, Education and Information Technologies)
- APPROACHES TO UTILIZING GENERATIVE ARTIFICIAL INTELLIGENCE IN PROGRAMMING AND SOFTWARE ENGINEERING EDUCATION(Joaquín Cañadas, Manel Mena, Juan Alberto Llopis, Rosa Ayala, Francisco García, J. Criado, Julio Barón, 2024, ICERI Proceedings)
- Empowering Control Engineering Students Through Theory, Implementation, and AI-Assisted Learning: Evaluating a Project-Based Advanced Computer Control Course(H. El-Kebir, M. Ornik, J. Bentsman, 2025, IFAC-PapersOnLine)
- The Impact of Generative Artificial Intelligence Tools in Project-Based Learning(T. V. Dijk, V. Zaytsev, 2024, Lecture Notes in Computer Science)
This report synthesizes theoretical and practical work on AI in programming education into a framework spanning empirical evaluation, PBL model redesign, teaching strategy and assessment, software engineering tooling, and research surveys. Together these studies reveal a shift in AI-assisted programming education from isolated tool adoption toward human-AI collaborative, project-based learning, offering support from methodology through to practical implementation for tackling the field's complex challenges.
A total of 70 related publications.
Learning to code is especially challenging for beginners due to limited personalized guidance and delayed feedback. This paper introduces an AI-driven coding education framework that supports self-directed, project-based learning for novice programmers through an integrated AI mentor, intelligent debugging assistance, and gamification features. We conducted a 4-week pilot study with 200 participants—100 in an AI-assisted environment and 100 in a self-paced control group—to assess the framework’s impact. Results showed that AI-assisted learners achieved significantly higher project completion rates (90% vs. 60%), resolved bugs faster (18.2 vs. 32.7 minutes on average), and attained larger test score improvements (mean gain of +22 vs. +10 points, p < 0.05). Crucially, learners reported greater confidence in tackling programming tasks, suggesting that real-time, context-aware AI guidance can bolster novice performance without promoting dependency. We present a novel system architecture that integrates project-based assignments, adaptive learning paths, and a proactive AI mentor to keep students engaged and continuously supported. These findings underscore the potential of AI-driven mentorship to enhance coding education at scale by providing immediate, individualized assistance. Our study contributes empirical evidence that carefully designed AI tools can accelerate debugging, improve learning outcomes, and increase persistence among beginner programmers, paving the way for more robust, student-centered coding instruction.
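The headline completion-rate gap reported above (90% vs. 60%, n = 100 per group) can be sanity-checked with a standard two-proportion z-test. The test choice is our assumption for illustration; the paper does not state which significance test it used:

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-proportion z-statistic using a pooled standard error."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Reported completion rates: 90/100 (AI-assisted) vs. 60/100 (control).
z = two_proportion_z(90, 100, 60, 100)
print(round(z, 2))  # ~4.9, well above the 1.96 threshold for p < 0.05
```

Under these assumptions the z-statistic is roughly 4.9, consistent with the paper's claim of a significant difference at p < 0.05.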
The rapid emergence of generative Artificial Intelligence (AI) tools has sparked both excitement and concern in education. This paper reports on a case study of British Columbia Institute of Technology’s COMP 2800 – Term 2 Computer Systems Technology Project course, which was redesigned to foster AI literacy through a five-week project-based learning approach. In Spring 2023, approximately 230 first-year computing students (forming 60 teams across two campuses) built web applications addressing an "AI for Good" challenge. The course structure combined Agile sprints, reflective retrospectives, and open-ended problem solving with an expectation to integrate generative AI (e.g., ChatGPT, GitHub Copilot, and Midjourney). We describe the course design, the diverse ways students leveraged AI tools in their projects, and a standout project case study. Qualitative and quantitative data from team retrospectives were analyzed to assess outcomes in estimation skills, teamwork, ethical awareness, and prompt engineering proficiency. The results indicate that project-based integration of AI can enhance students’ functional AI literacy and prompt critical thinking about AI’s role. We discuss implications for computing education, including pedagogical strategies for incorporating AI into curriculum and preparing students for responsible AI-augmented software development.
In recent years, large language models (LLMs) have gained considerable attention in academic environments, particularly for their potential to support student work. This paper investigates how the use of LLM tools influences graduate engineering students’ performance and perception during project-based learning. The study was conducted over the course of one semester at a Romanian technical university and involved 60 students, divided into two groups: one using AI tools such as ChatGPT and one working with traditional resources only. A mixed-method approach was employed, including pre- and post-project questionnaires and a dual evaluation system involving both human and AI grading, based on a shared rubric. Results show a significant increase in perceived satisfaction and a reduction in the reported difficulty among students who had access to AI tools. Moreover, their average grades were higher and more consistent compared to those in the non-AI group. The study also highlights the alignment between human and AI-based assessment and the growing openness among students toward adopting generative tools in future academic work. These findings suggest that integrating LLMs into higher education may improve learning experiences, but also raise questions about critical thinking, fairness, and ethical use.
The core objective of engineering education is to cultivate outstanding talents capable of addressing complex engineering problems and meeting contemporary and future challenges. In this context, enhancing students’ engineering thinking and professional competence through pedagogical innovation has become a key focus in higher education, with the aim of equipping students with the ability to analyze and solve practical problems from multidisciplinary perspectives. Using the mechanical design major as an exemplary case, this study implements a project-based teaching approach centered on authentic engineering problems. The course content is restructured according to industrial design and production workflows, supported by an AI-assisted knowledge graph that integrates prerequisite and subsequent courses, thereby achieving deeper curriculum cohesion. To address persistent issues in formative assessment, such as insufficient and diverse feedback, difficulties in ensuring the authenticity of online evaluations, and a lack of personalized guidance, a localized, intelligent, collaborative learning platform was developed. This platform transcribes and intelligently analyzes student-submitted videos, compares them against a predefined problem graph to identify competency gaps, and recommends targeted exercises based on the structured knowledge graph. Moreover, AI technologies are utilized to synthesize feedback from both peers and instructors, facilitating students’ progression from knowledge application to competency development. This process supports the systematic construction of students’ professional knowledge, fosters higher-order thinking skills, and enhances learning motivation and initiative. Teaching practice demonstrates that this AI-enhanced project-based teaching model effectively promotes scientific thinking and significantly improves students’ comprehensive ability to solve complex engineering problems.
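The abstract does not detail how the platform matches transcript analysis against its problem graph. As a hedged illustration of the underlying idea, competency-gap identification over a prerequisite graph can be sketched as a graph traversal; every concept name and the `PREREQS` structure below are hypothetical:

```python
# Hypothetical prerequisite graph: each concept maps to the set of
# concepts it depends on (a DAG, as in a curriculum knowledge graph).
PREREQS = {
    "gear_train_design": {"kinematics", "tolerancing"},
    "tolerancing": {"engineering_drawing"},
    "kinematics": set(),
    "engineering_drawing": set(),
}

def competency_gaps(demonstrated, target):
    """Walk the prerequisite graph beneath `target` and collect every
    concept the student has not yet demonstrated."""
    gaps, stack, seen = [], [target], set()
    while stack:
        concept = stack.pop()
        if concept in seen:
            continue
        seen.add(concept)
        if concept not in demonstrated:
            gaps.append(concept)
        stack.extend(PREREQS.get(concept, set()))
    return sorted(gaps)

# A student whose submitted video evidences only kinematics:
print(competency_gaps({"kinematics"}, "gear_train_design"))
# → ['engineering_drawing', 'gear_train_design', 'tolerancing']
```

The returned gap list is exactly what a recommender would consume to suggest targeted exercises, which is the role the paper assigns to its structured knowledge graph.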
The technological advancement represented by Artificial Intelligence Generated Content (AIGC) is fundamentally transforming higher education, particularly in the field of Computer Science (CS). CS programming education, centered on courses such as Data Structures and Algorithms (DSA), faces both unprecedented opportunities and challenges in its traditional teaching model. This article systematically examines the current landscape, key technologies, teaching methodologies, practical outcomes, and core challenges of implementing AIGC as an auxiliary teaching tool in university programming courses. The analysis focuses on how AIGC technology reshapes teaching objectives, innovates content delivery, transforms instructional methods, and reconstructs evaluation systems across four dimensions of programming education in universities. Through comprehensive literature review and case analysis of EI conference papers, academic journals, and authoritative research reports from the past five years, this study examines the specific applications and implementation strategies of AIGC in facilitating abstract concept comprehension, enhancing programming skills, developing human-machine collaborative teaching models, and establishing automated evaluation systems. Research findings indicate that effective AIGC-assisted programming instruction exhibits key characteristics including human-machine collaboration through "teacher-teaching assistant + AI" frameworks, instantaneous feedback systems, and individualized learning pathways. AIGC technology is catalyzing shifts from knowledge transmission to skills development in teaching objectives, from standardized to personalized content delivery, from unidirectional lectures to interactive inquiry in teaching methods, and from outcome-focused to process-oriented evaluation approaches. However, significant challenges persist, including concerns regarding academic integrity, insufficient AI literacy among educators, outdated evaluation systems, and technological ethical considerations. AIGC's role in programming education has evolved from a supplementary tool to become an integral partner in transforming teaching paradigms. The establishment of a new "ternary" educational ecosystem - student-centered, teacher-led, and AI-supported - represents an inevitable progression. This research provides practical insights and theoretical framework for educators, researchers, and policymakers seeking to advance programming education in the AI era.
Large language models (LLMs) are becoming increasingly better at a wide range of Natural Language Processing (NLP) tasks, such as text generation and understanding. Recently, these models have extended their capabilities to coding tasks, bridging the gap between natural languages (NL) and programming languages (PL). Foundational models such as the Generative Pre-trained Transformer (GPT) and LLaMA series have set strong baseline performances in various NL and PL tasks. Additionally, several models have been fine-tuned specifically for code generation, showing significant improvements in code-related applications. Both foundational and fine-tuned models are increasingly used in education, helping students write, debug, and understand code. We present a comprehensive systematic literature review to examine the impact of LLMs in computer science and computer engineering education. We analyze their effectiveness in enhancing the learning experience, supporting personalized education, and aiding educators in curriculum development. We address five research questions to uncover insights into how LLMs contribute to educational outcomes, identify challenges, and suggest directions for future research.
The introductory programming sequence has been the focus of much research in computing education. The recent advent of several viable and freely-available AI-driven code generation tools presents several immediate opportunities and challenges in this domain. In this position paper we argue that the community needs to act quickly in deciding what possible opportunities can and should be leveraged and how, while also working on overcoming or otherwise mitigating the possible challenges. Assuming that the effectiveness and proliferation of these tools will continue to progress rapidly, without quick, deliberate, and concerted efforts, educators will lose advantage in helping shape what opportunities come to be, and what challenges will endure. With this paper we aim to seed this discussion within the computing education community.
Pair Programming is considered an effective approach to programming education, but the synchronous collaboration of two programmers involves complex coordination, making the method difficult to adopt widely in educational settings. Artificial Intelligence (AI) code-generation tools have outstanding capabilities in program generation and natural language understanding, creating conducive conditions for pairing with humans in programming, and some of the more mature tools are now gradually being put into practice. This review summarizes the current status of educational applications and research on AI-assisted programming technology. Through thematic coding of literature, existing research focuses on five aspects: underlying technology and tool introduction, performance evaluation, the potential impacts and coping strategies, exploration of behavioral patterns in technological application, and ethical and safety issues. A systematic analysis of current literature provides the following insights for future academic research related to the practice of “human-machine pairing” in programming: (1) Affirming the value of AI code-generation tools while also clearly defining their technical limitations and ethical risks; (2) Developing adaptive teaching ecosystems and educational models, conducting comprehensive empirical research to explore the efficiency mechanisms of AI-human paired programming; (3) Further enriching the application of research methods by integrating speculative research with empirical research, combining traditional methods with emerging technologies.
The recent emergence of LLM-based code generation models can potentially transform programming education. To pinpoint the current state of research on using LLM-based code generators to support the teaching and learning of programming, we conducted a systematic literature review of 21 papers published since 2018. The review focuses on (1) the teaching and learning practices in programming education that utilized LLM-based code generation models, (2) characteristics and (3) performance indicators of the models, and (4) aspects to consider when utilizing the models in programming education, including the risks and challenges. We found that the most commonly reported uses of LLM-based code generation models for teachers are generating assignments and evaluating student work, while for students, the models function as virtual tutors. We identified that the models exhibit accuracy limitations; generated content often contains minor errors that are manageable by instructors but pose risks for novice learners. Moreover, risks such as academic misconduct and over-reliance on the models are critical when considering integrating these models into education. Overall, LLM-based code generation models can be an assistive tool for both learners and instructors if the risks are mitigated.
In our rapidly evolving technological landscape, AI tools have gained substantial power and integration across various domains. Through interviews and surveys conducted at a University in the Netherlands, we investigated students’ perceptions of AI tools. Our results show that students generally have a positive attitude towards the adoption of AI technologies and feel that it enhances their learning experience. Furthermore, this research project examines the capabilities of AI-powered tools, namely GitHub Copilot and ChatGPT, in solving a variety of university-level assignments. By empirically evaluating the capabilities of these AI tools and offering insights to educators, this research project aims to assist them in designing programming exercises that encompass essential learning processes while accounting for students’ utilization of AI tools. The findings indicate that a majority of the exercises currently utilized by the examined university could be solved partially or entirely with the aid of these tools. This project highlights the importance of educators understanding the capabilities of AI tools, as well as students’ attitudes towards them, to effectively adapt their teaching methods and promote essential learning goals.
A number of approaches have been explored by scholars, drawing on individual as well as social learning theories. Cognitive load theory provides a valuable framework for examining the challenges faced by novice programmers and offers opportunities for improving instructional approaches; it has been selected as the underpinning theory for this study to provide a focus for theorizing the impact of AI program generators on learning to program. Cognitive load theory builds on the suggestion that human memory has two distinct areas, short-term working memory and long-term memory. Working memory is limited in capacity, whereas long-term memory is effectively unlimited.
Purpose: This study aims to systematically review the existing literature on using Generative Artificial Intelligence (Gen-AI) tools in programming education and assess their impact on educational processes. Method: In the study, the systematic review method was adopted, following the PRISMA 2020 flow diagram guidelines. As part of the literature review, the Web of Science, ACM, IEEE, Scopus, Springer Link, Google Scholar, and The Scientific and Technological Research Council of Türkiye - National Academic Network and Information Center (TÜBİTAK ULAKBİM) databases were searched. The studies on using Gen-AI tools in programming education were compiled based on the research questions. Findings: As a result of the literature review, 27 studies that met the specified criteria were analyzed. It was found that the majority of these studies were conducted with undergraduate students and generally focused on Python as the programming language. The most commonly used AI tool was ChatGPT. It was observed that a significant number of the studies reviewed focused on students' cognitive support gains, computational thinking skills, and their effects on their academic achievement and motivation. Highlights: The reviews revealed that there was a significant increase in the number of academic studies on the use of Gen-AI tools in programming education, especially in 2024. However, the fact that there are only three studies on this subject in Türkiye shows that there is a big gap in the local literature. In this respect, it is thought that local studies on integrating AI tools into programming education should be increased, and there is great potential in this field.
This innovative-practice full paper examines how the rapid evolution of artificial intelligence (AI) tools capable of generating code—exemplified by systems such as ChatGPT and GitHub Copilot—has instigated a pedagogical shift in how programming is taught and learned. As these AI tools become more deeply embedded in professional and academic practices, educators are compelled to reassess traditional pedagogical approaches and realign them with the demands of an AI-augmented future. This paper explores the profound implications of these developments for computer programming education, focusing on the challenges that instructors face and the promising opportunities for curricular innovation. While AI tools provide valuable learning assistance, they risk encouraging superficial understanding if students rely on them excessively. We begin by reviewing major challenges associated with the use of AI tools by students, such as the ease with which students can generate complete code solutions without engaging in the problem-solving process. The advent of code-generating AI also poses significant challenges in maintaining academic rigor and assessment fairness. We also cover a short review of strategies and processes that instructors are employing to combat this issue. Finally, we present our proposed innovative approach, which integrates AI into programming education by shifting the focus from merely producing code to designing and architecting software systems. This approach encourages students to use AI as a collaborative tool, fostering creativity, strategic thinking, and ethical reasoning. It requires greater emphasis on systems thinking and design, which can foster an entrepreneurial mindset.
As AI-assisted coding becomes standard in software development, computer science educators need a clearer understanding of how Large Language Models (LLMs) can support the learning process. Recent work has examined how students can benefit from using LLMs in their courses, but most studies rely on self-reported usage or controlled experiments with short, isolated programming tasks. To complement these approaches, this paper investigates how students organically leverage LLMs in an advanced computer science course where assignments reflect real-world complexity. We analyze 448 LLM chat logs from 147 students across two offerings of a senior-level web programming course at a large U.S. research university. Through open coding, we identified 14 distinct prompt–response pair types that cluster into three categories: to generate code, debug code, and explain programming concepts. Our analysis reveals that how students interact with LLMs correlates with academic performance. High-effort detailed specifications for code generation positively correlated with final grades (r = 0.25, p < 0.01), whereas low-effort behaviors such as pasting raw error messages showed negative correlations (r = −0.34, p < 0.01). We also observed a temporal shift toward explanation-oriented interactions, suggesting that students increasingly use LLMs as conceptual tutors and not just as code generators.
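The study above reports Pearson correlations between interaction styles and final grades (e.g., r = 0.25 for detailed specifications). As a minimal sketch of that statistic — not the study's own code, and using made-up effort scores and grades — Pearson's r can be computed from paired observations like so:

```python
# Illustrative sketch: Pearson product-moment correlation, the statistic
# the paper uses to relate prompting behavior to final grades.
# The effort/grade data below are hypothetical.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

effort = [3, 1, 4, 2, 5, 2, 4]      # hypothetical specification-effort scores
grades = [82, 70, 88, 75, 91, 72, 85]
r = pearson_r(effort, grades)       # positive: more effort, higher grade
```

A negative r, as with pasting raw error messages in the study (r = −0.34), would indicate the opposite association.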
… Sixteen percent of the studies aimed to explore perceptions of GenAI tools … courses in response to AI code generation tools [29], and the impact of GenAI on programming education …
… into programming education and its impact on preserving higher-order thinking skills and foundational programming … integration hinges on intentional teaching strategies, thoughtfully …
… Vaithilingam, P., Zhang, T., Glassman, E.L.: Expectation vs. Experience: Evaluating the usability of code generation tools powered by large language models. In: CHI Conference on …
ChatGPT is a large language model developed by OpenAI, built on the GPT-3.5 architecture, with the capacity to generate human-like responses to text-based inputs. ChatGPT serves various purposes, encompassing chatbots, customer service, and personal assistants, which can significantly contribute to sustainability initiatives. Its applications range from language translation and content creation to text summarization. Utilizing ChatGPT offers several advantages, notably its rapid response generation, high accuracy, and its capacity to evolve and improve over time, aligning with sustainability goals for efficiency and innovation. In an educational context, ChatGPT can provide invaluable support to students and educators, aiding in tasks such as generating summaries for extensive texts and addressing subject-related queries. For programming education, ChatGPT can assist students with coding assignments by offering suggestions, hints, and even generating code snippets, fostering sustainable coding practices. Nevertheless, employing ChatGPT in coding education presents challenges, particularly the risk of students becoming overly dependent on AI-generated code and failing to grasp fundamental concepts, which can hinder long-term sustainability in the field. To gauge the viability of ChatGPT in programming education and sustainability, we conducted a Likert scale questionnaire with a group of 40 Brazilian students from March to April 2023. Our primary goal was to assess students' interest in utilizing ChatGPT as a tool to face programming challenges and problems. Specifically, we aimed to determine their level of inclination towards relying exclusively on ChatGPT during programming classes. In addition to these objectives, we sought to discern not only the positive and beneficial perceptions of using ChatGPT in the classroom but also to investigate its potential impact on learning outcomes and student engagement.
Furthermore, we aimed to explore whether participants would consider transitioning to exclusive reliance on ChatGPT in the context of their programming education. Our study revealed that students recognized ChatGPT as an innovative set of AI tools applicable to various classroom contexts, including programming and computer languages, thereby fostering sustainability in the adoption of AI technology for educational purposes. Notably, a majority of students participating in the study expressed a keen interest in employing this tool as a supplementary educational resource in the classroom, promoting sustainable and enhanced learning experiences.
… of source code from natural language instructions, which is useful for assisting or generating code to support students, primarily in the early stages of programming courses within a …
In the dynamic field of programming education, integrating artificial intelligence (AI) tools has started to play a significant role in enhancing learning experiences. This paper presents a case study conducted during a foundational programming course for first-year students in higher education, where students were encouraged to utilize generative artificial intelligence programming copilot extensions in their programming IDE and browser-based generative AI tools as supportive AI tools. The primary objective was to observe the impact of AI on the learning curve and the overall educational experience. Key findings suggest that the introduction of AI tools significantly altered the learning experience for students. Many who initially struggled with grasping elementary programming concepts found that AI support made understanding basic programming concepts much easier, enhancing their confidence and skills. This was particularly evident in the reduced levels of anxiety typically associated with early programming learning, as the AI copilot provided a non-judgmental, always-available source for clarifying doubts, including queries that students might hesitate to ask in a traditional classroom setting. Notably, some students leveraged the AI to generate similar exercise problems, reinforcing their understanding and skills. The AI's capability to address basic queries also freed up the instructor's time, allowing for more personalized student guidance in more advanced problems. This shift in the instructional dynamic further contributed to a learning environment where students felt more comfortable engaging with complex topics, thereby reducing the psychological barriers often linked with early-stage programming education. The course's structure, enriched by AI, enabled students to delve into more complex programming constructs earlier than traditional curricula would allow.
For instance, students were tasked with simulating basic e-commerce operations, such as user registration, product browsing, and cart functionalities. These practical challenges naturally introduced advanced concepts like external data storage, unit testing, and user interface design, which are typically reserved for more advanced courses. With the help of generative AI programming copilot tools, students at any programming skill level were able to develop nearly functional complex structures. Interestingly, even when their projects were not fully functional, students remained motivated. Instead of feeling discouraged by these imperfect outcomes, they showed resilience and a keen interest in understanding and improving their code. This reaction is a significant shift from traditional learning settings, where unfinished or flawed projects often lead to increased anxiety or a drop in motivation. Furthermore, the AI's proactive suggestions inspired students to explore beyond the curriculum. Advanced learners delved into databases, cryptography libraries in Python, and even more advanced user interface design, ensuring that they remained engaged and challenged. This elementary course, enhanced by generative AI tools, also inspired students to learn other programming languages, since they discovered that individual learning is more accessible with the aid of generative AI. In conclusion, the integration of AI in programming education offers a promising avenue for enhancing both the learning experience and outcomes. This case study underscores the potential of AI to revolutionize traditional teaching methodologies, fostering a more dynamic, responsive, and inclusive learning environment. The paper presents the results, possibilities, and challenges of AI-empowered programming education, along with practical examples and future research perspectives.
The article presents a thorough examination of the potential applications of artificial intelligence (AI) in supporting the instruction of programming to future bachelor's degree students in vocational education. It explores the pivotal domains of AI integration into the educational process, encompassing the utilization of adaptive learning systems, intelligent tutoring systems, automated code evaluation systems, and generative models that enhance both the theoretical and practical training of students. It demonstrates the ways in which AI enhances the personalization of educational content, facilitates rapid feedback loops, and optimizes the verification process of software solutions. It has been determined that the integration of AI facilitates the creation of adaptive learning environments. In such environments, automated algorithms analyze test results, the history of students' interaction with educational materials, and the personal pace of information assimilation. Consequently, this facilitates the development of customized educational pathways that are tailored to the distinct characteristics of each student. The implementation of intelligent tutoring systems, such as ChatGPT, GitHub Copilot, or Google AI Studio based on Gemini, facilitates the elucidation of complex programming concepts, including the principles of recursion, sorting algorithms, and other fundamental principles. This, in turn, contributes to the cultivation of critical thinking and self-study skills. In the article, the authors analyze the challenges associated with the introduction of AI in the educational process. The primary challenges identified pertain to issues of academic integrity, particularly when future bachelors of vocational education employ AI capabilities to automatically generate solutions without a comprehensive grasp of the subject matter. 
Additionally, the article addresses technical limitations concerning the substantial computing resources required and the integration of contemporary algorithms into existing educational platforms. The article further underscores the necessity for specialized professional development programs to equip educators with the skills to effectively utilize AI in vocational education. Additionally, it emphasizes the establishment of ethical frameworks to guide the implementation of AI technologies in this context, ensuring that the principles of academic integrity are preserved and the integrity of the educational process is maintained. The authors of the article propose a number of recommendations and approaches to optimize the process of AI integration, create integrated learning environments, and improve existing assessment methods with regard to automated code verification. The findings of the study can be utilized to enhance pedagogical approaches in programming, to improve the quality of training in the field of information technology, and to promote the development of competitive graduates.
Teaching and learning in higher education require adaptation following students' inevitable use of AI chatbots. This study contributes to the empirical literature on students' use of AI …
With the advent of large language models like ChatGPT, there is interest in leveraging these tools as teaching assistants in higher education. However, important questions remain regarding the effectiveness and appropriateness of AI systems in educational settings. This study evaluated ChatGPT's potential as a teaching assistant for an introductory programming course. We conducted an experimental study where ChatGPT was prompted in response to common student questions and misconceptions from a first-year programming course. The study was conducted over a period of 2 weeks with 20 undergraduate students and 5 faculty members from the department of computer science. ChatGPT's responses were evaluated along several dimensions—accuracy, completeness, pedagogical soundness, and the ability to resolve student confusion—by five course faculty members through a survey. Additionally, another survey was administered to students in the course to assess their perception of ChatGPT's usefulness after interacting with the tool. The findings suggested that while ChatGPT demonstrated strengths in explaining introductory programming concepts accurately and completely, it showed weaknesses in resolving complex student confusion, adapting responses to individual needs, and providing tailored debugging assistance. This study highlighted key areas needing improvement and provided a basis to develop responsible integration strategies that harness AI to enrich rather than replace human instruction in technical courses. The results, based on the limited sample size and study duration, indicated that ChatGPT has potential as a supplemental teaching aid for core concepts, but also highlighted areas where human instruction may be particularly valuable, such as providing advanced support. Further research with larger samples and longer study periods is needed to assess the generalizability of these findings.
In the context of rapid integration of artificial intelligence (AI) into all spheres, including education, understanding its impact on the learning process, especially in the field of programming, is of paramount importance. The aim of this paper was to identify and evaluate the impact of AI on the formation of students' programming competences and to develop recommendations for optimising its application in the academic environment. The study was conducted using empirical methods, including an anonymous questionnaire survey of second-year students at two leading universities. The study found that 100% of surveyed students actively used AI in the learning process, with a significant majority (78.5%) regularly interacting with AI applications, indicating a high degree of dependence, exacerbated by insufficient fundamental training and limited access to paid resources. It was found that students often perceive AI outputs uncritically and tend to focus on getting ready-made code without delving into understanding the algorithms, potentially leading to a loss of autonomy and errors. Despite these challenges, AI is showing significant benefits such as personalising learning and increasing efficiency and engagement, positioning itself as a powerful support tool for educators. The data obtained also indicate insufficient differentiation of the concepts of "AI" and "neural networks" among the respondents, which emphasises the need for deeper theoretical training and development of analytical skills. The results of the study provided valuable information for teachers and educational programme developers, allowing them to adjust approaches to teaching programming, to strengthen the emphasis on critical thinking and independence, and to develop methods for effective integration of AI into the educational process, taking into account its advantages and limitations.
… -tools can boost innovation in teaching methods, allowing for example to create adaptive learning paths and real-time coding support. Professors can leverage these tools to more easily …
Objective: The Computer-Supported Collaborative Learning (CSCL) approach integrates artificial intelligence (AI) to enhance the learning process through collaboration and information and communication technologies (ICTs). In this sense, innovative and effective strategies could be designed for learning computer programming. This paper presents a systematic mapping study from 2009 to 2021, which shows how the integration of CSCL and AI supports the learning process in programming courses. Methodology: This study was conducted by reviewing data from different bibliographic sources such as Scopus, Web of Science (WoS), ScienceDirect, and repositories of the GitHub platform. It employs a quantitative methodological approach, where the results are represented through technological maps that show the following aspects: i) the programming languages used for CSCL and AI software development; ii) CSCL software technology and the evolution of AI; and iii) the ACM classifications, research topics, artificial intelligence techniques, and CSCL strategies. Results: The results of this research help to understand the benefits and challenges of using the CSCL and AI approach for learning computer programming, identifying some strategies and tools to improve the process in programming courses (e.g., the implementation of the CSCL approach strategies used to form groups, others to evaluate, and others to provide feedback); as well as to control the process and measure student results, using virtual judges for automatic code evaluation, profile identification, code analysis, teacher simulation, active learning activities, and interactive environments, among others. However, for each process, there are still open research questions. Conclusions: This work discusses the integration of CSCL and AI to enhance learning in programming courses and how it supports students' education process. 
No model integrates the CSCL approach with AI techniques, which allows implementing learning activities and, at the same time, observing and analyzing the evolution of the system and how its users (students) improve their learning skills with regard to programming. In addition, the different tools found in this paper could be explored by professors and institutions, or new technologies could be developed from them.
The MIPT School of Applied Mathematics and Computer Science conducts research on artificial intelligence and develops education in this field in Russia. Modern science and technology are developing so quickly that a person needs to constantly learn and acquire new skills. Therefore, MIPT develops educational courses at the school, academic, and corporate levels. Employers and scientific laboratories are more interested in employees' practical AI skills than in theoretical knowledge alone. For this reason, the MIPT School of Applied Mathematics and Computer Science conducts and develops new practice-oriented educational courses. Moreover, the Laboratory of Innovation at MIPT creates the HighVox platform, which will allow MIPT students to gain experience in solving real problems from Russian companies during their studies. The platform creates a digital trace of each student: competences, scientific interests, courses taken, completed projects, soft skills, etc. Based on the digital trace of each participant, the platform automatically creates recommendations for the projects and teams most suitable for the student. In the future, HighVox will become a place where technical specialists search for work, get an education (lifelong learning), communicate with colleagues on specialized topics, and offer their ideas for startups. As part of the creation of this platform, the Laboratory of Innovation conducts research in two directions: a model of human competence and the formation of effective teams based on hard and soft skills using artificial intelligence.
Software assistants have significantly impacted software development for both practitioners and students, particularly in capstone projects. The effectiveness of these tools varies based on their knowledge sources; assistants with localized domain-specific knowledge may have limitations, while tools, such as ChatGPT, using broad datasets, might offer recommendations that do not always match the specific objectives of a capstone course. Addressing a gap in current educational technology, this article introduces an AI Knowledge Assistant specifically designed to overcome the limitations of the existing tools by enhancing the quality and relevance of large language models (LLMs). It achieves this through the innovative integration of contextual knowledge from a local “lessons learned” database tailored to the capstone course. We conducted a study with 150 students using the assistant during their capstone course. Integrated into the Kanban project tracking system, the assistant offered recommendations using different strategies: direct searches in the lessons learned database, direct queries to a generative pretrained transformers (GPT) model, query enrichment with lessons learned before submission to GPT and large language model meta AI (LLaMa) models, and query enhancement with Stack Overflow data before GPT processing. Survey results underscored a strong preference among students for direct LLM queries and those enriched with local repository insights, highlighting the assistant's practical value. Furthermore, our linguistic analysis conclusively demonstrated that texts generated by the LLM closely mirrored the linguistic standards and topical relevance of university course requirements. This alignment not only fosters a deeper understanding of course content but also significantly enhances the material's applicability to real-world scenarios.
This study investigates the impact of AI-assisted pair programming on undergraduate students’ intrinsic motivation, programming anxiety, and performance, relative to both human–human pair programming and individual programming approaches. A quasi-experimental design was conducted over two academic years (2023–2024) with 234 undergraduate students in a Java web application development course. Intact class sections were randomly assigned to AI-assisted pair programming (using GPT-3.5 Turbo in 2023 and Claude 3 Opus in 2024), human–human pair programming, or individual programming conditions. Data on intrinsic motivation, programming anxiety, collaborative perceptions, and programming performance were collected at three time points using validated instruments. Compared to individual programming, AI-assisted pair programming significantly increased intrinsic motivation (p < .001, d = 0.35) and reduced programming anxiety (p < .001), producing outcomes comparable to human–human pair programming. AI-assisted groups also outperformed both individual and human–human groups in programming tasks (p < .001). However, human–human pair programming fostered the highest perceptions of collaboration and social presence, surpassing both AI-assisted and individual conditions (p < .001). Mediation analysis revealed that perceived usefulness of the AI assistant significantly mediated the relationship between the programming approach and student outcomes, highlighting the importance of positive perceptions in leveraging AI tools for educational benefits. No significant differences emerged between the two AI models employed, indicating that both GPT-3.5 Turbo and Claude 3 Opus provided similar benefits. While AI-assisted pair programming enhances motivation, reduces anxiety, and improves performance, it does not fully match the collaborative depth and social presence achieved through human–human pairing. 
These findings highlight the complementary strengths of AI and human interaction: AI support can bolster learning outcomes, yet human partners offer richer social engagement. As AI capabilities advance, educators should integrate such tools thoughtfully, ensuring that technology complements rather than replaces the interpersonal dynamics and skill development central to effective programming education.
… scalable natural-language debugging assistance. This paper … assistants and 80 introductory programming course (CS1) … current limitations of LLM-based debugging support in CS1. …
Tools based on the use of Large Language Models (LLMs) have improved the computer programming teaching process, automated feedback processes, facilitated program repair, and enabled personalized learning experiences. This research examines which LLM-based opportunities are applied in the computer programming teaching assessment process and how LLMs are applied to improve evaluation accuracy, their impact on student learning outcomes, and the challenges in scaling these technologies. Key opportunities arise from prompt engineering, which optimizes precision and the relevance of LLM-generated feedback, and feedback propagation techniques, which offer scalable solutions for large-scale programming courses. LLMs are also applied effectively in debugging assistance to detect and repair syntactic and semantic errors in student code. This review identifies several research directions, including prompt engineering refinement, improved feedback system scalability, and deeper exploration of the long-term educational impacts of LLMs. The study concludes that LLMs are effective in enhancing the assessment process, but a balanced approach combining human oversight with automated feedback is crucial to fostering critical thinking and ensuring long-term learning success in programming education.
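As a hedged sketch of the prompt engineering this review describes — wrapping a student submission in a rubric-aware prompt before sending it to an LLM — one might compose the prompt like this. The rubric items, function name, and overall shape are illustrative assumptions, not the review's own system, and the actual model call is omitted:

```python
# Hypothetical sketch of a rubric-aware formative-feedback prompt builder.
# Names and rubric wording are illustrative; no specific LLM API is assumed.

def build_feedback_prompt(student_code: str, task: str) -> str:
    """Compose a formative-feedback prompt for an LLM tutor/grader."""
    rubric = [
        "Does the code solve the stated task?",
        "Point out syntactic or semantic errors, with line references.",
        "Suggest one improvement, but do not rewrite the full solution.",
    ]
    return (
        f"You are a programming tutor. Task: {task}\n"
        "Give formative feedback on the student code below.\n"
        + "\n".join(f"- {item}" for item in rubric)
        + f"\n\nStudent code:\n{student_code}"
    )

# A buggy submission: subtracts instead of adding.
prompt = build_feedback_prompt("def add(a, b): return a - b", "add two numbers")
```

Constraining the model to hints rather than full rewrites is one common way such systems try to preserve the critical-thinking goal the review emphasizes.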
The rise of Large Language Model (LLM)-based tools such as ChatGPT, GitHub Copilot, Cursor, and GitHub Copilot Agent is transforming how software is developed. These tools function as intelligent collaborators, capable of generating code, explaining concepts, detecting bugs, and accelerating development workflows. As the software industry increasingly demands graduates who are proficient with modern development environments and tools, ranging from debuggers and version control systems to AI-driven assistants, it becomes essential to incorporate LLM-based tools into computer science education. This article summarizes the most prominent LLM-assisted programming tools. Through selected examples of programming problems, it demonstrates the practical applications of LLM-based assistants and agents. By engaging with these tools, computer science students can prepare for a future in which AI will be an integral part of professional software development.
Novice programmers benefit from timely, personalized support that addresses individual learning gaps, yet the availability of instructors and teaching assistants is inherently limited. Large language models (LLMs) present opportunities to scale such support, though their effectiveness depends on how well technical capabilities are aligned with pedagogical goals. This survey synthesizes recent work on LLM applications in programming education across three focal areas: formative code feedback, assessment, and knowledge modeling. We identify recurring design patterns in how these tools are applied and find that interventions are most effective when educator expertise complements model output through human-in-the-loop oversight, scaffolding, and evaluation. Fully automated approaches are often constrained in capturing the pedagogical nuances of programming education, although human-in-the-loop designs and course-specific adaptation offer promising directions for future improvement. Future research should focus on improving transparency, strengthening alignment with pedagogy, and developing systems that flexibly adapt to the needs of varied learning contexts.
Large language models (LLMs) show great potential in programming learning. However, existing studies mainly focus on technical implementations and lack a systematic analysis of the application of LLMs in programming learning from an educational perspective. This study conducts a systematic literature review and bibliometric analysis based on 75 high-quality papers, using a 6-D framework (roles, technology, learners, environment, effectiveness, and challenges) to examine the current state and agenda of LLM applications. The results indicate that the application of LLMs has evolved from model validation in 2022 to teaching applications in 2023 and is expected to be deeply integrated into the education system by 2024–2025, reflecting a shift from tools to teaching agents. In programming learning, LLMs primarily take on roles in resource generation, task solving, and feedback provision. In terms of technology usage, OpenAI’s series of models dominate, with Python being the main programming language environment, and research subjects focusing on beginner programmers and university students. Empirical studies show that LLMs can effectively enhance learners’ cognitive outcomes and noncognitive performance, but they can also lead to overreliance on tools, academic integrity risks, and ethical challenges. Future research should establish an education theory-driven design framework for LLMs, conduct studies on generative artificial intelligence literacy and ethical norms, and provide theoretical and practical guidance for programming learning.
… of applying emerging technologies in automation, decision-making, and … for teaching LLMs in a course focused on Industry 4.0, … The goal is to prepare students to develop LLM-based …
… This motivated us to find automated ways to filter a larger amount of debugging episodes to … an introductory computer science course. We proposed an LLM-based filtering approach that …
The rapid adoption of Generative AI as a coding assistant in programming education raises critical pedagogical questions regarding its impact on learning quality. This study investigates whether the use of Generative AI in a deep learning practicum enhances students' code quality and conceptual understanding or merely improves productivity without meaningful comprehension. A quasi-experimental pretest–posttest control group design was employed involving 60 undergraduate students enrolled in a Deep Learning course. The experimental group (n = 30) used Generative AI tools (ChatGPT/GitHub Copilot) during practicum sessions, while the control group (n = 30) relied on conventional resources. Instruments included a validated conceptual understanding test (α = 0.87) and an analytic code quality rubric based on ISO/IEC 25010 standards (κ = 0.82). Data were analyzed using independent samples t-tests and MANOVA at α = 0.05. Results show that the experimental group achieved significantly higher posttest conceptual scores (M = 78.63) than the control group (M = 72.10), t(58) = 3.34, p = 0.001, d = 0.86. Code quality scores were also significantly higher (20.77 vs. 18.12 out of 25), t(58) = 4.57, p < 0.001, d = 1.18. MANOVA confirmed a significant combined effect (Wilks' Λ = 0.71, p < 0.001). The study was limited to a single institution and a six-week intervention period, which may restrict generalizability and long-term interpretation. This research provides controlled experimental evidence demonstrating that Generative AI can enhance both technical code quality and conceptual mastery in deep learning education, contributing empirical guidance for responsible AI integration in computing curricula.
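The effect sizes above can be sanity-checked from the reported statistics: for two independent groups, Cohen's d follows from the t statistic as d = t·√(1/n₁ + 1/n₂). This short check (not from the paper's own materials, but using only its reported t values and group sizes) reproduces both reported effect sizes:

```python
# Arithmetic check: recover Cohen's d from a reported independent-samples
# t statistic and the two group sizes (d = t * sqrt(1/n1 + 1/n2)).
import math

def cohens_d_from_t(t, n1, n2):
    """Effect size d implied by an independent-samples t statistic."""
    return t * math.sqrt(1 / n1 + 1 / n2)

# Using the study's reported values with n1 = n2 = 30:
d_concept = cohens_d_from_t(3.34, 30, 30)   # conceptual scores; reported d = 0.86
d_quality = cohens_d_from_t(4.57, 30, 30)   # code quality; reported d = 1.18
```

Both computed values round to the effect sizes the abstract reports, so the statistics are internally consistent.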
AI coding assistants (ACATs) are reshaping computer science (CS) education, yet students’ perceptions of and responses to ACATs’ suggestions remain poorly understood, especially regarding behavioral patterns, decision-making, and usability challenges. To address this gap, we conducted a study with 27 CS students, examining their interactions with three widely used ACATs across five key dimensions: interaction frequency and acceptance rate, self-perceived productivity, behavioral patterns, decision-making factors, and challenges and expectations. To support this investigation, we developed an experimental platform incorporating a VSCode extension for log data collection, screen recording, and automatic generation of personalized interview and survey questions. Our findings reveal substantial variation in ACAT acceptance rates depending on task types, recommendation methods, and content. We propose a novel five-layer interaction behavior model that captures different stages of user interaction. Notable insights include the problem-solving value of rejected AI suggestions, the inefficiencies introduced by modifying existing code that often lead to backtracking, and the high stability of “slowly accepted” suggestions. Moreover, we identify 22 decision-making factors, 11 challenges, and 23 student expectations for future ACAT improvements—such as enhanced debugging accuracy and adaptive learning of individual coding styles. This study contributes actionable design implications for improving ACAT usability, informing student interaction strategies, and guiding future research in human-software interaction, ultimately aiming to better support CS education.
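The acceptance-rate dimension of such a study reduces to simple aggregation over interaction logs. A sketch of that computation, with an invented event schema (the field names and events are illustrative assumptions, not the study's actual log format):

```python
from collections import Counter

# Hypothetical log events as an extension like the study's might record them.
events = [
    {"task": "boilerplate", "action": "accept"},
    {"task": "boilerplate", "action": "accept"},
    {"task": "algorithm",   "action": "reject"},
    {"task": "algorithm",   "action": "modify"},
    {"task": "refactor",    "action": "accept"},
]

def acceptance_rate_by_task(events):
    """Per-task acceptance rate: accepted suggestions / all suggestions."""
    totals, accepts = Counter(), Counter()
    for e in events:
        totals[e["task"]] += 1
        if e["action"] == "accept":
            accepts[e["task"]] += 1
    return {t: accepts[t] / totals[t] for t in totals}

print(acceptance_rate_by_task(events))
# {'boilerplate': 1.0, 'algorithm': 0.0, 'refactor': 1.0}
```

Even this toy breakdown mirrors the paper's finding that acceptance varies sharply by task type.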
… This paper scrutinizes the impact of AI coding assistants in … disruptive influence of AI on traditional educational methods, … explored the complex impact of AI coding assistants in computer …
This systematic review and meta-analysis investigates the impact of artificial intelligence (AI) tools, including ChatGPT 3.5 and GitHub Copilot, on learning outcomes in computer programming courses. A total of 35 controlled studies published between 2020 and 2024 were analysed to assess the effectiveness of AI-assisted learning. The results indicate that students using AI tools outperformed those without such aids. The meta-analysis findings revealed that AI-assisted learning significantly reduced task completion time (SMD = −0.69, 95% CI [−2.13, −0.74], I² = 95%, p = 0.34) and improved student performance scores (SMD = 0.86, 95% CI [0.36, 1.37], p = 0.0008, I² = 54%). However, AI tools did not provide a statistically significant advantage in learning success or ease of understanding (SMD = 0.16, 95% CI [−0.23, 0.55], p = 0.41, I² = 55%), with sensitivity analysis suggesting result variability. Student perceptions of AI tools were overwhelmingly positive, with a pooled estimate of 1.0 (95% CI [0.92, 1.00], I² = 0%). While AI tools enhance computer programming proficiency and efficiency, their effectiveness depends on factors such as tool functionality and course design. To maximise benefits and mitigate over-reliance, tailored pedagogical strategies are essential. This study underscores the transformative role of AI in computer programming education and provides evidence-based insights for optimising AI-assisted learning.
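The I² values quoted throughout this abstract quantify between-study heterogeneity and are derived from Cochran's Q via Higgins' formula, I² = max(0, (Q − df)/Q) × 100. A small sketch with invented inputs (the Q and k values below are illustrative, not taken from the review):

```python
def i_squared(q: float, k: int) -> float:
    """Higgins' I² heterogeneity statistic from Cochran's Q over k studies."""
    df = k - 1
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Illustrative: Q = 20 across k = 10 studies gives I² = 55%,
# i.e. moderate heterogeneity, comparable to the 54-55% reported
# for the performance and ease-of-understanding outcomes.
print(round(i_squared(20.0, 10), 1))  # 55.0

# When Q falls below its degrees of freedom, I² is clamped to zero,
# as in the review's perception outcome (I² = 0%).
print(i_squared(5.0, 10))  # 0.0
```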
Artificial Intelligence Integration in Programming Education: Implications for Pedagogy and Practice
… of AI technologies in programming education and elucidates their pedagogical implications. With … This integration enables students to leverage AI assistance effectively, enhancing their …
Providing timely and personalized feedback to large numbers of students is a long-standing challenge in programming courses. Relying on human teaching assistants (TAs) has been extensively studied, revealing a number of potential shortcomings. These include inequitable access for students with low confidence when needing support, as well as situations where TAs provide direct solutions without helping students to develop their own problem-solving skills. With the advent of powerful large language models (LLMs), digital teaching assistants configured for programming contexts have emerged as an appealing and scalable way to provide instant, equitable, round-the-clock support. Although digital TAs can provide a variety of help for programming tasks, from high-level problem solving advice to direct solution generation, the effectiveness of such tools depends on their ability to promote meaningful learning experiences. If students find the guardrails implemented in digital TAs too constraining, or if other expectations are not met, they may seek assistance in ways that do not help them learn. Thus, it is essential to identify the features that students believe make digital teaching assistants valuable. We deployed an LLM-powered digital assistant in an introductory programming course and collected student feedback (n=813) on the characteristics of the tool they perceived to be most important. Our results highlight that students value such tools for their ability to provide instant, engaging support, particularly during peak times such as before assessment deadlines. They also expressed a strong preference for features that enable them to retain autonomy in their learning journey, such as scaffolding that helps to guide them through problem-solving steps rather than simply being shown direct solutions.
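The "scaffolding over solutions" preference students reported can be made concrete as a guardrail on the prompt sent to the underlying LLM: help escalates gradually with repeated attempts but never reaches a full solution. The policy text and escalation levels below are illustrative assumptions, not the deployed tool's actual configuration:

```python
# Hint levels, weakest first; a real deployment would tune these per course.
SCAFFOLD_LEVELS = [
    "Restate the problem and ask the student what they have tried.",
    "Point to the relevant concept or documentation, without code.",
    "Give a small hint about the next problem-solving step.",
    "Show a partial example, but never the full solution.",
]

def build_ta_prompt(student_question: str, attempts_so_far: int) -> str:
    """Escalate help gradually: more attempts unlock slightly stronger hints."""
    level = min(attempts_so_far, len(SCAFFOLD_LEVELS) - 1)
    return (
        "You are a programming teaching assistant. "
        f"Policy for this reply: {SCAFFOLD_LEVELS[level]} "
        "Do not provide a complete solution.\n"
        f"Student question: {student_question}"
    )

print(build_ta_prompt("My loop never terminates", attempts_so_far=1))
```

The key design point, echoed in the student feedback, is that autonomy is preserved by construction: the strongest available response is still a partial example.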
The rise of AI coding assistants, like GitHub Copilot, is transforming software development. These tools promise productivity gains and support for various tasks, from code generation to explaining legacy systems. However, their integration into education raises pedagogical challenges: How can we harness their potential without compromising students' autonomy and mastery of fundamental concepts? How can we stay critical of their limitations? This paper explores these questions through a structured educational experience. It is based on a guided tutorial designed to confront students with the assistants' limitations. A simultaneous questionnaire is provided to allow students to take the necessary time to thoroughly analyze the assistant's responses. The tutorial has three stages, introducing students to real-world scenarios of increasing complexity: getting started, implementing business rules, and working on a legacy project. This approach helps develop skills such as critical thinking, prompt refinement, and error correction. The results also show that well-supervised use of coding assistants can enhance teaching in testing and working with legacy code. They help students overcome initial roadblocks and encourage them to think about best practices. Furthermore, students themselves highlight the importance of supervising these tools to maintain their autonomy.
Algorithmic thinking refers to students’ abilities and skills in understanding problems, formulating strategies, and creating algorithms. This study explores the integration of large language model-driven AI programming assistants into the cultivation of university students’ algorithmic thinking, and examines the effect of AI assistants on university students’ algorithmic thinking and self-efficacy. The results indicate that AI assistants, equipped with functions such as timely feedback, personalized support, emotional companionship, and cognitive scaffolding, exert a significantly positive effect on students’ algorithmic thinking and self-efficacy. This effect shows no differences across genders or learning styles. While AI assistants demonstrated clear advantages in areas such as real-time feedback, problem diagnosis, and generating diverse solutions, students perceived that teachers retained a crucial and complementary role in guiding goals and values, constructing knowledge frameworks, providing emotional support, and offering holistic learning guidance. Based on these perceptions, we propose that teachers and AI assistants can collaborate to form a collaborative model for cultivating algorithmic thinking. This study provides practical guidance for applying AI assistants in programming or algorithm courses to enhance students’ algorithmic thinking, and offers valuable insights for designing human-machine collaborative teaching methods and optimizing AI assistants.
… This study uses a concurrent mixed-methods research design, specifically a convergent parallel approach, to examine the educational impact of AI coding assistants (e.g., ChatGPT, …
This study critically evaluates the efficacy of GitHub Copilot in low-level programming education, specifically within C programming tasks involving complex concepts like memory management and pointer manipulation. While AI tools have shown promise in supporting high-level programming, their impact on skill-intensive, low-level contexts remains underexplored. We conducted a within-subject experimental study with 34 graduate computer science students, assessing performance on AI-assisted and independent tasks. Statistical analyses revealed that Copilot, one of the AI programming tools, enhances productivity in routine coding activities; however, it is insufficient for tasks requiring deep problem-solving skills. Notably, a significant performance decline in AI-free tasks suggests a dependency on Copilot that may hinder the development of essential independent problem-solving abilities. Survey feedback underscores ethical concerns, with 40.6% of students expressing uncertainty about responsible AI usage and potential over-reliance. These findings highlight the necessity for structured instructional practices, including AI-free assessments and clear ethical guidelines, to promote balanced technology integration in programming education. This study contributes to educational theory by illuminating the limitations of generative AI within constructivist and self-regulated learning frameworks. Future research should explore the long-term effects of AI dependency on technical skill development and investigate AI advancements tailored for low-level programming to better support foundational skills.
There is often a shorter, simpler way to write good code and, likewise, a longer and more comprehensive one. Imagine a tool that takes advantage of deep machine learning algorithms to surface the most appropriate code and place it in a drop-down menu for a programmer to select. In recent years, the use of AI-powered programming tools has grown in leaps and bounds and gained much attention. Although AI has been around for decades, it is still uncertain for educators how to take instructive advantage of it on a larger scale and how it can have a profound effect on teaching and learning in tertiary education. This article investigates the effectiveness of AI-powered code assistant tools in learning programming. For the research, two leading coding platforms were selected (Eclipse and VS Code) and a selected AI-powered code assistant tool (Tabnine) was installed. After training the research participants, an experiment was carried out in which all 40 participants were given two tests, one on an AI-assistant-enabled coding platform and the other on a non-AI-assistant-enabled coding platform. Results indicate that AI coding tools significantly increase students’ coding efficiency and general motivation to code. Results also show that AI code assistant tools do not affect participants’ code comprehension. From these results, it is recommended that AI code assistant tools be incorporated to aid students in developing a positive attitude towards programming and to improve their coding efficiency.
… various educational fields, most notably in computing education. AI-based coding assistants (AI-… study examines the impact of utilitarian and hedonic values on the adoption of AI-CAs by …
… Overall, this study provides new empirical evidence in educational technology and learning sciences, … The variable AI-Usage was a dummy variable coded as 1 if a student used the …
This paper explores how Generative AI can be incorporated into software development education. We present examples of formative and summative assessments, which explore various aspects of ChatGPT, including its coding capabilities and its ability to construct arguments, as well as ethical issues of using ChatGPT and similar tools in education and the workplace. Our work is inspired by the insights from surveys that show that the learners on our Degree Apprenticeship Programme have a great interest in learning about and exploiting emerging AI technology. Similarly, our industrial partners have a clear interest for their employees to be formally prepared to use GenAI in their software engineering roles. In this vein, it is proposed that embedding the use of GenAI tools in a careful and creative way - by developing assessments which encourage learners to critically evaluate AI output - can be beneficial in helping learners understand the subject material being taught without the risk of the AI tools “doing the homework”.
The transformative influence of generative artificial intelligence (AI), notably large language models (LLMs), has significantly reshaped the software engineering (SE) landscape, impacting various aspects of software development within industry and academia. The imperative to integrate generative AI into educational programs arises from the necessity to furnish graduates with contemporary methodologies that enhance software quality and streamline development processes. Nevertheless, a research gap exists concerning the systematic integration of established SE education guidelines with specific course contexts to strengthen SE education through incorporating generative AI. In response to this gap, our study presents a vision for integrating generative AI into SE education, with a particular emphasis on practical integration strategies aimed at endowing students with essential competencies tailored for contemporary software development. Aligning our vision with the knowledge domains within SE education, we delineate its application across specific areas such as code generation, auto test case completion, and others. The overall objective of these proposed initiatives is to furnish students in SE with an updated and immersive learning experience, thereby addressing the evolving demands of the field.
Software engineering education must be guided by developments in software engineering and aim for professional and educational development of students. This position paper explores the historical and conceptual evolution of software engineering as a discipline central to modern integrated design, software systems, and process science, tracing the trajectory from early structured models, through waterfall, to Agile and AI-augmented paradigms, and looks at how changes in application mix and modes of use, technological complexity, and business and enterprise needs have shaped development methodologies. Special attention is paid to the role of generative AI on the software engineering process and the role of the software engineer, altering workflows, collaboration, and enterprise architectures, and further enabling automation, digital transformation, and autonomous systems, shifting responsibilities and skill sets upwards. The article then considers corresponding shifts in software engineering education, highlighting the need for curricula that intertwine process thinking, systems theory, and ethical engagement, integrate with generative AI technologies, and place greater emphasis on “soft skills”, in courses that combine development of professional expertise with conceptual capabilities and an enhanced capacity for life-long learning. It then presents a set of requirements, options, and guidelines for software engineering education in the era of generative AI.
… generative AI has the potential to revolutionize the classroom experience along the informatics engineering … semester of the bachelor's degree. Our primary objective has been to apply …
Generative Artificial Intelligence (Gen-AI) has revolutionized software engineering (SE) by automating tasks across design, coding, and testing [1] [2]. Tools like ChatGPT and GitHub Copilot streamline code generation, architectural modeling, debugging, and test-case creation [3] [4]. Despite their rapid adoption in industry, the pedagogical implications of these tools in computing education have not been systematically examined. This study addresses this gap by conducting a comprehensive benchmarking study of Gen-AI tools across four core SE phases—design documentation, feature implementation, debugging support, and testing—to address two research questions: RQ1: What strengths and limitations do Gen-AI tools exhibit in each phase? RQ2: How can insights from benchmarking inform effective integration of Gen-AI into SE curricula? To answer these questions, a diverse set of Gen-AI tools is evaluated, ranging from design-focused assistants such as Lucidchart, Mermaid.js and Uizard; implementation-oriented systems including GitHub Copilot, TabNine, Codeium and Supermaven; debugging supports like GPT-4 and Claude 3.5 Sonnet; and testing frameworks such as Testim, Mabl and Applitools—while also surveying emerging platforms (as of summer 2024) like Replit, Postman, Visily, Gemini, Eraser.io and others. For each tool and development phase, we applied phase-specific metrics: in design documentation, we assessed diagram accuracy, completeness, user effort, and IDE integration; in feature implementation, we measured pattern-based code generation quality, code-completion effectiveness, refactoring robustness, and UI/UX scaffolding; in debugging, we evaluated error-detection accuracy, hallucination rates, and clarity of explanatory feedback; and in testing, we examined test-case relevance and defect-detection coverage. Across all phases, we tracked prompt engineering complexity as a key mediating factor influencing tool performance.
Our evaluation reveals speed-fidelity trade-offs: code-completion assistants accelerate boilerplate generation but demand manual oversight to ensure cross-file consistency and manage higher-order abstractions; diagramming tools can produce precise UML models with minimal effort, but at the cost of iterative prompt refinement for complex cases; LLM debuggers deliver context-sensitive fixes yet suffer from nontrivial hallucination rates; testing generators exhibit wide variance in edge-case coverage. On average, tools needed 2.4 prompt iterations for usable diagrams and 1.5 prompts for bug fixes, underscoring the human effort in guiding AI. We recommend a scaffolded framework for integrating Gen-AI into SE education: by embedding AI tools into hands-on assignments to explore tasks in a controlled context; by structuring small team projects in which one subgroup uses AI assistants while the other completes the same tasks manually (covering design, implementation, debugging and testing) to surface contrasts in workflow, tool strengths, and human reasoning; by requiring students to maintain a reflective journal documenting their AI usage and prompt-engineering strategies, fostering metacognitive insight into how tool inputs shape outputs; and by equipping learners with decision-making criteria, teaching them to evaluate AI assistants according to task fit, preparing them to leverage AI responsibly across SE phases in its evolving landscape.
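Headline numbers like "2.4 prompt iterations for diagrams" are straightforward per-phase averages over trial records. A sketch of that aggregation, with invented data (the records below illustrate the computation and are not the study's raw trials):

```python
from statistics import mean

# Hypothetical per-trial records in the spirit of the benchmark.
trials = [
    {"phase": "design",    "prompt_iterations": 3},
    {"phase": "design",    "prompt_iterations": 2},
    {"phase": "debugging", "prompt_iterations": 1},
    {"phase": "debugging", "prompt_iterations": 2},
]

def mean_iterations_by_phase(trials):
    """Average prompt iterations per SE phase across all trials."""
    by_phase = {}
    for t in trials:
        by_phase.setdefault(t["phase"], []).append(t["prompt_iterations"])
    return {phase: mean(vals) for phase, vals in by_phase.items()}

print(mean_iterations_by_phase(trials))
# {'design': 2.5, 'debugging': 1.5}
```

The same record shape extends naturally to the other proxy metrics the study tracks, such as correction burden or hallucination counts.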
This position paper discusses the potential for using generative AIs like ChatGPT in software engineering education. Currently, discussions center around potential threats emerging from students' use of ChatGPT. For instance, generative AI will limit the usefulness of graded homework dramatically. However, there exist potential opportunities as well. For example, ChatGPT's ability to understand and generate human language allows providing personalized feedback to students, and can thus accompany current software engineering education approaches. This paper highlights the potential for enhancing software engineering education. The availability of generative AI will improve the individualization of education approaches. In addition, we discuss the need to adapt software engineering curricula to the changed profiles of software engineers. Moreover, we point out why it is important to provide guidance for using generative AI and, thus, to integrate it into courses rather than accept unsupervised use by students, which can negatively impact the students' learning.
In the era of generative artificial intelligence (GenAI), educators and students face both practical opportunities and pressing challenges for software engineering (SE) education. GenAI powered by large language models is capable of completing complex software development tasks, becoming increasingly integrated into the development process, and is reshaping how students can learn to design, develop, and test software systems. With the advancement of GenAI, it is important to understand how students and educators perceive its role, benefits, and challenges in educational contexts. Also, many existing GenAI tools are adapted from general-purpose models without considering how they align with curriculum goals, cognitive development, or instructional strategies in SE courses. To address these problems, my research examines these emerging challenges and opportunities in SE education by: (1) understanding the perceptions, practices, and expectations of students and instructors regarding GenAI; (2) exploring how intelligent systems powered by GenAI can align with pedagogical goals and support active, collaborative and engaging learning environments; (3) investigating how GenAI could reshape SE knowledge areas and developing a validated concept inventory to guide curriculum design and assessment. Through a combination of empirical studies, system development, data analysis, and concept inventory development, my research contributes both insights and practical frameworks to guide the pedagogical integration of GenAI in SE education.
… her undergraduate degree in Computer Engineering at the … in engineering education discussing the use of Generative AI tools, … in engineering education”, “Generative AI”, “Generative …
The dynamic cybersecurity threats in specialized domains like space systems create significant challenges for educational content development. Traditional curriculum development struggles to keep pace with dynamic industry requirements, taking months to develop and requiring large expert teams. This paper introduces a systematic framework that integrates Generative AI (GenAI) with established instructional design principles to rapidly develop domain-specific cybersecurity training. Our framework combines Retrieval Augmented Generation (RAG) with the Analysis, Design, Development, Implementation, and Evaluation (ADDIE) model, enabling requirements-driven curriculum development that translates industry stakeholder interviews and job descriptions directly into comprehensive educational materials. We demonstrate framework feasibility through Space Information Systems Security Officer (ISSO) curriculum development, generating 500+ domain-specific Knowledge, Skills, and Tasks (KSTs), modular lectures, 886 assessment questions, and gamified exercises within hours rather than months. Market research with 33 industry professionals revealed critical gaps in existing training frameworks, with 82% emphasizing soft skills and 67% requiring holistic system understanding beyond traditional cybersecurity domains. Our proof-of-concept implementation with 5 completing participants showed promising results with test scores improving from 73.9% to 92.1%, though larger validation studies are needed. The systematic framework addresses identified gaps in current cybersecurity training approaches while providing a replicable methodology for other rapidly evolving technical domains requiring specialized workforce development.
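At the heart of a RAG pipeline like the paper's is a retrieval step that ranks source documents against a requirement before generation. A toy sketch using bag-of-words cosine similarity; the documents and query are invented, and a production system would use dense embeddings and a vector store rather than word counts:

```python
import math
import re
from collections import Counter

def bow(text: str) -> Counter:
    """Lowercased bag-of-words term counts."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented curriculum sources and a stakeholder requirement.
corpus = {
    "crypto_doc": "space link encryption and ground station key management",
    "soft_doc":   "communication and soft skills for security officers",
}
query_vec = bow("ground station encryption requirements")
ranked = sorted(corpus, key=lambda d: cosine(bow(corpus[d]), query_vec),
                reverse=True)
print(ranked[0])  # 'crypto_doc' ranks first for this query
```

The retrieved passages would then be injected into the generation prompt, which is what lets the framework ground lecture and assessment content in stakeholder-specific material.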
Since the release of diverse generative AI (GenAI) tools such as ChatGPT, Google Gemini, DALL·E, and GitHub Copilot, there has been much debate around the impacts and implications of these tools on education. Currently, extant literature remarks on the affordances, challenges, and opportunities of GenAI, but few studies report and analyze empirical evidence and educational practices arising from GenAI usage in learning settings. In this Scoping Review (ScR) based on 146 studies retrieved from the databases SCOPUS, Web of Science (WoS), and ERIC, we analyzed the implications of integrating GenAI in engineering and computing education from K-12 to tertiary levels. We adopted an approach starting from the bibliometric features of the studies in terms of authors, cites, years, or cluster topics, and navigating to the identification of methodologies, strategies, AI literacy instruments and guidelines, learning outcomes, and students’ and teachers’ perceptions, among other features. We advocate that current educational practices in engineering and computing with GenAI can indicate a roadmap of its potentialities, uses, and risks from the standpoint of both teachers and students, and this could help us to create more reflexive methodologies that enhance the teaching-learning process based on the evidence. Our purpose with the outcomes and conclusions of this scoping review is to support educators, faculty members, and other stakeholders in engineering and computing education to co-create educational methodologies that articulate GenAI with curricula, AI literacy, and prompt engineering encompassing students’ learning domains such as cognitive, affective, or behavioral.
This pilot study focuses on allowing students to use generative artificial intelligence (AI) tools for their learning and assignments, and accordingly seeks to improve the assignments used to assess that learning. The course includes both formative and summative assessments, consisting of writing, quizzes, and presentations, and enrolls students at both the undergraduate and graduate levels. The study seeks to make the assessments sustainable across all teaching levels; the revised writing assessments would help assess students better. The assignments were tested on ChatGPT and Bard to check whether a student could obtain a passing grade using an AI tool.
The way software is developed is changing rapidly due to the general availability of generative AI tools. As a result, the software engineering education that is part of every computer science program needs to change. Especially in software engineering courses, such AI tools need to be used and practiced in a meaningful and useful way. The programming project is one such course at our university, and the curriculum will be expanded accordingly in the future. In this paper we describe our approach and a user study among the participants of the last programming project, in which we collected experiences with the use of current AI tools, in particular highlighting their usefulness and limitations. Our study focuses on identifying which aspects of the course students used AI tools for, evaluating successful applications, and uncovering remaining challenges.
… curricula to prepare students for the evolving landscape of AI … of GenAI methods into computer science courses, examining … In a case study, student projects on software engineering, we …
As generative AI (Gen AI) tools reshape software engineering (SE) workflows, educators are exploring how to meaningfully integrate them into computing education. This experience report presents a structured benchmarking of widely used AI tools -- such as GitHub Copilot, GPT-4, Codeium, Claude 3.5, Gemini 1.5, Supermaven, TabNine, Testim, Postman, Eraser.io, and Lucidchart AI -- across key SE phases: design, implementation, debugging, and testing. Tools were selected based on industry relevance, accessibility for students, and alignment with common SE tasks. Through controlled experiments conducted by five AI-experienced evaluators with matched exposure levels, we assessed tool performance using standardized prompts, counterbalanced task roles, and a range of proxy metrics -- including prompt iterations, task completion time, human correction burden, hallucination frequency, output accuracy, and cross-file consistency -- to capture both cognitive load and tool limitations. While AI tools accelerated tasks such as boilerplate generation and UML sketching, they exhibited challenges in test coverage quality, cross-file coherence, and reliability under complex prompts. We discuss educational implications, including managing cognitive load, aligning tools with task types, and explicitly teaching prompt refinement and verification strategies. The paper offers actionable guidance for instructors, curriculum-ready artifacts, and a roadmap for scaling AI integration in SE classrooms, while also noting key limitations to support replication and contextual adoption.
… On the other hand, AI-driven refactoring tools offer a solution by automating parts of this … analysis tools in assisting developers in interpreting code quality issues in an educational setting …
Design patterns are essential in software engineering, offering time-tested solutions to common software development problems. They enhance code maintainability, scalability, and efficiency. However, adopting design patterns presents significant challenges. Developers often face a steep learning curve, misconceptions about the applicability of design patterns, and resistance to change. When design patterns are not used, it can lead to increased technical debt, poor maintainability, and scalability issues. This paper comprehensively analyzes current trends in the usage of design patterns, the challenges faced in their adoption, and the problems resulting from their non-usage. It also explores how Artificial Intelligence (AI) can help mitigate these challenges. AI technologies, including machine learning and natural language processing, offer innovative solutions to promote the adoption of design patterns. These solutions include AI-driven code analysis, pattern recognition, automated refactoring, and intelligent code suggestions. Statistical analysis, case studies, and real-world examples are used to demonstrate AI's potential to transform software development practices. The findings suggest that AI not only facilitates the adoption of design patterns but also significantly enhances the overall software development process. AI-driven tools can analyze code to identify existing design patterns and recommend appropriate ones, thereby helping developers implement them more effectively. Automated refactoring tools can incorporate design patterns into code, improving maintainability and scalability while reducing manual effort. Additionally, AI systems can provide real-time code suggestions, assisting developers in making informed design decisions. By addressing the learning curve and misconceptions associated with design patterns, AI-powered educational tools can further promote their adoption. 
These tools offer interactive tutorials, code examples, and real-time feedback to help developers understand and apply design patterns correctly. This study contributes to the field by providing actionable insights and practical recommendations for leveraging AI in the adoption and implementation of design patterns. The findings pave the way for more robust and maintainable software systems, highlighting the significant role of AI in overcoming the challenges associated with design patterns. In conclusion, while the adoption of design patterns is fraught with challenges, AI technologies offer promising solutions to facilitate their use. By enhancing the overall software development process, AI can help create more maintainable, scalable, and efficient software systems, ultimately benefiting the entire software engineering field.
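The "AI-driven code analysis and pattern recognition" the paper describes ultimately rests on being able to inspect code structurally. A toy, purely rule-based illustration of the idea: flag classes that look Singleton-like (a class-level `_instance` attribute plus an overridden `__new__`) using Python's `ast` module. This is a heuristic sketch, not any specific tool's detection logic:

```python
import ast

SOURCE = '''
class Config:
    _instance = None
    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

class Plain:
    pass
'''

def find_singleton_like(source: str):
    """Return names of classes matching a simple Singleton heuristic."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ClassDef):
            has_instance_attr = any(
                isinstance(stmt, ast.Assign)
                and any(getattr(t, "id", None) == "_instance"
                        for t in stmt.targets)
                for stmt in node.body
            )
            has_new_override = any(
                isinstance(stmt, ast.FunctionDef) and stmt.name == "__new__"
                for stmt in node.body
            )
            if has_instance_attr and has_new_override:
                hits.append(node.name)
    return hits

print(find_singleton_like(SOURCE))  # ['Config']
```

ML-based recognizers generalize this idea, learning structural signatures from labeled codebases instead of hand-written rules, which is what makes recommendations and automated refactoring toward patterns feasible at scale.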
This report integrates theoretical and practical findings on AI in programming education, building a systematic framework that spans empirical evaluation, PBL model restructuring, teaching strategies and assessment, software engineering technique optimization, and research reviews. Together, these studies reveal a trend in AI-assisted programming education: an evolution from isolated, single-point technology applications toward human-AI collaborative project-based learning models, providing comprehensive theoretical support, from methodology to practical implementation, for addressing the complex challenges of programming education.