AI-Empowered UI/UX and Interaction Design: The Impact of Generative AI and LLMs on Design Processes, Collaboration, Creativity, and Interaction Methods
Theoretical Frameworks and Paradigms of Human-AI Co-Creation
This cluster focuses on the underlying theories, interaction models, and collaboration paradigms of human-AI co-creation, including how authority is divided and cooperation organized between human designers and AI in creative tasks.
- Designing Creative AI Partners with COFI: A Framework for Modeling Interaction in Human-AI Co-Creative Systems(Jeba Rezwana, M. Maher, 2022, ACM Transactions on Computer-Human Interaction)
- Towards Human-Centred AI-Co-Creation: A Three-Level Framework for Effective Collaboration between Human and AI(Mingyuan Zhang, Zhaolin Cheng, Sheung Ting Ramona Shiu, Jiachen Liang, Cong Fang, Zhengtao Ma, Le Fang, S. Wang, 2023, Computer Supported Cooperative Work and Social Computing)
- The study of human-AI Co-creation design under generative artificial intelligence: cognition, process, method, and outcome(Guodong Chen, Zehan Yu, Yuxin Xie, Zheng Liu, Chunyang Yu, 2025, Journal of Engineering Design)
- Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making(Shuai Ma, Qiaoyi Chen, Xinru Wang, Chengbo Zheng, Zhenhui Peng, Ming Yin, Xiaojuan Ma, 2024, Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems)
- A Systematic Review of Human-AI Collaboration in IT Support Services: Enhancing User Experience and Workflow Automation(Rakibul Hasan, 2025, American Journal of Interdisciplinary Studies)
- Establishing the importance of co-creation and self-efficacy in creative collaboration with artificial intelligence(Jack McGuire, David de Cremer, Tim Van de Cruys, 2024, Scientific Reports)
- How Does Human-AI Co-Creation Design Help Luxury Brands Win Consumers' Favor?(Ziling Wang, Minxue Huang, 2025, Journal of Consumer Behaviour)
- Human-AI Co-Creation Systems in Design and Art(S. Anasuri, K. K. Pappula, 2024, International Journal of AI, BigData, Computational and Management Studies)
- IdeationWeb: Tracking the Evolution of Design Ideas in Human-AI Co-Creation(Hanshu Shen, Lyukesheng Shen, Wenqi Wu, Kejun Zhang, 2025, Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems)
- Luminate: Structured Generation and Exploration of Design Space with Large Language Models for Human-AI Co-Creation(Sangho Suh, Meng Chen, Bryan Min, T. Li, Haijun Xia, 2023, Proceedings of the CHI Conference on Human Factors in Computing Systems)
- Modeling dialogue with mixed initiative in design space exploration(S. Datta, 2006, Artificial Intelligence for Engineering Design, Analysis and Manufacturing)
- Human-AI Co-Design and Co-Creation: A Review of Emerging Approaches, Challenges, and Future Directions(Nyasha Kadenhe, Mohamed Al Musleh, Allan Lompot, 2025, Proceedings of the AAAI Symposium Series)
- Human-AI Co-creation: Evaluating the Impact of Large-Scale Text-to-Image Generative Models on the Creative Process(Tommaso Turchi, S. Carta, Luciano Ambrosini, A. Malizia, 2023, Lecture Notes in Computer Science)
- AI Creativity and the Human-AI Co-creation Model(Zhuohao Wu, Danwen Ji, Kaiwen Yu, Xianxu Zeng, Dingming Wu, Mohammad Shidujaman, 2021, Lecture Notes in Computer Science)
- From Consumption to Co-Creation: A Systematic Review of Six Levels of AI-Enhanced Creative Engagement in Education(Margarida Romero, 2025, Multimodal Technologies and Interaction)
- The Mosaic of Human-AI Co-Creation: Emerging human-technology relationships in a co-design process with generative AI(Henriikka Vartiainen, Päivikki Liukkonen, Matti Tedre, 2023, no. G5FB8)
- Exploring Human-AI Collaboration in Agile: Customised LLM Meeting Assistants(Beatriz Cabrero‐Daniel, Tomas Herda, Victoria Pichler, Martin Eder, 2024, Lecture Notes in Business Information Processing)
- From Human-Human Collaboration to Human-Agent Collaboration: A Vision, Design Philosophy, and an Empirical Framework for Achieving Successful Partnerships Between Humans and LLM Agents(Bingsheng Yao, Chaoran Chen, A. Wang, Sherry Tongshuang Wu, T. Li, Dakuo Wang, 2026, Proceedings of the Extended Abstracts of the 2026 CHI Conference on Human Factors in Computing Systems)
- Designing for Collaboration: Visualization to Enable Human–LLM Analytical Partnership(Mai Elshehaly, R. Jianu, A. Slingsby, G. Andrienko, N. Andrienko, T. Rhyne, 2025, IEEE Computer Graphics and Applications)
- Towards integral human-machine system conception: From automation design to usability concerns(P. Ponsa, R. Vilanova, B. Amante, 2009, 2009 2nd Conference on Human System Interactions)
- A model for types and levels of human interaction with automation(R. Parasuraman, T. Sheridan, C. Wickens, 2000, IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans)
- On the Design of and Interaction with Conversational Agents: An Organizing and Assessing Review of Human-Computer Interaction Research(Stephan Diederich, A. Brendel, Stefan Morana, L. Kolbe, 2022, Journal of the Association for Information Systems)
- Human-Automation Interaction Research(P. Hancock, R. Jagacinski, R. Parasuraman, C. Wickens, Glenn F. Wilson, D. Kaber, 2013, Ergonomics in Design: The Quarterly of Human Factors Applications)
- Using Formal Verification to Evaluate Human-Automation Interaction: A Review(M. L. Bolton, E. Bass, Radu I. Siminiceanu, 2013, IEEE Transactions on Systems, Man, and Cybernetics: Systems)
- Generative AI: From Human–Computer Interaction to Human–Computer Creativity(Vladimir Geroimenko, 2025, Springer Series on Cultural Computing)
Building Generative UI Systems and Intelligent Design Tools
This cluster covers concrete design-system development, applications of generative UI, optimization of prototyping workflows, and how AI is integrated as a tool into everyday design practice.
- AI-Powered UI Generation: Evaluating Interactive Design Paradigms with Vision-Language Models(Suraj Davariya, 2025, 2025 IEEE 6th India Council International Subsections Conference (INDISCON))
- Artificial-Intelligence-Assisted Workflow Optimization for Digital Media Art Creation(Rong-Hao Cui, 2025, Proceedings of the 2025 3rd International Conference on Artificial Intelligence, Systems and Network Security)
- The application of artificial intelligence-assisted technology in cultural and creative product design(Jing Liang, 2024, Scientific Reports)
- Canvil: Designerly Adaptation for LLM-Powered User Experiences(K. Feng, Q. Liao, Ziang Xiao, Jennifer Wortman Vaughan, Amy X. Zhang, David W. McDonald, 2024, Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems)
- The GenUI Study: Exploring the Design of Generative UI Tools to Support UX Practitioners and Beyond(X. Chen, Tiffany Knearem, Yang Li, 2025, Proceedings of the 2025 ACM Designing Interactive Systems Conference)
- Towards AI-Assisted Design Workflows for an Expanded Design Space(S. Yousif, E. Vermisso, 2022, CAADRIA proceedings)
- Towards Human–AI Synergy in UI Design: Supporting Iterative Generation with LLMs(Mingyue Yuan, Jieshan Chen, Yongquan Hu, Sidong Feng, Mulong Xie, Gelareh Mohammadi, Zhenchang Xing, Aaron Quigley, 2024, ACM Transactions on Computer-Human Interaction)
- What does Generative UI mean for HCI Practice?(Siân Lindley, Jack Williams, Yining Cao, Haijun Xia, Elizabeth F Churchill, A. Sellen, J. Nichols, David R Karger, 2026, Proceedings of the Extended Abstracts of the 2026 CHI Conference on Human Factors in Computing Systems)
- Generative Patterns for Designing Multiple User Interfaces(Thanh-Diane Nguyen, J. Vanderdonckt, A. Seffah, 2016, Proceedings of the International Conference on Mobile Software Engineering and Systems)
- Evolutionary Interfaces(M. Ravi Teja, B.A. Sabarish, C. Arunkumar, 2026, Artificial Intelligence in Instrumentation, Control and Automation)
- LLMs and Diffusion Models in UI/UX: Advancing Human-Computer Interaction and Design(Layla Sun, Mengmeng Qin, Benji Peng, 2024, OSF Preprints)
- AIDED: Augmenting Interior Design with Human Experience Data for Designer–AI Co-Design(Yang Chen Lin, Chen-Ying Chien, Kaihui Hou, Hung-Yu Chen, Po-Chih Kuo, 2026, Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems)
- Enhancing designer creativity through human–AI co-ideation: a co-creation framework for design ideation with custom GPT(Pan Wang, Yash Khinvasara, Geesje Josine Creijghton, Tessa Scholing, Yihua Wang, Zhibin Zhou, Peter R. N. Childs, Yuan Yin, 2025, Artificial Intelligence for Engineering Design, Analysis and Manufacturing)
- Partnering with Generative AI: Experimental Evaluation of Model-Led and Human-Led Interaction in Human-AI Co-Creation(Sebastian Maier, Manuel Schneider, S. Feuerriegel, 2025, Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems)
- Exploring creativity in human-AI co-creation: a comparative study across design experience(Nan Wang, Hyunsuk Kim, Junfeng Peng, Jiayi Wang, 2025, Frontiers in Computer Science)
- Exploring designers’ continuance usage intention of AI-assisted design tools: integrating TTF, design processes characteristics, TAM, creative self-efficacy, and social influence(Peiyao Cheng, Wanting Sun, Qi Huang, Shumeng Hou, 2026, Journal of Engineering Design)
- Examining the Impact of Generative AI on UX/UI Design(B. Okpala, 2025, SSRN Electronic Journal)
- Reviewing AI in architectural computational design: Applications, opportunities, and the AI-ACD workflow for improved design integration(Basma Nashaat, Mostafa M. Elzeni, 2025, International Journal of Architectural Computing)
- From Prompts to High-Fidelity Prototypes: A Usability Evaluation of Generative AI–Driven Prototyping Tools for Smart Mobile App Design(John Bustamante-Orejuela, Xavier Quiñónez-Ku, Pablo Pico-Valencia, 2026, Multimodal Technologies and Interaction)
- Towards a Working Definition of Designing Generative User Interfaces(Kyungho Lee, 2025, Companion Publication of the 2025 ACM Designing Interactive Systems Conference)
- Generative AI for Secure User Interface (UI) Design(Siva Raja Sindiramutty, Krishna Raj V. Prabagaran, Rehan Akbar, Manzoor Hussain, Nazir Ahmed Malik, 2024, Advances in Information Security, Privacy, and Ethics)
- A Study on Chatbot UI Design Processes Utilizing Generative AI: Integrating ChatGPT and MidJourney(Heehyeon Park, 2025, Lecture Notes in Computer Science)
- Personalizing User Interfaces with Generative Artificial Intelligence: A Systematic Literature Review(João Rodrigo Afonso Mendo, Tiago Pinto, João Barroso, Tânia Rocha, 2026, SSRN Electronic Journal)
Interaction Mechanisms, Intent Management, and Mixed-Initiative Systems
This cluster examines the agentic autonomy of LLMs in interaction, and how mixed-initiative techniques strike an efficient balance between human intent and machine automation.
- Large Language Models in the Design Process: A Workshop on the Possibilities of Human-AI Collaboration through Knowledge Formation(A. Mastroianni, Lucia Rampino, F. Figoli, 2024, ICERI Proceedings)
- Human-LLM collaboration in generative design for customization(Xingzhi Wang, Zhoumingju Jiang, Yi Xiong, Ang Liu, 2025, Journal of Manufacturing Systems)
- Interdisciplinary Co-design with LLM-Based Multi-agents: A Human-AI Platform for Complex Design Challenges(Yuan-Chi Tseng, Yu-Yi Chang, 2025, Lecture Notes in Computer Science)
- Exploring Human-AI Collaboration Dynamics in LLM-supported Engineering Project-Based Learning(Xuan Qiu, Tin Nok Mak, 2026, SSRN Electronic Journal)
- Mixed-Initiative Interaction with Computational Generative Systems(Florian Lehmann, 2023, Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems)
- Web-based learning through mixed-initiative interactions: Design and implementation(N. K. Subramaniam, 2013, Asian Association of Open Universities Journal)
- Mixed-initiative interaction = mixed computation(Naren Ramakrishnan, Robert G. Capra, Manuel A. Pérez-Quiñones, 2002, ACM SIGPLAN Notices)
- Human-Machine Interaction Design in Adaptive Automation(Alessandro Pollini, Gian Andrea Giacobone, Michele Zannoni, Diego Pucci, Virginia Vignali, Andrea Falegnami, Andrea Tomassi, Elpidio Romano, 2024, Procedia Computer Science)
- Analysis of User Usage Trends of AI-Assisted Customized Fashion Design Software in the Dimension of Interaction Design: An Empirical Study on Chinese User Willingness to Use Based on an Extended TAM Model(Erxuan Zeng, Rong Liu, Yuxue Feng, 2024, Proceedings of the 2024 International Conference on Artificial Intelligence, Digital Media Technology and Interaction Design)
- Investigating Agency of LLMs in Human-AI Collaboration Tasks(Ashish Sharma, Sudha Rao, C. Brockett, Akanksha Malhotra, N. Jojic, W. Dolan, 2023, Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers))
- Investigating Interaction Modes and User Agency in Human-LLM Collaboration for Domain-Specific Data Analysis(Jiajing Guo, V. Mohanty, Jorge Piazentin Ono, Hongtao Hao, Liang Gou, Liu Ren, 2024, Extended Abstracts of the CHI Conference on Human Factors in Computing Systems)
- Developing Mixed-Initiative Interaction with Intelligent Systems: Lessons Learned from Supervising Multiple UAVs(M. Hanson, E. Roth, Christopher M. Hopkins, Vince Mancuso, 2004, AIAA 1st Intelligent Systems Technical Conference)
- Computational Models of Mixed-Initiative Interaction(S. Haller, S. Mcroy, A. Kobsa, 2007, Springer Netherlands)
- A User Modeling Approach to Determining System Initiative in Mixed-Initiative AI Systems(Michael W. Fleming, R. Cohen, 2001, Lecture Notes in Computer Science)
- Towards AI-Powered Applications: The Development of a Personalised LLM for HRI and HCI(Khashayar Ghamati, Maryam Banitalebi Dehkordi, Abolfazl Zaraki, 2025, Sensors)
- Designing for Mixed-Initiative Interactions between Human and Autonomous Systems in Complex Environments(M. Barnes, Jessie Y.C. Chen, F. Jentsch, 2015, 2015 IEEE International Conference on Systems, Man, and Cybernetics)
- Human–AI Interaction in LLM(Mehrdad Zakershahrak, 2025, Handbook of Human-Centered Artificial Intelligence)
- History and future of human-automation interaction(Christian P. Janssen, Stella F. Donker, Duncan P. Brumby, Andrew L. Kun, 2019, International Journal of Human-Computer Studies)
- Generative AI: A Systematic Review of Related Interfaces and Interactions(Kostas Ordoumpozanis, M. Konstantakis, S. Zoi, G. Caridakis, 2025, Proceedings of the 3rd International Conference of the ACM Greek SIGCHI Chapter)
- The state of the art in automating usability evaluation of user interfaces(M. Ivory, Marti A. Hearst, 2001, ACM Computing Surveys)
- Human-Guided AI: Designing Prompts in LLM for Effective Human-Computer Collaboration(Michael Hewing, Vincent Leinhos, 2025, Lecture Notes in Computer Science)
- Mixed-Initiative Creative Interfaces(Sebastian Deterding, Jonathan Hook, R. Fiebrink, M. Gillies, J. Gow, Memo Akten, Gillian Smith, Antonios Liapis, K. Compton, 2017, Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems)
- Sensing the Future of Human-Computer Interaction: A Generative Ai-Enhanced Analysis of Experimental Interaction Technologies(Fernando Gomes de Souza, Shekhar Bhansali, Kaushik Pal, Fabíola da Silveira Maranhão, Rui Silva, Daniele Brandão, Nidhi Asthana, 2025, SSRN Electronic Journal)
- Automating interface evaluation(Michael D. Byrne, S. Wood, Noi Sukaviriya, J. Foley, D. Kieras, 1994, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems)
- The Roles and Modes of Human Interactions with Automated Machine Learning Systems: A Critical Review and Perspectives(Thanh Tung Khuat, David Jacob Kedziora, Bogdan Gabryś, 2023, Foundations and Trends® in Human–Computer Interaction)
- Interaction Methods in Generative AI Image Tools: A Review of Trends and Design Opportunities Across HCI and Industry(Hyerim Park, Malin Eiband, André Luckow, Michael Sedlmair, 2026, Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems)
- Human Intervention and Interface Design in Automation Systems(P. Ponsa, R. Vilanova, B. Amante, 2011, International Journal of Computers Communications & Control)
- A Review of Human-Computer Interaction Design Approaches towards Information Systems Development(Mohammed Yakubu Bala, Damla Karagozlu, 2021, BRAIN. Broad Research in Artificial Intelligence and Neuroscience)
- State-of-the-Art UX Frameworks for Human-Centered AI in Generative AI Systems: A Systematic Literature Review(F. Schröder, Mahsa Fischer, 2025, Lecture Notes in Computer Science)
- Human-Computer Interaction: Process and Principles of Human-Computer Interface Design(Gong Chao, 2009, 2009 International Conference on Computer and Automation Engineering)
Evaluation Methods, User Perception, and Sociotechnical Impact
This cluster studies evaluation metrics for human-AI interaction, industry perceptions, assessment of designers' creativity, and the application and impact of AI in specific social contexts.
- AI and the Future of Collaborative Work: Group Ideation with an LLM in a Virtual Canvas(Jessica He, Stephanie Houde, G. E. Gonzalez, Darío Andrés Silva Moran, Steven I. Ross, Michael J. Muller, Justin D. Weisz, 2024, Proceedings of the 3rd Annual Meeting of the Symposium on Human-Computer Interaction for Work)
- “Who” Is the Best Creative Thinking Partner? An Experimental Investigation of Human–Human, Human–Internet, and Human–AI Co‐Creation(Minmin Tang, Sebastian Hofreiter, C. Werner, Aleksandra Zielińska, Maciej Karwowski, 2024, The Journal of Creative Behavior)
- Explainable AI for Designers: A Human-Centered Perspective on Mixed-Initiative Co-Creation(Jichen Zhu, Antonios Liapis, S. Risi, Rafael Bidarra, G. Youngblood, 2018, 2018 IEEE Conference on Computational Intelligence and Games (CIG))
- Design and Evaluation Methods for LLM-Based Explainable AI (XAI)-Based Human-AI Collaboration Systems(Cheonsu Jeong, 2025, Advances in Artificial Intelligence and Machine Learning)
- I Lead, You Help but Only with Enough Details: Understanding User Experience of Co-Creation with Artificial Intelligence(Changhoon Oh, Jungwoo Song, Jinhan Choi, Seonghyeon Kim, Sungwoo Lee, B. Suh, 2018, Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems)
- UI/UX for Generative AI: Taxonomy, Trend, and Challenge(Tae-Seok Kim, Marvin John Ignacio, Seunghee Yu, Hulin Jin, Yong-Guk Kim, 2024, IEEE Access)
- User Experience Design Professionals’ Perceptions of Generative Artificial Intelligence(Jie Li, Hancheng Cao, Laura Lin, Youyang Hou, Ruihao Zhu, Abdallah El Ali, 2023, Proceedings of the CHI Conference on Human Factors in Computing Systems)
- How generative AI is reshaping UI/UX design workflows: A systematic review(T. Kumar, Matteo Zallio, Xinyi Tu, 2025, AHFE International)
- Usability Evaluation - Advances in Experimental Design in the Context of Automated Driving Human-Machine Interfaces(Deike Albers, Jonas Radlmayr, Alexandra Loew, Sebastian Hergeth, Frederik Naujoks, Andreas Keinath, K. Bengler, 2020, Information)
- A Review of Human-Computer Interface Evaluation Research Based on Evaluation Process Elements(Xintai Song, Minxia Liu, Lin Gong, Yu Gu, Mohammad Shidujaman, 2023, Lecture Notes in Computer Science)
- Investigating generative AI-based artistic tools in interaction design for sustainable UX(C. Kerdvibulvech, Kawin Meksumphun, 2026, Quality & Quantity)
- The Role of AI Design Assistance on the Architectural Design Process: An Empirical Research with Novice Designers(Emine Zeytin, Kamile Öztürk Kösenciğ, Dilan Öner, 2024, Journal of Computational Design)
- AI-assisted building design(S. Saad, M. Haris, S. Ammad, K. Rasheed, 2024, AI in Material Science)
- Empirical insights into AI-assisted game development: A case study on the integration of generative AI tools in creative pipelines(Andrew Begemann, James Hutson, 2024, Metaverse)
- Designing Age-Inclusive Interfaces: Emerging Mobile, Conversational, and Generative AI to Support Interactions across the Life Span(Cosmin Munteanu, S. Sarcar, Jaisie Sin, C. Wei, Sergio Sayago, Wei Zhao, Jenny Waycott, 2024, 26th International Conference on Mobile Human-Computer Interaction)
- GenFaceUI: Meta-Design of Generative Personalized Facial Expression Interfaces for Intelligent Agents(Yate Ge, Lin Tian, Yiting Dai, Shuhan Pan, Yiwen Zhang, Qi Wang, Weiwei Guo, Xiaohua Sun, 2026, Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems)
- Instructor Considerations for Design Prototyping Using Generative AI(Ronald Glotzbach, C. Ross, 2025, EDULEARN Proceedings)
The final synthesis organizes the literature into four dimensions. First, the theory-building group on human-AI collaboration examines the core paradigms and models of cooperation. Second, the tool-construction group focuses on system-level practice in deploying technologies such as generative UI. Third, the mixed-initiative and intent-management group addresses the technical logic of interaction and the autonomous behavior of AI. Finally, the evaluation-and-impact group covers industry perceptions, usability evaluation, and the ethical and cognitive factors of AI applied in social contexts. The classification is intended to span the full arc from abstract theory through concrete technical implementation to evaluation practice, providing a systematic lens for studying AI-assisted design processes and interaction design methods.
A total of 95 references (25 + 23 + 30 + 17 across the four clusters).
The rapid advancements in Generative AI, particularly Large Language Models (LLMs) and Diffusion Models, are transforming UI/UX design and human-computer interaction (HCI). This article explores recent applications of these technologies to augment and automate key stages of the design process, from ideation and prototyping to code generation. By utilizing their strengths in natural language understanding and content generation, LLMs serve as tools for ideation, design enhancement, and as integral components of user interfaces—enabling conversational systems, adaptive UIs, and task automation. Diffusion models, in contrast, focus on generating visual content and assisting in UI prototyping, creating new possibilities for design workflows. Despite these advances, challenges remain, such as maintaining output quality, integrating AI into existing workflows, and addressing ethical issues like data bias and transparency. This article highlights the need to balance human and AI contributions, foster effective human-AI collaboration, and establish robust evaluation criteria. Future research should explore multimodal LLMs, improve transparency and explainability, and democratize access to the design process. The integration of Generative AI into UI/UX design holds significant potential to advance HCI but requires careful consideration of its limitations and societal impacts.
The increasing capability of AI models to generate user interfaces has the potential to transform HCI and design practice. We invite researchers, designers, developers, and practitioners to explore how generative UI – interfaces created by AI models – will reshape design methods, workflows, and user experiences. Our goals are to (i) envision how generative UI can underpin innovative human-centric experiences, and (ii) reflect on how HCI and design practice could and should evolve to meet the opportunities and challenges this presents. This will be an interactive and discussion-oriented workshop, featuring a pop-up panel, creative ideation exercises, and collaborative artefact development. Artefacts produced through the workshop will be shared online afterwards and will, we hope, result in an Interactions or CACM article. We will welcome submissions from scholars and practitioners working on dynamic or generative UI, as well as those with expertise in related areas. To keep participation broad, participants will be asked to submit a two-page position paper (in ACM single column format), a two-page pictorial, or a two-minute video at the workshop website. We expect approximately 35 participants to register and attend, including the organizers.
Current technological advancements in Information Technology are closely linked to Generative Artificial Intelligence, enabling the automation of complex tasks such as generating documents, images, videos, audio, and actions. As such tasks can save much human labor and resources, diverse industries are trying to adopt this technology. However, developing a product utilizing Generative AI is a challenging task, partly because it is a new technology and many users are not familiar with it yet. This paper’s primary goal is to find a better way to design Generative AI systems, especially from the Human-Computer Interaction perspective. To begin, we propose a taxonomy for Generative AI systems based on their modality, such as text-based, image-based, audio-based, and multi-modal-based systems, and then evaluate them in terms of their usability, because their functionalities should be aligned with the User Interface (UI), leading to a better User Experience (UX). We survey important trends in this area and introduce future applications by touching upon the issue of explainable AI. Although Generative AI has a bright future, it faces formidable challenges in our industries and society. It is hoped that the taxonomy and research findings presented here will be a useful framework for future research in Generative AI systems and their UI/UX.
Generative AI (GenAI) image tools are increasingly integrated into design workflows, prompting HCI research on their interaction methods and interfaces. We reviewed 37 such tools, including 28 HCI research systems and nine commercial systems (2022–July 2025), using three analytical frameworks: interaction methods, creative processes, and tool functionalities. We found that text prompts remain the dominant input method, while visual and attribute-based inputs—particularly in academic tools—are gaining traction and are often combined with text for refinement. Commercial systems emphasize parameter control, whereas academic tools focus on semantic attributes and visual organization. Most tools support ideation and exploration, but provide limited support for refinement and evaluation. Based on these findings, we identify nine design opportunities, including advanced visual interaction, simplified parameter control, precision editing, direct manipulation, workflow integration, default settings that support rapid exploration, and user guidance for later stages. We contribute a framework for analyzing GenAI interfaces and actionable directions for designing more usable, creativity-supportive GenAI image systems.
As GenAI technologies such as large language models, diffusion models, and multimodal generative systems increasingly permeate design workflows, their implications for creativity, methodology, ethics, and collaboration demand critical scholarly attention. This paper presents a systematic literature review of generative artificial intelligence (GenAI) in user interface (UI) and user experience (UX) design, drawing on fifty peer-reviewed and preprint articles published between 2020 and 2025. The review is structured around five research questions, addressing: (1) the stages of the UI/UX design process where GenAI tools are most actively applied, (2) the methodological approaches used to evaluate their integration, (3) the ethical considerations arising from their use, (4) models of human-AI collaboration in design practice, and (5) the research gaps that shape the future trajectory of this field. Findings indicate that while GenAI tools are widely adopted in prototyping and visual asset generation, their use in early-stage conceptualization and UX evaluation remains limited. The literature also reveals methodological fragmentation and a lack of standardized evaluation frameworks. Ethical concerns surrounding bias, transparency, and privacy are underexplored, and few studies provide robust models for collaborative work between humans and AI. This review identifies the need for longitudinal research, structured participatory frameworks, and ethically grounded design methodologies. The paper contributes a comprehensive synthesis of current knowledge and outlines directions for future inquiry at the intersection of generative AI and human-computer interaction.
Among creative professionals, Generative Artificial Intelligence (GenAI) has sparked excitement over its capabilities and fear over unanticipated consequences. How does GenAI impact User Experience Design (UXD) practice, and are fears warranted? We interviewed 20 UX Designers, with diverse experience and across companies (startups to large enterprises). We probed them to characterize their practices, and sample their attitudes, concerns, and expectations. We found that experienced designers are confident in their originality, creativity, and empathic skills, and view GenAI’s role as assistive. They emphasized the unique human factors of “enjoyment” and “agency”, where humans remain the arbiters of “AI alignment”. However, skill degradation, job replacement, and creativity exhaustion can adversely impact junior designers. We discuss implications for human-GenAI collaboration, specifically copyright and ownership, human creativity and agency, and AI literacy and access. Through the lens of responsible and participatory AI, we contribute a deeper understanding of GenAI fears and opportunities for UXD.
… This study examines the impact of generative artificial intelligence (GAI) on UX/UI design … Generative AI is rapidly transforming UX design and redefining human-computer interaction …
… UX frameworks for Human-Centered AI in generative AI … evaluation and design of UX frameworks for generative AI systems … UX frameworks in their interaction with generative AI …
… To assess the appropriateness of the sample size, prior UX and HCI studies employing experimental or design-oriented methods were reviewed. Many comparable studies adopt …
… the dual role of generative AI-supported design in interdisciplinary education: for design students, generative AI-supported design … for technology students, generative AI-supported design …
… in AI-driven human–computer interaction, as shown in Fig. 2. Additionally, the significant presence of arXiv, an open-access repository, with numerous papers published rapidly in 2025 …
… This study proposes a generative AI-supported workflow for early-stage chatbot UI design by integrating ChatGPT and MidJourney into the ideation process. While existing approaches …
… In doing so, we aim to create a design compendium that generative AI designers … the current state of the user experience (UX) and user interface (UI) designs of generative AI …
As artificial intelligence becomes an everyday presence across education, arts and creative technologies, and cultural heritage, the interaction between users and intelligent systems deserves critical examination. This submission presents a systematic review of 95 case studies, 64 in education, 14 in arts, and 17 in heritage, selected via a PRISMA-guided search and expert screening, to map how generative artificial intelligence is embedded at both the interface and interaction levels. We identify nine interface archetypes (e.g., conversational interfaces, adaptive dashboards, immersive environments), eight interaction patterns (e.g., conversing, collaborating, manipulating), and eight main user experience dimensions as observed in the case studies. Our analysis further categorizes six modality-usage patterns, from text, image, audio, and video up to fully multi-modal workflows, and distills four main categories of end-to-end application pipelines. Notably, only two studies were found to articulate design-phase guidelines, and limitations cluster around output quality, ethical risks, and a lack of longitudinal evaluations. We conclude with observed limitations and future research directions focused on explainability, participatory design, and sustained field deployments. This synthesis provides a foundation for researchers and practitioners seeking to harness generative artificial intelligence as a responsive, human-centered collaborator.
In automated UI design generation, a key challenge is the lack of support for iterative processes, as most systems focus solely on end-to-end output. This stems from limited capabilities in interpreting design intent and a lack of transparency for refining intermediate results. To better understand these challenges, we conducted a formative study that identified concrete and actionable requirements for supporting iterative design with Generative Tools. Guided by these findings, we propose PrototypeFlow, a human-centered system for automated UI generation that leverages multi-modal inputs and models. PrototypeFlow takes natural language descriptions and layout preferences as input to generate the high-fidelity UI design. At its core is a theme design module that clarifies implicit design intent through prompt enhancement and orchestrates sub-modules for component-level generation. Designers retain full control over inputs, intermediate results, and final prototypes, enabling flexible and targeted refinement by steering generation and directly editing outputs. Our experiments and user studies confirmed the effectiveness and usefulness of our proposed PrototypeFlow.
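The orchestration described in this abstract, a theme module that clarifies intent and then fans out to component-level generators while keeping intermediate results editable, lends itself to a compact sketch. The Python outline below is illustrative only: the `enhance` step and the per-component generator callables are assumptions standing in for the paper's actual modules, not the authors' code.

```python
from typing import Callable, Dict

def prototype_flow(description: str,
                   layout: Dict[str, dict],
                   enhance: Callable[[str], str],
                   generators: Dict[str, Callable[[str, dict], dict]]):
    """Clarify implicit design intent via prompt enhancement, then
    orchestrate component-level generation; every intermediate result
    stays exposed so the designer can edit or re-steer any stage."""
    theme = enhance(description)               # "theme design module"
    components = {name: generators[name](theme, region)
                  for name, region in layout.items()}
    return theme, components

# Toy usage with stub callables standing in for real models:
theme, parts = prototype_flow(
    "a calm meditation app home screen",
    {"hero": {"height": 0.4}, "nav": {"height": 0.1}},
    enhance=lambda d: d + " (soft pastel palette, rounded corners)",
    generators={"hero": lambda t, r: {"spec": t, **r},
                "nav": lambda t, r: {"spec": t, **r}},
)
```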
… design work requires further exploration, especially regarding the best ways to incorporate human input and oversight into AI-driven collaborative … collaborative tools, such as design …
… There is a growing concern that traditional design tools and … were driven by a coherent and fruitful interaction between … both with the LLM (external) and within the design team …
The introduction of generative AI into multi-user applications raises novel considerations for the future of collaborative work. How might collaborative work practices change? How might we incorporate generative AI into shared tools with users’ needs at the forefront? We examine these questions in the context of a remote team conducting ideation tasks – an example of collaborative work enabled by a shared digital workspace. We conducted a user study with 17 professionals experienced with virtual group ideation workshops. Our study examined their use of the Collaborative Canvas, a virtual canvas tool with integrated generative AI capabilities that we created as a probe. Participants saw value in using generative AI to assist with group facilitation and to augment perspectives and ideas. However, they worried about losing human perspectives and critical thinking, as well as reputational harms resulting from harmful AI outputs. Participants shared suggestions for appropriate ways to incorporate generative AI capabilities within multi-user applications and identified needs for transparency of content ownership, private digital spaces, and specialized AI capabilities. Based on participants’ insights, we share implications and opportunities for the incorporation of generative AI into collaborative work in ways that place user needs at the forefront.
The emergence of Large Language Model (LLM) agents enables us to build agent-based intelligent systems that move beyond the role of a “tool” to become genuine collaborators with humans, thereby realizing a novel human-agent collaboration paradigm. Our vision is that LLM agents should resemble remote human collaborators, which allows HCI researchers to ground the future exploration in decades of research on trust, awareness, and common ground in remote human collaboration, while also revealing the unique opportunities and challenges that emerge when one or more partners are AI agents. This workshop establishes a foundational research agenda for the new era by posing the question: How can the rich understanding of remote human collaboration inspire and inform the design and study of human-agent collaboration? We will bring together an interdisciplinary group from HCI, CSCW, and AI to explore this critical transition. The 180-minute workshop will be highly interactive, featuring a keynote speaker, a series of invited lightning talks, and an exploratory group design session where participants will storyboard novel paradigms of human-agent partnership. Our goal is to enlighten the research community by cultivating a shared vocabulary and producing a research agenda that charts the future of collaborative agents.
This study re-examines the role of Explainable AI (XAI) within human-AI collaborative environments and proposes a design and evaluation framework for a human-AI collaboration system that integrates Large Language Models (LLMs) and state-of-the-art AI agent technology. The proposed methodology, which consists of an AI model, an explanation generation module, and a human-AI interface, enhances the adaptability and reliability of explanations. A key contribution of this research is the introduction of an LLM-XAI collaborative architecture that integrates personalized, adaptive explanations with a feedback-driven improvement mechanism. Notably, the system presents a novel paradigm for explanations that distinguishes it from conventional XAI methods by utilizing Chain-of-Thought reasoning traces, natural language explanations, and a multi-stage verification mechanism provided by Deep Research and LLM-based agents. The system defines core quality metrics such as explainability, transparency, reliability, interactivity, and adaptability, and concurrently develops a multi-dimensional evaluation framework to assess these metrics using both quantitative and qualitative data. This system is structured with a feedback loop that enables continuous learning and improvement while transparently explaining the AI’s decision-making process. The quality of explanations is also assessed with quantitative metrics, and the system improves continuously through user feedback. This study also presents quantitative and qualitative evaluation metrics and user research methodologies to validate the system’s effectiveness, which is expected to contribute to achieving trust-based human-AI collaboration. Furthermore, to demonstrate its practical applicability, a pilot implementation in a medical diagnosis support scenario is presented, offering an ideal model where humans and AI collaborate complementarily, thereby playing a crucial role in promoting the ethical use and social acceptance of AI systems.
… of LLMs in redefining GDfC (generative design for customization). Based on the division of the generative design process, this paper identifies three human-LLM collaboration schemes to demonstrate the potential roles of …
Advancements in large language models (LLMs) are sparking a proliferation of LLM-powered user experiences (UX). In product teams, designers often craft UX to meet user needs, but it is unclear how they engage with LLMs as a novel design material. Through a formative study with 12 designers, we find that designers seek a translational process that enables design requirements to shape and be shaped by LLM behavior, motivating a need for designerly adaptation to facilitate this translation. We then built Canvil, a Figma widget that operationalizes designerly adaptation. We used Canvil as a probe to study designerly adaptation in a group-based design study (6 groups, N = 17), finding that designers constructively iterated on both adaptation approaches and interface designs to enhance end-user interaction with LLMs. Furthermore, designers identified promising collaborative workflows for designerly adaptation. Our work opens new avenues for processes and tools that foreground designers’ human-centered expertise when developing LLM-powered applications.
Visualization artifacts have long served as anchors for collaboration and knowledge transfer in data analysis. While effective for human–human collaboration, little is known about their role in capturing and externalizing knowledge when working with large language models (LLMs). Despite the growing role of LLMs in analytics, their linear text-based workflows limit the ability to structure artifacts into useful and traceable representations of the analytical process. We argue that dynamic visual representations of evolving analysis—organizing artifacts and provenance into semantic structures, such as idea development and shifts in inquiry—are critical for effective human–LLM workflows. We demonstrate the current opportunities and limitations of using LLMs to track, structure, and visualize analytic processes, and propose a research agenda to leverage rapid advances in LLM capabilities. Our goal is to present a compelling argument for maximizing the role of visualization as a catalyst for more structured, transparent, and insightful human–LLM analytical interactions.
Agency, the capacity to proactively shape events, is central to how humans interact and collaborate. While LLMs are being developed to simulate human behavior and serve as human-like agents, little attention has been given to the Agency that these models should possess in order to proactively manage the direction of interaction and collaboration. In this paper, we investigate Agency as a desirable function of LLMs, and how it can be measured and managed. We build on social-cognitive theory to develop a framework of features through which Agency is expressed in dialogue – indicating what you intend to do (Intentionality), motivating your intentions (Motivation), having self-belief in intentions (Self-Efficacy), and being able to self-adjust (Self-Regulation). We collect a new dataset of 83 human-human collaborative interior design conversations containing 908 conversational snippets annotated for Agency features. Using this dataset, we develop methods for measuring Agency of LLMs. Automatic and human evaluations show that models that manifest features associated with high Intentionality, Motivation, Self-Efficacy, and Self-Regulation are more likely to be perceived as strongly agentive.
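As a rough illustration of how such feature annotations might be represented and aggregated, consider the sketch below. The binary labels and the proportion-based profile are assumptions made for clarity; the paper's actual measurement of Agency is model-based and considerably more nuanced.

```python
from dataclasses import dataclass, field

# The four Agency features from the paper's social-cognitive framework.
AGENCY_FEATURES = ("intentionality", "motivation", "self_efficacy", "self_regulation")

@dataclass
class Snippet:
    """A conversational snippet with 0/1 annotations per Agency feature."""
    text: str
    labels: dict = field(default_factory=dict)

def agency_profile(snippets):
    """Share of snippets expressing each feature across a dialogue
    (a simple aggregate invented here for illustration)."""
    n = len(snippets)
    return {f: sum(s.labels.get(f, 0) for s in snippets) / n for f in AGENCY_FEATURES}

dialogue = [
    Snippet("I'd go with a warm palette here.", {"intentionality": 1, "self_efficacy": 1}),
    Snippet("The client asked for a cozy feel.", {"motivation": 1}),
    Snippet("Actually, let me soften that accent wall.", {"self_regulation": 1}),
]
print(agency_profile(dialogue))
```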
Despite demonstrating robust capabilities in performing tasks related to general-domain data-operation tasks, Large Language Models (LLMs) may exhibit shortcomings when applied to domain-specific tasks. We consider the design of domain-specific AI-powered data analysis tools from two dimensions: interaction and user agency. We implemented two design probes that fall on the two ends of the two dimensions: an open-ended high agency (OHA) prototype and a structured low agency (SLA) prototype. We conducted an interview study with nine data scientists to investigate (1) how users perceived the LLM outputs for data analysis assistance, and (2) how the two design probes, OHA and SLA, affected user behavior, performance, and perceptions. Our study revealed insights regarding participants’ interactions with LLMs, how they perceived the results, and their desire for explainability concerning LLM outputs, along with a noted need for collaboration with other users, and how they envisioned the utility of LLMs in their workflow.
Traditional AI-assisted decision-making systems often provide fixed recommendations that users must either accept or reject entirely, limiting meaningful interaction—especially in cases of disagreement. To address this, we introduce Human-AI Deliberation, an approach inspired by human deliberation theories that enables dimension-level opinion elicitation, iterative decision updates, and structured discussions between humans and AI. At the core of this approach is Deliberative AI, an assistant powered by large language models (LLMs) that facilitates flexible, conversational interactions and precise information exchange with domain-specific models. Through a mixed-methods user study, we found that Deliberative AI outperforms traditional explainable AI (XAI) systems by fostering appropriate human reliance and improving task performance. By analyzing participant perceptions, user experience, and open-ended feedback, we highlight key findings, discuss potential concerns, and explore the broader applicability of this approach for future AI-assisted decision-making systems.
… a solid foundation for systematically designing AI Agents and Custom … remain essential to Human-Computer Interaction, even if … These tools often include analytics for assessing prompt …
… This chapter demonstrated how LLM-powered tools are transforming professional work across domains, from software development and content creation to data analysis and creative …
… with the same level of user experience, or at least a common … problem by introducing a generative design-pattern-based … user experience across devices, but also generative because …
AI can now generate high-fidelity UI mock-up screens from a high-level textual description, promising to support UX practitioners’ work. However, it remains unclear how UX practitioners would adopt such Generative UI (GenUI) models in a way that is integral and beneficial to their work. To answer this question, we conducted a formative study with 37 UX-related professionals spanning four roles: UX designers, UX researchers, software engineers, and product managers. Using a state-of-the-art GenUI tool, each participant went through a week-long, individual mini-project exercise with role-specific tasks, keeping a daily journal of their usage and experiences with GenUI, followed by a semi-structured interview. We report findings on participants’ workflow using the GenUI tool, how GenUI can support all of the roles as well as each specific role, and the existing gaps between GenUI and users’ needs and expectations, which lead to design implications informing future work on GenUI development.
… helping to guarantee that user interfaces are not only modern … the overall user experience (UX) and user interface (UI) … dialogues or with more involved texts. It does not come up with …
The integration of Generative Artificial Intelligence (GAI) into software design tools has transformed the early stages of mobile application development, particularly prototype creation from natural-language prompts. This study evaluates the usability and effectiveness of GAI-assisted prototyping tools for generating high-fidelity mobile application prototypes. A controlled laboratory usability study was conducted in which undergraduate Information Technology Engineering students used and evaluated four widely adopted prototyping platforms: Figma, Uizard, Visily, and Stitch. Participants employed these tools to recreate mobile interfaces corresponding to the interaction model of the Duolingo application. The System Usability Scale (SUS) was used to assess perceived usability and effectiveness from the users’ perspective. The results indicate that all evaluated tools enabled rapid prototype generation; however, significant differences emerged in usability, structural fidelity, and perceived control. Figma and Stitch achieved the highest usability scores and demonstrated greater alignment with the reference prototype (82.86 and 80.36, respectively). Visily achieved a favorable usability score (78.57), while Uizard obtained a moderate score (67.14). Although Uizard and Visily exhibited strong automation capabilities and faster initial generation, their outputs required additional manual refinement to achieve higher fidelity and customization. Participant feedback emphasized the importance of output quality, responsiveness, and foundational design knowledge in achieving satisfactory results. Overall, the findings suggest that current GAI-based prototyping tools are effective and valuable in real-world software development contexts. However, their effectiveness appears closely related to the degree of user control, responsiveness, and the ability to iteratively refine AI-generated interface components.
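For readers unfamiliar with how SUS figures such as 82.86 or 67.14 arise, the standard scoring rule converts ten 1-5 Likert responses to a 0-100 scale: odd-numbered (positively worded) items contribute response minus 1, even-numbered (negatively worded) items contribute 5 minus response, and the sum is multiplied by 2.5. A minimal sketch follows; the example responses are invented for illustration, and the study's exact instrument handling is not specified in the abstract.

```python
def sus_score(responses):
    """Standard SUS scoring: ten 1-5 Likert items.

    Odd-numbered items are positively worded (contribution = response - 1);
    even-numbered items are negatively worded (contribution = 5 - response).
    The 0-40 sum is scaled by 2.5 onto a 0-100 scale.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd-numbered item
                for i, r in enumerate(responses))
    return total * 2.5

# Invented responses for one participant rating a prototyping tool:
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # -> 85.0
```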
… aimed to investigate user interface design with generative AI … the architecture automatically generates designs and prototypes … These consist of text, photos, film, documents, code, and 3D …
Automating UI design through artificial intelligence offers a transformative shift in digital prototyping workflows. This study introduces a novel benchmark for AI-driven UI generation, comparing two distinct interaction paradigms, feedback-based refinement and question-asking AI models, to enhance user-driven design customization. Leveraging advanced Vision-Language Models (VLMs), it assesses their ability to translate hand-drawn sketches into structured UI code. A user study with design practitioners evaluates the efficiency, usability, and adaptability of AI-generated prototypes, highlighting key challenges in model interpretability and layout coherence. The findings provide insights into optimizing AI-assisted UI creation, bridging human-centered design with generative intelligence. The system achieved up to 78.6% syntactic accuracy on the Test-UI corpus and demonstrated a 3.0% average gain in layout fidelity through interactive refinement.
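The abstract does not spell out its accuracy checker, so the following is only one plausible reading of "syntactic accuracy": the fraction of generated UI markup samples that parse as well-formed. XML well-formedness here is an illustrative stand-in for whatever grammar the benchmark actually uses.

```python
import xml.etree.ElementTree as ET

def parses(code: str) -> bool:
    """Well-formedness check; XML stands in for the benchmark's grammar."""
    try:
        ET.fromstring(code)
        return True
    except ET.ParseError:
        return False

def syntactic_accuracy(samples):
    """Fraction of generated UI markup samples that parse."""
    return sum(parses(s) for s in samples) / len(samples)

print(syntactic_accuracy(["<div><p>ok</p></div>", "<div><p>broken</div>"]))  # 0.5
```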
… The report we made, CREO AI Creativity Report 2021, presents the typical cases in the creative process of our Human-AI Co-Creation Model. Our study shows that AI can work far more than …
As generative artificial intelligence (GAI) becomes increasingly integrated into the design domain, research has begun to explore how it can be meaningfully incorporated into traditional design practices, fostering the development of more collaborative design processes. This study proposes a Human–AI Co-Creative Design Process (HAI-CDP) model and evaluates its impact on designers’ creativity through a comparative experimental design. The results indicate that the HAI-CDP substantially improves creative performance over the traditional design process. For novice designers, its primary value lies in facilitating idea generation, whereas for experienced designers, it contributes more to elevating the quality and refinement of creative outcomes. Although the Human–AI Co-Creative Design Process lowers the entry barrier to creative engagement, the findings also reaffirm that design experience remains a critical factor shaping creative output.
… Co-creation systems involving humans and AI are blurring the lines between design and art in such a way … The human-AI interaction in the creative domain has been expanded by three …
Due to the remarkable content generation capabilities, large language models (LLMs) have demonstrated potential in supporting early-stage conceptual design. However, current interaction paradigms often struggle to effectively facilitate multi-round idea exploration and selection, leading to random outputs, unclear iterations, and cognitive overload. To address these challenges, we propose a human-AI co-ideation framework aimed at tracking the evolution of design ideas. This framework leverages a structured idea representation, an analogy-based reasoning mechanism and interactive visualization techniques. It guides both designers and AI to systematically explore design spaces. We also develop a prototype system, IdeationWeb, which integrates an intuitive, mind map-like visual interface and interactive methods to support co-ideation. Our user study validates the framework’s feasibility, demonstrating enhanced collaboration and creativity between humans and AI. Furthermore, we identified collaborative design patterns from user behaviors, providing valuable insights for future human-AI interaction design.
The integration of Artificial Intelligence (AI) into creative and design processes has shifted from automation towards co-creation, positioning AI as a collaborative partner rather than a replacement. As AI-driven tools become more embedded in human-centred design, understanding their impact on interaction dynamics, ethics, and usability is critical. This review examines key advancements in human-AI co-design and co-creation fields, focusing on interaction frameworks, ethical considerations, non-linear collaboration models, domain-specific applications, and user experience (UX) design. Recent research emphasizes the need for structured frameworks that facilitate effective communication and partnership between humans and AI in creative tasks. Mixed-initiative and explainable AI (XAI) approaches play a crucial role in enhancing transparency and interpretability, allowing designers to co-create with greater trust and autonomy. Ethical concerns, such as AI’s influence on user perception and decision-making, are also gaining prominence, calling for responsible AI deployment in co-creative settings. Additionally, non-linear collaboration models redefine AI’s role as an adaptive assistant throughout iterative design stages, aligning with the dynamic nature of creative processes. Domain-specific applications, ranging from game and product design to choreography and smart manufacturing, illustrate the versatility of AI in augmenting human creativity. AI-assisted UX design further extends this impact by personalizing user experiences and streamlining workflows, ultimately improving efficiency and engagement. Despite these advancements, challenges remain in balancing AI autonomy with human control, evaluating its impact on creative workflows, and developing inclusive methodologies that cater to diverse design disciplines. This review synthesizes current research trends and identifies future directions for designing AI systems that empower, rather than replace, human expertise in creative industries.
The emergence of large language models (LLMs) provides an opportunity for AI to operate as a co-ideation partner during creative processes. However, designers currently lack a comprehensive methodology for engaging in co-ideation with LLMs, and few frameworks describe the process of co-ideation between a designer and ChatGPT. This research thus aimed to explore how LLMs can act as co-designers and influence the creative ideation processes of industrial designers, and whether a designer's ideation performance could be improved by employing the proposed framework for co-ideation with a custom GPT. A survey was first conducted to detect how LLMs influenced the creative ideation processes of industrial designers and to understand the problems designers face when using ChatGPT to ideate. Then, a framework based on mapping content to guide co-ideation between humans and a custom GPT (named Co-Ideator) was proposed. Finally, a design case study followed by a survey and an interview was conducted to evaluate the ideation performance of the custom GPT and the framework compared with traditional ideation methods. The effect of the custom GPT on co-ideation was also compared with a non-AI condition. The findings indicated that when users engaged in co-ideation with the custom GPT, the novelty and quality of their ideas surpassed those achieved with traditional ideation methods.
With the rapid advancement of generative AI technologies, the collaboration between designers and generative AI during the conceptual design stage has fostered a novel paradigm of Human-AI Co-creation in design. However, relevant systematic reviews remain relatively scarce. This study analyses 80 research papers and publications from prominent databases such as Web of Science (WOS), Scopus, ScienceDirect, and Google Scholar, based on rigorous selection criteria and utilizing the CiteSpace bibliometric analysis tool. We developed a comprehensive systemic model for Human-AI Co-creation Design: cognition, process, methodology, and outcome. Key findings encompass the transformation of design cognition from designer-centric to Human-AI collaborative cognition, the shift of design process from experience-driven to constraint-driven, the evolution of design method from unimodal to multimodal interaction, and the indirect influence of generative AI on design outcome. Furthermore, this paper discusses the boundaries of Human-AI Co-creation Design with generative AI and envisions future research directions for multidisciplinary collaboration and the future models of AI and designer creative fusion. Through a systematic analysis and summary of the current paradigm shift, this paper identifies the limitations of existing research and provides insights into future research directions for building a more efficient Human-AI Co-creation design paradigm.
Thanks to their generative capabilities, large language models (LLMs) have become an invaluable tool for creative processes. These models have the capacity to produce hundreds and thousands of visual and textual outputs, offering abundant inspiration for creative endeavors. But are we harnessing their full potential? We argue that current interaction paradigms fall short, guiding users towards rapid convergence on a limited set of ideas, rather than empowering them to explore the vast latent design space in generative models. To address this limitation, we propose a framework that facilitates the structured generation of design space in which users can seamlessly explore, evaluate, and synthesize a multitude of responses. We demonstrate the feasibility and usefulness of this framework through the design and development of an interactive system, Luminate, and a user study with 14 professional writers. Our work advances how we interact with LLMs for creative tasks, introducing a way to harness the creative potential of LLMs.
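The design-space framing above can be gestured at in code. The sketch below assumes a generic `llm(prompt)` completion callable (hypothetical, not Luminate's API) and an invented prompt format; it shows only the general pattern of eliciting dimensions first and then sampling their cross-product, rather than taking the model's first few completions.

```python
import itertools

def build_design_space(task, llm, n_dims=3):
    """Elicit design dimensions from a model, then enumerate their
    cross-product so users browse a structured space instead of
    converging on the first few completions."""
    spec = llm(f"List {n_dims} design dimensions for '{task}'. One per line, "
               "formatted as 'name: value1, value2, value3'.")
    dims = {}
    for line in spec.splitlines():
        name, _, values = line.partition(":")
        if values.strip():
            dims[name.strip()] = [v.strip() for v in values.split(",")]
    for combo in itertools.product(*dims.values()):
        settings = dict(zip(dims, combo))
        yield settings, llm(f"Respond to '{task}' using these settings: {settings}")
```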
AI-based creativity-support systems are gaining attention from designers and researchers. However, a research gap exists on how to tailor those systems by maximizing flexibility based on human needs and preferences. This study proposes a schematic human-AI co-creation framework to maximize system flexibility and enhance creative outcome generation. The framework proposes the involvement of AI in three levels of creation and allows humans to adjust between the levels anytime during the creative process based on their preferences. We tentatively define how AI should collaborate at the three levels. To implement the framework, a co-creation system (GSM) was built to support humans in creating sculpture maquettes with AI. It includes three key components: a prompt-based generated model (DALL·E), advanced computer vision, and robotic arms. A user interface is provided to ensure transparency. Preliminary user studies have demonstrated that the system enhances flexibility and allows users to generate more creative maquettes.
Recent research suggests that working with generative artificial intelligence (AI), such as ChatGPT, can produce more creative outcomes than humans alone. However, does AI retain its creative edge when humans have access to alternative information sources, such as another human or the internet? We explored this question in a between-group experiment with 202 German participants across four conditions (human–human dyads, human–Internet, and two human–AI groups with basic or specific instructions) and four creativity tasks (two alternate uses tasks, a consequences task, and a problem-solving task). Results showed that the human–human condition obtained higher creativity scores in the divergent thinking tasks than the remaining groups. No significant between-group differences were observed in the problem-solving task. Moreover, interacting in human dyads made people more creatively confident, an effect not observed in the other groups. In addition, we compared human-rated outcomes with AI-based automated scoring (Ocsai). Interestingly, notable discrepancies emerged between the AI assessment and the human-judged results, raising concerns about AI's susceptibility to "elaboration bias." These findings highlight the benefits of human collaboration for creativity and call for further studies about the reliability and potential biases of AI in evaluating creative performance.
The emergence of generative AI technologies has led to an increasing number of people collaborating with AI to produce creative works. Across two experimental studies, in which we carefully designed and programmed state-of-the-art human–AI interfaces, we examine how the design of generative AI systems influences human creativity (poetry writing). First, we find that people were most creative when writing a poem on their own, compared to first receiving a poem generated by an AI system and using sophisticated tools to edit it (Study 1). Following this, we demonstrate that this creativity deficit dissipates when people co-create with—not edit—AI and establish creative self-efficacy as an important mechanism in this process (Study 2). Thus, our findings indicate that people must occupy the role of a co-creator, not an editor, to reap the benefits of generative AI in the production of creative works.
… of End-User development by exploring the impact of Human-AI Co-Creation on users. Our findings will inform the development of future tools and investigate their use in creative work. …
Growing interest in eXplainable Artificial Intelligence (XAI) aims to make AI and machine learning more understandable to human users. However, most existing work focuses on new algorithms, and not on usability, practical interpretability and efficacy on real users. In this vision paper, we propose a new research area of eXplainable AI for Designers (XAID), specifically for game designers. By focusing on a specific user group, their needs and tasks, we propose a human-centered approach for facilitating game designers to co-create with AI/ML techniques through XAID. We illustrate our initial XAID framework through three use cases, which require an understanding both of the innate properties of the AI techniques and users’ needs, and we identify key open challenges.
… and developing design guidelines to improve UX [… AI algorithms and perspectives of prior studies, we designed a prototype with which humans and AI can produce complex and creative …
This study explored how pre-service teachers (N=33) perceived human-technology relationships with generative AI (genAI). The study employed a research-creation approach and implemented a hands-on workshop in which the participants engaged in a speculative design process using generative AI. The study focused on how participants, armed with their new tool, approached their designs, made design decisions, and interacted with the responsive tool. The qualitative analysis of the video data from students' project presentations employed thematic analysis, interpreting the students' responses in relational terms. The results revealed that the emerging human-technology relationships were primarily expressed through distributed decision-making, with the AI actively contributing both to the object of activity and to the emerging design process. The findings highlight that genAI tools are neither passive nor neutral but actively transform both the design process and its outcomes, shaping how people experience new forms of agency in relation to such technology.
The integration of generative AI into the luxury fashion industry is a burgeoning trend with the potential to impact consumer behavior significantly. While prior research has demonstrated the benefits of AI design in the fashion industry, it also suggests that such approaches may not fully align with the distinct values of the luxury segment. Building on prior research, this paper investigates how AI can be more effectively integrated into luxury design. We propose that human‐AI co‐creation presents a promising approach, as it balances efficiency with the value people place on human effort. Across four empirical studies, we find that human‐AI co‐creation in luxury design generates more positive consumer responses than AI‐only designs. This effect is mediated by perceived design effort. Human‐AI co‐creation is viewed as involving greater design effort than AI‐only design, and human‐led co‐creation is perceived as requiring more effort than AI‐led co‐creation. We further examine the moderating roles of co‐creation modes and luxury product types, revealing that human‐led (vs. AI‐led) co‐creation enhances perceived design effort and, in turn, improves brand attitudes, particularly for hedonic (vs. functional) luxury products. These findings offer valuable insights for luxury brands navigating the era of intelligent design, highlighting the importance of balancing human creativity with AI capabilities to sustain brand value and foster consumer acceptance.
Human-AI co-creativity involves both humans and AI collaborating on a shared creative product as partners. In a creative collaboration, interaction dynamics, such as turn-taking, contribution type, and communication, are the driving forces of the co-creative process. Therefore, the interaction model is a critical and essential component of effective co-creative systems. There is relatively little research about interaction design in the co-creativity field, which is reflected in a lack of focus on interaction design in many existing co-creative systems; the primary focus of co-creativity research has been on the abilities of the AI. This article focuses on the importance of interaction design in co-creative systems through the development of the Co-Creative Framework for Interaction design (COFI), which describes the broad scope of possibilities for interaction design in co-creative systems. Researchers can use COFI for modeling interaction in co-creative systems by exploring alternatives in this design space of interaction. COFI can also be beneficial when investigating and interpreting the interaction design of existing co-creative systems. We coded a dataset of 92 existing co-creative systems using COFI and analyzed the data to show how COFI provides a basis for categorizing the interaction models of existing co-creative systems. We identify opportunities to shift the focus of interaction models in co-creativity to enable more communication between the user and AI, leading to human-AI partnerships.
As AI systems become more integrated into society, the relationship between humans and AI is shifting from simple automation to co-creative collaboration. This evolution is particularly important in education, where human intuition and imagination can combine with AI’s computational power to enable innovative forms of learning and teaching. This study is grounded in the #ppAI6 model, a framework that describes six levels of creative engagement with AI in educational contexts, ranging from passive consumption to active, participatory co-creation of knowledge. The model highlights progression from initial interactions with AI tools to transformative educational experiences that involve deep collaboration between humans and AI. In this study, we explore how educators and learners can engage in deeper, more transformative interactions with AI technologies. The #ppAI6 model categorizes these levels of engagement as follows: level 1 involves passive consumption of AI-generated content, while level 6 represents expansive, participatory co-creation of knowledge. This model provides a lens through which we investigate how educational tools and practices can move beyond basic interactions to foster higher-order creativity. We conducted a systematic literature review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines for reporting the levels of creative engagement with AI tools in education. This review synthesizes existing literature on various levels of engagement, such as interactive consumption through Intelligent Tutoring Systems (ITS), and shifts focus to the exploration and design of higher-order forms of creative engagement. The findings highlight varied levels of engagement across both learners and educators. For learners, a total of four studies were found at level 2 (interactive consumption). Two studies were found that looked at level 3 (individual content creation). Four studies focused on collaborative content creation at level 4. No studies were observed at level 5, and only one study was found at level 6. These findings show a lack of development in AI tools for more creative involvement. For teachers, AI tools mainly support levels two and three, facilitating personalized content creation and performance analysis with limited examples of higher-level creative engagement and indicating areas for improvement in supportive collaborative teaching practices. The review found that two studies focused on level 2 (interactive consumption) for teachers. In addition, four studies were identified at level 3 (individual content creation). Only one study was found at level 5 (participatory co-creation), and no studies were found at level 6. In practical terms, the review suggests that educators need professional development focused on building AI literacy, enabling them to recognize and leverage the different levels of creative engagement that AI tools offer.
Large language models (LLMs) show strong potential to support creative tasks, but the role of interface design is poorly understood. In particular, the effect of different modes of collaboration between humans and LLMs on co-creation outcomes is unclear. To test this, we conducted a randomized controlled experiment (N = 486) comparing (a) two variants of reflective, human-led modes in which the LLM elicits elaboration through suggestions or questions against (b) a proactive, model-led mode in which the LLM independently rewrites ideas. Assessing the effects on idea quality, diversity, and perceived ownership, we found that the model-led mode substantially improved idea quality but reduced idea diversity and users' perceived idea ownership. The reflective, human-led modes also improved idea quality while preserving diversity and ownership. We independently validated the findings in a different context (N = 640). Our findings highlight the importance of designing interactions with generative AI systems as reflective thought partners that complement human strengths and augment creative processes.
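The study's materials are not reproduced in the abstract; the snippet below simply renders the three collaboration modes as prompt templates to make the contrast concrete. The wording is an assumption, not the authors' instrument.

```python
# The two reflective, human-led modes versus the proactive, model-led mode,
# expressed as prompt templates; wording is illustrative, not the study's.
MODES = {
    "suggest":  ("Here is my idea: {idea}\nOffer two concrete suggestions "
                 "for elaborating it. Do not rewrite the idea."),
    "question": ("Here is my idea: {idea}\nAsk two probing questions that "
                 "push me to elaborate. Do not rewrite the idea."),
    "rewrite":  "Here is my idea: {idea}\nRewrite it as a stronger, more detailed version.",
}

def build_prompt(mode: str, idea: str) -> str:
    # "suggest" and "question" keep the human in the lead; "rewrite" hands
    # the initiative to the model, which the study links to lower diversity
    # and weaker perceived ownership.
    return MODES[mode].format(idea=idea)

print(build_prompt("question", "a city where streetlights respond to mood"))
```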
… artificial intelligence for game content generation, and faces many unsolved interaction design … This workshop therefore convenes CHI and game researchers to advance mixed-initiative …
Machine learning models provide functions to transform and generate image and text data. This promises powerful applications but it remains unclear how users can interact with these models. With my research, I focus on designing, implementing, and evaluating functional prototypes for understanding human-AI interactions. Methodologically, I focus on web-based experiments with a mixed-methods approach. Furthermore, I use these prototypes and generative models as a material to understand fundamental concepts in human-AI interactions, such as initiative, intent, and control. In an already conducted study, for example, I showed that the levels of initiative and control afforded by the UI influence perceived authorship when writing text. For the future, I plan to carry out more studies on collaborative writing. With my dissertation, I contribute to how we will build human-AI interactions and how we will collaborate with computational generative systems in future.
… and interaction. This work shows the importance of user modeling in designing mixed-initiative systems and has the new feature of employing user models to vary when to interact with a …
Exploration with a generative formalism must necessarily account for the nature of interaction between humans and the design space explorer. Established accounts of design interaction are made complicated by two propositions in Woodbury and Burrow's Keynote on design space exploration. First, the emphasis on the primacy of the design space as an ordered collection of partial designs (version, alternatives, extensions). Few studies exist in the design interaction literature on working with multiple threads simultaneously. Second, the need to situate, aid, and amplify human design intentions using computational tools. Although specific research and practice tools on amplification (sketching, generation, variation) have had success, there is a lack of generic, flexible, interoperable, and extensible representation to support amplification. This paper addresses the above, working with design threads and computer-assisted design amplification through a theoretical model of dialogue based on Grice's model of rational conversation. Using the concept of mixed initiative, the paper presents a visual notation for representing dialogue between designer and design space formalism through abstract examples of exploration tasks and dialogue integration.
This study explores the integration of Generative Design Assistants (GDAs), specifically machine learning based tools, in the architectural design process. It investigates how these tools, once confined to experimental realms, are now influencing mainstream architectural practice, particularly among novice architects. The research focuses on third- and fourth-year architecture students, examining how they adapt to and integrate these advanced AI tools into their design workflows. Through an empirical online workshop, the study collected design process recordings, design output success scores assigned by an independent jury, and post-experiment surveys. This approach provided insights into the timing, frequency, and sequence of GDA usage, as well as the influence of specific GDA features on design success. The research reveals three primary strategies in students' GDA usage: continuous use throughout the design process, selective problem-solving use, and initial ideation use followed by traditional methods. However, an over-reliance on GDAs was noted to potentially limit the designer's interpretive and developmental input. The survey shows that different GDAs have distinct strengths and impacts on the design process: of the GDAs selected for the experiment, ArchiGAN aids in discovery and ideation, while HouseGAN excels in reframing design problems. In conclusion, the study underscores the transformative potential and challenges of GDAs in architectural design and highlights the need for balanced GDA integration, with future research focusing on the long-term implications of GDAs in architectural education. This research aims to guide the effective integration of AI in architecture, enhancing the human designer's role rather than overshadowing it.
The scope of this paper is to formulate and evaluate the structure of a viable design workflow that combines a variety of computational tools and uses artificial intelligence (AI) to enhance the designer's capacity to explore design options within an expanded design space. In light of the autonomous and progressively post-anthropocentric generative capability of recent AI strategies for architectural design, we are interested in investigating some of the challenges involved in the insertion of such AI strategies into a new generative design system, involving data curation and the placement of any AI-assisted model in the overall workflow, as well as its (AI's) reciprocity with other computational methods such as discrete assembly and agent-based modeling. The paper presents our interrogation of the proposed AI-assisted framework, demonstrated in experiments of formulating multiple design workflows following different strategies. The workflow strategies show that integrating AI networks into a framework with other computational tools affords a different kind of design exploration than other methods; the prospect of novel solutions is heavily dependent on the interconnectedness of such methods and the dataset curation process. Collectively, the work contributes to innovation in architectural education and practice through enhancing scientific research (in line with UN Sustainable Development Goal 9).
… Empirical studies have shown that GenAI, such as ChatGPT and Notably, can accelerate … of AIassisted workflows and the ethical concerns surrounding AI-generated outputs. This …
… waiting process, this study sought to deepen understanding of how visual feedback affects user perceptions and subjective experience, providing both theoretical and empirical support …
… design tasks and design process. Therefore, this study aims to explore how the alignment among AI-assisted … Empirical studies have shown that social influence is a strong predictor of …
As AI technology penetrates customized fashion design, traditional TAM and UTAUT models have provided insights into the study of new-technology acceptance, but research on user intentions towards AI-assisted customization software requires the introduction of new variables. This study concentrates on the impact of interaction design (including interaction behavior, interaction content, and interaction form) and user perception (including perceived usefulness, ease of use, creativity, and social influence) on Chinese users' intentions. The model performs well in explaining users' intention to continue use and can thus provide a theoretical basis for the design optimization of other AI-assisted design products.
Digital media artists often face fragmented workflows, slow translation of ideas into concrete assets, limited personalised support and high technical barriers. This paper proposes an artificial-intelligence-assisted workflow designed to streamline the full creative pipeline for digital media art. We decompose creation into four stages—creative stimulation, asset generation, iterative co-creation and output optimisation—and embed generative models, computer vision and natural language processing at each stage. A central AI services layer combines a prompt engine, diffusion-based visual generators and retrieval modules, while a lightweight user-preference model adapts recommendations to individual styles. The system integrates into standard illustration, motion graphics and interaction design tools through dockable panels rather than replacing existing software. We evaluate the framework with 48 students and early-career practitioners completing illustration, motion and interactive tasks under two conditions: a traditional digital workflow and the proposed AI-assisted workflow. Metrics include ideation time, number of useful assets produced per hour, number of iterations to reach an acceptable concept, objective expert ratings and participant-reported outcomes. Results show that the AI-assisted workflow reduces early ideation time by an average of 43.2%, increases the rate of usable asset generation by 51.7% and lowers the number of required iterations by 29.4%, without a statistically significant drop in perceived authorship or originality. Participants report that AI support reduces technical friction and expands the range of explored visual directions while keeping final decisions in human hands. The study suggests that carefully designed AI assistance can lower barriers for digital media art creation and improve efficiency, while preserving room for individual expression and critical judgement.
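To make the four-stage decomposition concrete, here is a minimal pipeline skeleton under assumed stage responsibilities; the function bodies are placeholders, and the component roles (prompt engine, diffusion generator, retrieval, preference model) follow the abstract's description rather than the system's actual code.

```python
# Skeleton of the four-stage workflow described above; bodies are placeholders.
from dataclasses import dataclass, field

@dataclass
class Project:
    brief: str
    assets: list[str] = field(default_factory=list)
    log: list[str] = field(default_factory=list)

def stimulate(p: Project) -> Project:   # 1. creative stimulation
    p.log.append("prompt engine: expand brief into candidate visual directions")
    return p

def generate(p: Project) -> Project:    # 2. asset generation
    p.assets.append("draft_illustration_v1.png")
    p.log.append("diffusion generator: produce draft assets")
    return p

def co_create(p: Project) -> Project:   # 3. iterative co-creation
    p.log.append("retrieval + user-preference model: rank, vary, and refine")
    return p

def optimise(p: Project) -> Project:    # 4. output optimisation
    p.log.append("export optimised assets to the host tool's dockable panel")
    return p

project = optimise(co_create(generate(stimulate(Project("festival poster")))))
```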
Interior design often struggles to capture the subtleties of client experiences, leaving gaps between what clients feel and what designers can act upon. We present AIDED, a designer–AI co-design workflow that integrates multimodal client data into generative AI (GAI) design processes. In a within-subjects study with twelve professional designers, we compared four modalities: baseline briefs, gaze heatmaps, questionnaire visualizations, and AI-predicted overlays. Results show that questionnaire data were trusted, creativity-enhancing, and satisfying; gaze heatmaps increased cognitive load; and AI-predicted overlays improved GAI communication but required natural language mediation to earn trust. Interviews confirmed that an authenticity–interpretability trade-off is central to balancing client voices with professional control. Our contributions are: (1) a system that incorporates experiential client signals into GAI design workflows, (2) empirical evidence of how different modalities affect design outcomes, and (3) implications for future AI tools that support human–data interaction in creative practice.
Interactive AI systems, including search engines, recommender systems, conversational agents, and generative AI applications, are increasingly central to user experiences. However, …
… trends in HCI and highlights the role of generative AI in multimodal dataset synthesis and … language model (LLM) through the Ollama interface, guided by a base prompt tailored to …
… in Human-Computer Interaction (HCI) and UX research to systematically synthesize empirical … address large-scale generative models and data-driven interface generation, which are …
We are concurrently witnessing two significant shifts: voice and chat-based conversational user interfaces (CUIs) are becoming ubiquitous (especially more recently due to advances in generative AI and LLMs - large language models), and older people are becoming a very large demographic group (and increasingly adopting mobile technology on which such interfaces are present). However, despite the recent increase in research activity, age-relevant and inter/cross-generational aspects continue to be underrepresented in both research and commercial product design. Therefore, the overarching aim of this workshop is to increase the momentum for research within the space of hands-free, mobile, and conversational interfaces that centers on age-relevant and inter- and cross-generational interaction. For this, we plan to create an interdisciplinary space that brings together researchers, designers, practitioners, and users, to discuss and share challenges, principles, and strategies for designing such interfaces across the life span. We thus welcome contributions of empirical studies, theories, design, and evaluation of hands-free, mobile, and conversational interfaces designed with aging in mind (e.g., older adults or inter/cross-generational use). We particularly encourage contributions focused on leveraging recent advances in generative AI or LLMs. Through this, we aim to grow the community of CUI researchers across disciplinary boundaries (human-computer interaction, voice and language technologies, geronto-technologies, information studies, etc.) that are engaged in the shared goal of ensuring that the aging dimension is appropriately incorporated in mobile/conversational interaction design research.
This work investigates generative facial expression interfaces for intelligent agents from a meta-design perspective. We propose the Generative Personalized Facial Expression Interface (GPFEI) framework, which organizes rule-bounded spaces, character identity, and context–expression mapping to address challenges of control, coherence, and alignment in run-time facial expression generation. To operationalize this framework, we developed GenFaceUI, a proof-of-concept tool that enables designers to create templates, apply semantic tags, define rules, and iteratively test outcomes. We evaluated the tool through a qualitative study with twelve designers. The results show perceived gains in controllability and consistency, while revealing needs for structured visual mechanisms and lightweight explanations. These findings provide a conceptual framework, a proof-of-concept tool, and empirical insights that highlight both opportunities and challenges for advancing generative facial expression interfaces within a broader meta-design paradigm.
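The abstract describes rule-bounded spaces and context-expression mapping without giving their form; the toy below shows one plausible reading, in which identity rules clamp run-time expression generation. All tags, rules, and values are assumptions.

```python
# Toy rule-bounded context-to-expression mapping in the spirit of GPFEI;
# the tags, rule, and values are assumptions, not the framework's schema.
RULES = {"max_arousal": 0.6}  # character identity: never exceed moderate arousal

CONTEXT_MAP = {
    "user_error":   {"expression": "concern", "arousal": 0.4},
    "task_success": {"expression": "joy",     "arousal": 0.9},
}

def select_expression(context: str) -> dict:
    spec = dict(CONTEXT_MAP[context])
    # Clamp generation into the rule-bounded space so the character stays
    # coherent across contexts (the "control and coherence" challenge).
    spec["arousal"] = min(spec["arousal"], RULES["max_arousal"])
    return spec

print(select_expression("task_success"))  # {'expression': 'joy', 'arousal': 0.6}
```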
Generative UI is transforming interface design by facilitating AI-driven collaborative workflows between designers and computational systems. This study establishes a working definition of Generative UI through a multi-method qualitative approach, integrating insights from a systematic literature review of 127 publications, expert interviews with 18 participants, and analyses of 12 case studies. Our findings identify five core themes that position Generative UI as an iterative and co-creative process. We highlight emerging design models, including hybrid creation, curation-based workflows, and AI-assisted refinement strategies. Additionally, we examine ethical challenges, evaluation criteria, and interaction models that shape the field. By proposing a conceptual foundation, this study advances both theoretical discourse and practical implementation, guiding future HCI research toward responsible and effective generative UI design practices.
… Through advancements in generative models, AI enables a … intuitive, visual interfaces, solidifying HCI as critical to … GANs allowed for the generation of high-quality synthetic images. At …
… These tools are not available in other fields like HCI because UI patterns in this field are difficult to … to be used by automata (eg, algorithms, program analysis and synthesis techniques). …
Conversational agents (CAs), described as software with which humans interact through natural language, have increasingly attracted interest in both academia and practice because of improved capabilities driven by advances in artificial intelligence and, specifically, natural language processing. CAs are used in contexts such as people's private lives, education, and healthcare, as well as in organizations to innovate or automate tasks, for example in marketing, sales, or customer service. In addition to these application contexts, CAs take on different forms in terms of their embodiment, the communication mode, and their (often human-like) design. Despite their popularity, many CAs are unable to fulfill expectations, and fostering a positive user experience is challenging. To better understand how CAs can be designed to fulfill their intended purpose and how humans interact with them, a number of studies focusing on human-computer interaction have been carried out in recent years, which have contributed to our understanding of this technology. However, a structured overview of this research is currently lacking, impeding the systematic identification of research gaps and of knowledge on which future studies can build. To address this issue, we conducted an organizing and assessing review of 262 studies, applying a sociotechnical lens to analyze CA research regarding user interaction, context, agent design, as well as CA perceptions and outcomes. This study contributes an overview of the status quo of CA research, identifies four research streams through cluster analysis, and proposes a research agenda comprising six avenues and sixteen directions to move the field forward.
Recent years have seen an unprecedented level of technological uptake and engagement by the mainstream. From deepfakes for memes to recommendation systems for commerce, machine learning (ML) has become a regular fixture in society. This ongoing transition from purely academic confines to the general public is not smooth as the public does not have the extensive expertise in data science required to fully exploit the capabilities of ML. As automated machine learning (AutoML) systems continue to progress in both sophistication and performance, it becomes important to understand the ‘how’ and ‘why’ of human-computer interaction (HCI) within these frameworks. This is necessary for optimal system design and leveraging advanced data-processing capabilities to support decision-making involving humans. It is also key to identifying the opportunities and risks presented by ever-increasing levels of machine autonomy. In this monograph, the authors focus on the following questions: (i) What does HCI currently look like for state-of-the-art AutoML algorithms? (ii) Do the expectations of HCI within AutoML frameworks vary for different types of users and stakeholders? (iii) How can HCI be managed so that AutoML solutions acquire human trust and broad acceptance? (iv) As AutoML systems become more autonomous and capable of learning from complex open-ended environments, will the fundamental nature of HCI evolve? To consider these questions, the authors project existing literature in HCI into the space of AutoML and review topics such as user-interface design, human-bias mitigation, and trust in artificial intelligence (AI). Additionally, to rigorously gauge the future of HCI, they contemplate how AutoML may manifest in effectively open-ended environments. Ultimately, this review serves to identify key research directions aimed at better facilitating the roles and modes of human interactions with both current and future AutoML systems.
We review the history of human-automation interaction research, assess its current status and identify future directions. We start by reviewing articles that were published on this topic in the International Journal of Human-Computer Studies during the last 50 years. We find that over the years, automated systems have been used more frequently (1) in time-sensitive or safety-critical settings, (2) in embodied and situated systems, and (3) by non-professional users. Looking to the future, there is a need for human-automation interaction research to focus on (1) issues of function and task allocation between humans and machines, (2) issues of trust, incorrect use, and confusion, (3) the balance between focus, divided attention and attention management, (4) the need for interdisciplinary approaches to cover breadth and depth, (5) regulation and explainability, (6) ethical and social dilemmas, (7) allowing a human and humane experience, and (8) radically different human-automation interaction.
Generative AI, equipped with unique capabilities, is poised to turn the world of secure user interface (UI) design upside down, opening up possibilities for users to protect their digital interactions against future security threats. This chapter takes a deep plunge into the merger of generative AI with secure user interface design, presenting an exposition of the principles involved, the methodologies applied, practical embodiments, and ultimate ramifications. It begins by exploring the building blocks of UI design principles and the user-centred iterative approach, establishing a robust framework for understanding generative AI as a critical part of building secure, intuitive, and engaging user experiences. It then provides an overview of the generative AI approaches that could be deployed for secure UI design, such as GANs, VAEs, and autoregressive models, whose capabilities expand the scope of security measures, including authentication protocols, encryption, and user access rights, while retaining usability and aesthetic appeal. Finally, it surveys example applications of generative AI in support of secure UI design, including the automatic generation of secure layout patterns, dynamic adaptation of the interface to emerging threats, and the creation of cryptographic keys and secure symbols.
… design and evaluation of AI-driven learning tools, where students engage with pre-designed … CONCLUSIONS This study investigated the dynamics of human–AI collaboration in LLM-…
This action research study focuses on the integration of "AI assistants" in two Agile software development meetings: the Daily Scrum and a feature refinement, a planning meeting that is part of an in-house Scaled Agile framework. We discuss the critical drivers of success and establish a link between the use of AI and team collaboration dynamics. We conclude with a list of lessons learnt during the interventions in an industrial context and provide an assessment checklist for companies and teams to reflect on their readiness level. This paper is thus a road map to facilitate the integration of AI tools in Agile setups.
In this work, we propose a novel Personalised Large Language Model (PLLM) agent, designed to advance the integration and adaptation of large language models within the fields of human–robot interaction and human–computer interaction. While research in this field has primarily focused on the technical deployment of LLMs, critical academic challenges persist regarding their ability to adapt dynamically to user-specific contexts and evolving environments. To address this fundamental gap, we present a methodology for personalising LLMs using domain-specific data, tested on the NeuroSense EEG dataset. By enabling personalised data interpretation, our approach extends conventional implementation strategies, contributing to ongoing research on AI adaptability and user-centric applications. Furthermore, this study engages with the broader ethical dimensions of PLLMs, critically discussing issues of generalisability and data privacy in AI research. Our findings demonstrate the usability of the PLLM in a human–robot interaction scenario in real-world settings, highlighting its applicability across diverse domains, including healthcare, education, and assistive technologies. We believe the proposed system represents a significant step towards AI adaptability and personalisation, offering substantial benefits across a range of fields.
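The paper's EEG-driven pipeline is not reproduced here; as a rough sketch of the personalisation idea, the snippet below folds a per-user profile into a system prompt. The profile fields and wording are hypothetical.

```python
# Minimal sketch of profile-driven personalisation; the profile fields and
# wording are hypothetical, and the NeuroSense EEG pipeline is not modelled.
def build_system_prompt(profile: dict) -> str:
    return (
        "Adapt your interaction style to this user.\n"
        f"Preferred pace: {profile['pace']}\n"
        f"Domain expertise: {profile['expertise']}\n"
        f"Current attention estimate: {profile['attention']}"
    )

profile = {
    "pace": "slow, step by step",
    "expertise": "novice",
    "attention": "low (proxy derived from physiological signals)",
}
messages = [
    {"role": "system", "content": build_system_prompt(profile)},
    {"role": "user", "content": "Guide me through the next step of the task."},
]
# `messages` can then be passed to any chat-style LLM endpoint.
```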
Mixed-initiative interaction is a naturally occurring feature of human-human interactions. It is characterised by turn-taking and frequent changes of focus, agenda, and control among the "speakers". This human-based mixed-initiative interaction can be implemented through mixed-initiative systems, a popular approach to building intelligent systems that can collaborate naturally and effectively with people. Mixed-initiative systems exhibit various degrees of involvement with regard to the initiatives taken by the user or the system. In any discourse, the initiative may be shared between either a learner and a system agent or two independent system agents. Both parties establish and maintain a common goal and context and proceed with an interaction mechanism involving initiative-taking that optimises their progress towards the goal. However, the application of mixed-initiative interaction in web-based learning remains very limited. This paper discusses the design and implementation of a web-based learning system built as a mixed-initiative system, known as JavaLearn, which supports interaction between the system (in the form of a software agent) and the individual learner. The system supports learning through a problem-solving activity by demanding active learning behaviour from the learner while requiring minimal natural language understanding from the agent, and it embodies the application-dependent aspects of the discourse. It guides the learner in solving the problem by giving adaptive advice and hints, and it engages the learner in real-time interaction in the form of a "conversation". The principal features of the system are that it is adaptive and based on reflection, observation, and relation. The system acquires its intelligence through a finite state machine and rule-based agents.
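JavaLearn's actual state tables are not given in the abstract; the fragment below sketches how a finite-state, rule-based tutoring loop of this kind can be expressed, with states, events, and transitions invented for illustration.

```python
# Illustrative finite-state machine for a mixed-initiative tutoring loop;
# the states, events, and transitions are invented, not JavaLearn's tables.
TRANSITIONS = {
    "await_attempt": {"correct": "advance", "incorrect": "hint", "idle": "prompt"},
    "hint":          {"correct": "advance", "incorrect": "worked_example"},
    "prompt":        {"any": "await_attempt"},   # system takes the initiative
}

def next_state(state: str, event: str) -> str:
    table = TRANSITIONS.get(state, {})
    return table.get(event, table.get("any", state))

# The system intervenes when the learner stalls ("idle"), otherwise it reacts:
trace = ["await_attempt"]
for event in ["idle", "any", "incorrect", "incorrect"]:
    trace.append(next_state(trace[-1], event))
print(" -> ".join(trace))
# await_attempt -> prompt -> await_attempt -> hint -> worked_example
```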
… that mixed-initiative decision sharing depends on designing … important design implication for mixedinitiative systems … be understood before mixed-initiative systems are realistic in …
… design of mixed-initiative AI systems. Although there is now active research in the area of mixed initiative interactive systems, … theory for the design of mixed initiative systems. The paper …
… Mixed-initiative interaction [8] has been studied for the past 30 years in the areas of artificial intelligence (AI… suggests a systematic way by which their capabilities for mixed-initiative in…
… mixed-initiative interaction between hierarchical planners and human operator under DARPAs Mixed-Initiative … In parallel, human factors and artificial intelligence practitioners analyzed …
This study addresses the problem that enterprise IT service desks increasingly embed AI assistants in support portals and ticket workflows, yet many organizations lack quantitative evidence on whether human oversight and automation quality jointly improve user experience and service performance. The purpose was to test a quantitative, cross-sectional, case study-based model linking Human-AI Collaboration (HAC) and Workflow Automation Effectiveness (WAE) to User Experience (UX) and perceived IT Support Service Performance (SP). Survey data were collected from enterprise support cases; 320 questionnaires were distributed, 259 were returned, and 247 valid responses were analyzed (77.2% usable response rate; 71.7% end users and 28.3% IT support personnel; 54.3% used AI support weekly or more). Constructs were measured with multi-item five-point Likert scales and showed favorable perceptions: HAC M = 3.91 (SD = 0.64), WAE M = 3.84 (SD = 0.69), UX M = 3.88 (SD = 0.62), and SP M = 3.79 (SD = 0.66), with good to excellent reliability (Cronbach alpha 0.86 to 0.91). The analysis plan applied descriptive statistics, reliability testing, Pearson correlations, multiple regression, and bootstrapped mediation (5,000 samples). Associations were positive and significant (HAC with UX r = 0.62 and UX with SP r = 0.63, both p < .001). Regression indicated that HAC (beta = 0.41) and WAE (beta = 0.33) explained 49% of UX variance (R2 = 0.49, p < .001); WAE (beta = 0.38), HAC (beta = 0.21), and UX (beta = 0.29) explained 56% of SP variance (R2 = 0.56, p < .001). UX partially mediated the HAC to SP relationship (indirect beta = 0.29, 95% CI [0.19, 0.40]). Implications suggest that AI enabled IT support should be governed as a hybrid workflow with clear escalation rules and reliable automation, and continuously evaluated using joint metrics that track experience alongside efficiency outcomes.
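For readers unfamiliar with the analysis plan, the sketch below reproduces its shape (OLS regressions plus a 5,000-sample bootstrapped indirect effect) on synthetic stand-in data; the generated coefficients will not match the study's reported values.

```python
# Shape of the reported analysis plan on synthetic data: OLS for the paths
# and a 5,000-resample bootstrap CI for the HAC -> UX -> SP indirect effect.
import numpy as np

rng = np.random.default_rng(0)
n = 247                                             # matches the valid sample size
hac = rng.normal(3.9, 0.64, n)                      # Human-AI Collaboration
ux = 0.4 * hac + rng.normal(0, 0.5, n)              # User Experience
sp = 0.2 * hac + 0.3 * ux + rng.normal(0, 0.5, n)   # Service Performance

def ols_slopes(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    X1 = np.column_stack([np.ones(len(y)), X])      # add intercept column
    return np.linalg.lstsq(X1, y, rcond=None)[0][1:]

def indirect(h: np.ndarray, u: np.ndarray, s: np.ndarray) -> float:
    a = ols_slopes(h[:, None], u)[0]                # path a: HAC -> UX
    b = ols_slopes(np.column_stack([h, u]), s)[1]   # path b: UX -> SP | HAC
    return a * b

boot = np.array([indirect(hac[idx], ux[idx], sp[idx])
                 for idx in (rng.integers(0, n, n) for _ in range(5000))])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"bootstrapped indirect effect 95% CI: [{lo:.2f}, {hi:.2f}]")
```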
This study conducts an empirical exploration of generative Artificial Intelligence (AI) tools across the game development pipeline, from concept art creation to 3D model integration in a game engine. Employing AI generators like Leonardo AI, Scenario AI, Alpha 3D, and Luma AI, the research investigates their application in generating game assets. The process, documented in a diary-like format, ranges from producing concept art using fantasy game prompts to optimizing 3D models in Blender and applying them in Unreal Engine 5. The findings highlight the potential of AI to enhance the conceptualization phase and identify challenges in producing optimized, high-quality 3D models suitable for game development. This study reveals the current limitations and ethical considerations of AI in game design, suggesting that while generative AI tools hold significant promise for transforming game development, their full integration depends on overcoming these hurdles and gaining broader industry acceptance.
Applications of Artificial Intelligence (AI) are increasingly significant for designers across various fields, with a particular emphasis on architectural design. These applications offer support by providing relevant data and suggesting diverse design ideas. This study explores the applications of AI in the architectural computational design process, employing a mixed-review approach that combines bibliometric analysis and systematic review. The objective is to address the research gap identified through the review by clarifying the primary functions of AI in all architectural design stages, as well as the associated opportunities and challenges. This study examines the application scale, methodologies, and tools of AI in architectural design. It then conducts a comprehensive survey of commonly used AI tools, analyzing and comparing them based on phase classification, deployment classification, scale of application, and their integration with BIM, VR, and parametric design. Additionally, the study proposes an AI-powered Architectural Computational Design Process (AI-ACD), a workflow designed to aid architects in effectively incorporating AI technologies into the various stages of computational architectural design. Afterwards, the study introduces a classification of commonly used AI tools and maps them to specific design tasks within the AI-ACD workflow. Multiple design scenarios and a set of core integration principles are proposed to demonstrate how AI engagement can be tailored to different levels of use and project contexts. Finally, the study presents a matrix that maps each core integration principle to the five key design stages. The study demonstrates how AI-ACD, an AI-assisted workflow, enhances the architectural design process through visualization, data analysis, and optimization tools, adaptable to architecture, urban design, and heritage, ultimately boosting creativity, efficiency, and design quality.
… Artificial Intelligence (AI) technologies in the design process … In conclusion, the case studies of AI in architecture illustrate … AI’s ability to optimize designs, enhance energy efficiency, …
This study proposes a novel artificial intelligence (AI)-assisted design model that combines Variational Autoencoders (VAE) with reinforcement learning (RL) to enhance innovation and efficiency in cultural and creative product design. By introducing AI-driven decision support, the model streamlines the design workflow and significantly improves design quality. The study establishes a comprehensive framework and applies the model to four distinct design tasks, with extensive experiments validating its performance. Key factors, including creativity, cultural adaptability, and practical application, are evaluated through structured surveys and expert feedback. The results reveal that the VAE + RL model surpasses alternative approaches across multiple criteria. Highlights include a user satisfaction rate of 95%, a Structural Similarity Index (SSIM) score of 0.92, model accuracy of 93%, and a loss reduction to 0.07. These findings confirm the model's superiority in generating high-quality designs and achieving high user satisfaction. Additionally, the model exhibits strong generalization capabilities and operational efficiency, offering valuable insights and data support for future advancements in cultural product design technology.
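The paper does not publish its architecture; as background, here is a minimal VAE with the standard reparameterisation and ELBO-style loss in PyTorch, with a comment marking where an RL reward could hook in. Layer sizes and the reward hook are assumptions.

```python
# Minimal VAE illustrating the generative half of a VAE + RL design model;
# layer sizes and the RL hook are assumptions, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, dim: int = 784, latent: int = 16):
        super().__init__()
        self.enc = nn.Linear(dim, 64)
        self.mu = nn.Linear(64, latent)
        self.logvar = nn.Linear(64, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    rec = F.mse_loss(recon, x)                                   # reconstruction term
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # An RL policy could later steer decoding with a design-quality reward.
    return rec + kld

x = torch.rand(8, 784)
loss = vae_loss(x, *TinyVAE()(x))
```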
… automation interaction design research and industry application needs. This … review explores adaptive automation technologies that dynamically allocate tasks between humans and …
The projected introduction of conditional automated driving systems to the market has sparked multifaceted research on human–machine interfaces (HMIs) for such systems. By moderating the roles of the human driver and the driving automation system, the HMI is indispensable in avoiding side effects of automation such as mode confusion, misuse, and disuse. In addition to safety aspects, the usability of HMIs plays a vital role in improving the trust and acceptance of the automated driving system. This paper aggregates common research methods and findings based on an extensive literature review. Empirical studies, frameworks, and review articles are included. Findings and conclusions are presented with a focus on study characteristics such as test cases, dependent variables, testing environments, or participant samples. These methods and findings are discussed critically, taking into consideration requirements for usability assessments of HMIs in the context of conditional automated driving. The paper concludes with a derivation of recommended study characteristics framing best practice advice for the design of experiments. The advised selection of scenarios and metrics will be applied in a future validation study series comprising a driving simulator experiment and three real driving experiments on test tracks in Germany, the USA, and Japan.
Human-machine interfaces are without doubt one of the constitutive parts of an automation system. However, it is only recently that they have received appropriate attention, owing to growing concern about aspects related to maintenance, safety, operator awareness, and so on. Although there are software solutions on the market that allow the design of efficient and complex interaction systems, the rational design of the overall interface system is not widespread, especially for large-scale systems where the monitoring and supervision systems may include hundreds of interfacing screens. It is in this respect that this communication provides an example of such a development, also showing how to include the automation level's operational modes in the interfacing system. Another important aspect is how the human operator can enter the control loop in different ways; such interaction needs to be considered an integral part of the automation procedure, as does communication with the automation device. This paper presents the application of design and operational-mode guidelines in automation.
… been used to evaluate HAI. It has been used to evaluate human-automation interfaces for usability … Blandford, “An approach to formal verification of human computer interaction,” Formal …
… experts to simulate the human-computer interaction process and evaluate the usability of … The heuristic evaluation method is low cost to use, is not affected by the time period of machine …
… automated methods. Despite the potential advantages, the space of usability evaluation automation is … In this article, we discuss the state of the art in usability evaluation automation, and …
… designer. However, if the process of constructing and using formal models could be automated as part of the interface design … System for semi-Automated GOMS Evaluation). Given the …
… we review theoretical tools for understanding human interruption… interface design to help people effectively manage interruptions. … However, modern automated and computer-aided com…
Nowadays, modern information systems (emerging technologies) are increasingly becoming an integral part of our daily lives and have begun to pose a serious challenge for human-computer interaction (HCI) professionals, as emerging technologies in the areas of mobile and cloud computing and the internet of things (IoT) are calling for more devotion from HCI experts in terms of systems interface design. Mobile platform users nowadays include children, elderly people, and people with disabilities or disorders, all demanding an effective user interface that can meet their diverse needs, even on the move, at anytime and anywhere. This paper reviews 43 current articles related to HCI interface design approaches for modern information systems, with the aim of identifying and determining the effectiveness of these methods. The study found that current HCI design approaches were based on the desktop paradigm, which falls short of providing location-based services to mobile platform users. The study also discovered that almost all of the current interface design standards used by HCI experts for the design of user interfaces were not effective and supportive of emerging technologies, due to the flexible nature of these technologies. Based on the review findings, the study suggests combining human-centred design with agile methodologies for interface design, and calls on future work to use qualitative or quantitative approaches to further investigate HCI methods of interface design, with emphasis on cloud-based technologies and other organizational information systems.
… arithmetic, evaluation methods and software framework of the information systems to eventually realize and apply the theory of human-computer interaction. Therefore, the using …
… a promising measure for ergonomic design and evaluation of human–computer interaction. … Modern ship bridges are highly automated and therefore the safety of the ship operations is …
… level of automation in any system design is the evaluation of … Designers automate every subsystem that leads to an … informed consent in humanmachine collaboration: The role …
… an evaluation by Jagacinski of the utility of theoretical models (in control) for describing human performance with complex automated … Using a brain-computer interface to steer a HRP-2 …
The purpose of this communication is to show an additional advantage of the well-known guide for start and stop modes, GEMMA, which should motivate its use, and to introduce the consideration of the human operator as an integral part of the automation procedure. The inclusion of the human operator, as well as his interplay with the automation device, needs guidelines that can be drawn from joining the GEMMA structured approach with concepts borrowed from cognitive ergonomics and human-computer interaction. Finally, this paper shows some examples of human-machine interfaces (an industrial panel and an interface display screen).
The final synthesis groups the literature into four dimensions. First, the theory-building group on human-AI collaboration examines the core paradigms and models of co-creative cooperation. Second, the tool-building group focuses on the system-level practice of technologies such as Generative UI. Third, the mixed-initiative and intent-management group addresses the technical interaction logic and the autonomous behaviour of AI. Finally, the evaluation and impact group covers industry perceptions, usability evaluation, and the ethical and cognitive factors of societal applications. This classification is intended to span the full arc from abstract theory through concrete technical implementation to evaluation practice, providing a systematic perspective for studying AI-assisted design processes and interaction design methods.