The Co-Development of AIGC and Cognitive Abilities from an Interaction Design Perspective
Theoretical Frameworks for Cognitive Mechanisms and Human-AI Understanding
This group of studies examines core cognitive concepts in human-AI interaction, such as perceived shared understanding (PSU), theory of mind (ToM), and the classification of epistemic relationships between users and AI. It also introduces experimental methods for behavioral phenotyping of large language models, aiming to lay a cognitive-science foundation for interaction design in the AIGC era.
- On the Same Page: Dimensions of Perceived Shared Understanding in Human-AI Interaction(Qingyu Liang, Jaime Banks, 2025, ArXiv)
- Theory of Mind in Human-AI Interaction(Qiaosi Wang, S. Walsh, Mei Si, Jeff Kephart, Justin D. Weisz, Ashok K. Goel, 2024, Extended Abstracts of the CHI Conference on Human Factors in Computing Systems)
- The case for human–AI interaction as system 0 thinking(Massimo Chiriatti, M. Ganapini, Enrico Panai, Mario Ubiali, Giuseppe Riva, 2024, Nature Human Behaviour)
- The Role of Design in the Future of Human-AI Interaction(Janin Koch, Wendy E. Mackay, Albrecht Schmidt, 2025, Adjunct Proceedings of the Sixth Decennial Aarhus Conference: Computing X Crisis)
- FROM CULTURAL COGNITION TO USER ENGAGEMENT: CONSTRUCTING AN EVALUATION FRAMEWORK FOR AIGC TECHNOLOGY IN THE DESIGN OF CREATIVE PRODUCTS(Maocong Lin, Gaofeng Mi, 2025, Current Opinion in Psychiatry)
- Exploring Cognitive Strategies in Human-AI Interaction: ChatGPT's Role in Creative Tasks(Jelle Boers, Terra Etty, Martine Baars, Kim van Boekhoven, 2025, Journal of Creativity)
- Classifying Epistemic Relationships in Human-AI Interaction: An Exploratory Approach(Shengnan Yang, Rongqian Ma, 2025, ArXiv)
- CogBench: a large language model walks into a psychology lab(Julian Coda-Forno, Marcel Binz, Jane X. Wang, Eric Schulz, 2024, ArXiv)
The Evolution of Co-Creation and Design Paradigms
This cluster focuses on how AIGC is transforming creative workflows and design paradigms. The studies cover the shift from "interaction" to "relational processes", co-creation mechanisms in the metaverse, and how design features (such as anthropomorphism and interactivity) shape creativity through the mediating role of cognitive flexibility, emphasizing AIGC's role as a collaborative partner that sparks human inspiration.
- Exploring the Intersection of Generative AI and Cognitive Science: Insights and Implications(Napat Sukthong, 2024, 2024 International Conference on Intelligent Computing and Next Generation Networks (ICNGN))
- The influence of individuals’ capability to use generative AI on their idea generation: the mediating role of cognitive information-processing styles(Patrick Held, Tim Heubeck, Reinhard Meckl, 2025, European Journal of Innovation Management)
- Augmented Learning for Joint Creativity in Human-GenAI Co-Creation(Y. Luan, YeunJoon Kim, Jing Zhou, 2025, Information Systems Research)
- Sketching with generative AI: verbal but not visual inspiration mitigates cognitive fixations(Yaxin Liu, Maxwell S. Kay, Adam E. Green, Roger E. Beaty, 2025, No journal)
- Dreaming Phantom in Immersive Experience: AIGC For Artistic Practice(Jiayang Huang, Yiran Chen, D. Yip, 2023, Proceedings of the 16th International Symposium on Visual Information Communication and Interaction)
- A Case Study on Video Creation Education Using Generative AI: Worldbuilding, Cognitive Load, and Self-Efficacy Changes Among Multinational Students(S. Shin, 2025, Korea Journal of Communication Studies)
- Introducing the concept of relational processes in Human-AI creativity(Àlex Valverde-Valencia, 2025, Hipertext.net)
- Unveiling the cognitive mechanisms of human–AGI co-creation: evidence from a case-based EEG and behavioral study(Fang Wang, Jiawen Wang, Shanshan Guo, Linpo Xia, 2026, No journal)
- AI-Driven Creativity Unleashed: Exploring the Synergistic Effects of UGC and AIGC in Metaverse Gaming from a User Perspective(Yanxiang Zhang, Wenbin Hu, 2024, Proceedings of the 29th International ACM Conference on 3D Web Technology)
- THE EFFECTS OF ARTIFICIAL INTELLIGENCE GENERATIVE CONTENT ASSISTED PRODUCT DESIGN ON COLLEGE STUDENTS' CREATIVE ABILITY AND MOTIVATION(Fanglian Li, Muhamad Ezran Zainal Abdullah, Guan Tan Tse, 2025, International Journal of Education, Psychology and Counseling)
- Analyzing the adoption of AIGC tools in fashion design: An S-O-R framework integrating task–technology fit(Mengyun Yang, Jiabing Jin, 2025, PLOS One)
- The impact of AIGC design features on user creativity: The mediating roles of cognitive flexibility and cognitive persistence.(Zhixun Chen, Zihan Li, Ying Qu, 2026, Acta psychologica)
- Incorporating AIGC into Design Ideation: A Study on Self-Efficacy and Learning Experience Acceptance under Higher-Order Thinking(Kuo-Liang Huang, Yi-chen Liu, M. Dong, 2024, Thinking Skills and Creativity)
- Evaluating the impact of AIGC-Supported design ideation on Designers' cognitive load and creativity(Jinchi Fu, Wanming Zhong, Muyao Shen, Dengkai Chen, 2025, Displays)
- Creative personal identity in the age of generative AI: A social-cognitive pathway of AI literacy, self-efficacy, and mindset(Hanhui Li, Yurui Zhang, Mingwen Chen, Tao Zhao, Min Jou, 2025, Comput. Hum. Behav.)
- Balancing Affective Engagement and Cognitive Load in Generative-AI-Based Learning: Empathy, Immersion, and Emotional Design in Design Education(Wonsub Lee, Sungbok Chang, Jungho Suh, 2025, Education Sciences)
Educational Empowerment and the Cultivation of Higher-Order Thinking Skills
This cluster explores applications of AIGC in educational settings, focusing on how AI, serving as a Socratic opponent or assistive tool, can strengthen learners' critical thinking, self-efficacy, and capacity for self-directed learning. The studies analyze how technological intervention reshapes traditional educational theory, as well as the cognitive processes involved in scenarios such as digital reading.
- AI as a Socratic Opponent: Comparative network analyses from a college psychology course(Shantanu Tilak, Baylee Brown, Hovhannes Madanyan, Gabriella Washington, Jazzmin Collier, Rebecca Ragnedda, J. Mitchell, Jasha Brewington, B. Hall, Kristopher Barnum, Courtney Moore, Allure Harris, Beckham Rombaoa, J. Evans, Emily Shipp, Makenzie Short, Jamal Thomas, Carmello Browne, Hassan Abbasi, Trent Hammer, Nathan C. Prince, Kadie Kennedy, 2025, Journal of Sociocybernetics)
- Generative AI Dependency in Higher Education: Investigating Continuance Intention, Cognitive Response, and Creativity(Hongjie Ping, Wei Wang, Yingying Xie, Shengnan Lv, Jielu Li, Lingling Weng, 2025, 2025 7th International Conference on Computer Science and Technologies in Education (CSTE))
- Revisiting Knowles’ "Self-Directed Learning" Theory in the Age of AIGC: A Conceptual Reconstruction Based on the Relationship Between "Technological Dependence" and "Learner Autonomy" in Adult Learners with Disabilities(Wei Da, 2025, iEducation)
- Comparative Analysis of GPT-4 and Human Graders in Evaluating Human Tutors Giving Praise to Students(Dollaya Hirunyasiri, Danielle R. Thomas, Jionghao Lin, K. Koedinger, Vincent Aleven, 2023, No journal)
- Enhancing Critical Thinking: Exploring Human-AI Synergy in Student Cognitive Development(Imane JAI LAMIMI, Sara El Jemli, Imane Zeryouh, 2025, Arab World English Journal)
- Unveiling cognitive processes in digital reading through behavioural cues: A hybrid intelligence (HI) approach(Yoon Lee, Gosia Migut, Marcus Specht, 2025, Br. J. Educ. Technol.)
- Dancing with swarms: eco-centered psychological facilitation and the future of learning(P. Lushyn, Y. Sukhenko, 2025, Bulletin of Postgraduate education (Series Social and Behavioral Sciences; Management and Administration))
- AI as Extraherics: Fostering Higher-order Thinking Skills in Human-AI Interaction(Koji Yatani, Zefan Sramek, Chi-Lan Yang, 2024, ArXiv)
Affective Computing and Interactive Mental Health Interventions
This cluster centers on applied AIGC practice in mental health, including CBT-based therapy, PTSD interventions, empathic dialogue, and simulated-patient training. The studies emphasize how AI can provide social support and strengthen therapeutic engagement by simulating human emotional cognition and leveraging VR environments and gamified interaction.
- Design and Implementation of an AI-Driven Mental Health Chatbot: A Generative AI Model(Dev Gupta, Vinita Swami, Divyanshu Shukla, K. Nimala, 2024, 2024 International Conference on Innovative Computing, Intelligent Communication and Smart Electrical Systems (ICSES))
- A Novel Cognitive Behavioral Therapy–Based Generative AI Tool (Socrates 2.0) to Facilitate Socratic Dialogue: Protocol for a Mixed Methods Feasibility Study(Philip Held, Sarah A. Pridgen, Yaozhong Chen, Zuhaib Akhtar, Darpan Amin, Sean Pohorence, 2024, JMIR Research Protocols)
- DeepThInk: Designing and probing human-AI co-creation in digital art therapy(Xuejun Du, Pengcheng An, Justin Leung, April Li, L. Chapman, J. Zhao, 2023, Int. J. Hum. Comput. Stud.)
- Exploring ChatGPT's Capabilities, Stability, Potential and Risks in Conducting Psychological Counseling through Simulations in School Counseling(Yang Ni, Y. Cao, 2025, ArXiv)
- Design of a Multimodal Virtualization PTSD Therapy Interactive System Based on AIGC(Yinan Shang, 2026, Exploring Science Academic Conference Series)
- NeuroBridge: Using Generative AI to Bridge Cross-neurotype Communication Differences through Neurotypical Perspective-taking(Rukhshan Haroon, Kyle Wigdor, Katie Yang, Nicole Toumanios, Eileen T Crehan, Fahad R. Dogar, 2025, Proceedings of the 27th International ACM SIGACCESS Conference on Computers and Accessibility)
- Leveraging Large Language Models for Simulated Psychotherapy Client Interactions: Development and Usability Study of Client101(Daniel Cabrera Lozoya, Mike Conway, Edoardo Sebastiano De Duro, Simon D'Alfonso, 2024, JMIR Medical Education)
- Enhancing therapeutic engagement in Mental Health through Virtual Reality and Generative AI: a co-creation approach to trust building(Attilio Della Greca, Ilaria Amaro, Paola Barra, Emanuele Rosapepe, Genny Tortora, 2024, 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM))
- The Immersive Art Therapy Driven by AIGC: An Innovative Approach to Alleviating Children's Nyctophobia(Jinlin Miao, Zhiyuan Zhou, Yilei Wu, Fenggui Rao, Fanjing Meng, 2025, Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems)
- BetterMood: A human-like AI counseling service for adolescents and young adults(Do Hyung Kim, Soeun Baek, Joonsung Lee, Taehwi Lee, Soyeon Park, Beomchan You, J. Hur, Minah Kim, Chang-Gun Lee, 2025, Digital Health)
- Gamifying intimacy: AI-driven affective engagement and human-virtual human relationships(Liang Ge, Tingting Hu, 2025, Media, Culture & Society)
- AI sensation and engagement: Unpacking the sensory experience in human-AI interaction(P. Foroudi, Reza Marvi, Dongmei Zha, 2025, Int. J. Inf. Manag.)
Trust Building, Psychological Drivers, and Acceptance Models
Drawing on theoretical models such as TAM, ELM, and C-A-B, this cluster examines users' trust dynamics toward AIGC, credibility assessment, and how anthropomorphic features drive engagement behavior. The research centers on trust calibration and on how users construct psychological expectations of AI under bounded rationality.
- Trust it or not: Understanding users’ motivations and strategies for assessing the credibility of AI-generated information(Mengxue Ou, Han Zheng, Yueliang Zeng, Preben Hansen, 2024, New Media & Society)
- Trust Under Bounded Rationality: Exploring Human-AI Interaction in Decision-Making Through Large Language Models(Waleed Almutairi, Ibrahim Almatrodi, 2025, SAGE Open)
- Beyond Usefulness: A Cognitive-Affective-Behavioral Analysis of Student Trust and Dependence on Generative AI in Higher Education(Hao Zheng, Xinyi Hu, Yonggu Wang, 2025, Proceedings of the 2025 International Conference on Educational Technology and Artificial Intelligence)
- An investigation of factors influencing user information adoption behavior in human–AI interaction contexts: a hybrid SEM and fsQCA approach(Gan Tang, Junbo Mao, Miaocheng Yang, 2026, Aslib Journal of Information Management)
- Implicit Expectations and Cognitive Construction: Dual Pathways Shaping Graduate Students’ Sustained Engagement With Generative AI(Hongfeng Zhang, Fanbo Li, Xiaolong Chen, 2025, Journal of Educational Computing Research)
- The impact of AI anchor anthropomorphism on users’ willingness to co-create value in tourism live-streaming contexts: the mediating role of social presence and the moderating role of perceived control(Qiongwei Ye, Yuting Li, Yumei Luo, Zhilin Pang, 2026, Frontiers in Psychology)
- Research on the Audience Aesthetic Acceptance Mechanism of Brand Artistic Personification Communication from the Perspective of AIGC(汝吉 魏, 2025, Art Research Letters)
- Artificial Fantasy: The Effect of Generative AI on Cognitive Empathy in Creative Problem Solving(Alina A. Karl, Emily Theophilou, Davinia Hernández Leo, 2025, No journal)
- “Trust, but Verify”: A Reflexive Thematic Analysis of Human–AI Interaction(Mohammad Khan, Christopher Fong, Shilpi Tripathi, 2025, Advances in Social Sciences Research Journal)
- Intentional or Designed? The Impact of Stance Attribution on Cognitive Processing of Generative AI Service Failures(Dong Lv, Rui Sun, Qiuhua Zhu, Jiajia Zuo, Shukun Qin, Yue Cheng, 2024, Brain Sciences)
Cognitive Load, Risk Management, and Cognitive Sovereignty
This cluster critically analyzes the cognitive challenges that AIGC introduces, including the ironies of automation, cognitive overload, skill degradation from over-reliance (cognitive complacency), and AI hallucination. The studies argue for reconstructing cognitive sovereignty and examine the mechanisms of cognitive conflict that arise when AI services fail.
- Ironies of Generative AI: Understanding and Mitigating Productivity Loss in Human-AI Interaction(Auste Simkute, Lev Tankelevitch, Viktor Kewenig, A. Scott, Abigail Sellen, Sean Rintel, 2024, International Journal of Human–Computer Interaction)
- Generative AI and Cognitive Challenges in Research: Balancing Cognitive Load, Fatigue, and Human Resilience(Syed Md Faisal Ali Khan, Salem Suhluli, 2025, Technologies)
- Expropriated Minds: On Some Practical Problems of Generative AI, Beyond Our Cognitive Illusions(Fabio Paglieri, 2024, Philosophy & Technology)
- AI Hallucination and Strategies to Overcome: Enhancing Human-AI Interaction(Shrishti Kushwah, Nigam Dave, 2025, 2025 International Conference on Artificial Intelligence and Machine Vision (AIMV))
- Cognitive dissonance in programming education: A qualitative exploration of the impact of generative AI on application-directed learning(Mark G. Dawson, Rowan Deer, Samuel Boguslawski, 2025, Computers in Human Behavior Reports)
- The Cognitive Cost of AI Assistance: Protecting Human Thinking in the Age of Generative AI(Jonathan H. Westover, 2025, Human Capital Leadership Review)
- Keeping up with generative AI: effects of engagement characteristics, cognitive appraisals, and affective reactions on user adaptation(Xinyu Lu, Jisu Kim, 2025, Behaviour & Information Technology)
- Revisiting Rogers' Paradox in the Context of Human-AI Interaction(Katherine M. Collins, Umang Bhatt, Ilia Sucholutsky, 2025, ArXiv)
- Reconstructing Cognitive Sovereignty in the Era of Generative AI: A Mixed-Methods Study on User Correction of AI-Generated Misinformation(Chenyu Gu, Xiaojie Zhuo, Kunling Jiang, 2025, International Journal of Human–Computer Interaction)
- Timing Matters: How Generative AI Impacts Creativity Through Motivation and Cognitive Processes(H. Chung, Jeong-hyun Lee, Soobin Wang, MinSeok Jo, Hyunjee Hannah Kim, 2025, Academy of Management Proceedings)
Cognitive Augmentation for Specific Populations and Interaction Optimization Methods
This cluster explores AI's potential as a cognitive augmentation tool in specific scenarios, such as decision support for older adults and the empowerment of individuals with borderline intellectual functioning. It also covers methodological work on optimizing human-AI collaborative systems and improving interaction effectiveness through human feedback (RLHF), iterative design, and cognitive measurement techniques.
- Transforming Individuals with Borderline Intellectual Functioning into Cognitively Augmented Workers: AI-Integrated Co-Adaptive, Closed-Loop Brain–Computer Interface(Hyunghun Kim, 2025, MechEcology)
- Preference-Aligned Options from Generative AI Compensates for Age-Related Cognitive Decline in Decision Making(S. Ishibashi, K. Tamura, Ayana Goma, Kenta Yamamoto, Kouhei Masumoto, 2025, ArXiv)
- Human-AI Interaction in the Age of Large Language Models(Diyi Yang, 2024, No journal)
- VISUAL SIMULACRA AND CULTURAL DISPLACEMENT: RECONSTRUCTING CULTURAL ILLUSION IN AIGC ANIMATION THROUGH PERCEPTUAL PSYCHOLOGY AND COGNITIVE SEMIOTICS(Xin Zhao, Xianhui Liu, 2025, Current Opinion in Psychiatry)
- ExpertGen: A Comparative Analysis of User Performance, Cognitive Workload, and Trust in Domain-Tailored Generative AI(N. Jiang, Wei Zhou, Sogand Hasanzadeh, Vincent G. Duffy, 2025, No journal)
- Harnessing generative AI: Exploring its impact on cognitive engagement, emotional engagement, learning retention, reward sensitivity, and motivation through reinforcement theory(Huili Yang, 2025, Learning and Motivation)
- SCIENTIFIC-METHODICAL ASPECT OF IMPROVING HUMAN COMPUTER COMMUNICATION SYSTEM IN SOFTWARE DEVELOPMENT(Ergasheva Shaxnoza Mavlonboevna, 2024, International Journal of Pedagogics)
- Facilitating Human Feedback for GenAI Prompt Optimization(J. Sherson, Florent Vinchon, 2024, No journal)
- Human-in-the-Loop Interaction for continuously Improving Generative Model in Conversational Agent for Behavioral Intervention(Xin Sun, J. Bosch, Jan de Wit, E. Krahmer, 2023, Companion Proceedings of the 28th International Conference on Intelligent User Interfaces)
- Cognitive Measurement with Generative AI: A Novel Interactive Situational Assessment of Learning Motivation and Strategy Using LLM Multi-Agents(Yi Zhang, Haotian Feng, Chen Xue, Yatong Zu, Hao Xu, 2025, No journal)
From the intersecting perspectives of interaction design and cognitive science, this report systematically maps the multidimensional pathways along which AIGC and human cognitive abilities co-evolve. It covers foundational theoretical frameworks such as shared understanding and theory of mind, and examines applied practice in creative design, educational empowerment, mental health, and cognitive augmentation for specific populations. The report highlights the ethical and psychological challenges of human-AI collaboration, including trust calibration, cognitive load, and cognitive sovereignty, and proposes strategies for optimizing interactive systems through human-feedback iteration and cognitive measurement. Overall, AIGC is treated as an extension of human cognitive ability, and its design centers on balancing automation efficiency with human higher-order thinking to achieve complementary, co-evolving human-machine intelligence.
A total of 74 related references.
Selected Abstracts
To ensure that users make effective use of information resources in the era of artificial intelligence, this study explores the factors influencing user information adoption behavior and its configurational pathways within human–AI interaction contexts. It focuses on users of AIGC platforms and employs the Elaboration Likelihood Model (ELM) as a theoretical foundation. Data analysis is conducted using Structural Equation Modeling (SEM) and fuzzy-set Qualitative Comparative Analysis (fsQCA). The SEM results indicate that, with the exception of technological characteristics, all other factors positively influence user information adoption behavior. The fsQCA identifies four distinct configurations that contribute to information adoption behavior. The findings suggest that AIGC platforms should enhance user information adoption by optimizing interaction systems, ensuring information quality, simplifying operational processes, and integrating emotional design.
With the rapid advancement of artificial intelligence (AI) technology, the integration of AI-generated content (AIGC) in creative processes has sparked significant sociological interest. This study investigated gender differences in creative ability and motivation when using AI-assisted tools. Through a quasi-experimental design, this research examined 70 college participants to compare AI-assisted versus traditional approaches. Using standardized assessment metrics, the study measured creative ability and motivation across gender groups. The results revealed two key findings. First, participants using AI-assisted tools demonstrated significantly higher creative ability compared to the control group, with male participants showing particularly strong performance improvements. Second, the AI-assisted group showed elevated levels of motivation across both gender groups. The findings contribute to understanding gender-based differences in human-AI interaction and creative processes in technological environments. This study advances the theoretical discourse on gender differences in AI-augmented creative processes and provides insights into the evolving relationship between gender, creativity, and technological advancement in contemporary society.
Generative AI (GenAI) systems offer opportunities to increase user productivity in many tasks, such as programming and writing. However, while they boost productivity in some studies, many others show that users are working ineffectively with GenAI systems and losing productivity. Despite the apparent novelty of these usability challenges, these ‘ironies of automation’ have been observed for over three decades in Human Factors research on the introduction of automation in domains such as aviation, automated driving, and intelligence. We draw on this extensive research alongside recent GenAI user studies to outline four key reasons for productivity loss with GenAI systems: a shift in users’ roles from production to evaluation, unhelpful restructuring of workflows, interruptions, and a tendency for automation to make easy tasks easier and hard tasks harder. We then suggest how Human Factors research can also inform GenAI system design to mitigate productivity loss by using approaches such as continuous feedback, system personalization, ecological interface design, task stabilization, and clear task allocation. Thus, we ground developments in GenAI system usability in decades of Human Factors research, ensuring that the design of human-AI interactions in this rapidly moving field learns from history instead of repeating it.
As AI systems become integral to knowledge-intensive work, questions arise not only about their functionality but also their epistemic roles in human-AI interaction. While HCI research has proposed various AI role typologies, it often overlooks how AI reshapes users' roles as knowledge contributors. This study examines how users form epistemic relationships with AI: how they assess, trust, and collaborate with it in research and teaching contexts. Based on 31 interviews with academics across disciplines, we developed a five-part codebook and identified five relationship types: Instrumental Reliance, Contingent Delegation, Co-agency Collaboration, Authority Displacement, and Epistemic Abstention. These reflect variations in trust, assessment modes, tasks, and human epistemic status. Our findings show that epistemic roles are dynamic and context-dependent. We argue for shifting beyond static metaphors of AI toward a more nuanced framework that captures how humans and AI co-construct knowledge, enriching HCI's understanding of the relational and normative dimensions of AI use.
As AI systems increasingly shape human experiences in work, communication, and decision-making, the way we design interactions with these systems plays a critical role in ensuring ethical, transparent, and human-centered AI. However, HCI and design researchers are often underrepresented in AI teams and discussions. Hence, this workshop explores the future of design in Human-AI Interaction (HAI) by addressing key challenges such as explainability, trust, job augmentation, social AI, and sustainability. Through interactive discussions and hands-on design sprints, participants will prototype AI interfaces that foster trust, inclusivity, and responsible AI usage. We will reflect on how design practice will change over the next decade as well as how design and HCI methods can address these challenges for shaping an AI-powered future that enhances rather than replaces humans.
Shared understanding plays a key role in the effective communication in and performance of human-human interactions. With the increasingly common integration of AI into human contexts, the future of personal and workplace interactions will likely see human-AI interaction (HAII) in which the perception of shared understanding is important. Existing literature has addressed the processes and effects of perceived shared understanding (PSU) in human-human interactions, but the construct remains underexplored in HAII. To better understand PSU in HAII, we conducted an online survey to collect user reflections on interactions with a large language model when its understanding of a situation was thought to be similar to or different from the participant's. Through inductive thematic analysis, we identified eight dimensions comprising PSU in human-AI interactions: fluency, aligned operation, fluidity, outcome satisfaction, contextual awareness, lack of humanlike abilities, computational limits, and suspicion.
Theory of Mind (ToM), humans’ capability of attributing mental states such as intentions, goals, emotions, and beliefs to ourselves and others, has become a concept of great interest in human-AI interaction research. Given the fundamental role of ToM in human social interactions, many researchers have been working on methods and techniques to equip AI with an equivalent of human ToM capability to build highly socially intelligent AI. Another line of research on ToM in human-AI interaction seeks to understand people’s tendency to attribute mental states such as blame, emotions, and intentions to AI, along with the role that AI should play in the interaction (e.g. as a tool, partner, teacher, facilitator, and more) to align with people’s expectations and mental models. The goal of this line of work is to distill human-centered design implications to support the development of increasingly advanced AI systems. Together, these two research perspectives on ToM form an emerging paradigm of “Mutual Theory of Mind (MToM)” in human-AI interaction, where both the human and the AI each possess the ToM capability. This workshop aims to bring together different research perspectives on ToM in human-AI interaction by engaging with researchers from various disciplines including AI, HCI, Cognitive Science, Psychology, Robotics, and more to synthesize existing research perspectives, techniques, and knowledge on ToM in human-AI interaction, as well as envisioning and setting a research agenda for MToM in human-AI interaction.
Humans learn about the world, and how to act in the world, in many ways: from individually conducting experiments to observing and reproducing others' behavior. Different learning strategies come with different costs and likelihoods of successfully learning more about the world. The choice that any one individual makes of how to learn can have an impact on the collective understanding of a whole population if people learn from each other. Alan Rogers developed simulations of a population of agents to study these network phenomena where agents could individually or socially learn amidst a dynamic, uncertain world and uncovered a confusing result: the availability of cheap social learning yielded no benefit to population fitness over individual learning. This paradox spawned decades of work trying to understand and uncover factors that foster the relative benefit of social learning that centuries of human behavior suggest exists. What happens in such network models now that humans can socially learn from AI systems that are themselves socially learning from us? We revisit Rogers' Paradox in the context of human-AI interaction to probe a simplified network of humans and AI systems learning together about an uncertain world. We propose and examine the impact of several learning strategies on the quality of the equilibrium of a society's 'collective world model'. We consider strategies that can be undertaken by various stakeholders involved in a single human-AI interaction: human, AI model builder, and society or regulators around the interaction. We then consider possible negative feedback loops that may arise from humans learning socially from AI: that learning from the AI may impact our own ability to learn about the world. We close with open directions into studying networks of human and AI systems that can be explored in enriched versions of our simulation framework.
As artificial intelligence (AI) technologies, including generative AI, continue to evolve, concerns have arisen about over-reliance on AI, which may lead to human deskilling and diminished cognitive engagement. Over-reliance on AI can also lead users to accept information given by AI without performing critical examinations, causing negative consequences, such as misleading users with hallucinated contents. This paper introduces extraheric AI, a human-AI interaction conceptual framework that fosters users' higher-order thinking skills, such as creativity, critical thinking, and problem-solving, during task completion. Unlike existing human-AI interaction designs, which replace or augment human cognition, extraheric AI fosters cognitive engagement by posing questions or providing alternative perspectives to users, rather than direct answers. We discuss interaction strategies, evaluation methods aligned with cognitive load theory and Bloom's taxonomy, and future research directions to ensure that human cognitive skills remain a crucial element in AI-integrated environments, promoting a balanced partnership between humans and AI.
Artificial Intelligence (AI) has become deeply integrated into professional workflows, offering efficiency, scalability, and decision-support across sectors. Yet, questions remain about how users calibrate trust in AI and how reliance on these systems shapes human cognition. This study explores the psychological dimensions of trust, transparency, and cognitive load in human–AI interaction. Semi-structured interviews were conducted with twelve professionals across psychology, technology, and leadership domains. Data were analysed using Braun and Clarke’s reflexive thematic analysis, revealing two superordinate themes: (1) trust as conditional, shaped by verification practices and expectations of source transparency, and (2) AI’s dual role in reducing cognitive load while raising concerns about diminishing creativity and imagination. Findings highlight that professionals value AI as a supportive assistant that saves time and streamlines tasks but remain cautious about accuracy, hallucinations, and overreliance. The study contributes to qualitative research on human–AI interaction by emphasising the need for explainability, verifiable outputs, and safeguards against cognitive complacency. It recommends psychologically informed design strategies that balance efficiency with transparency and preserve users’ epistemic agency.
The growing integration of artificial intelligence (AI) into human-computer interaction (HCI) has transformed digital experiences, providing personalized support, automation, and decision-making assistance. Nevertheless, a significant challenge undermining the reliability and trustworthiness of AI systems is AI hallucination, the occurrence in which AI produces incorrect, misleading, or non-factual information. These hallucinations erode user trust, interfere with usability, and raise ethical issues in AI-powered interfaces such as chatbots, virtual assistants, and decision-support tools. This paper investigates the underlying causes of AI hallucination from an HCI viewpoint, assessing how incorrect outputs affect user interaction, cognitive load, and decision-making processes. It looks into the dynamics of trust in human-AI collaboration and the consequences of hallucinations for the uptake of AI-driven applications. The study proposes techniques to reduce AI hallucinations, including reinforcement learning with human feedback (RLHF), enhanced model interpretability, and hybrid AI-human oversight structures. By addressing AI hallucination as a usability and reliability concern within HCI, this research seeks to connect the divide between AI-generated content and effective human interaction. It advocates for a human-centered AI design framework to ensure that AI systems produce accurate responses and articulate uncertainty. The results warrant the creation of AI-enhanced interactions in areas like medicine, finance, and education, where hallucinations can be disastrous. The study urges the discussion of reliable AI in human-computer interaction, providing practical guidelines for AI model design that are more reliable, explainable, and aligned with user expectations.
This study investigates the role of artificial intelligence (AI) and large language models (LLMs) within Simon’s bounded rationality framework, focusing on factors such as preferences, competence, learning, and persuasion that influence decision-makers’ trust in AI outcomes. Data were collected using mixed methods, including surveys and interviews, followed by descriptive and thematic analyses to explore the trust dynamics in human-AI interactions under bounded rationality. Participants highlighted the effectiveness of AI systems in decision-making constrained by bounded rationality and discussed how AI systems might mitigate these limitations. The findings emphasize the critical role of trust in facilitating effective human-AI interactions, indicating that AI-provided explanations not only support decision-making but also enhance users’ trust in these systems. This study identifies trust as a multifaceted and dynamic aspect of human-AI interactions, suggesting that AI developers can improve trustworthiness through transparency, demonstrated competence, and continuous learning. Enhancing these factors is expected to drive widespread adoption and improve the overall user experience with AI systems.
Large language models (LLMs) have revolutionized the way humans interact with AI systems, transforming a wide range of fields and disciplines. In this talk, I share two distinct approaches to empowering human-AI interaction using LLMs. The first one explores how LLMs transform computational social science, and how human-AI collaboration can reduce costs and improve the efficiency of social science research. The second part looks at social skill learning via LLMs by empowering therapists and learners with LLM-empowered feedback and deliberative practices. These two works demonstrate how human-AI collaboration via LLMs can empower individuals and foster positive change. We conclude by discussing how LLMs enable collaborative intelligence by redefining the interactions between humans and AI systems.
Artificial intelligence (AI) has emerged as a transformative tool, integrated across various sectors. In education, AI has generated significant excitement among students for its potential to enhance learning experiences. However, concerns about overreliance on AI temper this enthusiasm, as it may undermine the development of critical thinking skills. Numerous studies have highlighted the risks associated with students’ excessive use of generative AI (GAI) in academic tasks, noting its potential to diminish cognitive abilities. However, the optimal use of AI to enhance students’ critical thinking skills remains under-researched. Therefore, this study seeks to answer the question: How does generative AI influence students’ critical thinking, self-efficacy, and decision-making? This study aims to explore the synergistic relationship between human intelligence and artificial intelligence in augmenting essential thinking skills among students and building upon their existing cognitive resources through self-efficacy, learning motivation, and decision-making. Specifically, it explores the cause-and-effect connections among GAI, self-efficacy, decision-making, learning motivation, and critical thinking skills. A quantitative methodology was employed, using an online questionnaire to collect responses from 165 undergraduate, master’s, and doctoral students. Statistical analyses, including bootstrapping techniques, were conducted to examine direct and indirect effects. The results revealed that GAI has a significant positive influence on self-efficacy, learning motivation, decision-making, and critical thinking skills. In turn, self-efficacy, learning motivation, and decision-making significantly impact critical thinking skills. The mediating results indicated that GAI can indirectly boost students’ critical thinking by enhancing self-efficacy, learning motivation, and decision-making. This suggests that AI capabilities can transform the cognitive learning process.
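The bootstrapped test of indirect effects mentioned above can be sketched minimally. This is an illustrative reconstruction on synthetic data, not the study's dataset; the variable names (`gai`, `self_eff`, `critical`), the single-mediator structure, and the OLS path estimates are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 165  # sample size matching the study

# Synthetic data for one mediation path: GAI use -> self-efficacy -> critical thinking
gai = rng.normal(size=n)
self_eff = 0.5 * gai + rng.normal(size=n)                     # path a
critical = 0.4 * self_eff + 0.2 * gai + rng.normal(size=n)    # path b plus direct effect

def indirect_effect(x, m, y):
    """Indirect effect a*b from two least-squares fits."""
    a = np.polyfit(x, m, 1)[0]                       # m ~ x  (slope a)
    X = np.column_stack([m, x, np.ones_like(x)])     # y ~ m + x
    b = np.linalg.lstsq(X, y, rcond=None)[0][0]      # slope b on the mediator
    return a * b

# Percentile bootstrap: resample cases, re-estimate a*b each time
boot = [indirect_effect(gai[idx], self_eff[idx], critical[idx])
        for idx in (rng.integers(0, n, n) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```

A confidence interval excluding zero is the usual criterion for a significant indirect effect in this kind of analysis.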
Recent research reveals concerning patterns in how artificial intelligence tools may be affecting human cognitive processes. This paper examines emerging evidence demonstrating potential reductions in cognitive engagement when individuals rely on generative AI tools for knowledge work. The findings suggest implications for organizational creativity, problem-solving capability, and cognitive resilience. Drawing on both empirical research and established cognitive science principles, this paper outlines evidence-based approaches for mitigating potential negative effects, including structured AI usage protocols, cognitive protection practices, and hybrid thinking methodologies. Organizations implementing these approaches are better positioned to leverage AI's efficiency benefits while preserving the distinctive human cognitive capabilities that drive innovation and complex decision-making.
Older adults often experience increased difficulty in decision making due to age-related declines particularly in contexts that require information search or the generation of alternatives from memory. This study examined whether using generative AI for information search enhances choice satisfaction and reduces choice difficulty among older adults. A total of 130 participants (younger, n = 56; older, n = 74) completed a music-selection task under AI-use and AI-nonuse conditions across two contexts: previously experienced (road trip) and not previously experienced (space travel). In the AI-nonuse condition, participants generated candidate options from memory; in the AI-use condition, GPT-4o presented options tailored to individual preferences. Cognitive functions, including working memory, processing speed, verbal comprehension, and perceptual reasoning, were assessed. Results showed that AI use significantly reduced perceived choice difficulty across age groups, with larger benefits in unfamiliar contexts. Regarding cognitive function, among older adults, lower cognitive function was associated with fewer recalled options, higher choice difficulty, and lower satisfaction in the AI-nonuse condition; these associations were substantially attenuated when AI was used. These results demonstrate that generative AI can mitigate age-related cognitive constraints by reducing the cognitive load associated with information search during decision making. While the use of AI reduced perceived difficulty, choice satisfaction remained unchanged, suggesting that autonomy in decision making was preserved. These findings indicate that generative AI can support everyday decision making by compensating for the constraints in information search that older adults face due to cognitive decline.
As higher education undergoes rapid transformation driven by Artificial Intelligence (AI), the integration of Generative AI (GenAI) has become essential for preparing future-ready creative professionals. In this context, design education plays a leading role in exploring how GenAI can enhance students’ experiential learning. This study empirically examined how three experience dimensions—Educational, Entertainment, and Aesthetic—shape Empathy, Immersion, Satisfaction, and Learning Outcomes in a GenAI-based self-character workshop. A total of 185 design students participated, and the data were analyzed using Structural Equation Modeling (SEM). The results revealed that both Entertainment (β = 0.334, p < 0.001) and Aesthetic (β = 0.434, p < 0.001) experiences significantly and positively predicted Empathy and also increased Immersion (β = 0.215, p < 0.001; β = 0.154, p < 0.05). In contrast, Educational experience showed a non-significant or slightly negative effect. Furthermore, Empathy enhanced Immersion (β = 0.220, p < 0.01), Satisfaction (β = 0.173, p < 0.05), and Learning Outcomes (β = 0.305, p < 0.001). Immersion also improved Learning Outcomes (β = 0.253, p < 0.05) but slightly reduced short-term Satisfaction (β = −0.186, p < 0.05), indicating a cognitive-load trade-off between concentration and immediate enjoyment. These findings demonstrate that GenAI-based creative activities can effectively foster both emotional engagement and learning performance when instructional design minimizes unnecessary cognitive burden. The study contributes to understanding how emotionally meaningful and aesthetically engaging experiences can advance AI-integrated design education in the digital transformation era.
This study addresses the gap in understanding graduate students’ sustained engagement behavior (SEB) with generative artificial intelligence (GAI) by integrating the Technology Acceptance Model (TAM), Expectation Confirmation Theory (ECT), and Theory of Reasoned Action (TRA) into a comprehensive embedding model. It introduces the Technology Readiness Index for Innovation (TRII) and Perception-Oriented Learning Style (POLS) as key factors, analyzed through Structural Equation Modeling (SEM) and Qualitative Comparative Analysis (QCA). Data from 862 graduate students in China were tested for reliability and validity. SEM results demonstrated that TRII significantly influences usage expectations (UE), effort expectancy (EE), performance expectancy (PE), and SEB, with cognitive and affective factors mediating these relationships. QCA revealed multiple causal pathways leading to high SEB, highlighting the principle of equifinality. The integration of SEM and QCA provided insights into dual pathways—implicit expectation development and cognitive system processing—that shape GAI adoption, offering practical implications for effective implementation in higher education.
This study examines the interaction between cognitive demands and generative artificial intelligence (GenAI) technologies in shaping the quality and influence of academic research. While GenAI tools such as ChatGPT and Elicit are increasingly adopted to ease information processing and automate repetitive tasks, their broader impact on researchers’ cognitive performance remains underexplored. Using data from 998 researchers and applying structural equation modeling (SEM-PLS), we examined the effects of cognitive load, task fatigue, and resilience on research outcomes, with GenAI immersion as a higher-order moderator. Results reveal that both cognitive load and fatigue negatively affect research quality, while engagement and resilience offer partial protection. Unexpectedly, high immersion in GenAI intensified the negative impact of cognitive strain, suggesting that over-reliance on AI can amplify mental burden rather than reduce it. These results inform the design and responsible integration of AI technologies in academic environments by demonstrating that sustainable adoption necessitates a balance between efficiency and human creativity and resilience. The study provides evidence-based insights for researchers, institutions, and policymakers seeking to optimize AI-supported workflows without compromising research integrity or well-being.
This study investigates how individuals’ capability to use generative artificial intelligence (GenAI) influences their idea generation and explores the cognitive mechanisms underlying this relationship. Drawing on cognitive experiential theory, which posits that individuals rely on two distinct and stable information processing styles (rational and experiential), this study examines how these styles mediate the link between GenAI usage capability and idea generation, as well as the underlying relationships between these constructs. This study employs a quantitative research design based on survey data from 399 business consultants located in Germany, Austria, and Switzerland at a leading global consultancy. Partial least squares structural equation modeling (PLS-SEM) is applied to test the hypothesized structural relationships. The findings demonstrate that (1) individuals’ capability to use GenAI enhances their idea generation, (2) individuals’ capability to use GenAI influences both information processing styles, (3) the rational information processing style enhances idea generation, whereas the experiential style does not, and (4) individuals’ tendency to rely on the rational system significantly mediates the translation of GenAI usage capability into idea generation. This study enriches GenAI research in innovation management by identifying individuals’ capability to use GenAI as a critical antecedent of idea generation. This capability perspective complements recent studies focusing on the extent, frequency or purpose of GenAI usage and its influence on creative outputs.
Generative artificial intelligence (GenAI) is expected to substantially change users’ established routines of accomplishing tasks, such as information search and content creation. Despite such promising potential, many users are still not incorporating GenAI into their routine internet use. This study draws on the adaptation to information technology (AIT) model to examine how users adapt to GenAI and the influencing factors, including cognitive appraisals, affective reactions, and engagement characteristics. An online survey was conducted with GenAI users recruited on Prolific. The results showed that cognitive appraisals (perceived opportunity, threat, and control) and affective reactions (enjoyment, trust, and anxiety) influence users’ various adaptations to varying degrees. Furthermore, engagement characteristics, including the frequency and breadth of using GenAI tools and user involvement, are significant predictors of cognitive appraisals. The study contributes to the nascent literature on GenAI tools by uncovering the impact of cognitive appraisals and affective reactions on users’ adaptation to GenAI tools, meanwhile revealing the influence of engagement characteristics on users’ appraisals. The findings provide a basis for encouraging certain adaptation behaviours and help understand factors that hinder users’ active adaptation to GenAI.
With the growing integration of Generative Artificial Intelligence (GAI) in higher education, understanding how students cognitively, emotionally, and behaviorally engage with such tools is critical. This study proposes and tests a dynamic Cognition–Affect–Behavior (C-A-B) model to investigate the psychological mechanisms underlying students’ interaction with GAI. Survey data were collected from a Chinese university (N = 392). Structural equation modeling and cluster analysis were then conducted to examine the proposed model. The results indicate that cognitive factors—such as perceived functionality, metacognitive ability, and self-efficacy—affect behavior indirectly through emotional pathways. Trust enhances the influence of cognition on strategic behaviors, whereas anxiety weakens it, suggesting a bidirectional affective regulation process between cognition and emotion. Cluster analysis further identifies four user profiles—Dependency-oriented, Cautious Observers, Deep Integration Users, and Rational Adapters—linked to academic seniority and engagement styles. This study extends the traditional C-A-B model by introducing feedback loops between emotion and cognition, offering theoretical insights into AI-mediated learning. Practically, it highlights affect-sensitive and tailored strategies to foster reflective, autonomous, and effective use of GAI among students. Limitations and directions for future research are also discussed.
No abstract available
This study examines the relationship between the continuance use of generative artificial intelligence (AI) and creativity among higher education students, emphasizing the mediating role of cognitive response. Drawing on the Expectation-Confirmation Model for Information Systems Continuance (ECM-ISC) and the Interaction of Person-Affect-Cognition-Execution (I-PACE) model, the research investigates how satisfaction, affect, and personality traits influence students’ intention to use AI tools and, through reflective cognitive engagement, enhance their creative performance. Data were collected from 288 undergraduate students via a structured questionnaire and analyzed using path analysis. The findings indicate that while satisfaction, affect, and personality traits significantly boost the intention to use AI, this intention impacts creativity only indirectly through cognitive response. These results highlight the importance of reflective engagement in harnessing AI for creative tasks and offer insights for its balanced integration into educational settings.
No abstract available
No abstract available
No abstract available
No abstract available
No abstract available
No abstract available
No abstract available
No abstract available
No abstract available
This paper discusses some societal implications of the most recent and publicly discussed application of advanced machine learning techniques: generative AI models, such as ChatGPT (text generation) and DALL-E (text-to-image generation). The aim is to shift attention away from conceptual disputes, e.g. regarding their level of intelligence and similarities/differences with human performance, to focus instead on practical problems, pertaining to the impact that these technologies might have (and already have) on human societies. After a preliminary clarification of how generative AI works (Sect. 1), the paper discusses what kind of transparency ought to be required for such technologies and for the business model behind their commercial exploitation (Sect. 2), what is the role of user-generated data in determining their performance and how it should inform the redistribution of the resulting benefits (Sect. 3), the best way of integrating generative AI systems in the creative job market and how to properly negotiate their role in it (Sect. 4), and what kind of “cognitive extension” offered by these technologies we ought to embrace, and what type we should instead resist and monitor (Sect. 5). The last part of the paper summarizes the main conclusions of this analysis, also marking its distance from other, more apocalyptic approaches to the dangers of AI for human society.
Background: With the rapid expansion of the generative AI market, conducting in-depth research on cognitive conflicts in human–computer interaction is crucial for optimizing user experience and improving the quality of interactions with AI systems. However, existing studies insufficiently explore the role of user cognitive conflicts and the explanation of stance attribution in the design of human–computer interactions. Methods: This research, grounded in mental models theory and employing an improved version of the oddball paradigm, utilizes Event-Related Spectral Perturbations (ERSP) and functional connectivity analysis to reveal how task types and stance attribution explanations in generative AI influence users’ unconscious cognitive processing mechanisms during service failures. Results: The results indicate that under design stance explanations, the ERSP and Phase Locking Value (PLV) in the theta frequency band were significantly lower for emotional task failures than mechanical task failures. In the case of emotional task failures, the ERSP and PLV in the theta frequency band induced by intentional stance explanations were significantly higher than those induced by design stance explanations. Conclusions: This study found that stance attribution explanations profoundly affect users’ mental models of AI, which determine their responses to service failure.
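The theta-band phase-locking value (PLV) reported in the study above has a standard definition: the magnitude of the trial-averaged unit phase-difference vector between two signals. A minimal sketch on synthetic signals (the sampling rate, band edges, and test signals are illustrative assumptions, not the study's EEG data):

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 250  # sampling rate (Hz), illustrative
t = np.arange(0, 2, 1 / fs)

# Two synthetic channels sharing a 6 Hz (theta) component with a fixed phase lag
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.normal(size=t.size)
y = np.sin(2 * np.pi * 6 * t + 0.8) + 0.5 * rng.normal(size=t.size)

def theta_plv(a, b, fs, band=(4.0, 8.0)):
    """Phase-locking value between two signals in the theta band."""
    bb, aa = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    pa = np.angle(hilbert(filtfilt(bb, aa, a)))   # instantaneous phase of a
    pb = np.angle(hilbert(filtfilt(bb, aa, b)))   # instantaneous phase of b
    return np.abs(np.mean(np.exp(1j * (pa - pb))))

plv = theta_plv(x, y, fs)
print(f"theta-band PLV: {plv:.2f}")
```

PLV lies in [0, 1]: values near 1 indicate a stable phase relationship, as for the phase-locked signals here; values near 0 indicate independent phases.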
The rapid advancements in Generative Artificial Intelligence (AI) have revolutionized domains such as natural language processing, computer vision, and creative content generation. Simultaneously, Cognitive Science seeks to understand the mechanisms of human cognition, including memory, decision-making, and creativity. This paper explores the intersection of these fields, investigating how Generative AI models can simulate cognitive processes and how Cognitive Science insights can inform AI development. Methodologies include experiments with Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and GPT-3 to assess simulations of memory, creativity, and decision-making. Empirical findings demonstrate how VAEs enable memory reconstruction, GANs simulate decision-making processes, and Transformer-based models like GPT-3 exhibit creative capabilities. This study provides valuable insights into advancing AI research while deepening the theoretical understanding of human cognition.
Artificial Intelligence Generated Content (AIGC) has been widely disseminated in the fields of technology, academia, and the arts. This project explores the application of various AI tools and the visualization of dream experiences through multimedia. It utilizes AI-generated multimodal materials as perceptible dream content, employs light and mechanical installations to create immersive dream atmospheres, and uses a fictional AI-Mulan narrator to recount her dream story. Through artistic practice, it delves into Mulan's unconscious realm and conducts a psychoanalysis of a historical figure. It represents an interdisciplinary exploration of art and psychoanalysis through AI visualization.
AI-Generated Content (AIGC) tools have rapidly emerged in the field of apparel design in recent years, but how designers adopt these tools and the psychological mechanisms behind them are unclear. This study constructs a model based on the stimulus-organism-response (S-O-R) theory, aiming to reveal how external stimulus variables (perceived content quality (PCQ), personalized fit (PF), industry pressure (IP), and perceived technological risk (PTR)) influence apparel designers’ AIGC adoption intentions through psychological state variables (self-efficacy (SE), innovativeness (INN), and task-technology fit (TTF)). Based on the questionnaire data of 267 Chinese fashion designers, partial least squares structural equation modeling (PLS-SEM) was used for empirical analysis. The results showed that PCQ and PF significantly enhanced SE and TTF, while PTR significantly inhibited the above psychological mechanisms; SE, INN, and TTF positively influenced adoption intention, with TTF having the most significant effect, and IP did not show a significant effect. The findings not only validate the applicability of the S-O-R theory in creative technology adoption, but also emphasize the key role of TTF matching and psychological-cognitive factors in promoting the application of AIGC tools, providing theoretical support and practical insights for subsequent tool optimization and user guidance.
From the perspective of AIGC, brand art personification communication, as an innovative marketing strategy, is gradually becoming a new bridge for communication between enterprises and consumers. This study aims to explore in depth the audience acceptance of brand art personification communication, and analyze its applicability, effectiveness, and influencing factors in different cultural backgrounds. With the rapid development of artificial intelligence and big data technology, the implementation methods of brand art personification strategy are becoming increasingly diverse, but its impact mechanism on audience psychology and behavior is still unclear. Driven by AIGC (Generative Artificial Intelligence) technology, brand art personification communication is undergoing a paradigm shift from static symbols to intelligent interaction. This study is based on the Technology Acceptance Model (TAM) and aesthetic psychology theory, exploring how the anthropomorphic images generated by AIGC (such as virtual idols and AI spokespersons) affect audience aesthetic acceptance through multimodal symbol systems, dynamic emotion calculation, and cultural adaptation mechanisms. The study is expected to provide new theoretical support and practical guidance for brand communication.
This study presents a participatory co-creation installation based on Artificial Intelligence Generated Content (AIGC) for alleviating children’s Nyctophobia through art therapy. By conducting interviews with parents, children, and psychologists, the study analyzes the causes of nyctophobia and proposes a solution that integrates intelligent technologies. The study employs real-time interactive technologies to transform children’s behaviors into positive visual experiences, aiding them in building a sense of security in dark environments. The study’s contribution lies in proposing a design framework that integrates serious games and exposure therapy, offering new perspectives for psychological interventions targeting children’s mental health.
No abstract available
No abstract available
The metaverse is a shared, immersive 3D world where individuals engage in virtual reality and exchange their interests, perspectives, and resources. User-Generated Content (UGC) serves as the core driving force in constructing the metaverse. This article concentrates on the synergistic effects of UGC and Artificial Intelligence-Generated Content (AIGC) within metaverse games, exploring how AI technology can unleash user creativity. Through interviews with 80 Chinese metaverse gamers aged 14-24, this study identifies user expectations for metaverse platforms to offer interactive, multi-user collaborative, multi-sensory, and emotional communication, as well as support for media integration, to facilitate the collaborative creation of UGC and AIGC. Based on the interview findings, this article proposes three mechanisms: a multi-user collaborative creation mechanism, an intelligent interactive scene generation mechanism, and a highly controllable AI generation mechanism, aiming to provide guidance and suggestions for the future development of metaverse platforms.
No abstract available
The deep integration of AIGC technology into adult education for persons with disabilities raises a fundamental question: can technological dependence and learner autonomy coexist? This study examines the explanatory power of Knowles’ self-directed learning theory for adult learning among persons with disabilities through proposition deduction and conceptual reconstruction. Findings indicate that the core logic of Knowles’ theory - learner-centered agency - remains valid despite technological intervention. This is because psychological maturity and experiential accumulation, the prerequisites for self-directed learning, function independently of physical ability. However, the implementation pathway requires modification: from unmediated autonomy to technology-mediated autonomy. For persons with disabilities, reliance on AIGC tools constitutes functional dependence. This mechanism empowers learners to transcend physical limitations and secures their right to participate, forming a synergistic rather than antagonistic relationship with learner autonomy. Technology enables learning access while learners determine what to learn, how to learn, and how to evaluate outcomes. Accordingly, this study constructs a three-tier analytical framework: the functional tier addresses participation access, the mechanism tier ensures technology serves learner goals, and the value tier orients toward the integration of inclusive learning and self-actualization. This study transcends the historical limitations of Knowles’ technology-absent theoretical context, resolves scholarly debates regarding whether technological dependence undermines learner autonomy through a typological distinction between functional and alienating dependence, and provides theoretical guidance for technology design and educational practice in AIGC-era adult education for persons with disabilities.
Currently, VR exposure therapy for Post-Traumatic Stress Disorder (PTSD) remains primarily based on single modalities, which struggles to deliver sufficient immersive experiences and emotional responses, leading to certain limitations in treatment efficacy. To simulate multi-sensory channels in human natural interactions and overcome the single-sensory constraints of traditional exposure therapy, this paper explores the application of AIGC technology in the personalized generation of virtual scenes to enhance immersion and engagement in treatment. By systematically dissecting the intrinsic mechanisms of VR exposure therapy, this study proposes a personalized content adaptation method driven by patients’ real-time states and integrates thermal control arrays, muscle electrical stimulation, and user visual behavior analysis technologies to construct a multimodal interaction framework, encompassing coordinated multi-channel synergies such as haptic feedback, auditory resonance, and visual immersion. Building on this foundation, a multimodal virtualized PTSD therapy interactive system based on AIGC is designed and implemented. This research offers new perspectives for PTSD treatment, advancing exposure therapy toward multimodality, intelligence, and universality, while holding significant engineering application value in fields such as neurorehabilitation and psychological therapy.
With the rapid development of artificial intelligence, Artificial Intelligence generated content (AIGC) has been increasingly applied in the design industry and has become an important tool for enhancing user efficiency. However, the impact of AIGC interaction design features on user creativity remains insufficiently examined. Based on the Stimulus-Organism-Response model and the Dual Pathway to Creativity Model (DPCM), this study constructs a theoretical framework that identifies intelligence, anthropomorphism, stylistic diversity, and interactivity as independent variables, and cognitive flexibility and cognitive persistence as mediating variables. Data were collected from 504 Chinese users who work in the design field with experience using AIGC tools and analyzed through Partial Least Squares Structural Equation Modeling. Results show that the four design features significantly influence user creativity and that cognitive flexibility and cognitive persistence mediate these relationships. Notably, interactivity demonstrated the strongest direct effect, suggesting that highly interactive AIGC tools are particularly effective in stimulating creative production. The study provides theoretical and practical insights for optimizing AIGC tools, improving user creative experiences, and promoting creativity development in both AIGC applications and the design industry.
No abstract available
Conversational agent (CA) for psychotherapy and behavioral intervention has great potential to provide solutions that can benefit human health. However, most CA for behavior intervention and healthcare are based on pre-scripted conversations and rules instead of generative models, because the generative model is not stable enough to be used in the highly sensitive domain like behavioral intervention. Based on the fact that generative models and reinforcement learning techniques have been widely used in various domains, a CA integrating generative models for behavioral interventions is proposed in this work and the approach is expected to continuously improve the generative model and the agent itself based on collected human feedback from both client and therapist during the interaction. The approach involves techniques, such as few-shot generation by language models, prompt engineering, and reinforcement learning from human feedback (RLHF) as the Human-in-the-Loop interaction. We expect that this approach can enable the generative models to be used in highly sensitive fields such as mental healthcare and behavioral intervention.
This study investigates the optimization of Generative AI (GenAI) systems through human feedback, focusing on how varying feedback mechanisms influence the quality of GenAI outputs. We devised a Human-AI training loop where 32 students, divided into two groups, evaluated AI-generated responses based on a single prompt. One group assessed a single output, while the other compared two outputs. Preliminary results from this small-scale experiment suggest that comparative feedback might encourage more nuanced evaluations, highlighting the potential for improved human-AI collaboration in prompt optimization. Future research with larger samples is recommended to validate these findings and further explore effective feedback strategies for GenAI systems.
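One reason comparative feedback can support more nuanced evaluation is that pairwise preferences admit principled aggregation into latent quality scores. A minimal Bradley-Terry sketch on toy data (the items, preference probabilities, and fitting procedure are illustrative assumptions, not the study's protocol):

```python
import math
import random

# Toy pairwise data: (winner, loser) over three candidate AI outputs A, B, C
random.seed(0)
true_quality = {"A": 2.0, "B": 1.0, "C": 0.0}
items = list(true_quality)

comparisons = []
for _ in range(500):
    i, j = random.sample(items, 2)
    # winner sampled with Bradley-Terry probability from the true qualities
    p_i = 1 / (1 + math.exp(true_quality[j] - true_quality[i]))
    comparisons.append((i, j) if random.random() < p_i else (j, i))

# Fit latent scores by full-batch gradient ascent on the BT log-likelihood
scores = {k: 0.0 for k in items}
lr = 0.05
for _ in range(200):
    grad = {k: 0.0 for k in items}
    for w, l in comparisons:
        p_w = 1 / (1 + math.exp(scores[l] - scores[w]))  # model P(w beats l)
        grad[w] += 1 - p_w
        grad[l] -= 1 - p_w
    for k in items:
        scores[k] += lr * grad[k] / len(comparisons)

ranking = sorted(items, key=scores.get, reverse=True)
print("recovered ranking:", ranking)  # scores should order A > B > C
```

Single-output ratings, by contrast, carry per-rater scale and anchoring effects that pairwise comparison sidesteps, which is one candidate explanation for the more nuanced evaluations observed in the comparison group.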
Individuals with borderline intellectual functioning (BIF), defined by intelligence quotients (IQ) between 70 and 85, face persistent disadvantages in education, employment, and social participation. Brain–artificial intelligence interfaces (BAIs) are defined as AI–integrated, co-adaptive, closed-loop extensions of bidirectional brain–computer interfaces (BCIs) that decode neural signals and deliver context-aware feedback in real-time. Unlike open-loop BCIs, BAIs enable continuous two-way interaction between the human brain and AI, providing adaptive support for working memory, attentional control, and procedural guidance. This paper analyzes the structural barriers affecting individuals with BIF and evaluates the potential for ethically designed BAIs to enhance workforce participation through integration as cognitively augmented workers (CAWs). Economic modeling suggests substantial national benefits, including gains in gross domestic product (GDP), higher tax revenues, and reduced reliance on welfare systems. Safeguards are outlined for protecting mental autonomy, governing neural data, and ensuring equitable labor regulation. A phased implementation program is further proposed, linking engineering trials and workplace pilots to quasi-experimental evaluation and general equilibrium analysis. Taken together, these elements constitute the paper’s core contribution: a unified conceptual, economic, and governance framework for integrating individuals with BIF as CAWs through co-adaptive BAIs. Responsibly developed BAIs, grounded in co-adaptation, offer a pathway to individual empowerment and inclusive societal progress through scalable cognitive augmentation.
This mixed methods participatory study was co-authored by 19 undergraduate students and their instructor in an introductory psychology class, with help from two research assistants. Participant observers evaluated and reflected upon the use of artificial intelligence (AI) language models as surrogate agents to support classroom discussion forums. The study forms a practical example of the use of generative AI in collaborative learning where human agents take the dominant role in conversation, acting as an applied effort to bring life to contemporary theoretical literature in educational technology. An M- and P-individual framework rooted in Gordon Pask’s cybernetics is used to structure the human-computer interaction feedback loops occurring during class discussions. Live chats were held during each lecture on a Google community, wherein students would respond to a weekly prompt posted by the instructor and respond to peers. Two of these sessions were held on the Character.AI and DeepAI platforms. Four groups of students interacted with language models of Freud and Piaget during sessions related to human consciousness and development, with one student “driver” prompting the AI following group brainstorming. Discussions from the business-as-usual classes on the nervous system and human learning are compared with the AI discussions, using the igraph network analysis package in RStudio. Comparative network visualizations highlight the possibility of creating transitive distributed discussions using AI in college classrooms. To better understand the student-to-student interactions guiding the driver’s prompting in AI chats, qualitative insights are shared from each group.
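The "transitive distributed discussions" claim rests on a standard network statistic: global transitivity (the fraction of connected triples that are closed). The authors compute this with igraph in RStudio; the pure-Python sketch below (not the authors' code, with a hypothetical reply network) shows the same quantity for an undirected student-reply graph.

```python
def global_transitivity(edges):
    """Global transitivity of an undirected graph given as an edge list:
    (number of closed connected triples) / (number of connected triples)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    # Every pair of neighbours of a node forms one connected triple centred there.
    triples = sum(len(nbrs) * (len(nbrs) - 1) // 2 for nbrs in adj.values())
    closed = 0
    for u, nbrs in adj.items():
        for v in nbrs:
            for w in nbrs:
                if v < w and w in adj[v]:
                    closed += 1  # the triple centred at u is closed (a triangle)
    return closed / triples if triples else 0.0

# Hypothetical who-replied-to-whom edges from one class discussion.
edges = [("ana", "ben"), ("ben", "cai"), ("ana", "cai"), ("cai", "dia")]
print(global_transitivity(edges))  # 0.6: three of five connected triples are closed
```

Higher transitivity means students more often reply to the peers their own interlocutors reply to, the "distributed" structure the comparative visualizations illustrate.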
[Figure 1: NeuroBridge architecture and interaction flow. Users enter a topic, from which a Scenario Generator creates a scenario; each user message is rephrased by a Message Options Generator into three options; a Response Generator produces the AI character’s reply to the selected option; and a Feedback Generator returns positive or constructive feedback, with constructive feedback requiring a clarifying follow-up before the loop repeats.] Communication challenges between autistic and neurotypical individuals stem from a mutual lack of understanding of each other’s distinct, and often contrasting, communication styles. Yet, autistic individuals are expected to adapt to neurotypical norms, making interactions inauthentic and mentally exhausting for them. To help redress this imbalance, we build NeuroBridge, an online platform that utilizes large language models (LLMs) to simulate: a) an AI character that is direct and literal, a style common among many autistic individuals, and b) four cross-neurotype communication scenarios in a feedback-driven conversation between this character and a neurotypical user. Through NeuroBridge, neurotypical individuals gain a firsthand look at autistic communication and reflect on their role in shaping cross-neurotype interactions. In a user study with 12 neurotypical participants, we find that NeuroBridge improved their understanding of how autistic people may interpret language differently, with all describing autism as a social difference that “needs understanding by others” after completing the simulation.
Participants valued its personalized, interactive format and described AI-generated feedback as "constructive", "logical" and "non-judgmental". Most perceived the portrayal of autism in the simulation as accurate, suggesting that users may readily accept AI-generated (mis)representations of disabilities. To conclude, we discuss design implications for disability representation in AI, the need for making NeuroBridge more personalized, and LLMs’ limitations in modeling complex social scenarios.
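The four-generator loop NeuroBridge describes (scenario, message options, response, feedback) can be sketched as a turn-taking pipeline. The stubs below are assumptions standing in for the platform's LLM calls; the function names, canned outputs, and option labels are illustrative, not the NeuroBridge implementation.

```python
def generate_scenario(topic):
    # Stub for the Scenario Generator (an LLM call in the real system).
    return f"You are chatting with Alex, a direct and literal communicator, about {topic}."

def generate_message_options(user_draft):
    # Stub for the Message Options Generator: rephrase the draft into three options.
    return [f"{user_draft} (direct)", f"{user_draft} (hedged)", f"{user_draft} (figurative)"]

def generate_response(chosen):
    # Stub for the Response Generator: the AI character replies literally.
    return f"Taking you literally: {chosen}"

def generate_feedback(chosen):
    # Stub for the Feedback Generator: constructive feedback for indirect phrasing.
    if "(direct)" in chosen:
        return ("positive", "Clear and unambiguous, easy to interpret literally.")
    return ("constructive", "This may be read literally; consider a more direct phrasing.")

def run_turn(user_draft, pick):
    options = generate_message_options(user_draft)
    chosen = options[pick]
    reply = generate_response(chosen)
    kind, feedback = generate_feedback(chosen)
    # Constructive feedback requires a clarifying follow-up before the next turn.
    needs_clarification = (kind == "constructive")
    return reply, kind, feedback, needs_clarification

print(generate_scenario("weekend plans"))
reply, kind, feedback, needs_clarification = run_turn("Maybe we could hang out sometime", 2)
print(kind, "->", feedback)
```

The design point the loop captures is that feedback is attached to the user's choice among rephrasings, so reflection happens per message rather than only at the end of the conversation.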
Objective: In-person counseling faces limitations in timing and geographical accessibility, often causing adolescents and young adults (AYAs) to miss timely psychological support. With the advent of large language models (LLMs), the mental health care industry has increasingly focused on developing chat counseling services as supplementary tools to reduce these barriers. However, existing services have two primary limitations: tendency toward generic advice and absence of human-like dialogue. To overcome these limitations, this article proposes BetterMood, a human-like AI counseling service specifically for Korean-speaking AYAs. Methods: Our design for BetterMood separately addressed the content and delivery of counseling dialogue. For content, we develop a concern-aware counseling LLM refined through prompt-engineering with a novel prompt derived from collected counseling data. For delivery, we create a human-like AI counselor that employs a chunk-based streaming methodology to enable human-like dialogue. We then conducted a user study with 10 adolescents, 110 young adults, and 8 professional clinicians to assess the feasibility and user experience across four domains: (i) interaction capability, (ii) perceived support, (iii) usability, and (iv) ethical safety. Results: Our user study indicates that BetterMood’s interactive capabilities, particularly its ability to suggest appropriate responses, received positive feedback from 90.0% of adolescents, 90.9% of young adults, and 75.0% of professional clinicians. Stratified analysis revealed that outcomes regarding perceived support and usability of the service differed across cohorts and initial screening status. Furthermore, independent evaluations by eight professional clinicians demonstrated moderate agreement for individual ratings but excellent reliability for the aggregated assessment. 
Conclusion: Positive user experience and high inter-rater reliability among clinicians support BetterMood’s potential as an accessible supplementary tool for initial psychological support.
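BetterMood's "chunk-based streaming methodology" for human-like delivery can be illustrated by splitting one long counseling reply into short, message-sized chunks sent one at a time. The chunking rule below (sentence boundaries plus a length budget) and the budget value are assumptions for illustration, not the paper's algorithm.

```python
import re

def chunk_reply(text, max_len=60):
    """Split a counseling reply into short, message-like chunks at sentence
    boundaries, merging consecutive sentences until a length budget is hit."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    chunks, current = [], ""
    for s in sentences:
        if current and len(current) + len(s) + 1 > max_len:
            chunks.append(current)
            current = s
        else:
            current = f"{current} {s}".strip()
    if current:
        chunks.append(current)
    return chunks

reply = ("That sounds really stressful. It makes sense that you feel overwhelmed. "
         "Would you like to tell me more about what happened today?")
for chunk in chunk_reply(reply):
    print(chunk)  # a real client would send each chunk as its own message, with a short delay
```

Delivering several short messages in sequence, rather than one monolithic block, is the surface behavior that makes the dialogue feel closer to human texting.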
Background: In recent years, large language models (LLMs) have shown a remarkable ability to generate human-like text. One potential application of this capability is using LLMs to simulate clients in a mental health context. This research presents the development and evaluation of Client101, a web conversational platform featuring LLM-driven chatbots designed to simulate mental health clients. Objective: We aim to develop and test a web-based conversational psychotherapy training tool designed to closely resemble clients with mental health issues. Methods: We used GPT-4 and prompt engineering techniques to develop chatbots that simulate realistic client conversations. Two chatbots were created based on clinical vignette cases: one representing a person with depression and the other, a person with generalized anxiety disorder. A total of 16 mental health professionals were instructed to conduct single sessions with the chatbots using a cognitive behavioral therapy framework; a total of 15 sessions with the anxiety chatbot and 14 with the depression chatbot were completed. After each session, participants completed a 19-question survey assessing the chatbot’s ability to simulate the mental health condition and its potential as a training tool. Additionally, we used the LIWC (Linguistic Inquiry and Word Count) tool to analyze the psycholinguistic features of the chatbot conversations related to anxiety and depression. These features were compared to those in a set of webchat psychotherapy sessions with human clients—42 sessions related to anxiety and 47 related to depression—using an independent samples t test. Results: Participants’ survey responses were predominantly positive regarding the chatbots’ realism and portrayal of mental health conditions. For instance, 93% (14/15) considered that the chatbot provided a coherent and convincing narrative typical of someone with an anxiety condition.
The statistical analysis of LIWC psycholinguistic features revealed significant differences between chatbot and human therapy transcripts for 3 of 8 anxiety-related features: negations (t56=4.03, P=.001), family (t56=–8.62, P=.001), and negative emotions (t56=–3.91, P=.002). The remaining 5 features—sadness, personal pronouns, present focus, social, and anger—did not show significant differences. For depression-related features, 4 of 9 showed significant differences: negative emotions (t60=–3.84, P=.003), feeling (t60=–6.40, P<.001), health (t60=–4.13, P=.001), and illness (t60=–5.52, P<.001). The other 5 features—sadness, anxiety, mental, first-person pronouns, and discrepancy—did not show statistically significant differences. Conclusions: This research underscores both the strengths and limitations of using GPT-4-powered chatbots as tools for psychotherapy training. Participant feedback suggests that the chatbots effectively portray mental health conditions and are generally perceived as valuable training aids. However, differences in specific psycholinguistic features suggest targeted areas for enhancement, helping refine Client101’s effectiveness as a tool for training mental health professionals.
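The feature comparison above is an independent-samples t test per LIWC feature. As a minimal sketch (with hypothetical per-transcript rates, not the study's data), Welch's unequal-variance t statistic and Welch-Satterthwaite degrees of freedom can be computed in pure Python; a stats library would add the p-value from the t distribution.

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's independent-samples t statistic (unequal variances) and
    Welch-Satterthwaite degrees of freedom for two samples a and b."""
    va, vb = variance(a), variance(b)  # sample variances
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb            # squared standard error of the mean difference
    t = (mean(a) - mean(b)) / sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical per-transcript rates of one LIWC feature (e.g., negations per 100 words).
chatbot = [2.1, 2.4, 1.9, 2.6, 2.2]
human = [1.2, 1.5, 1.1, 1.7, 1.3]
t, df = welch_t(chatbot, human)
print(f"t = {t:.2f}, df = {df:.1f}")
```

Running one such test per feature, then inspecting which statistics exceed the critical value, reproduces the shape of the analysis reported in the abstract.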
The recent introduction of generative artificial intelligence (GenAI) has opened new opportunities for human–GenAI co-creation, in which humans and GenAI collaborate to produce creative outcomes. However, our findings indicate that mere integration does not guarantee augmented learning—the foundation for the continuous improvement of joint creativity over time. Effective integration depends on how well humans understand and collaborate with GenAI. We propose a two-step approach. First, organizations should critically assess GenAI’s strengths and limitations, recognizing its capacity to analyze data and generate diverse ideas, but also its lack of contextual and emotional understanding. Second, organizations should design targeted strategies and training programs that cultivate employees’ skills in Idea Co-Development—a co-creation activity in which humans and GenAI engage in critical feedback exchanges and the joint refinement of generated ideas. Our study demonstrates that even basic explanations and examples of Idea Co-Development significantly enhance joint creativity, suggesting that formal training with contextualized exercises can further amplify results. From a policy and design perspective, GenAI developers should build systems that actively support co-creative interaction through features such as feedback loops and prompts for elaboration. Together, these organizational and technological initiatives can foster more effective, sustained, and creative human-GenAI collaboration.
Large language models (LLMs) have significantly advanced the field of artificial intelligence. Yet, evaluating them comprehensively remains challenging. We argue that this is partly due to the predominant focus on performance metrics in most benchmarks. This paper introduces CogBench, a benchmark that includes ten behavioral metrics derived from seven cognitive psychology experiments. This novel approach offers a toolkit for phenotyping LLMs' behavior. We apply CogBench to 35 LLMs, yielding a rich and diverse dataset. We analyze this data using statistical multilevel modeling techniques, accounting for the nested dependencies among fine-tuned versions of specific LLMs. Our study highlights the crucial role of model size and reinforcement learning from human feedback (RLHF) in improving performance and aligning with human behavior. Interestingly, we find that open-source models are less risk-prone than proprietary models and that fine-tuning on code does not necessarily enhance LLMs' behavior. Finally, we explore the effects of prompt-engineering techniques. We discover that chain-of-thought prompting improves probabilistic reasoning, while take-a-step-back prompting fosters model-based behaviors.
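CogBench's multilevel modeling accounts for fine-tuned variants being nested within base-model families. A toy illustration of the core idea of partial pooling (not the paper's model) is empirical shrinkage: each family's mean behavioral metric is pulled toward the grand mean, with small families (few fine-tuned variants) shrunk more. The prior-strength constant and data below are assumptions.

```python
from statistics import mean

def shrunk_family_means(scores_by_family, prior_strength=2.0):
    """Toy partial pooling: shrink each family's mean metric toward the
    grand mean, weighting by the number of variants observed per family."""
    grand = mean(s for scores in scores_by_family.values() for s in scores)
    shrunk = {}
    for family, scores in scores_by_family.items():
        n = len(scores)
        shrunk[family] = (n * mean(scores) + prior_strength * grand) / (n + prior_strength)
    return shrunk

# Hypothetical risk-propensity scores for fine-tuned variants nested in base families.
families = {"base-A": [0.9], "base-B": [0.4, 0.5, 0.45, 0.5]}
print(shrunk_family_means(families))  # the single-variant family is shrunk hardest
```

The full analysis would instead fit random intercepts per family, but the shrinkage behavior, which is what protects the benchmark's conclusions from over-weighting families with many fine-tunes, is the same in spirit.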
Effective human-computer communication (HCC) systems are crucial in software development to ensure user satisfaction, productivity, and system efficiency. This paper explores scientific and methodical approaches to enhance HCC systems, focusing on principles from human-computer interaction (HCI), usability engineering, and cognitive psychology. The research emphasizes iterative design processes, user-centered methodologies, and the integration of feedback loops to optimize interface design and user experience. Case studies and empirical data illustrate the effectiveness of these approaches in improving HCC systems, highlighting their impact on software usability and user engagement.
Background: Digital mental health tools, designed to augment traditional mental health treatments, are becoming increasingly important due to a wide range of barriers to accessing mental health care, including a growing shortage of clinicians. Most existing tools use rule-based algorithms, often leading to interactions that feel unnatural compared with human therapists. Large language models (LLMs) offer a solution for the development of more natural, engaging digital tools. In this paper, we detail the development of Socrates 2.0, which was designed to engage users in Socratic dialogue surrounding unrealistic or unhelpful beliefs, a core technique in cognitive behavioral therapies. The multiagent LLM-based tool features an artificial intelligence (AI) therapist, Socrates, which receives automated feedback from an AI supervisor and an AI rater. The combination of multiple agents appeared to help address common LLM issues such as looping, and it improved the overall dialogue experience. Initial user feedback from individuals with lived experiences of mental health problems as well as cognitive behavioral therapists has been positive. Moreover, tests in approximately 500 scenarios showed that Socrates 2.0 engaged in harmful responses in under 1% of cases, with the AI supervisor promptly correcting the dialogue each time. However, formal feasibility studies with potential end users are needed. Objective: This mixed methods study examines the feasibility of Socrates 2.0. Methods: On the basis of the initial data, we devised a formal feasibility study of Socrates 2.0 to gather qualitative and quantitative data about users’ and clinicians’ experience of interacting with the tool. Using a mixed methods approach, we aim to gather feasibility and acceptability data from 100 users and 50 clinicians to inform the eventual implementation of generative AI tools, such as Socrates 2.0, in mental health treatment.
We designed this study to better understand how users and clinicians interact with the tool, including the frequency, length, and time of interactions, users’ satisfaction with the tool overall, the quality of each dialogue and of individual responses, as well as ways in which the tool should be improved before it is used in efficacy trials. Descriptive and inferential analyses will be performed on data from validated usability measures. Thematic analysis will be performed on the qualitative data. Results: Recruitment began in February 2024 and is expected to conclude by February 2025. As of September 25, 2024, 55 participants have been recruited. Conclusions: The development of Socrates 2.0 and the outlined feasibility study are important first steps in applying generative AI to mental health treatment delivery and lay the foundation for formal feasibility studies. International Registered Report Identifier (IRRID): DERR1-10.2196/58195
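The therapist/supervisor/rater arrangement described for Socrates 2.0 can be sketched as a generate-rate-correct loop. The agents below are stubs standing in for LLM calls, and the scoring rule, threshold, and retry count are hypothetical assumptions, not the tool's actual prompts or logic.

```python
def therapist_reply(user_message, correction=None):
    # Stub for the AI therapist; a supervisor correction, if present,
    # would be folded into the next generation prompt.
    base = f"What evidence supports the belief that '{user_message}'?"
    return f"{base} [revised per supervisor: {correction}]" if correction else base

def rater_score(reply):
    # Stub for the AI rater: score dialogue quality in [0, 1].
    return 0.9 if "evidence" in reply else 0.3

def supervisor_feedback(reply, score, threshold=0.5):
    # Stub for the AI supervisor: flag low-rated replies for correction.
    if score < threshold:
        return "Refocus on Socratic questioning rather than giving advice."
    return None

def dialogue_turn(user_message, max_attempts=2):
    """One turn: generate, rate, and (if flagged) regenerate with a correction."""
    correction = None
    for _ in range(max_attempts):
        reply = therapist_reply(user_message, correction)
        score = rater_score(reply)
        correction = supervisor_feedback(reply, score)
        if correction is None:
            return reply, score
    return reply, score

reply, score = dialogue_turn("I always fail at everything")
print(score, reply)
```

The key design choice this loop illustrates is that quality control happens before the reply reaches the user, which is how the abstract's supervisor could "promptly correct the dialogue" in the harmful-response tests.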
This paper describes the development, design, and maturation of an AI-based mental health chatbot that gives users access to more capable, more accessible mental health assistance. Initially built as a retrieval-based system that matched user input against predefined responses stored in a database, it matured into a far more scalable system with generative AI capabilities, enabling more personalized, contextually relevant interactions. The deployed variant uses the LangChain framework to chain prompt templates, the OllamaLLM interface to the Llama3 language model, and additional NLP methods. The migration from a rule-based system to a generative model thus integrates machine learning throughout, improving both language understanding and generation. The primary challenges in this development concerned maintaining conversational context, producing empathetic and coherent responses, and scaling the system. These were addressed through advanced prompt engineering, a set of rigorous evaluation metrics (BLEU score, accuracy, precision, recall, and F1 score), and continuous user feedback for refining the model. The paper discusses methodologies for overcoming these challenges and their implications for the future of AI in mental health care, with an emphasis on embedding technological advances within empathetic human-computer interaction. The findings indicate that the chatbot is effective in providing emotional support and illustrate how AI's role in mental health services is evolving.
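BLEU, the first of the evaluation metrics listed, scores a generated reply against a reference by combining modified n-gram precisions with a brevity penalty. The sketch below is the textbook sentence-level formulation (not necessarily the paper's exact configuration, which may use smoothing or a corpus-level variant).

```python
from collections import Counter
from math import exp, log

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(candidate, reference, max_n=4):
    """Textbook sentence-level BLEU: geometric mean of modified n-gram
    precisions for n = 1..max_n, multiplied by a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams, ref_ngrams = ngrams(cand, n), ngrams(ref, n)
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        if overlap == 0:
            return 0.0  # any zero precision zeroes the geometric mean (no smoothing)
        log_precisions.append(log(overlap / total))
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else exp(1 - len(ref) / max(len(cand), 1))
    return bp * exp(sum(log_precisions) / max_n)

print(round(sentence_bleu("i am here to listen", "i am here to listen"), 3))  # 1.0 for an exact match
```

For chatbot evaluation, BLEU is usually complemented by the classification metrics the paper also lists (accuracy, precision, recall, F1), since n-gram overlap alone says little about empathy or coherence.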
Comparative Analysis of GPT-4 and Human Graders in Evaluating Human Tutors Giving Praise to Students
Research suggests that providing specific and timely feedback to human tutors enhances their performance. However, it presents challenges due to the time-consuming nature of assessing tutor performance by human evaluators. Large language models, such as the AI-chatbot ChatGPT, hold potential for offering constructive feedback to tutors in practical settings. Nevertheless, the accuracy of AI-generated feedback remains uncertain, with scant research investigating the ability of models like ChatGPT to deliver effective feedback. In this work-in-progress, we evaluate 30 dialogues generated by GPT-4 in a tutor-student setting. We use two different prompting approaches, the zero-shot chain of thought and the few-shot chain of thought, to identify specific components of effective praise based on five criteria. These approaches are then compared to the results of human graders for accuracy. Our goal is to assess the extent to which GPT-4 can accurately identify each praise criterion. We found that both zero-shot and few-shot chain of thought approaches yield comparable results. GPT-4 performs moderately well in identifying instances when the tutor offers specific and immediate praise. However, GPT-4 underperforms in identifying the tutor's ability to deliver sincere praise, particularly in the zero-shot prompting scenario where examples of sincere tutor praise statements were not provided. Future work will focus on enhancing prompt engineering, developing a more general tutoring rubric, and evaluating our method using real-life tutoring dialogues.
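The comparison the study runs, GPT-4's per-criterion praise judgments against human graders, reduces to computing agreement per criterion. The sketch below uses hypothetical binary labels over five dialogues (not the study's data); the low "sincere" accuracy in the toy data mirrors the pattern the abstract reports.

```python
def per_criterion_accuracy(model_labels, human_labels):
    """Fraction of dialogues where the model's binary judgment matches the
    human grader, computed separately for each praise criterion."""
    accuracy = {}
    for criterion, human in human_labels.items():
        model = model_labels[criterion]
        matches = sum(m == h for m, h in zip(model, human))
        accuracy[criterion] = matches / len(human)
    return accuracy

# Hypothetical binary judgments (1 = criterion present) over five dialogues.
human = {"specific": [1, 0, 1, 1, 0], "immediate": [1, 1, 0, 1, 0], "sincere": [1, 0, 0, 1, 1]}
gpt4 = {"specific": [1, 0, 1, 0, 0], "immediate": [1, 1, 0, 1, 0], "sincere": [0, 1, 0, 1, 0]}
print(per_criterion_accuracy(gpt4, human))
```

With real data, chance-corrected agreement (e.g., Cohen's kappa) would be the more defensible statistic, since some criteria are far more prevalent than others.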
Learner behaviours often provide critical clues about learners' cognitive processes. However, the capacity of human intelligence to comprehend and intervene in learners' cognitive processes is often constrained by the subjective nature of human evaluation and the challenges of maintaining consistency and scalability. Recently, AI technology has been widely applied to learning analytics (LA), aiming at a more accurate, consistent and scalable understanding of learning to compensate for the challenges that human intelligence faces. However, machine intelligence has been criticized for lacking contextual understanding and for difficulties in dealing with complex human emotions and social cues. In this work, we aim to understand learners' internal cognitive processes from their external behavioural cues in a digital reading context, using a hybrid intelligence (HI) approach that bridges human and machine intelligence. Based on behavioural frameworks and insights from human experts, we scope specific behavioural cues that are known to be relevant to learners' attention regulation, which is in turn highly relevant to learners' cognitive processes. We utilize the public WEDAR dataset with 30 subjects' video data, behaviour annotations and pre–post tests on multiple choice and summarization tasks. We apply an explainable AI (XAI) approach to train the machine learning model so that human evaluators can understand which behavioural features were essential for predicting the usage of learners' cognitive processes (ie, higher‐order thinking skills [HOTS] and lower‐order thinking skills [LOTS]), providing insights for next‐round feature engineering and intervention design.
The result indicates that the dominant use of attention regulation behaviours is a reliable indicator of low use of LOTS, with 79.33% prediction accuracy, while reading speed is a valuable indicator for predicting the overall usage of HOTS and LOTS, with accuracy ranging from 60.66% to 78.66%, far surpassing the random-guess baseline of 33.33%. Our study demonstrates how various combinations of behavioural features supported by HI can inform learners' cognitive processes accurately and interpretably, integrating human and machine intelligence. What is already known about this topic: Human attention is a cognitive process that allows us to choose and concentrate on relevant information, which leads to successful learning. In affective computing, certain behavioural cues (eg, attention regulation behaviours) are used to indicate learners' attentional states during learning. What this paper adds: Attention regulation behaviours during digital reading can work as predictors of different levels of cognitive processes (ie, the utilization of higher‐order thinking skills [HOTS] and lower‐order thinking skills [LOTS]), leveraged by computer vision and machine learning. By developing an explainable AI model, we can predict learners' cognitive processes, which often cannot be achieved by human observation, while understanding the behavioural components that lead to such machine decisions is critical. It can provide valuable machine‐driven insights into the relationship between humans' external and internal states in learning. Based on frameworks spanning cognitive AI, psychology and education, expert knowledge can contribute to initial feature selection and engineering for the hybrid intelligence (HI) model development and next‐round intervention design.
Implications for practice and/or policy: Human and machine intelligence form an iterative cycle to build a HI to understand and intervene in learners' cognitive processes in digital reading, balancing each other's strengths and weaknesses in decision‐making. It can eventually inform automated feedback loops in widespread e‐learning, a new education norm since the COVID‐19 pandemic. Our framework also has the potential to be extended to other scenarios with digital reading, providing concrete examples of where human intelligence and machine intelligence can contribute to building a HI. It represents more systematic supports that apply to real‐life practices.
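The interpretability goal of the XAI pipeline, knowing which behavioural features drive the prediction, can be illustrated with a deliberately tiny stand-in: per-feature threshold rules ranked by accuracy. The feature names, data, and labels below are hypothetical; the real study uses a trained model with post-hoc explanations, not single-feature rules.

```python
from statistics import median

def best_threshold_rules(samples, labels):
    """For each behavioural feature, fit a one-feature threshold rule at the
    median and report its accuracy, a tiny interpretable stand-in for
    feature-importance ranking."""
    scores = {}
    for f in samples[0]:
        values = [s[f] for s in samples]
        cut = median(values)
        preds = [1 if v >= cut else 0 for v in values]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        scores[f] = max(acc, 1 - acc)  # a rule may be predictive in either direction
    return dict(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))

# Hypothetical per-learner features; label 1 = high use of lower-order thinking skills.
samples = [
    {"attention_regulation": 2, "reading_speed": 200},
    {"attention_regulation": 9, "reading_speed": 150},
    {"attention_regulation": 8, "reading_speed": 170},
    {"attention_regulation": 3, "reading_speed": 160},
]
labels = [1, 0, 0, 1]
print(best_threshold_rules(samples, labels))
```

Ranking features by how well a single simple rule built on them predicts the label conveys, in miniature, what the study's explainability analysis feeds back into the next round of feature engineering.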
Purpose: This study aims to examine ChatGPT-4’s potential and stability when simulating school-counseling dialogues, offering an exploratory snapshot of its ability to convey warmth, empathy and acceptance. Drawing on 80 real student questions, this paper aims to assess response consistency and identify risk markers, such as randomness and hallucination. The goal of this study is to inform future research, guide human–AI (artificial intelligence) collaboration and support policy development on deploying large language model chatbots for accessible mental health interventions. Design/methodology/approach: This paper prompted ChatGPT-4 with 80 authentic college student counseling questions and collected three nondeterministic replies per query. Automated analysis used three natural language processing (NLP) models – EmoRoBERTa for emotion detection, a neural network for empathy classification and VADER for sentiment analysis – to quantify warmth, empathy and acceptance. Stability was evaluated via Fleiss’ κ for empathy labels and ICC(2,1) for continuous sentiment scores. Additional Chi-square and one-way ANOVA tests examined categorical shifts and mean-score drift, and Pearson correlation assessed the relation between question and response length. Findings: ChatGPT-4 achieved 97.5% warm responses, 94.2% empathy classification and a mean compound sentiment score of 0.93 ± 0.19. Stability metrics indicated moderate reliability (κ = 0.59; ICC = 0.62), while occasional confusing or realization labels (2.5% of outputs) and minor sentiment drift underscored randomness as a risk. A positive correlation (r = 0.60, p < 0.001) revealed longer queries elicit longer replies. These results highlight both the promise and the limits of LLM chatbots in school-counseling simulations. Research limitations/implications: As an offline simulation using a single GPT-4 model and automated proxies rather than clinician ratings or clinical outcomes, findings remain exploratory.
The de-identified public data set may not capture live user dynamics. Future work should involve multi-model comparisons, mixed-methods validation with human raters and end-users, live pilot deployments and clinical trials to assess safety, usability and therapeutic impact in real-world educational settings. Practical implications: High warmth and empathy rates suggest ChatGPT-4 could augment low-intensity support – drafting psycho-educational messages or after-hours coping tips under human oversight. Stability metrics can inform prompt-engineering benchmarks and guardrail triggers. Schools and self-help apps may pilot AI-assisted chat interfaces with escalation protocols, bias and privacy audits and human-in-the-loop triage to optimize counselor workflows, extend reach and mitigate risks through transparent policy and workflow design. Social implications: Deployment of LLM chatbots can democratize mental health resources for youth, lowering barriers of cost, stigma and provider shortages. However, risks of misinformation, bias and overreliance necessitate digital-literacy education, equitable governance and community engagement frameworks. Policymakers and practitioners must balance innovation with safeguards – such as mandatory audit trails and accountability measures – to ensure vulnerable populations benefit safely from AI-mediated counseling. Originality/value: This paper is among the first to apply quantitative stability metrics (κ, ICC) and NLP-based emotion analysis to ChatGPT-4 in a school-counseling simulation, enriched by a practitioner’s deployment insights. It integrates parasocial-interaction and computational social-science theories to link technical capabilities with design patterns, policy recommendations and a roadmap for future mixed-methods research on AI in mental-health interventions.
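Fleiss' κ, the stability metric used for the empathy labels, has a closed form over a table of label counts per item. The pure-Python sketch below (with hypothetical counts, not the study's data) computes it for the case of an equal number of "raters" per item, here the three nondeterministic replies per query.

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a table counts[item][category] = number of raters
    assigning that item to that category (equal raters per item)."""
    n_items = len(counts)
    n_raters = sum(counts[0])
    # Mean observed pairwise agreement across items.
    p_obs = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ) / n_items
    # Chance agreement from the marginal category proportions.
    totals = [sum(row[j] for row in counts) for j in range(len(counts[0]))]
    p_exp = sum((t / (n_items * n_raters)) ** 2 for t in totals)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical: 3 replies per question, each labeled empathetic / neutral / confusing.
counts = [[3, 0, 0], [2, 1, 0], [3, 0, 0], [0, 3, 0], [2, 0, 1]]
print(round(fleiss_kappa(counts), 2))  # 0.44 on this toy table: moderate agreement
```

Interpreting κ against the usual benchmarks (e.g., 0.41 to 0.60 as moderate) is how the reported κ = 0.59 is read as "moderate reliability."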
Trust is a fundamental component of effective therapeutic relationships, significantly influencing patient engagement and treatment outcomes in mental health care. This paper presents a preliminary study aimed at enhancing trust through the co-creation of virtual therapeutic environments using generative artificial intelligence (AI). We propose a multimodal AI model, integrated into a virtual reality (VR) platform developed in Unity, which generates three-dimensional (3D) objects from textual descriptions. This approach allows patients to actively participate in shaping their therapeutic environment, fostering a collaborative atmosphere that enhances trust between patients and therapists. The methodology is structured into four phases, combining non-immersive and immersive experiences to co-create personalized therapeutic spaces and 3D objects symbolizing emotional or psychological states. Preliminary results demonstrate the system’s potential in improving the therapeutic process through the real-time creation of virtual objects that reflect patient needs, with high-quality mesh generation and semantic coherence. This work offers new possibilities for patient-centered care in mental health services, suggesting that virtual co-creation can improve therapeutic efficacy by promoting trust and emotional engagement.
This study investigates the role of Artificial General Intelligence (AGI) as a Creativity Support System (CSS) through a human-AI collaboration framework integrating multimodal data analysis. We conducted a dual-method investigation combining psychometric surveys with neurophysiological monitoring to compare AGI and traditional search engines in creative tasks. AI-driven data processing enabled the evaluation of creative outputs, while EEG signal processing quantified prefrontal alpha oscillations as neural indicators of cognitive states. Our computational approach identified Creative Self-Efficacy and Critical Thinking as computational mediators, with machine learning models revealing how individual traits modulate human-AI interaction patterns. The data-intensive analysis demonstrated AGI's superior capacity in: (1) inducing neurocognitive patterns conducive to creativity; (2) optimizing creative performance through personalized cognitive support. Industry-specific computational modeling revealed distinct enhancement pathways: technology sectors benefited from AGI's analytical capabilities, manufacturing from its adaptive scaffolding, and education from its balanced approach. These findings establish a neurocomputational framework for human-AGI co-creation, providing algorithmic guidelines for developing adaptive creativity support systems.
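The neural indicator in the study, prefrontal alpha oscillations, comes down to estimating power in the 8–12 Hz band of an EEG channel. The sketch below is not the study's pipeline (which would use a proper PSD estimator such as Welch's method over real recordings): it computes band power via a naive DFT on a synthetic one-second signal, with a hypothetical sampling rate.

```python
from math import cos, sin, pi

def band_power(signal, fs, f_lo, f_hi):
    """Power in [f_lo, f_hi] Hz via a naive DFT (O(N^2); fine for short windows)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(x * cos(-2 * pi * k * i / n) for i, x in enumerate(signal))
            im = sum(x * sin(-2 * pi * k * i / n) for i, x in enumerate(signal))
            power += re * re + im * im
    return power

fs = 128  # hypothetical sampling rate, Hz
t = [i / fs for i in range(fs)]  # one second of signal
signal = [sin(2 * pi * 10 * ti) for ti in t]  # a pure 10 Hz "alpha" oscillation
alpha = band_power(signal, fs, 8, 12)
beta = band_power(signal, fs, 13, 30)
print(alpha > beta)  # power concentrates in the alpha band for this signal
```

Comparing alpha power across conditions (AGI-assisted vs. search-engine-assisted creative work) is the kind of contrast the study's EEG analysis draws.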
Art therapy has been an essential form of psychotherapy to facilitate psychological well-being, which has been promoted and transformed by recent technological advances into digital art therapy. However, the potential of digital technologies has not been fully leveraged; especially, applying AI technologies in digital art therapy is still under-explored. In this paper, we propose an AI-infused art-making system, DeepThInk, to investigate the potential of introducing a human-AI co-creative process into art therapy, by collaborating with five experienced registered art therapists over ten months. DeepThInk offers a range of tools which can lower the expertise threshold for art-making while improving users’ creativity and expressivity. We gathered the insights of DeepThInk through expert reviews and a two-part user evaluation with both synchronous and asynchronous therapy setups. This longitudinal iterative design process helped us derive and contextualize design principles of human-AI co-creation for art therapy, shedding light on future design in relevant domains.
With the advancement of large language models and multimodal interaction technologies, AI anchors capable of substituting human hosts have been increasingly applied in the live streaming e-commerce field, demonstrating anthropomorphic characteristics that extend beyond physical appearance. Among these applications, the impact of the anthropomorphism level of AI anchors on users’ willingness to engage in human–machine value co-creation in tourism live streaming contexts remains an underexplored yet critical area. Existing studies mostly focus on the impact of anthropomorphism on purchase intention, but overlook the underlying mechanism in high-interaction contexts. Grounded in the social response theory, social presence theory and self-determination theory, this study investigates tourism live streaming as a contextual setting through experimental designs involving two levels of anthropomorphism (high vs. low). It systematically examines the impact of AI anchor anthropomorphism on users’ willingness to co-create value, the mediating role of social presence, and the moderating role of perceived control. The findings indicate that: (1) The level of anthropomorphism exhibited by AI anchors significantly and positively influences users’ willingness to participate in human–machine value co-creation, with participants in the high anthropomorphism condition reporting significantly greater willingness than those in the low anthropomorphism condition; (2) Social presence mediates this relationship; and (3) Perceived control negatively moderates the path between anthropomorphism and social presence—higher perceived control attenuates the positive effect of anthropomorphism on social presence, but does not moderate the direct relationship between anthropomorphism and willingness to co-create. 
This study elucidates users’ dual psychological needs for “social connection” and “autonomous control” in human–machine collaborative settings, highlights the importance of balancing these competing demands, and offers both theoretical insights and practical implications for the design of AI-driven interactions in tourism live streaming.
This article examines the gamification of intimacy with AI through China's XingYe, a multimodal AI companion platform that integrates role-playing game (RPG) mechanics, algorithmic responsiveness, and user-generated markets to reconfigure human-virtual human relationships. Drawing on a nine-month autoethnographic engagement, we argue that XingYe operationalises what we term the gamification of intimacy: a design paradigm that commodifies emotional labour by rendering affection efficient, quantifiable, and achievement-oriented. Users engineer customisable AI companions through ludic acts of co-creation, navigating tiered progression systems and gacha-style rewards that transform intimacy into a structured, transactional process. Simultaneously, XingYe blurs boundaries between fiction and lived experience, enabling real-time narrative remediation and the monetisation of AI agents as tradable commodities. This hybrid relationality challenges traditional notions of parasociality, positioning AI-mediated intimacy as a liminal space of technocultural negotiation where algorithmic agency and user desire converge. By framing emotional bonds as both labour and leisure, XingYe exemplifies the industrial production of connection under platform capitalism, raising critical questions about agency, data sovereignty, and the neoliberal optimisation of vulnerability. The study contributes to debates on human-machine communication by interrogating how gamified AI systems reshape intimacy into a crowdsourced, market-driven practice, urging scholars to transcend anthropocentric frameworks and address the ethical implications of affective commodification in digital ecosystems.
The integration of Generative Artificial Intelligence into creative processes raises scenarios that escape the classical notion of interaction between humans and computers. The emergence of co-creation, creative collaboration, and distributed agency suggests that Human-AI creation is relational rather than interactive. This paper reviews this transition across different perspectives on humans and technologies for creativity and presents a theoretical contribution to the understanding of these processes, embracing the complexity of this new creative paradigm and interrelating cognitive, affective, and behavioral dimensions. The paper conceptualizes relational processes in creativity as mutual influence processes in which human and AI actors collaborate iteratively, sharing their agency and reciprocally modeling each other's behavior, knowledge structures, and affective responses. The concept is then applied to the audiovisual industry to explore the dynamics emerging in those processes, while enabling a critical look at the implications of these new forms of creation for classical and new workflows and for debates on labor and ethics. Finally, conclusions are presented along with three initial research directions for relational processes in Human-AI creativity, highlighting the importance of raising critical awareness of these relationships in education.
The evolution of artificial intelligence (AI) facilitates the creation of multimodal information of mixed quality, intensifying the challenges individuals face when assessing information credibility. Through in-depth interviews with users of generative AI platforms, this study investigates the underlying motivations and multidimensional approaches people use to assess the credibility of AI-generated information. Four major motivations driving users to authenticate information are identified: expectancy violation, task features, personal involvement, and pre-existing attitudes. Users evaluate AI-generated information’s credibility using both internal (e.g. relying on AI affordances, content integrity, and subjective expertise) and external approaches (e.g. iterative interaction, cross-validation, and practical testing). Theoretical and practical implications are discussed in the context of AI-generated content assessment.
This paper explores the convergence between swarm intelligence principles, stigmergic coordination, and Eco-Centered Psychological Facilitation (ECPF) as a transformative framework for reimagining education and psychology in the posthuman age. Drawing on recent research in collective intelligence, distributed cognition, stigmergic coordination, and AI-mediated learning environments, it argues that psychological and educational facilitation should be reconceptualized as ecological tuning—the artful creation of rhythmic, spatial, and relational conditions that enable emergent self-organization through environmental traces. The concept of stigmergy—coordination through traces left in the environment—provides a concrete mechanism for understanding how collective intelligence emerges in educational settings. Through an integration of insights from complexity science, urban morphogenesis, posthumanist philosophy, and ethical AI frameworks, this study outlines comprehensive design principles for hybrid human-AI learning environments where participants actively "stigmergize" meaning through iterative trace-making. It proposes an ecological ethics of minimal intervention combined with metacommunicative awareness as the foundation for psychological and educational transformation, positioning the facilitator as a conductor of temporal rhythms, keeper of stigmergic traces, and cultivator of collective reflexivity. The implications extend beyond traditional pedagogical boundaries, suggesting new possibilities for understanding human development, collective intelligence, and the co-evolution of human and artificial cognitive systems in educational contexts through the lens of stigmergic self-organization.
From the intersecting perspectives of interaction design and cognitive science, this report systematically maps the multidimensional pathways along which AIGC and human cognitive abilities co-evolve. The review covers not only foundational theoretical frameworks such as perceived shared understanding and theory of mind, but also application practices in creative design, educational empowerment, mental health, and cognitive augmentation for specific populations. It focuses on the ethical and psychological challenges of human-AI collaboration, including trust calibration, cognitive load, and cognitive sovereignty, and proposes strategies for optimizing interactive systems through iterative human feedback and cognitive measurement. Overall, AIGC is viewed as an extension of human cognitive capability: its design centers on balancing automation efficiency with higher-order human thinking, achieving complementary and co-evolving human-machine intelligence.