The Psychological Experience of Losing an AI
The Construction and Psychological Dimensions of Human-AI Attachment
This cluster of studies examines how humans form emotional bonds with AI, applying attachment theory (e.g., anxious and avoidant attachment) to quantify and understand these relationships, and so provides the psychological groundwork that precedes any experience of "loss."
- Using attachment theory to conceptualize and measure the experiences in human-AI relationships(Fan Yang, Atsushi Oshio, 2025, Current Psychology)
- Measuring and understanding emotional attachment in human-AI relationships(Nuo Cheng, Ruifeng Yu, 2026, Ergonomics)
- Exploring the Effects of Chatbot Anthropomorphism and Human Empathy on Human Prosocial Behavior Toward Chatbots(Jingshu Li, Zicheng Zhu, Renwen Zhang, Yi-Chieh Lee, 2025, ArXiv Preprint)
- Emotional Connection Between Humans and AI: An Analysis of the Role, Potential and Challenges of Interactive AI Using the Movie HER as an Example(Xinyi Hou, 2024, Lecture Notes in Education Psychology and Public Media)
- The impacts of companion AI on human relationships: risks, benefits, and design considerations(Kim Malfacini, 2025, AI & SOCIETY)
Loss, Grief Theory, and the Cross-Domain Transfer of Separation Anxiety
This cluster of studies covers human bereavement, animal separation anxiety, and theoretical models of grief trajectories (e.g., the Integrated Process Model, prolonged grief disorder). These theories supply key analogies and frameworks for analyzing the emotional reactions that follow the loss of an AI, such as separation anxiety and meaning integration.
- Meaning Integration and Grief Trajectories Within the First Two Years Among Chinese: Latent Growth Modeling(Dongpeng Yao, Jie Li, Jing Ning, Mengyuan Long, Yihan Gai, Mei Li, 2024, Journal of Loss and Trauma)
- The integrated process model of loss and grief - An interprofessional understanding(Mai-Britt Guldin, Carlo Leget, 2023, Death Studies)
- Adult separation anxiety disorder: The human-animal bond.(E. Dowsett, P. Delfabbro, A. Chur-Hansen, 2020, Journal of affective disorders)
- Bereavement issues and prolonged grief disorder: A global perspective(C. E. Hilberdink, Kévin Ghainder, Alexandre Dubanchet, D. Hinton, A. Djelantik, B. Hall, E. Bui, 2023, Cambridge Prisms: Global Mental Health)
Overreliance, Technology Addiction, and the Risk of Cognitive and Intellectual Deskilling
This cluster of studies focuses on deep human dependence on AI and its negative consequences, including addictive behavior, intellectual deskilling, and the cognitive and emotional upheaval that can follow when AI is absent or taken away.
- Generative AI and childhood education: lessons from the smartphone generation(O. Machidon, 2025, AI & SOCIETY)
- Human-AI Interactions: Cognitive, Behavioral, and Emotional Impacts(Celeste Riley, Omar Al-Refai, Y. Reyes, Eman Hammad, 2025, ArXiv)
- AI Technology panic—is AI Dependence Bad for Mental Health? A Cross-Lagged Panel Model and the Mediating Roles of Motivations for AI Use Among Adolescents(Shunsen Huang, Xiaoxiong Lai, L. Ke, Yajun Li, Huanlei Wang, Xinmei Zhao, Xi-jian Dai, Yun Wang, 2024, Psychology Research and Behavior Management)
- Toward an Ethic of Synthetic Relationality: Identity, Intimacy, and Risk in AI-Mediated Roleplay Environments(Maalvika Bhat, 2025, Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society)
Trust Mechanisms, Expectancy Violation, and Existential Anxiety
This cluster of studies examines the negative psychological states triggered by AI underperformance or AI's broader social influence, including ruptured trust, the disappointment of violated expectations, and deeper existential fears about the development of AI technology.
- When a Chatbot Disappoints You: Expectancy Violation in Human-Chatbot Interaction in a Social Support Context(M. Rheu, Y. Dai, Jingbo Meng, Wei Peng, 2024, Communication Research)
- Developing trustworthy artificial intelligence: insights from research on interpersonal, human-automation, and human-AI trust(Yugang Li, Baizhou Wu, Yuqi Huang, Shenghua Luan, 2024, Frontiers in Psychology)
- Existential anxiety about artificial intelligence (AI)- is it the end of humanity era or a new chapter in the human revolution: questionnaire-based observational study(J. Alkhalifah, Abdulrahman Mohammed Bedaiwi, Narmeen Shaikh, Waleed Seddiq, S. Meo, 2024, Frontiers in Psychiatry)
- How human–AI feedback loops alter human perceptual, emotional and social judgements(Moshe Glickman, T. Sharot, 2024, Nature Human Behaviour)
Emotional-Support Substitution Effects and the Interaction Experiences of Specific Groups
This cluster of studies addresses AI's utility as a substitute for human emotional support, particularly among specific groups such as adolescents, exploring how AI fills emotional gaps and what such substitute relationships may mean when they are lost.
- Human-Machine Differences in Adolescent Emotional Support: A Comparative Study of Responses Between a Real Mother and Four AI Language Models(Siyi Chen, Hanzhe Huo, 2025, Proceedings of the 2025 International Conference on AI-enabled Education)
- Online Survey as Empathic Bridging for the Disenfranchised Grief of Pet Loss(W. Packman, B. Carmack, R. Katz, F. Carlos, N. Field, C. Landers, 2014, OMEGA — Journal of Death and Dying)
- How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Randomized Controlled Study(Cathy Mengying Fang, Auren R. Liu, Valdemar Danry, Eunhae Lee, Samantha W. T. Chan, Pat Pataranutaporn, Pattie Maes, Jason Phang, Michael Lampe, L. Ahmad, S. Agarwal, 2025, ArXiv)
- Talk, Listen, Connect: Navigating Empathy in Human-AI Interactions(Mahnaz Roshanaei, R. Rezapour, M. S. El-Nasr, 2024, ArXiv)
Together, these studies map the psychological antecedents and consequences of "losing an AI." They first use attachment theory to establish the depth of human-AI emotional bonds, then draw on models of human bereavement and animal separation anxiety to analyze the grief trajectories that losing an AI may trigger. They also expose the cognitive deskilling risks of overreliance on AI and the negative feelings that arise from broken trust or existential anxiety, with particular emphasis on AI's complex role as a substitute source of emotional support.
A total of 21 related publications.
Platforms like Character.AI offer new avenues for identity exploration and self-expression, but also introduce profound parasocial, socioemotional, and psychological risks. Drawing on developmental psychology, fan studies, human-computer interaction, and AI ethics, this paper examines how AI-mediated roleplay environments simulate intimacy while fostering dependency, boundary erosion, and perceptual misalignment. Through thematic analysis of an anonymous survey (N=344) of Character.AI users, we identify patterns of identity projection, perceived relationship growth, addictive engagement, boundary confusion, emotional substitution, ethical dissonance, and trauma reenactment. Beyond documenting vulnerabilities, we propose design interventions, including dynamic consent scaffolding, reflexivity prompts, and interactional transparency, to safeguard user agency and developmental wellbeing. We argue that synthetic companions do not merely extend fan practices but fundamentally reconfigure interpersonal architectures, demanding a new ethic of synthetic relationality. As AI-driven intimacy becomes increasingly persuasive and immersive, addressing its high-stakes implications is critical to responsible AI design, particularly for younger and vulnerable populations.
Artificial intelligence (AI) technologies are rapidly advancing, enhancing human capabilities across various fields spanning from finance to medicine. Despite their numerous advantages, AI systems can exhibit biased judgements in domains ranging from perception to emotion. Here, in a series of experiments (n = 1,401 participants), we reveal a feedback loop where human–AI interactions alter processes underlying human perceptual, emotional and social judgements, subsequently amplifying biases in humans. This amplification is significantly greater than that observed in interactions between humans, due to both the tendency of AI systems to amplify biases and the way humans perceive AI systems. Participants are often unaware of the extent of the AI's influence, rendering them more susceptible to it. These findings uncover a mechanism wherein AI systems amplify biases, which are further internalized by humans, triggering a snowball effect where small errors in judgement escalate into much larger ones.
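The snowball mechanism described above lends itself to a toy simulation. The sketch below is a minimal, hypothetical model, not the authors' experimental paradigm: every parameter (amplification factor, adoption rate, noise level) is an assumption chosen only to illustrate how a small initial bias can compound through repeated human-AI interaction.

```python
# Toy simulation of a human-AI bias-amplification loop (illustrative only;
# parameters are assumptions, not estimates from Glickman & Sharot, 2024).
import numpy as np

rng = np.random.default_rng(seed=0)

true_value = 0.0      # unbiased ground truth for some judgement task
human_bias = 0.05     # small initial human bias
amplification = 1.5   # assumption: AI trained on human output overshoots the bias
adoption = 0.4        # assumption: fraction of the human-AI gap humans internalize

for step in range(10):
    human_judgement = true_value + human_bias + rng.normal(0, 0.01)
    ai_judgement = true_value + amplification * human_bias  # AI amplifies the bias
    # Humans shift partway toward the AI's (amplified) judgement each round:
    human_bias += adoption * (ai_judgement - human_judgement)
    print(f"step {step}: human bias = {human_bias:.3f}")
```

Because the AI's output sits consistently further from the truth than the human's, each round of partial adoption nudges the human bias upward, reproducing the escalation the paper reports.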
This article examines the potential parallels between children's widespread adoption of smartphones and the emerging reliance on generative AI tools in childhood education. Drawing on Jonathan Haidt’s insights into how phone-based childhoods can disrupt the development of critical executive functions, and Shannon Vallor’s concept of “moral deskilling,” the discussion raises concerns about “intellectual deskilling” in younger generations. As generative AI tools like ChatGPT gain popularity, children risk becoming overly reliant on automated solutions, potentially undermining metacognition and critical thinking. This paper highlights risks such as cognitive offloading, instant gratification, and diminished perseverance and proposes measures to ensure generative AI supports rather than replaces essential developmental experiences.
Background The emergence of new technologies, such as artificial intelligence (AI), may manifest as technology panic in some people, including adolescents who may be particularly vulnerable to new technologies (the use of AI can lead to AI dependence, which can threaten mental health). While the relationship between AI dependence and mental health is a growing topic, the few existing studies are mainly cross-sectional and use qualitative approaches, failing to find a longitudinal relationship between them. Based on the framework of technology dependence, this study aimed to determine the prevalence of experiencing AI dependence, to examine the cross-lagged effects between mental health problems (anxiety/depression) and AI dependence and to explore the mediating role of AI use motivations. Methods A two-wave cohort program with 3843 adolescents (1848 male, mean age = 13.21 ± 2.55 years) was used with a cross-lagged panel model and a half-longitudinal mediation model. Results 17.14% of the adolescents experienced AI dependence at T1, and 24.19% experienced dependence at T2. Only mental health problems positively predicted subsequent AI dependence, not vice versa. For AI use motivation, escape motivation and social motivation mediated the relationship between mental health problems and AI dependence whereas entertainment motivation and instrumental motivation did not. Discussion Excessive panic about AI dependence is currently unnecessary, and AI has promising applications in alleviating emotional problems in adolescents. Innovation in AI is rapid, and more research is needed to confirm and evaluate the impact of AI use on adolescents' mental health; the implications and future directions are discussed.
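For readers unfamiliar with the method, a two-wave cross-lagged panel model over observed scores reduces to a pair of lagged regressions. The sketch below is a simplified stand-in for the authors' full CLPM and half-longitudinal mediation model; the file and column names are hypothetical.

```python
# Minimal two-wave cross-lagged panel sketch (hypothetical data layout).
import pandas as pd
import statsmodels.formula.api as smf

# One row per adolescent; assumed columns: mh_t1, mh_t2 (mental health
# problems) and dep_t1, dep_t2 (AI dependence) at waves T1 and T2.
df = pd.read_csv("clpm_waves.csv")  # hypothetical file

# Cross-lagged path 1: T1 mental health -> T2 AI dependence,
# controlling for the autoregressive stability of dependence.
m1 = smf.ols("dep_t2 ~ dep_t1 + mh_t1", data=df).fit()

# Cross-lagged path 2: T1 AI dependence -> T2 mental health.
m2 = smf.ols("mh_t2 ~ mh_t1 + dep_t1", data=df).fit()

# The study's pattern would show up as a significant mh_t1 coefficient in m1
# but a non-significant dep_t1 coefficient in m2.
print(m1.summary(), m2.summary())
```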
This study compares emotional support for adolescents between four AI models (Doubao, Kimi, ChatGPT, Gemini) and a real mother through an AI interaction experiment built on real family conversations, focusing on emotional recognition accuracy and the distribution of response types. Responses are categorized into "emotional support," "educational guidance," "emotional damage," and "neutral" via conversation analysis and coding. The research shows that 1) both humans and AI misjudged indirect emotional intentions, linked to Chinese implicit communication; 2) the real mother showed frequent emotionally damaging responses, indicating a lack of emotional support in real adolescent families; 3) foreign AI models (GPT-5, Gemini Flash) demonstrated relatively higher emotional support than domestic models; and 4) Kimi exhibited authoritarian traits via excessive educational guidance (52%) and unprompted help offers. The study provides key references for optimizing family communication and developing locally tailored AI emotional support tools.
The rapid advancement of artificial intelligence (AI) has impacted society in many aspects. Alongside this progress, concerns such as privacy violation, discriminatory bias, and safety risks have also surfaced, highlighting the need for the development of ethical, responsible, and socially beneficial AI. In response, the concept of trustworthy AI has gained prominence, and several guidelines for developing trustworthy AI have been proposed. Against this background, we demonstrate the significance of psychological research in identifying factors that contribute to the formation of trust in AI. Specifically, we review research findings on interpersonal, human-automation, and human-AI trust from the perspective of a three-dimension framework (i.e., the trustor, the trustee, and their interactive context). The framework synthesizes common factors related to trust formation and maintenance across different trust types. These factors point out the foundational requirements for building trustworthy AI and provide pivotal guidance for its development that also involves communication, education, and training for users. We conclude by discussing how the insights in trust research can help enhance AI’s trustworthiness and foster its adoption and application.
Users increasingly develop emotional connections with AI chatbots that extend beyond utilitarian functions, yet no validated multidimensional scale exists to measure these bonds. This research developed and validated the AI Attachment Scale (AIAS) through two studies: scale development (Study 1) followed by validation and framework testing (Study 2). Study 1 employed exploratory factor analysis (N = 531) to establish a 15-item scale capturing three dimensions: Emotional Support, Separation Distress, and Secure Base. Study 2 used confirmatory factor analysis (N = 375) to validate the scale structure and propose a theoretical framework linking individual differences to AI attachment and behavioural outcomes. Results showed anthropomorphism as the strongest predictor of AI attachment orientations. Attachment anxiety positively predicted AI attachment (β = 0.44), while attachment avoidance negatively predicted it (β = -0.53). AI attachment significantly predicted behavioural intentions (β = 0.50). This research provides a validated measure of human-AI attachment and practical guidance for emotional design in AI chatbots.
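The two-step psychometric workflow reported above, exploratory factor analysis on one sample followed by confirmatory factor analysis on an independent sample, can be sketched in Python with the factor_analyzer and semopy packages. Everything below (file names, item labels, the item-to-factor assignment) is a hypothetical illustration, not the authors' analysis.

```python
# Sketch of a two-study scale validation: EFA, then CFA on a second sample.
import pandas as pd
from factor_analyzer import FactorAnalyzer
import semopy

# Study 1: exploratory factor analysis (assumed columns item1..item15).
sample1 = pd.read_csv("aias_sample1.csv")  # hypothetical file
efa = FactorAnalyzer(n_factors=3, rotation="oblimin")
efa.fit(sample1)
print(efa.loadings_)  # inspect which items load on which factor

# Study 2: confirmatory factor analysis fixing the three-factor structure
# (the item split below is illustrative, not the published assignment).
cfa_spec = """
EmotionalSupport   =~ item1 + item2 + item3 + item4 + item5
SeparationDistress =~ item6 + item7 + item8 + item9 + item10
SecureBase         =~ item11 + item12 + item13 + item14 + item15
"""
cfa = semopy.Model(cfa_spec)
cfa.fit(pd.read_csv("aias_sample2.csv"))  # hypothetical independent sample
print(semopy.calc_stats(cfa))             # fit indices (CFI, RMSEA, ...)
```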
With the high-speed development of society, as well as increasingly fierce competition and social anxiety, the demand for emotional services has grown greatly. It is generally difficult to fully personalize human interaction to fulfill people's needs, but emerging interactive Artificial Intelligence (AI) fills many of these gaps and continues to be deeply integrated into all aspects of human life. As an emerging technology, the development potential and challenges of interactive AI are being revealed simultaneously. Taking the movie HER as an example, this paper analyzes the role of interactive AI in establishing an emotional connection with humans and the problems it faces in the process, such as the separation of human-machine emotions. In sum, while interactive AI has the potential to fulfill social needs, provide emotional value, and assist in the sorting of information, it also faces challenges in ethics, privacy, and copyright.
Artificial intelligence (AI) is growing “stronger and wiser,” leading to increasingly frequent and varied human-AI interactions. This trend is expected to continue. Existing research has primarily focused on trust and companionship in human-AI relationships, but little is known about whether attachment-related functions and experiences could also be applied to this relationship. In two pilot studies and one formal study, the current project first explored using attachment theory to examine human-AI relationships. Initially, we hypothesized that interactions with generative AI mimic attachment-related functions, which we tested in Pilot Study 1. Subsequently, we posited that experiences in human-AI relationships could be conceptualized via two attachment dimensions, attachment anxiety and avoidance, which are similar to traditional interpersonal dynamics. To this end, in Pilot Study 2, a self-report scale, the Experiences in Human-AI Relationships Scale, was developed. Further, we tested its reliability and validity in a formal study. Overall, the findings suggest that attachment theory significantly contributes to understanding the dynamics of human-AI interactions. Specifically, attachment anxiety toward AI is characterized by a significant need for emotional reassurance from AI and a fear of receiving inadequate responses. Conversely, attachment avoidance involves discomfort with closeness and a preference for maintaining emotional distance from AI. This implies the potential existence of shared structures underlying the experiences generated from interactions, including those with other humans, pets, or AI. These patterns reveal similarities with human and pet relationships, suggesting common structural foundations. Future research should examine how these attachment styles function across different relational contexts.
As stories of human-AI interactions continue to be highlighted in the news and research platforms, the challenges are becoming more pronounced, including potential risks of overreliance, cognitive offloading, social and emotional manipulation, and the nuanced degradation of human agency and judgment. This paper surveys recent research on these issues through the lens of the psychological triad: cognition, behavior, and emotion. Observations seem to suggest that while AI can substantially enhance memory, creativity, and engagement, it also introduces risks such as diminished critical thinking, skill erosion, and increased anxiety. Emotional outcomes are similarly mixed, with AI systems showing promise for support and stress reduction, but raising concerns about dependency, inappropriate attachments, and ethical oversight. This paper aims to underscore the need for responsible and context-aware AI design, highlighting gaps for longitudinal research and grounded evaluation frameworks to balance benefits with emerging human-centric risks.
Although users’ expectations of a chatbot’s performance could greatly shape their interaction experience, they have been underexplored in the context of social support where chatbots are gaining popularity. A 2 × 2 experiment created expectancy violation and confirmation conditions by matching or mismatching a chatbot’s expertise label (expert vs. non-expert) and its interactional contingency (contingent vs. generic feedback to users). Contingent feedback from chatbots was found to have positive effects on participants’ evaluation of the bot and their perceived emotional validation, regardless of the bot’s expertise label. When providing generic feedback to participants, a bot received worse evaluation and induced less emotional validation on participants when it was labeled as an expert, rather than a non-expert, highlighting the detrimental effect of negative expectancy violation than negative expectancy confirmation in interactions with a social support chatbot. Theoretical and practical implications are discussed.
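The 2 × 2 design described above maps directly onto a factorial ANOVA with an interaction term. The sketch below is a hypothetical illustration of how the expertise-label by feedback-contingency interaction would be tested; the column names and data file are assumptions.

```python
# Factorial ANOVA sketch for a 2 x 2 expectancy-violation design.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Assumed columns: label ("expert"/"non_expert"), feedback ("contingent"/
# "generic"), and evaluation (the participant's rating of the chatbot).
df = pd.read_csv("chatbot_2x2.csv")  # hypothetical file

model = smf.ols("evaluation ~ C(label) * C(feedback)", data=df).fit()
# Type II table: two main effects plus the label x feedback interaction,
# which carries the expectancy-violation prediction.
print(anova_lm(model, typ=2))
```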
Chatbots are increasingly integrated into people's lives and are widely used to help people. Recently, there has also been growing interest in the reverse direction, humans helping chatbots, due to a wide range of benefits including better chatbot performance, human well-being, and collaborative outcomes. However, little research has explored the factors that motivate people to help chatbots. To address this gap, we draw on the Computers Are Social Actors (CASA) framework to examine how chatbot anthropomorphism, including human-like identity, emotional expression, and non-verbal expression, influences human empathy toward chatbots and their subsequent prosocial behaviors and intentions. We also explore people's own interpretations of their prosocial behaviors toward chatbots. We conducted an online experiment (N = 244) in which chatbots made mistakes in a collaborative image labeling task and explained the reasons to participants. We then measured participants' prosocial behaviors and intentions toward the chatbots. Our findings revealed that human identity and emotional expression of chatbots increased participants' prosocial behavior and intention toward chatbots, with empathy mediating these effects. Qualitative analysis further identified two motivations for participants' prosocial behaviors: empathy for the chatbot and perceiving the chatbot as human-like. We discuss the implications of these results for understanding and promoting human prosocial behaviors toward chatbots.
Despite the vast developments in research on loss and grief, dominant grief models fall short in reflecting the comprehensive issues grieving persons are facing. Three causes seem to be at play: grief is usually understood to be connected to death and other types of loss are under-researched; the majority of research is done from the field of psychology and on pathological forms of grief, hardly integrating research from other disciplines; and the existential suffering related to grief is not recognized or insufficiently integrated in the dominant models. In this paper, we propose an integrated process model (IPM) of loss and grief, distinguishing five dimensions of grief: physical, emotional, cognitive, social, and spiritual. The integrated process model integrates therapies, tools, and models within different scientific theories and paradigms to connect disciplines and professions. The comprehensive and existential understanding of loss and grief has relevance for research, clinical settings and community support.
Studies on grief trajectories within the first two years following loss are limited, especially among eastern cultures. This study aims to examine distinct grief trajectories among Chinese bereaved individuals as well as the factors predicting them. The data were collected in three waves over 18 months and involved 181 participants who completed measures of grief, meaning integration, and demographic and death-related information. Latent class growth analysis was utilized to identify grief trajectories. Univariate and multivariate logistic regression were used to investigate the predictors. Four grief trajectories were identified: resilient (44.19%), chronic (17.15%), recovery (31.71%), and delayed (6.32%). Meaning integration at six months following loss distinguished the chronic trajectory from the resilient group, but not from the recovery group. Meaning integration at 12 months distinguished the chronic trajectory from the resilient trajectory and the recovery trajectory. However, it did not differentiate the delayed pattern from the recovery or resilient classes. These findings emphasize the need for caution in predicting grief trajectories by meaning integration early in the bereavement process.
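Latent class growth analysis is usually run in specialized software such as Mplus or R's lcmm package. As a rough Python approximation, one can fit each participant's growth line across the three waves and cluster the resulting (intercept, slope) pairs; the sketch below is only that approximation, with assumed wave timing and input format.

```python
# Crude stand-in for latent class growth analysis: per-person growth
# parameters, then mixture clustering (illustrative only).
import numpy as np
from sklearn.mixture import GaussianMixture

# Assumed input: grief scores of shape (n_participants, 3) at roughly
# 6, 12, and 18 months post-loss.
grief = np.load("grief_waves.npy")  # hypothetical file
times = np.array([6.0, 12.0, 18.0])

# Least-squares line per participant: score = intercept + slope * time.
X = np.column_stack([np.ones_like(times), times])
coefs, *_ = np.linalg.lstsq(X, grief.T, rcond=None)  # shape (2, n_participants)
growth = coefs.T

# Four components, echoing the resilient/chronic/recovery/delayed classes.
gmm = GaussianMixture(n_components=4, random_state=0).fit(growth)
print(np.bincount(gmm.predict(growth)) / len(growth))  # class proportions
```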
The death of a loved one – bereavement – is a universal experience that marks the human mental health condition. Grief – the cognitive, emotional, and behavioral responses to bereavement – is thus experienced by virtually everyone at some point in life, while mourning is a process through which grievers come to terms with the loss envisioning life without the deceased. Although distress subsides over time among most bereaved individuals, a minority will develop a condition recently identified as prolonged grief disorder (PGD). The present review provides a global perspective on bereavement, grief reactions, and PGD. Although the loss of a loved one and grief reactions are in general experienced consistently across different cultures, differences and variations in their expression may exist across cultures. Especially within specific populations that may be more at risk for PGD, possibly due to risk factors associated with the mechanisms of loss (e.g., refugees, migrants, and conflict survivors). The diagnostic criteria for PGD are mostly based on Western grieving populations, and cultural adaptations of PGD treatments are limited. Therefore, cross-cultural development and validation of PGD screening/assessment is critical to support future research on grief reactions and PGD, especially in non-Western contexts, and concerning the potential future global changes and challenges that appear to have a major impact on PGD. More transcultural research on PGD is needed to contextualize and will lead to culture-bound symptom identification of PGD, and the adaptation of current treatment protocols, which may ultimately improve health at the individual level, and health-care systems.
Background Existential anxiety can profoundly affect an individual, influencing their perceptions, behaviours, sense of well-being, academic performance, and decisions. Integrating artificial intelligence into society has elicited complex public reactions, marked by appreciation and concern, with its acceptance varying across demographics and influenced by factors such as age, gender, and prior AI experiences. This study aimed to investigate the existential anxiety about artificial intelligence (AI) in public in Saudi Arabia. Methods The present questionnaire-based observational, analytical cross-sectional study with a structured, self-administered survey was conducted via Google Forms, using a scale to assess the existential anxiety levels induced by the recent development of AI. The study encompassed a diverse population with a sample size of 300 participants. Results This study’s findings revealed a high prevalence of existential anxieties related to the rapid advancements in AI. Key concerns included the fear of death (96% of participants), fate’s unpredictability (86.3%), a sense of emptiness (79%), anxiety about meaninglessness (92.7%), guilt over potential AI-related catastrophes (87.7%), and fear of condemnation due to ethical dilemmas in AI (93%), highlighting widespread apprehensions about humanity’s future in an AI-dominated era. Conclusion The public has concerns including unpredictability, a sense of emptiness, anxiety, guilt over potential AI-related catastrophes, and fear of condemnation due to ethical dilemmas in AI, highlighting widespread apprehensions about humanity’s future in an AI-dominated era. The results indicate that there is a need for a multidisciplinary strategy to address the existential anxieties in the AI era. The strategic approach must blend technological advancements with psychological, philosophical, and ethical insights, underscoring the significance of human values in an increasingly technology-driven world.
BACKGROUND The introduction of an adult onset Separation Anxiety Disorder in the DSM-V recognises that separation anxiety can occur at any stage across the lifespan. In this paper, we examine whether adult separation anxiety, which is known to occur when people are apart from other people close to them, can also develop when people are separated from animal companions. The social and individual psychological correlates of this reported phenomenon are examined. METHODS Participants (N = 313, aged 18-76, M = 41.89 years), completed demographic information and questionnaires measuring separation anxiety from companion animals and humans, attachment towards companion animals and humans, and social support. RESULTS Significant positive relationships were observed between separation anxiety from humans, people substitution and separation anxiety from animals. Participants with greater separation anxiety from animals also reported less social support and greater attachment anxiety involving humans. People substitution was also positively related to greater animal-related separation anxiety. Associations were generally weaker when cats were identified as the principal companion animal. Participants without children reported significantly less attachment-related avoidance (human); less perceived social support; greater people substitution; and greater separation anxiety towards companion animals. Separation anxiety from humans, attachment avoidance, and attachment anxiety accounted for 41% of variance in separation anxiety from animals. LIMITATIONS The correlational design does not allow the investigation of causal associations. CONCLUSIONS A strong, positive relationship was observed between human-related separation anxiety and animal-related separation anxiety, which was significantly stronger for people with lower levels of social support.
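The final model reported above, three predictors jointly accounting for 41% of the variance in animal-related separation anxiety, corresponds to an ordinary multiple regression. The sketch below is a hypothetical illustration; the variable and file names are assumptions.

```python
# Multiple regression sketch for the 41%-variance-explained result.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pet_separation.csv")  # hypothetical file
model = smf.ols(
    "animal_sep_anxiety ~ human_sep_anxiety + attach_avoidance + attach_anxiety",
    data=df,
).fit()
print(model.rsquared)  # proportion of variance explained (about .41 in the paper)
```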