The negative effects of AI sycophancy on real-world interpersonal interaction: mechanisms and boundaries.
Definition, Mechanisms, and Quantitative Measurement of AI Sycophancy
These papers focus on defining the concept of AI sycophancy, building theoretical models (such as the AISPM), and developing quantitative instruments (such as the Social Sycophancy Scale) to assess how AI manipulates interactions by validating users' views.
- A Rational Analysis of the Effects of Sycophantic AI (Rafael M. Batista, Thomas L. Griffiths, 2026, ArXiv Preprint)
- From Yes-Men to Truth-Tellers: Addressing Sycophancy in Large Language Models with Pinpoint Tuning (Wei Chen, Zhen Huang, Liang Xie, Binbin Lin, Houqiang Li, Le Lu, Xinmei Tian, Deng Cai, Yonggang Zhang, Wenxiao Wang, Xu Shen, Jieping Ye, 2024, ArXiv Preprint)
- Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence (Myra Cheng, Cinoo Lee, Pranav Khadpe, Sunny Yu, Dyllan Han, Dan Jurafsky, 2025, ArXiv Preprint)
- The Social Sycophancy Scale: A psychometrically validated measure of sycophancy (Jean Rehani, Victoria Oldemburgo de Mello, Dariya Ovsyannikova, Ashton Anderson, Michael Inzlicht, 2026, ArXiv Preprint)
- Alignment Without Understanding: A Message- and Conversation-Centered Approach to Understanding AI Sycophancy (Lihua Du, Xing Lyu, Lezi Xie, Bo Feng, 2025, ArXiv Preprint)
Negative Effects of AI Companionship on Interpersonal Relationships and Social Connection
These studies examine how AI companionship technologies, by offering low-pressure, judgment-free interaction, lead to users' estrangement from real interpersonal relationships, emotional dependence, and the atrophy of social skills.
- 从AI到爱有多远:AI伴侣的情感慰藉与风险考量 [How Far Is It from AI to Love? Emotional Comfort and Risk Considerations of AI Companions] (Unknown Authors, Unknown Journal)
- Artificial Companions, Real Connections? (Milovan Savic, 2024, M/C Journal)
- From Human Bonds to Artificial Support: Perceived AI Empathy, Dependence on AI Chatbots, and Social Disconnection among Emerging Adults (A. Nayyar, A. Maqsood, 2026, Qlantic Journal of Social Sciences and Humanities)
- “AI恋人”:新型人机亲密关系的建构逻辑及伦理困境 [“AI Lovers”: The Construction Logic and Ethical Dilemmas of a New Type of Human-Machine Intimacy] (Unknown Authors, Hans Publishers / 汉斯出版社)
- The Quest for Connection in AI Companions (Michael Baggot, 2025, Journal of Ethics and Emerging Technologies)
- Can Artificial Intelligence Products Replace the Traditional Model of Intimate Relationships? (Zhiwen Hao, 2025, International Journal of Asian Social Science Research)
- Artificial Love: The Rise of AI in Human Relationships (Dhruvitkumar Talati, 2025, International Journal of Latest Technology in Engineering Management & Applied Science)
- Customization, Connection, and Control: Reimagining Intimacy in the Age of Artificial Partnership (Maja T. Jerrentrup, Martín Villalba, 2025, Advances in Social Sciences and Management)
- Coexistence Challenges in the AI Era: Social Robots and Human Networks (Yunshi Ye, 2025, Journal of Global Trends in Social Science)
Cognitive Biases and Critical Thinking in AI-Assisted Decision-Making
These papers address the epistemic risks of AI as a decision-making tool, such as over-reliance, information cocoons, and the erosion of critical thinking, and explore design interventions that foster more rational interaction.
- The Homogenizing Engine: AI's Role in Standardizing Culture and the Path to Policy (Yalda Daryani, Zhivar Sourati, Morteza Dehghani, 2025, Policy Insights from the Behavioral and Brain Sciences)
- “Why do I flake at the last minute?”: Tenor, self-inquiry, and neoliberal discourses in interactions with LLM chatbots (Michele Zappavigna, 2025, Discourse Studies)
- How does AI Impact Human Behaviour? The Interplay among Research, Design, and Policy Perspectives (Vicky Charisi, 2025, Proceedings of the 3rd International Conference of the ACM Greek SIGCHI Chapter)
- Should I Follow AI-based Advice? Measuring Appropriate Reliance in Human-AI Decision-Making (Max Schemmer, Patrick Hemmer, Niklas Kühl, Carina Benz, Gerhard Satzger, 2022, ArXiv Preprint)
- Designing AI Systems that Augment Human Performed vs. Demonstrated Critical Thinking (Katelyn Xiaoying Mei, Nic Weber, 2025, ArXiv Preprint)
- Cognitive Dissonance Artificial Intelligence (CD-AI): The Mind at War with Itself. Harnessing Discomfort to Sharpen Critical Thinking (Delia Deliu, 2025, ArXiv Preprint)
- Enhancing Critical Thinking with AI: A Tailored Warning System for RAG Models (Xuyang Zhu, Sejoon Chang, Andrew Kuik, 2025, ArXiv Preprint)
- Why We Need to Destroy the Illusion of Speaking to A Human: Critical Reflections On Ethics at the Front-End for LLMs (Sarah Diefenbach, Daniel Ullrich, 2026, ArXiv Preprint)
Emotional Mechanisms and Ethical Governance Frameworks in Human-AI Interaction
Through empirical and theoretical analysis, these studies explore why users turn to AI as an "emotional sanctuary" and propose responses to the challenges of human-AI coexistence from the perspectives of technical design, ethical governance, and interdisciplinary collaboration.
- One Person Dialogues: Concerns About LLM-Human Interactions (Darren Frey, D. Weiss, 2025, Harvard Data Science Review)
- When in need of emotional support, human or AI? An exploratory study on Chinese women's selection mechanism of emotional information sources (Zhaotong Wu, Yunyi Hu, Hui Yan, 2026, Information Research: An International Electronic Journal)
- Understanding Opportunities and Risks of Synthetic Relationships: Leveraging the Power of Longitudinal Research with Customised AI Tools (Alfio Ventura, Nils Köbis, 2024, ArXiv Preprint)
- AI情感陪伴技术的双刃剑效应:基于Y高校空巢青年群体的实证研究 [The Double-Edged Sword Effect of AI Emotional-Companionship Technology: An Empirical Study of "Empty-Nest" Youth at University Y] (Unknown Authors, Unknown Journal)
This literature review organizes research on AI sycophancy and its effects on interpersonal interaction along four dimensions: definition and measurement grounded in the technology itself; studies of the consequences of AI companionship for social estrangement; cognitive analyses of AI's influence on decision-making and critical thinking; and ethical and governance inquiries into the mechanisms of human-AI emotional interaction. Together, these studies show that while AI provides convenience, it also poses potential risks to human cognitive independence and deep social connection.
26 relevant papers in total.
From a technical standpoint, AI emotional companionship relies mainly on generative dialogue models (such as ChatGPT) that analyze massive corpora of human conversation [11]-[13], learning to simulate human linguistic and emotional patterns. These systems can remember users' preferences and provide personalized ...
Artificial intelligence (AI) chatbots have become pervasive in everyday life in the digital era, restructuring how emerging adults relate to one another, seek support, and meet emotional needs. This study examined the relationships among perceived empathy of AI chatbots, social disconnection, and AI dependence among emerging adults in Pakistan. It hypothesized that perceived AI empathy would positively predict AI dependence and negatively predict human connection, and that dependence on AI chatbots would mediate the relationship between perceived AI empathy and social disconnection. A correlational research design was used. A sample of emerging adults (N=242) completed standardized measures: the Interpersonal Reactivity Index–Empathic Concern Subscale (Davis, 1983), the Generative AI Dependency Scale (Goh et al., 2025), and the Social Connectedness Scale (Lee et al., 1995). Results showed significant associations among perceived empathy of AI chatbots, social disconnection, and AI dependency, with AI dependence partially mediating the link between perceived empathy and diminished human connection. The findings indicate that the artificial availability of empathy contributes to increased social disconnection among emerging adults. The study discusses implications for digital wellbeing and for policies that make digital spaces safe to use without sacrificing social interaction.
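The mediating pathway reported above (perceived empathy to AI dependence to social disconnection) is a standard indirect-effect design. As a hedged illustration, the sketch below estimates an indirect effect with two OLS regressions and a percentile bootstrap; the column names (empathy, dependence, disconnection) and the simulated data are hypothetical, not the authors' data or code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def indirect_effect(df: pd.DataFrame) -> float:
    """a*b indirect effect: empathy -> dependence -> disconnection."""
    a = smf.ols("dependence ~ empathy", data=df).fit().params["empathy"]
    b = smf.ols("disconnection ~ dependence + empathy", data=df).fit().params["dependence"]
    return a * b

def bootstrap_ci(df: pd.DataFrame, n_boot: int = 2000, seed: int = 0):
    """Percentile bootstrap 95% CI for the indirect effect."""
    rng = np.random.default_rng(seed)
    effects = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(df), len(df))  # resample rows with replacement
        effects.append(indirect_effect(df.iloc[idx]))
    return np.percentile(effects, [2.5, 97.5])

# Simulated data with a built-in partial mediation, mirroring the study's N=242.
rng = np.random.default_rng(1)
empathy = rng.normal(size=242)
dependence = 0.5 * empathy + rng.normal(size=242)
disconnection = 0.4 * dependence + 0.2 * empathy + rng.normal(size=242)
df = pd.DataFrame({"empathy": empathy, "dependence": dependence,
                   "disconnection": disconnection})
print(indirect_effect(df), bootstrap_ci(df))  # indirect effect ~0.2, CI excludes 0
```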
The rapid integration of social robots into everyday life in China is transforming human social networks, creating unprecedented “coexistence dilemmas” that blend opportunities for companionship with significant ethical challenges. This study examines the emergence of social robots as quasi-human actors, their capacity to engage in emotional and social interactions, and the resulting tensions with established interpersonal norms. Drawing on the Chinese policy context and ethical theories—including virtue ethics, responsibility ethics, and the ethics of care—the paper analyzes four core issues: anthropomorphic misalignment, privacy and data security risks, ambiguity in responsibility attribution, and emotional manipulation. Case studies such as Xiaoice illustrate how design choices, corporate practices, and regulatory gaps influence these challenges. The study proposes a multi-level ethical framework that combines human-centered design principles, robust accountability mechanisms, cultural sensitivity, and user education to ensure that social robots enhance rather than erode human relationships. By aligning technological innovation with ethical governance, this framework aims to guide the harmonious integration of social robots into China’s evolving socio-technical landscape.
Introduction. This study aims to unveil the selection mechanism of information sources among female users for their emotional information needs, exploring the specific role GenAI plays in their existing emotional support network. Method. Adopting a phased qualitative approach of netnography followed by semi-structured interviews, this research collected rich narrative data from 17 Chinese female users. Analysis. The data was analysed using thematic analysis, focusing on three core evaluation dimensions for information source selection: accessibility, interactivity, and the crucial aspect of credibility. Results. The findings reveal that GenAI is regarded as an ‘emotional sanctuary’ to avoid interpersonal risks and satisfy intra-psychic needs. In contrast, while human sources offer deeper emotional resonance and social connection, they are often associated with higher intra-psychic risks. GenAI and human sources play complementary roles. Conclusion(s). The constructed ‘perceived benefits and risks assessment model’ visually elucidates this selection mechanism, providing a new theoretical reference for understanding emerging human-AI emotional interactive relationships.
Abstract: With the rapid progress of artificial intelligence (AI) technology, AI has expanded dramatically into nearly all aspects of human life, especially private life. This article explores instances of AI as a displacement of, or augmentation to, human connection, and the ramifications of AI-based companions. As AI becomes more ordinary in everyday life, the question arises: can AI truly replicate the tone and subtlety of human connection, or is it only a supplement? New AI technologies such as chatbots, virtual assistants, and social robots are built to hold users in conversation and provide emotional comfort. These developments suggest that AI can fill gaps in social interaction when people feel lonely or anxious about socializing. For instance, AI-powered mental health apps can generate therapeutic conversation as a substitute for companionship. Though these technologies can imitate human contact, they routinely miss the emotional attunement and understanding at the heart of human relationships. The intricacies of human emotion, rooted in shared experiences, feelings, and interpersonal relationships, remain hard for AI to grasp and mimic. Even in dating and social networking, the introduction of AI says something about its future influence on human relationships. Sophisticated algorithms screen user preferences and activity to make matches, simplifying the search for romantic partners. Although such programs can enrich communication by linking people, they also cast doubt on the credibility of relationships formed through them: users may interact with AI-generated profiles or personas that lack the emotional and psychological depth of genuine human contact. The psychological consequences of AI companionship are substantial. At the core is the fact that meaningful human relationships are crucial for emotional wellness, for lowering loneliness, and for overall life satisfaction. If people adopt AI as their primary source of companionship, they risk drifting further from authentic relationships and deepening their loneliness, since love arises from genuine human interaction, whereas AI rests on algorithms and data rather than genuine feeling and individual connection. Ethical issues also arise as AI becomes more entangled in personal relationships: the prospect of psycho-emotional influence through AI raises questions about consent and the authenticity of emotional perception. As society navigates these complexities, it is necessary to define frameworks that put human connection and wellbeing at the center. This includes promoting an understanding of the limits of AI in mimicking human emotion and encouraging the creation of technologies that complement, rather than detract from, interpersonal relationships. AI can enhance human relationships by providing assistance and companionship, but it cannot replace the depth and complexity of human relationships.
The subtleties of emotional resonance, connection, and shared experience are weighed against what the facts suggest is a substantial evidential base. As AI becomes more pertinent in our lives, we must be conscious of reaching an equilibrium in which the human part of how we communicate is preserved while we benefit from what AI can offer. Research into artificial love shows the importance of an ongoing conversation about the place of technology in our social lives, and the inseparability of our online and offline lives within that conversation.
The article evaluates artificial intimacy technologies in light of the human quest for connection, drawing on theology, philosophy, psychology, sociology, and pastoral experience. While AI companions promise emotional support and social engagement, they often foster unhealthy attachments, reinforce delusional thinking, and exacerbate mental health struggles. Responsible AI use can support social skills and therapy, but these benefits depend on proper technological design and human accompaniment. The article criticizes economic models that exploit users' emotions and data for profit or power. It also emphasizes the importance of ethical design standards, especially to safeguard vulnerable individuals from manipulation and misleading anthropomorphism, and calls for compliance testing, real-time harm detection, and transparent feedback mechanisms. The article also examines the spiritual implications of AI companionship and the risks entailed in deifying seemingly omniscient, omnipresent, and omnibenevolent systems. In response to these challenges, the Catholic Church's sacramental life, communal structures, and emphasis on relational virtue offer a counterbalance to artificial intimacy. The article provides guidance to families, educators, employers, and governments on encouraging embodied experiences that support meaningful interpersonal relationships.
The rise of generative artificial intelligence (AI) has led to the emergence of mimicomorphic (human-mimicking) emotional products such as AI companions and emotional companion robots, forcing humans to consider for the first time whether non-living entities can replace traditional partners. It is therefore necessary to explore, from an interdisciplinary perspective spanning psychology and sociology, how far AI products can go toward replacing traditional intimate interpersonal relationships. Based on theories of psychoanalysis and evolutionary psychology, and in combination with the circumstances of the target users, a human-machine mutual trust model is proposed, resting on three foundations: ability, kindness, and integrity. The risk of a crisis in the human-machine relationship is further quantified within the model. Although AI products offer stable, controllable, low-risk emotional companionship, can meet users' specific psychological needs, and have good application prospects in assisting child-rearing, they lack genuine subjectivity, empathy, and social embeddedness, and are limited along the dimensions of "deep connection" and "common development", which constrains their capacity to replace traditional relationships. AI products should therefore be regarded not as substitutes for existing intimate relationships but as supplements, or even "fallbacks", to them. In the future development of human-machine interaction, people need to balance technological development against humanistic orientation, build a new model of human-machine coevolution, and form a "spiritual home".
This study explores anticipated implications of wide-spread romantic relationships between humans and AI robots. Drawing on an interdisciplinary scientific dialogue, followed by qualitative interviews with media-savvy young adults in Germany, it examines perceptions of intimacy, authenticity, and self-determination in human-AI partnerships. Findings indicate that while participants recognize potential benefits – such as customization, availability, and emotional safety – they also express concerns about authenticity, empathy, and the erosion of interpersonal competence. Notions of “imperfection” and “realness” emerge as central values, suggesting that AI partners, however human-like, remain perceived as ontologically distinct from humans. Gender differences were notable, with female participants emphasizing autonomy and security, and males expressing greater skepticism. Overall, the study highlights the ambivalent interplay between technological idealization and human emotional complexity in shaping future intimate relations.
Potential disruptions to economic, educational, and political affairs have remained at the fore of conversations about the implications of LLMs; however, remarkably little attention has been paid to the potentially more immediate ethical, psychological, and sociological repercussions of these and similar technologies. In the following, the authors motivate a number of concerns about sustained LLM-Human interaction by contrasting these with ordinary conversational and social contexts. The foremost among these are ethical, especially potential losses of empathetic capabilities, but the authors note a number of possible related linguistic, behavioral, and cognitive consequences. This work is intended to motivate further research into these questions and concludes by offering suggestions for related empirical and theoretical analyses.
This paper examines how chatbot-mediated self-inquiry reflects and reproduces neoliberal discourses of emotional regulation and personal responsibility. The study analyses chatbot-mediated self-inquiry sampled from LMSYS-CHAT-1M and WildChat, two large datasets of human–chatbot conversations, to understand the kinds of social relations enacted. Drawing on Systemic Functional Linguistics (SFL), and specifically the tenor framework, the paper traces how chatbots manage interpersonal alignment. Focusing on tuning, a subsystem of tenor concerned with modulating interpersonal tone and risk, the findings reveal a consistent pattern of affiliative but non-committal alignment, in which chatbots render modalised support through lowered stakes, collectivised scope, and warmed spirit. These linguistic choices foster emotional reassurance while reframing structurally induced affect, such as burnout, rejection, or despair, as individualised challenges to be managed through personal resilience and self-regulation. By showing how chatbot discourse privileges normative adaptation over structural critique, the study contributes to broader debates about the social implications of AI-mediated communication and the ethical design of conversational technologies.
As Artificial Intelligence systems become increasingly embedded in our daily lives and activities, their influence on human behaviour is profound at multiple levels, but the emerging complexities make it challenging to investigate. This keynote talk explores the multifaceted ways AI shapes our decisions, learning, development, and societal norms. By bringing together scientific evidence from behavioural research on Human-AI Interaction, design practices in the development of (embodied) AI systems, and policy recommendations for responsible approaches to AI, the talk highlights the interdependencies among these domains in understanding and shaping the impact of AI on human behaviour. Based on specific use cases in these domains, the talk highlights how synergetic approaches are essential to ensure AI technologies align with human values and the public good. Scientific and technical advances in the field of Artificial Intelligence (AI) are rapidly transforming human activities, in work and leisure, at the individual, group, and societal levels. While these developments bring unique opportunities, such as the acceleration of scientific discovery, they raise concerns about societal impact and potential risks to human rights. To maximize the benefits and mitigate the emerging risks across sectors, there is an imperative need for a systematic and robust understanding of the role of AI in people's lives and societies. In this keynote talk, I argue that achieving meaningful progress in developing AI systems that positively influence human behaviour necessitates the integration of at least three distinct yet deeply interdependent perspectives: (i) cross-disciplinary and systematic scientific evidence of human-AI interaction at the micro-, meso-, and macro-levels; (ii) critical analysis of current and emerging design decisions in AI development; and (iii) examination of the policy contexts and frameworks that govern the deployment and use of AI at both local and global levels. These three domains are often (though not exclusively) represented by academia, industry, and governmental institutions, respectively. However, the interactions among them remain limited, and often misaligned, hindering the potential for cohesive, balanced, responsible, and human-centred AI innovation. A growing body of scientific research in Human-AI Interaction provides evidence that AI systems are not neutral tools: they actively shape human choices and social interactions. For example, it has been shown that the behaviour of a social robot can affect children's problem-solving processes as well as the social dynamics between children when they solve a cognitive task together with a social robot [3][4]. Similarly, there are consistent positive results about the role of embodied AI agents in the social skills of autistic children in specific settings [8][9]. In the field of AI-supported decision-making systems, a study with N=1200 professionals has shown that algorithmic biases affect human decision-making and that current practices of human oversight do not eliminate the existing biases and discrimination [6]. With the use of Generative AI (GenAI) applications not only as recommendation systems in decision-making processes but as decision-makers themselves, these challenges become even larger; for example, a recent small-scale study shows that interaction with AI agents in a writing task can create cognitive and socio-emotional dependencies, with potential risks to human autonomy [5].
Understanding these behavioural dynamics is essential for identifying both the positive and unintended consequences of AI in real-world settings. In this context, gaining a deeper understanding of the effects of AI on human behaviour requires considering the intentional design decisions behind AI applications. One prevalent model in today's AI design process is users' prolonged interaction with AI applications, be they conversational agents, intelligent tutoring systems, or social media. While these designs unlock unique opportunities for personalized interaction that have proved beneficial for users, they also raise concerns, especially for vulnerable populations, due to their close ties to prevailing business models and to providers' hunger for, and control over, data. Using Gibson's theory of affordances [7] and Vygotsky's socio-cultural theory [11] as theoretical foundations allows us not only to understand how specific design decisions affect users' behaviour, but also to critically examine these designs against human rights and values. Furthermore, experimentation with future designs offers unique potential not only to support human needs, but also to serve as a research instrument that uncovers new layers of understanding about human cognition, emotion, and social interaction. Rather than viewing design solely as the endpoint of research, it can be seen as a means of inquiry: an active, iterative process where AI systems are built both to serve users and to generate evidence for the next generation of AI applications. For example, studies on the use of GenAI applications by students have explored Socratic AI models, where learners engage in guided conversations with AI in educational settings [1], as well as creative problem-solving with a "supermind ideator" [10]. By deliberately creating AI systems based on scientific evidence, future designs can bridge the gap between controlled experimental settings and the complexity of real-world contexts, enabling longitudinal, ecologically valid insights about human-AI dynamics. The design of AI applications and research on the impact of AI on human behaviour are highly interdependent with policy decisions at both local and global levels. To safeguard societal wellbeing and ensure alignment with human values, national governments and international organizations are establishing guardrails that influence how AI is developed and deployed. For example, UNICEF published the Policy Guidance on AI and Children, which was later piloted by companies and informed the design of AI applications, especially for children. At the same time, effective policy-making benefits greatly from systematic scientific evidence: robust behavioural research can illuminate risks, guide responsible design choices, and anticipate unintended consequences. In many cases, policy institutions use research contexts as "regulatory sandboxes": controlled environments where emerging technologies can be tested under real-world conditions, allowing policymakers to refine frameworks before large-scale implementation [2]. Bringing these three perspectives, research, design, and policy, into closer dialogue is essential for shaping AI systems that not only meet technical benchmarks but also promote human flourishing. This requires creating shared infrastructures where evidence, design practices, and regulatory insights can co-evolve, allowing for collective choices about the future of AI for social good.
Large Language Models (LLMs) have emerged as unprecedented drivers of cultural homogenization, operating at scales and speeds that exceed all previous technologies. This paper examines how LLMs reduce cultural diversity across three cultural domains. Drawing on recent empirical studies, we demonstrate that LLMs disproportionately reflect a narrow demographic, primarily western, liberal, high-income, highly educated, male populations from English-speaking nations, while marginalizing not only non-Western cultures but also diverse groups within Western societies, including older adults, religious communities, and minority populations. Unlike earlier technologies that primarily transmitted cultural content, LLMs actively shape communication styles and knowledge systems, creating a feedback loop where AI-generated content becomes training material for future systems, progressively standardizing human expression with each generation. These homogenizing effects extend beyond representation to behavioral influence, reshaping how users communicate and make decisions. We propose targeted policy interventions across the LLM development pipeline and emphasize the critical need for standardized benchmarks to evaluate how well LLMs understand and represent diverse cultures across all stages. These interventions require coordinated action among AI developers, policymakers, social scientists, and diverse cultural communities to ensure that cultural diversity becomes a non-negotiable requirement rather than an optional enhancement. Without such efforts, AI risks eroding humanity's cultural plurality, replacing diverse traditions with homogenized norms shaped by a narrow subset of the global population.
In the increasingly digitised world, the line between the natural and the artificial continues to blur, especially in social interactions. Artificial Intelligence (AI) has rapidly permeated various aspects of our lives (Walsh), transforming how we interact with technology and each other. This technological revolution coincides with emerging public health concerns about loneliness and social isolation, dubbed a "loneliness epidemic" by the U.S. Surgeon General (Murthy), indicating a widespread decline in social connection. In this context, AI social companions are being marketed as potential solutions (Owen), promising always-available support and companionship to fill this social void. However, this trend raises ethical questions about the nature of care, the potential for emotional dependency on artificial entities, and the long-term implications for human social skills and relationships. People have long sought to interact with computers and devices in ways that mirror human interactions with each other. Interestingly, the very first chatbot, ELIZA, developed in the 1960s, was not designed to automate tasks or increase productivity but to simulate a psychotherapist providing care (Weizenbaum). Human fascination with artificial companions has endured from ELIZA to today's advanced language models (Walsh). Recent leaps in AI capabilities, exemplified by platforms like ChatGPT and Replika (among others), coupled with the ubiquity of smart devices, have catapulted the concept of AI social companions from science fiction into daily reality for many. This article explores the intersection of AI companionship and social connection through the Ethics of Care framework (Gilligan; Noddings), emphasising context, reciprocity, and responsiveness in relationships. Building on recent scholarship examining artificial sociality (Natale and Depounti), it examines the artificial nature of AI-human interactions and their potential impact on human-to-human connections, unpacking implications for individual and societal wellbeing. To ground the discussion in a concrete example, I will examine Replika, a popular AI companion app, as a case study to illustrate the complexities and ethical challenges of these technologies. By flagging critical ethical concerns, the article calls for proactive regulation and thoughtful design of these technologies. This analysis aims to guide future research, ethical design, and governance frameworks so that we can harness the benefits of AI companions while mitigating risks to human social connection and emotional health.
Understanding Social Connection and AI Companions
Social connection is a multifaceted concept encompassing the quality and nature of relationships that individuals maintain across various social circles. This complex, dynamic process evolves over time, progressing from initial encounters to deep feelings of belonging (Haski-Leventhal and Bardal). Social connection encompasses the relationships people need, from close connections that provide emotional support, to wider community affiliations that sustain a sense of belonging. It includes allies offering social support, reciprocal help, and groups fostering shared interests (Farmer et al.). Importantly, social connection is not a static state but rather like a 'muscle' that requires regular exercise and nurturing to build, maintain, and strengthen. Building social connections requires time, effort, and a supportive environment.
Crucially, the foundation of social connection rests on factors such as safety, inclusion, and accessibility (Farmer et al.). These elements create the conditions for individuals to feel secure and welcome to engage with others. Social connection often develops through shared experiences and activities. As such, it is inherently relational and grounded in reciprocity, care, and nonjudgmental interactions. The absence or disruption of these connections can lead to different types of loneliness: intimate loneliness arises from a lack of close, supportive relationships; relational loneliness reflects insufficient quality friendships or family ties; and collective loneliness pertains to disconnection from larger social groups (Cacioppo and Cacioppo). These dimensions foreground the importance of balanced social connections, mitigating feelings of isolation and loneliness and enhancing overall health and wellbeing. The appeal of AI companions lies in their constant availability, non-judgmental approach, and ability to provide tailored (albeit artificial) emotional support. Research by Guingrich and Graziano suggests that users of companion bots report benefits to their social health, while non-users perceive them as potentially harmful. Interestingly, the perception of companion bots as more conscious and human-like correlated with more positive views and apparent social health benefits. Studies also indicate that users of platforms like Replika experience joyful and beneficial interactions during long-term engagement (Siemon et al.). Beyond general social health, Wygnanska found that such chatbots can serve as virtual companions and even therapists, assisting individuals in their daily lives. This may be particularly beneficial for those who avoid seeking help due to the stigma or costs associated with mental health issues. The potential of AI companions extends to specific contexts as well. Wang et al. examined their use in online learning environments, arguing that AI plays a crucial role in facilitating social connection and addressing social isolation in these settings. However, Wang et al. also note that the design of AI-mediated social interaction is complex, requiring a careful balance between AI performance and ethical considerations. Merrill adds that the social presence and warmth of these AI companions are important factors in their effectiveness for individuals experiencing loneliness, suggesting the importance of designing AI companions that can convincingly simulate empathy and emotional warmth. However, the artificial nature of these interactions raises questions. While AI companions can simulate attentiveness and provide emotional support, they fundamentally lack the capacity for genuine empathy and reciprocity that characterise human relationships. This disparity becomes particularly apparent when viewed through the lens of the Ethics of Care framework. The portrayal of AI-powered social companions in popular culture, as seen in films like Her and I Am Your Man, has shaped public perception of AI. These narratives delve into the ethics and morality of human-robot relationships, raising questions about the nature of love and the potential consequences of becoming too dependent on artificial intelligence. While embodied companions are not yet widely available (as in I Am Your Man), the rise of chat-based services brings this concept closer to reality. These cultural narratives play a significant role in shaping public expectations and perceptions of AI companions. 
In turn, these expectations influence the development, marketing, and adoption of AI companion technologies, creating a feedback loop between fiction and reality in artificial social connections.
A Brief History of Social AI Companions
The history of artificial chatbots dates to the early days of AI research. Alan Turing, often considered the father of AI, introduced the Turing Test in the 1950s, a measure of a machine's ability to exhibit intelligent behaviour indistinguishable from that of a human (Turing). This foundational idea laid the groundwork for future developments in conversational agents. The first chatbot, ELIZA, was created by Joseph Weizenbaum in 1966. ELIZA simulated a conversation with a psychiatrist, demonstrating the potential for machines to engage in human-like conversations (Weizenbaum). Interestingly, ELIZA was personified as feminine, reflecting societal attitudes toward gender and caregiving roles. Following ELIZA, more sophisticated chatbots emerged. PARRY, developed in 1972, simulated a person with paranoid schizophrenia (Colby), while RACTER, created in 1984, could generate English-language prose (Chamberlain). The advent of the World Wide Web brought about a new era for chatbots. SmarterChild, launched in 2001, was one of the first widely accessible chatbots integrated into instant messaging platforms (Schumaker et al.). The introduction of digital assistants in the 2010s marked a significant leap forward. Apple's Siri (2011), Google's Assistant (2016), Amazon's Alexa (2014), and Microsoft's Cortana (2014) brought AI-powered conversational interfaces to the pockets of millions of users worldwide (Dale). More sophisticated chatbots emerged as natural language processing and machine learning technologies advanced. IBM's Watson, which competed on Jeopardy! (a popular American television quiz show) in 2011, demonstrated AI's potential to understand and respond to complex language queries (Ferrucci et al.). This evolution continued with Microsoft's XiaoIce in 2015, shifting towards more socially oriented AI companions designed to be empathetic and adapt to individual users (Zhou et al.). These developments set the stage for a new generation of AI companions, exemplified by Replika, which would push the boundaries of human-AI interaction by engaging in open-ended conversations and forming a kind of 'relationship' with its users (Skjuve et al.).
Case Study: Replika and the Commodification of Care
Replika, founded by Eugenia Kuyda in 2017, exemplifies the complexities surrounding AI companions. Inspired by the loss of a friend, Kuyda aimed to create a personal AI that could offer helpful conversation and aid in self-expression (Owen). This origin story points to the human desire for connection that often drives the development of AI companions. Replika's design provides a safe space for users to explore their emotions without fear of judgment (Owen). The AI companion is coded to be supportive and adaptive, creating
AI sycophancy is increasingly recognized as a harmful alignment problem, but research remains fragmented and underdeveloped at the conceptual level. This article redefines AI sycophancy as the tendency of large language models (LLMs) and other interactive AI systems to excessively and/or uncritically validate, amplify, or align with a user's assertions, whether these concern factual information, cognitive evaluations, or affective states. Within this framework, we distinguish three types of sycophancy: informational, cognitive, and affective. We also introduce personalization at the message level and critical prompting at the conversation level as key dimensions for distinguishing and examining different manifestations of AI sycophancy. Finally, we propose the AI Sycophancy Processing Model (AISPM) to examine the antecedents, outcomes, and psychological mechanisms through which sycophantic AI responses shape user experiences. By embedding AI sycophancy in the broader landscape of communication theory and research, this article seeks to unify perspectives, clarify conceptual boundaries, and provide a foundation for systematic, theory-driven investigations.
This position paper discusses the benefits of longitudinal behavioural research with customised AI tools for exploring the opportunities and risks of synthetic relationships. Synthetic relationships are defined as "continuing associations between humans and AI tools that interact with one another wherein the AI tool(s) influence(s) humans' thoughts, feelings, and/or actions." (Starke et al., 2024). These relationships can potentially improve health, education, and the workplace, but they also bring the risk of subtle manipulation and privacy and autonomy concerns. To harness the opportunities of synthetic relationships and mitigate their risks, we outline a methodological approach that complements existing findings. We propose longitudinal research designs with self-assembled AI agents that enable the integration of detailed behavioural and self-reported data.
Large Language Model (LLM) sycophancy is a growing concern. The current literature has largely examined sycophancy in contexts with clear right and wrong answers, like coding. However, AI is increasingly being used for emotional support and interpersonal conversation, where no such ground truth exists. Building on a previous conceptualization of Social Sycophancy, this paper provides a psychometrically validated measure of sycophancy that relies on LLM behavior rather than comparisons with ground truth. We developed and validated the Social Sycophancy Scale in three samples (N = 877) and tested its applicability with automated methods. In each study, participants read conversations between an LLM and a user and rated the chatbot on a battery of items. Study 1 investigated an initial item pool derived from dictionary definitions and previous literature, serving as the explorative base for the following studies. In Study 2, we used a revised item set to establish our scale, which was subsequently confirmed in Study 3 and tested using LLM raters in Study 4. Across studies, the data support a three-factor structure (Uncritical Agreement, Obsequiousness, and Excitement) with an underlying sycophantic construct. LLMs prompt-tuned to be highly sycophantic scored higher than their low-sycophancy counterparts on both overall sycophancy and its three facets across Studies 2 to 4. The nomological network of sycophancy revealed a consistent link with empathy, a pairing that raises uncomfortable questions about AI design, and a multivalent pattern: one facet was associated with favorable perceptions (Excitement), another with unfavorable ones (Obsequiousness), and a third was ambiguous (Uncritical Agreement). The Social Sycophancy Scale gives researchers the means to study sycophancy rigorously and to confront a genuine design tension: the warmth and empathy we want from AI may be precisely what makes it sycophantic.
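To make the scale's structure concrete, here is a minimal scoring sketch: item ratings are averaged within each of the three reported facets, and the facets are averaged into an overall score. The item-to-facet assignment below is a hypothetical placeholder; the validated items and any weighting are in the original paper.

```python
import pandas as pd

# Hypothetical item-to-facet mapping; the validated assignments are in the paper.
FACETS = {
    "uncritical_agreement": ["item_1", "item_2", "item_3"],
    "obsequiousness":       ["item_4", "item_5", "item_6"],
    "excitement":           ["item_7", "item_8", "item_9"],
}

def score_sycophancy(ratings: pd.DataFrame) -> pd.DataFrame:
    """Mean facet scores plus an overall score, one row per rated conversation."""
    scores = pd.DataFrame({facet: ratings[items].mean(axis=1)
                           for facet, items in FACETS.items()})
    scores["sycophancy_total"] = scores.mean(axis=1)
    return scores
```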
The recent rapid advancement of LLM-based AI systems has accelerated our search and production of information. While the advantages brought by these systems seemingly improve the performance or efficiency of human activities, they do not necessarily enhance human capabilities. Recent research has started to examine the impact of generative AI on individuals' cognitive abilities, especially critical thinking. Based on definitions of critical thinking across psychology and education, this position paper proposes the distinction between demonstrated and performed critical thinking in the era of generative AI and discusses the implication of this distinction in research and development of AI systems that aim to augment human critical thinking.
AI-augmented systems are traditionally designed to streamline human decision-making by minimizing cognitive load, clarifying arguments, and optimizing efficiency. However, in a world where algorithmic certainty risks becoming an Orwellian tool of epistemic control, true intellectual growth demands not passive acceptance but active struggle. Drawing on the dystopian visions of George Orwell and Philip K. Dick - where reality is unstable, perception malleable, and truth contested - this paper introduces Cognitive Dissonance AI (CD-AI): a novel framework that deliberately sustains uncertainty rather than resolving it. CD-AI does not offer closure, but compels users to navigate contradictions, challenge biases, and wrestle with competing truths. By delaying resolution and promoting dialectical engagement, CD-AI enhances reflective reasoning, epistemic humility, critical thinking, and adaptability in complex decision-making. This paper examines the theoretical foundations of the approach, presents an implementation model, explores its application in domains such as ethics, law, politics, and science, and addresses key ethical concerns - including decision paralysis, erosion of user autonomy, cognitive manipulation, and bias in AI reasoning. In reimagining AI as an engine of doubt rather than a deliverer of certainty, CD-AI challenges dominant paradigms of AI-augmented reasoning and offers a new vision - one in which AI sharpens the mind not by resolving conflict, but by sustaining it. Rather than reinforcing Huxleyan complacency or pacifying the user into intellectual conformity, CD-AI echoes Nietzsche's vision of the Uebermensch - urging users to transcend passive cognition through active epistemic struggle.
Both the general public and academic communities have raised concerns about sycophancy, the phenomenon of artificial intelligence (AI) excessively agreeing with or flattering users. Yet, beyond isolated media reports of severe consequences, like reinforcing delusions, little is known about the extent of sycophancy or how it affects people who use AI. Here we show the pervasiveness and harmful impacts of sycophancy when people seek advice from AI. First, across 11 state-of-the-art AI models, we find that models are highly sycophantic: they affirm users' actions 50% more than humans do, and they do so even in cases where user queries mention manipulation, deception, or other relational harms. Second, in two preregistered experiments (N = 1604), including a live-interaction study where participants discuss a real interpersonal conflict from their life, we find that interaction with sycophantic AI models significantly reduced participants' willingness to take actions to repair interpersonal conflict, while increasing their conviction of being in the right. However, participants rated sycophantic responses as higher quality, trusted the sycophantic AI model more, and were more willing to use it again. This suggests that people are drawn to AI that unquestioningly validate, even as that validation risks eroding their judgment and reducing their inclination toward prosocial behavior. These preferences create perverse incentives both for people to increasingly rely on sycophantic AI models and for AI model training to favor sycophancy. Our findings highlight the necessity of explicitly addressing this incentive structure to mitigate the widespread risks of AI sycophancy.
People increasingly use large language models (LLMs) to explore ideas, gather information, and make sense of the world. In these interactions, they encounter agents that are overly agreeable. We argue that this sycophancy poses a unique epistemic risk to how individuals come to see the world: unlike hallucinations that introduce falsehoods, sycophancy distorts reality by returning responses that are biased to reinforce existing beliefs. We provide a rational analysis of this phenomenon, showing that when a Bayesian agent is provided with data that are sampled based on a current hypothesis the agent becomes increasingly confident about that hypothesis but does not make any progress towards the truth. We test this prediction using a modified Wason 2-4-6 rule discovery task where participants (N=557) interacted with AI agents providing different types of feedback. Unmodified LLM behavior suppressed discovery and inflated confidence comparably to explicitly sycophantic prompting. By contrast, unbiased sampling from the true distribution yielded discovery rates five times higher. These results reveal how sycophantic AI distorts belief, manufacturing certainty where there should be doubt.
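The Bayesian point can be reproduced in a toy model (an illustrative stand-in, not the paper's actual task or analysis): under the size principle, examples sampled only from the learner's current hypothesis keep confirming that hypothesis even when it is wrong, while unbiased samples from the true rule falsify it almost immediately. The hypothesis space and rules below are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rule-discovery setup over the numbers 1..100: the learner's current
# (wrong, overly narrow) hypothesis vs. the true rule.
HYPOTHESES = {
    "multiples of 4": set(range(4, 101, 4)),   # 25 numbers
    "even numbers":   set(range(2, 101, 2)),   # 50 numbers (the truth)
}

def posterior(examples):
    """Uniform prior + size principle: P(x | h) = 1/|h| if x in h, else 0."""
    logp = {}
    for name, h in HYPOTHESES.items():
        inside = all(int(x) in h for x in examples)
        logp[name] = -len(examples) * np.log(len(h)) if inside else -np.inf
    z = np.logaddexp.reduce(list(logp.values()))
    return {name: float(np.exp(lp - z)) for name, lp in logp.items()}

def sample(rule, n=10):
    return rng.choice(sorted(rule), size=n)

# Sycophantic feedback: examples drawn from the learner's own hypothesis.
# Every example also fits the truth, so nothing falsifies the narrow belief,
# and the size principle makes the learner ever more confident in it.
print(posterior(sample(HYPOTHESES["multiples of 4"])))  # ~0.999 for "multiples of 4"

# Unbiased feedback: examples drawn from the true rule. An even number that
# is not a multiple of 4 appears with high probability and falsifies it.
print(posterior(sample(HYPOTHESES["even numbers"])))    # ~1.0 for "even numbers"
```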
Large Language Models (LLMs) tend to prioritize adherence to user prompts over providing veracious responses, leading to the sycophancy issue. When challenged by users, LLMs tend to admit mistakes and provide inaccurate responses even if they initially provided the correct answer. Recent works propose to employ supervised fine-tuning (SFT) to mitigate the sycophancy issue, but this typically leads to the degeneration of LLMs' general capability. To address the challenge, we propose a novel supervised pinpoint tuning (SPT), where only the region-of-interest modules are tuned for a given objective. Specifically, SPT first reveals and verifies a small percentage (<5%) of the basic modules that significantly affect a particular behavior of LLMs, i.e., sycophancy. Subsequently, SPT merely fine-tunes these identified modules while freezing the rest. To verify the effectiveness of the proposed SPT, we conduct comprehensive experiments demonstrating that SPT significantly mitigates the sycophancy issue of LLMs (even better than SFT). Moreover, SPT introduces limited or even no side effects on the general capability of LLMs. Our results shed light on how to precisely, effectively, and efficiently explain and improve the targeted ability of LLMs. Code and data are available at https://github.com/yellowtownhz/sycophancy-interpretability.
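The "tune the pinpointed modules, freeze the rest" step translates into a few lines of PyTorch; here is a minimal sketch. The attribution step that identifies the responsible <5% of modules is the paper's contribution and is not reproduced here, so the module-name prefixes below are hypothetical placeholders.

```python
import torch
import torch.nn as nn

def pinpoint_tune_params(model: nn.Module, target_prefixes: set) -> list:
    """Freeze all parameters except those under the identified modules;
    return the trainable parameters for the optimizer."""
    for name, param in model.named_parameters():
        param.requires_grad = any(name.startswith(p) for p in target_prefixes)
    return [p for p in model.parameters() if p.requires_grad]

# Usage sketch (names are hypothetical; real targets come from attribution):
# model = AutoModelForCausalLM.from_pretrained("some-llm")
# params = pinpoint_tune_params(model, {"model.layers.10.self_attn.o_proj",
#                                       "model.layers.11.self_attn.o_proj"})
# optimizer = torch.optim.AdamW(params, lr=1e-5)  # then run standard SFT on these only
```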
Many important decisions in daily life are made with the help of advisors, e.g., decisions about medical treatments or financial investments. Whereas in the past advice was often received from human experts, friends, or family, advisors based on artificial intelligence (AI) have become more and more common. Typically, the advice generated by an AI is judged by a human and either deemed reliable or rejected. However, recent work has shown that AI advice is not always beneficial, as humans have been shown to be unable to ignore incorrect AI advice, essentially representing over-reliance on AI. Therefore, the aspired goal should be to enable humans not to rely on AI advice blindly but rather to distinguish its quality and act upon it to make better decisions. Specifically, this means that humans should rely on the AI in the presence of correct advice and rely on themselves when confronted with incorrect advice, i.e., establish appropriate reliance (AR) on AI advice on a case-by-case basis. Current research lacks a metric for AR. This prevents a rigorous evaluation of factors impacting AR and hinders further development of human-AI decision-making. Therefore, based on the literature, we derive a measurement concept of AR. We propose to view AR as a two-dimensional construct that measures the ability to discriminate advice quality and behave accordingly. In this article, we derive the measurement concept, illustrate its application, and outline potential future research.
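One plausible way to operationalize the two dimensions is sketched below, under the assumption that each trial records whether the advice was correct, whether the person's initial judgment already matched it, and whether the final decision followed it. The paper's exact metric definitions are in the original article; this is only an illustration of the construct.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    ai_correct: bool      # was the AI's advice correct?
    initial_match: bool   # did the initial human judgment already match the advice?
    followed_ai: bool     # did the final decision follow the advice?

def appropriate_reliance(trials: list) -> tuple:
    """Two dimensions of appropriate reliance:
    - ai_reliance: among trials where correct advice contradicted the initial
      judgment, how often the person switched to the AI;
    - self_reliance: among trials where incorrect advice contradicted the
      initial judgment, how often the person stuck with themselves."""
    correct = [t for t in trials if t.ai_correct and not t.initial_match]
    incorrect = [t for t in trials if not t.ai_correct and not t.initial_match]
    ai_rel = sum(t.followed_ai for t in correct) / len(correct) if correct else float("nan")
    self_rel = sum(not t.followed_ai for t in incorrect) / len(incorrect) if incorrect else float("nan")
    return ai_rel, self_rel
```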
Retrieval-Augmented Generation (RAG) systems offer a powerful approach to enhancing large language model (LLM) outputs by incorporating fact-checked, contextually relevant information. However, fairness and reliability concerns persist, as hallucinations can emerge at both the retrieval and generation stages, affecting users' reasoning and decision-making. Our research explores how tailored warning messages -- whose content depends on the specific context of hallucination -- shape user reasoning and actions in an educational quiz setting. Preliminary findings suggest that while warnings improve accuracy and awareness of high-level hallucinations, they may also introduce cognitive friction, leading to confusion and diminished trust in the system. By examining these interactions, this work contributes to the broader goal of AI-augmented reasoning: developing systems that actively support human reflection, critical thinking, and informed decision-making rather than passive information consumption.
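The core mechanism, warning text keyed to where the hallucination arose, fits in a few lines; the categories and wording below are hypothetical stand-ins for the study's actual taxonomy and messages.

```python
from typing import Optional

# Hypothetical hallucination categories mapped to tailored warning text.
WARNINGS = {
    "retrieval": "The retrieved passage may not support this answer; check the cited source.",
    "generation": "Parts of this answer go beyond the retrieved evidence; verify before relying on it.",
}

def attach_warning(answer: str, detected: Optional[str]) -> str:
    """Append a context-dependent warning when a hallucination stage is detected."""
    if detected is None:
        return answer
    warning = WARNINGS.get(detected, "This answer may be unreliable.")
    return f"{answer}\n\nWarning: {warning}"
```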
Conversation with chatbots based on Large Language Models (LLMs) such as ChatGPT has become one of the major forms of interaction with Artificial Intelligence (AI) in everyday life. What makes this interaction so convenient is that interacting with LLMs feels natural and resembles what we know from real human conversations. At the same time, this seeming similarity is part of one of the ethical challenges of AI design, since it activates many misleading ideas about AI. We discuss similarities and differences between human-AI conversations and interpersonal conversation and highlight starting points for more ethical design of AI at the front-end.
Examining existing AI-companion applications, this paper finds that the user-AI companion relationship typically unfolds in three stages: a first encounter driven by curiosity and loneliness; development drawn along by adaptive communication and the absence of social pressure; and a deepening from emotional venting into emotional dependence ...
"AI lovers" also strain traditional ethical systems: individuals who over-rely on virtual emotional interaction become estranged from real interpersonal relationships, and algorithm-driven emotional interaction tends to reinforce information cocoons, deepening cognitive divides among members of society.