Conceptual Definitions and Research Methods for AI Anthropomorphism: Psychology and Human-Computer Interaction
Conceptual boundaries, theoretical origins, and measurability: definitions, triggering thresholds, operationalization, and measurement frameworks for anthropomorphism
This group focuses on what anthropomorphism is, where it comes from, when it is triggered, and how it can be reliably delineated and operationalized. On one side, it addresses conceptual boundaries (anthropomorphism vs. anthropomimesis), theoretical origins, and mechanistic starting points (social cognition, cognitive bases, and ToM-related explanatory premises); on the other, it stresses research feasibility: how anthropomorphic cues are manipulated in HCI, how measures, scales, and dimensional structures are constructed, and the threshold/classification conditions under which anthropomorphism is triggered, providing a unified definition-operationalization-measurement framework for subsequent causal and pathway research. A minimal scale-scoring sketch follows the reference list below.
- A Taxonomy of Linguistic Expressions That Contribute To Anthropomorphism of Language Technologies(Alicia DeVrio, Myra Cheng, Lisa Egede, Alexandra Olteanu, Su Lin Blodgett, 2025, ArXiv Preprint)
- Experimental Operationalizations of Anthropomorphism in HCI Contexts: A Scoping Review(R. Frazer, 2022, Communication Reports)
- Thinking Technology as Human: Affordances, Technology Features, and Egocentric Biases in Technology Anthropomorphism(Jianqing Zheng, S. Jarvenpaa, 2021, Journal of the Association for Information Systems)
- Social Cognition Unbound(Adam Waytz, Nicholas Epley, John T. Cacioppo, 2010, Current Directions in Psychological Science)
- Embodied social interaction constitutes social cognition in pairs of humans: A minimalist virtual reality experiment(Tom Froese, Hiroyuki Iizuka, Takashi Ikegami, 2014, ArXiv Preprint)
- Development and validation of the Attribution of Mental States Questionnaire (AMS-Q): A reference tool for assessing anthropomorphism(L. Miraglia, G. Peretti, F. Manzi, C. Di Dio, D. Massaro, A. Marchetti, 2023, Frontiers in Psychology)
- Disambiguating Anthropomorphism and Anthropomimesis in Human-Robot Interaction(Minja Axelsson, Henry Shevlin, 2026, ArXiv Preprint)
- Humanizing Machines: Rethinking LLM Anthropomorphism Through a Multi-Level Framework of Design(Yunze Xiao, Lynnette Hui Xian Ng, Jiarui Liu, Mona Diab, 2025, Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing)
- Beyond the Machine: An Integrative Framework of Anthropomorphism in AI(P. Curșeu, Ștefana Radu, 2026, Behavioral Sciences)
- Anthropomorphism on Risk Perception: The Role of Trust and Domain Knowledge in Decision-Support AI(Manuele Reani, Xiangyang He, Zuolan Bao, 2026, ArXiv Preprint)
- The Cognitive Bases of Anthropomorphism: From Relatedness to Empathy(G. Airenti, 2015, International Journal of Social Robotics)
- A Mind like Mine: The Exceptionally Ordinary Underpinnings of Anthropomorphism(Nicholas Epley, 2018, Journal of the Association for Consumer Research)
- LLM Theory of Mind and Alignment: Opportunities and Risks(Winnie Street, 2024, ArXiv Preprint)
- Seeing Minds in Others – Can Agents with Robotic Appearance Have Human-Like Preferences?(Molly C. Martini, Christian A. Gonzalez, E. Wiese, 2016, PLOS ONE)
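To make this group's measurement thread concrete, here is a minimal sketch of scoring a two-dimension, mind-perception-style instrument (Agency/Experience) from Likert responses. The item names, the two-factor assignment, and the reverse-coding flag are illustrative assumptions, not the published key of the AMS-Q or any cited scale.

```python
# Minimal sketch: averaging Likert items into Agency/Experience subscales.
# Item names and factor assignments are hypothetical, not a published key.
from statistics import mean

AGENCY_ITEMS = ["plans_ahead", "remembers", "exercises_self_control"]   # hypothetical
EXPERIENCE_ITEMS = ["feels_pain", "feels_joy", "has_desires"]           # hypothetical

def subscale_score(responses: dict[str, int], items: list[str],
                   reverse: set[str] = frozenset(), scale_max: int = 7) -> float:
    """Average 1..scale_max Likert responses, reverse-coding flagged items."""
    values = [(scale_max + 1 - responses[i]) if i in reverse else responses[i]
              for i in items]
    return mean(values)

responses = {"plans_ahead": 6, "remembers": 5, "exercises_self_control": 4,
             "feels_pain": 2, "feels_joy": 3, "has_desires": 2}
print("Agency:", subscale_score(responses, AGENCY_ITEMS))          # 5.0
print("Experience:", subscale_score(responses, EXPERIENCE_ITEMS))  # ~2.33
```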
Computational/Linguistic Measurement and Operationalized Metrics of Anthropomorphism (e.g., AnthroScore)
This group is devoted to the computational/linguistic measurement and indexing of anthropomorphism, moving it from subjective perception toward computable representations (e.g., automatic text-level metrics aligned with human judgments), a quantifiable research path distinct from questionnaires and experimental manipulation. A hedged sketch of such a metric follows the entry below.
- AnthroScore: A Computational Linguistic Measure of Anthropomorphism(Myra Cheng, Kristina Gligoric, Tiziano Piccardi, Dan Jurafsky, 2024, ArXiv Preprint)
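As a rough illustration of the AnthroScore idea (masking an entity mention and comparing a masked language model's probability mass on human versus non-human pronouns at that position), here is a hedged sketch. The model choice, pronoun sets, and scoring details are simplified assumptions; consult Cheng et al. (2024) for the actual specification.

```python
# Hedged sketch of an AnthroScore-style text metric: replace an entity
# mention with <mask> and compare the masked-LM probability mass on
# human vs. non-human pronouns. Simplified; not the paper's exact recipe.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

HUMAN_PRONOUNS = [" he", " she", " him", " her"]   # assumed pronoun sets
NONHUMAN_PRONOUNS = [" it", " its"]

def first_token_id(word: str) -> int:
    return tokenizer.encode(word, add_special_tokens=False)[0]

def anthro_score(sentence_with_mask: str) -> float:
    """Log-ratio of human vs. non-human pronoun probability at <mask>."""
    inputs = tokenizer(sentence_with_mask, return_tensors="pt")
    mask_index = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero()[0]
    with torch.no_grad():
        probs = model(**inputs).logits[0, mask_index].softmax(dim=-1).squeeze(0)
    p_human = sum(probs[first_token_id(w)] for w in HUMAN_PRONOUNS)
    p_nonhuman = sum(probs[first_token_id(w)] for w in NONHUMAN_PRONOUNS)
    return torch.log(p_human / p_nonhuman).item()

# Higher (less negative) scores suggest more anthropomorphic framing.
print(anthro_score("The chatbot apologized because <mask> understood the mistake."))
```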
Mind Attribution and ToM: From Processing Mechanisms to Responsibility/Moral Judgments and Causal Attribution
This group takes mind attribution/ToM as the core mediating process, examining how anthropomorphism is processed, understood, and translated into users' mental models and moral/responsibility judgments. It covers: the role of ToM in AI interaction and evaluation (including surveys and mechanism reviews); mind perception, embodiment evidence, and eye-movement/attention cues; and how anthropomorphic semantics (mentalistic wording, explanatory phrasing) shape attributions of responsibility, intent, and causality, including responsibility-transfer mechanisms (credit attribution, moral responsibility, culpability, and blame or blame-shielding).
- Exploring the relationship between anthropomorphism and theory‐of‐mind in brain and behaviour(R. Hortensius, Michael Kent, K. Darda, L. Jastrzab, Kami Koldewyn, Richard Ramsey, Emily S. Cross, 2020, Human Brain Mapping)
- On seeing human: a three-factor theory of anthropomorphism.(Nicholas Epley, A. Waytz, J. Cacioppo, 2007, Psychological Review)
- Mind the Eyes: Artificial Agents’ Eye Movements Modulate Attentional Engagement and Anthropomorphic Attribution(D. Ghiglino, C. Willemse, D. D. Tommaso, A. Wykowska, 2020, Frontiers in Robotics and AI)
- Embodied artificial agents for understanding human social cognition(A. Wykowska, T. Chaminade, G. Cheng, 2016, Philosophical Transactions of the Royal Society B: Biological Sciences)
- Mind Perception in HRI: Exploring Users’ Attribution of Mental and Emotional States to Robots with Different Behavioural Styles(Ilenia Cucciniello, S. SanGiovanni, Gianpaolo Maggi, Silvia Rossi, 2023, International Journal of Social Robotics)
- Missing links in social cognition: The continuum from nonhuman agents to dehumanized humans.(Virginia S. Y. Kwan, S. Fiske, 2008, Social Cognition)
- Interactive AI with a Theory of Mind(Mustafa Mert Çelikok, Tomi Peltola, Pedram Daee, Samuel Kaski, 2019, ArXiv Preprint)
- A Survey of Theory of Mind in Large Language Models: Evaluations, Representations, and Safety Risks(Hieu Minh "Jord" Nguyen, 2025, ArXiv Preprint)
- Theory of Mind for Explainable Human-Robot Interaction(Marie S. Bauer, Julia Gachot, Matthias Kerzel, Cornelius Weber, Stefan Wermter, 2025, ArXiv Preprint)
- Theory of Mind and Self-Disclosure to CUIs(Samuel Rhys Cox, 2025, ArXiv Preprint)
- Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents(M. Stuart, Markus Kneer, 2021, Proceedings of the ACM on Human-Computer Interaction)
- External and internal attribution in human-agent interaction: Insights from neuroscience and virtual reality(N Lauharatanahirun, AS Won, 2024, Human-Machine …)
- Attributions of intent and moral responsibility to AI agents(Reem Ayad, Jason E. Plaks, 2024, Computers in Human Behavior: Artificial Humans)
- Anthropomorphism-based causal and responsibility attributions to robots(Yuji Kawai, Tomohito Miyake, Jihoon Park, J. Shimaya, Hideyuki Takahashi, Minoru Asada, 2023, Scientific Reports)
- Responsibility Attribution in Human Interactions with Everyday AI Systems(Joe Brailsford, F. Vetere, Eduardo Velloso, 2025, Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems)
- Whose Voice Is It Anyway? Understanding AI Customization and Responsibility Attribution in Human-AI Collaboration(Yunzhijun Yu, Mehmet A. Yetim, Ovidiu C. Cocieru, 2025, International Journal of Human–Computer Interaction)
- Whose Job Is It Anyway? A Study of Human-Robot Interaction in a Collaborative Task(Pamela J. Hinds, Teresa L. Roberts, Hank Jones, 2004, Human–Computer Interaction)
- Public trust and blame attribution in human-AI interactions: a comparison between air traffic control and vehicle driving(Peidong Mei, Richard Cannon, Jim A. C. Everett, Peng Liu, Edmond Awad, 2025, Transportation Research Interdisciplinary Perspectives)
- More Similar Values, More Trust? -- the Effect of Value Similarity on Trust in Human-Agent Interaction(Siddharth Mehrotra, Catholijn M. Jonker, Myrthe L. Tielman, 2021, ArXiv Preprint)
- Mindful Explanations: Prevalence and Impact of Mind Attribution in XAI Research(Susanne Hindennach, Lei Shi, Filip Miletić, Andreas Bulling, 2023, ArXiv Preprint)
- Which Contributions Deserve Credit? Perceptions of Attribution in Human-AI Co-Creation(Jessica He, Stephanie Houde, Justin D. Weisz, 2025, ArXiv Preprint)
How Anthropomorphism Shapes Trust: Formation Mechanisms, Calibration/Repair, and Contextual/Failure Boundary Conditions
This group takes trust as the primary dependent variable, systematically studying how anthropomorphism (appearance, behavior, linguistic and empathic cues, and verbal behaviors such as self-referential talk, apologies, and explanations) affects trust formation and calibration, and how nonlinear, conditional effects arise under failure, risk, and different contexts (whether trust increases, whether resilience strengthens, how trust decays after failure). It also distinguishes and disentangles the trust-versus-anthropomorphism relationship, building a trust-pathway understanding usable for prediction and design. A toy trust-dynamics sketch follows the reference list below.
- How anthropomorphism affects trust in intelligent personal assistants(Qianling Chen, Hyun Jung Park, 2021, Industrial Management & Data Systems)
- Trust Formation in AI Delegation: The Interplay of Explainability and Anthropomorphism(Chenyang Li, Zhixuan Deng, Hao Ling, Xu Zhang, 2026, Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems)
- Trusting Your AI Agent Emotionally and Cognitively: Development and Validation of a Semantic Differential Scale for AI Trust(Ruoxi Shang, Gary Hsieh, Chirag Shah, 2024, ArXiv Preprint)
- To trust or not to trust a human(-like) AI—A scoping review and conjoint analyses on factors influencing anthropomorphism and trust(M. Reuter, Britta Marleen Kirchhoff, Thomas Franke, Thea Radüntz, Corinna Peifer, 2025, Zeitschrift für Arbeitswissenschaft)
- A Little Anthropomorphism Goes a Long Way(E. D. Visser, Samuel S. Monfort, Kimberly Goodyear, Li Lu, Martin O'Hara, Mary R. Lee, R. Parasuraman, F. Krueger, 2017, Human Factors: The Journal of the Human Factors and Ergonomics Society)
- User Trust on an Explainable AI-based Medical Diagnosis Support System(Yao Rong, Nora Castner, Efe Bozkir, Enkelejda Kasneci, 2022, ArXiv Preprint)
- How should intelligent agents apologize to restore trust? Interaction effects between anthropomorphism and apology attribution on trust repair(Taenyun Kim, Hayeon Song, 2021, Telematics and Informatics)
- The Impact of Revealing Large Language Model Stochasticity on Trust, Reliability, and Anthropomorphization(Chelse Swoopes, Tyler Holloway, Elena L. Glassman, 2025, ArXiv Preprint)
- Disentangling Trust and Anthropomorphism Toward the Design of Human-Centered AI Systems(Theodore Jensen, 2021, Lecture Notes in Computer Science)
- The Influence of Perceived Anthropomorphism and Social Presence on AI Interface User Experience: A Systematic Review(B. Williams, 2025, International Journal of Human–Computer Interaction)
- Exploring Interactions Between Trust, Anthropomorphism, and Relationship Development in Voice Assistants(W. Seymour, M. V. Kleek, 2021, Proceedings of the ACM on Human-Computer Interaction)
- Anthropomorphism and Trust in Human-Large Language Model interactions(Akila Kadambi, Ylenia D'Elia, Tanishka Shah, Iulia Comsa, Alison Lentz, Katie Siri-Ngammuang, Tara Buechler, Jonas Kaplan, Antonio Damasio, Srini Narayanan, Lisa Aziz-Zadeh, 2026, ArXiv Preprint)
- Fostering trust in human-robot interaction via perspective-taking and anthropomorphism: an empirical study in an industrial simulation game(M. Wittmann, Runjie Xie, Jeanine Kirchner-Krath, Benedikt Morschheuser, 2026, International Journal of Human-Computer Studies)
- Experimental Investigation of Trust in Anthropomorphic Agents as Task Partners(Akihiro Maehigashi, Takahiro Tsumura, Seiji Yamada, 2022, ArXiv Preprint)
- Almost human: Anthropomorphism increases trust resilience in cognitive agents.(Ewart J. de Visser, Samuel S. Monfort, Ryan McKendrick, Melissa A. Smith, Patrick McKnight, F. Krueger, R. Parasuraman, 2016, Journal of Experimental Psychology: Applied)
- The Effect of Anthropomorphism and Failure Comprehensibility on Human-Robot Trust(Eileen Roesler, L. Onnasch, 2020, Proceedings of the Human Factors and Ergonomics Society Annual Meeting)
- My colleague is an AI! Trust differences between AI and human teammates(Eleni Georganta, Anna-Sophie Ulfert, 2024, Team Performance Management: An International Journal)
- Do You Still Trust Me? Human-Robot Trust Repair Strategies(Connor Esterwood, L. Robert, 2021, 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN))
- Trust and Trustworthiness from Human-Centered Perspective in HRI - A Systematic Literature Review(D. Souza, Sonia Sousa, Kadri Kristjuhan-Ling, Olga Dunajeva, Mare Roosileht, Avar Pentel, Mati Mõttus, M. Özdemir, Zanna Gratsjova, 2025, Electronics)
- More Human-Likeness, More Trust?: The Effect of Anthropomorphism on Self-Reported and Behavioral Trust in Continued and Interdependent Human-Agent Cooperation(Philipp Kulms, S. Kopp, 2019, Proceedings of Mensch und Computer 2019)
- The Effect of Anthropomorphism on Trust in an Industrial Human-Robot Interaction(Tim Schreiter, Lucas Morillo-Mendez, Ravi T. Chadalavada, Andrey Rudenko, Erik Alexander Billing, Achim J. Lilienthal, 2022, ArXiv Preprint)
- The Dynamics of Trust and Verbal Anthropomorphism in Human-Autonomy Teaming(Myke C. Cohen, Mustafa Demir, Erin K. Chiou, N. Cooke, 2021, 2021 IEEE 2nd International Conference on Human-Machine Systems (ICHMS))
- Human vs. AI: Understanding the impact of anthropomorphism on consumer response to chatbots from the perspective of trust and relationship norms(Xusen Cheng, Xiaoping Zhang, Jason F. Cohen, Jian Mou, 2022, Information Processing & Management)
- From "AI" to Probabilistic Automation: How Does Anthropomorphization of Technical Systems Descriptions Influence Trust?(Nanna Inie, Stefania Druga, Peter Zukerman, Emily M. Bender, 2024, ArXiv Preprint)
- Which recommendation system do you trust the most? Exploring the impact of perceived anthropomorphism on recommendation system trust, choice confidence, and information disclosure(Yanyun Wang, Weizi Liu, Mike Yao, 2024, New Media & Society)
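To make "formation, calibration, decay after failure, and repair" concrete, here is a toy trust-update model in the spirit of this group's dynamic findings. The asymmetric learning rates and the anthropomorphism-dependent resilience term are illustrative assumptions, not parameters estimated in any of the cited studies.

```python
# Toy trust dynamics: trust rises after successes and drops after failures,
# with failures weighted more heavily; an "anthropomorphism" factor softens
# the failure penalty (trust resilience, cf. de Visser et al., 2016).
# All parameters are illustrative assumptions.
def update_trust(trust: float, success: bool, anthropomorphism: float,
                 gain: float = 0.10, loss: float = 0.30) -> float:
    if success:
        delta = gain * (1.0 - trust)                            # approach ceiling
    else:
        delta = -loss * trust * (1.0 - 0.5 * anthropomorphism)  # softened drop
    return min(max(trust + delta, 0.0), 1.0)

trust_plain, trust_humanlike = 0.5, 0.5
outcomes = [True, True, False, True, False, False, True]
for ok in outcomes:
    trust_plain = update_trust(trust_plain, ok, anthropomorphism=0.0)
    trust_humanlike = update_trust(trust_humanlike, ok, anthropomorphism=1.0)
print(f"plain agent: {trust_plain:.2f}, humanlike agent: {trust_humanlike:.2f}")
```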
Context/Task-Dependent Effects of Anthropomorphism: Load, Emotion, Feedback, and Risk/Advice Uptake (Including Negative Effects)
This group emphasizes task- and context-dependent interaction outcomes: beyond trust, anthropomorphism changes user behavior and risk perception via emotion, cognitive load, attention/engagement, feedback, and service-failure experiences. It also discusses negative effects, divergent findings, and boundary conditions (e.g., serious settings, driving or service failures, different feedback structures, and the appropriateness of advice uptake). The group's distinctiveness lies in placing anthropomorphism's effects within real task processes and dynamic psychological states. An appropriate-reliance sketch follows the reference list below.
- Do Anthropomorphic Features Affect User Engagement? Exploring the Dual-Path Effects of Anthropomorphic Design on MCI User Engagement Efficacy(Hesen Li, WanQing Zhang, Weicheng Pan, Hong Chen, LiYan Bu, Shuyi Wang, 2025, International Journal of Human–Computer Interaction)
- Understanding AI service failures: insights from attribution theory(W Zhang, L Wu, SQ Liu, 2026, Journal of Service Management)
- Investigating the customer trust in artificial intelligence: The role of anthropomorphism, empathy response, and interaction(Nguyen Thi Khanh Chi, N. Vu, 2022, CAAI Transactions on Intelligence Technology)
- Human vs. AI: Understanding the impact of anthropomorphism on consumer response to chatbots from the perspective of trust and relationship norms(Xusen Cheng, Xiaoping Zhang, Jason F. Cohen, Jian Mou, 2022, Information Processing & Management)
- Trust and Cognitive Load During Human-Robot Interaction(Muneeb Imtiaz Ahmad, Jasmin Bernotat, Katrin Lohan, Friederike Eyssel, 2019, ArXiv Preprint)
- The Emotional Dilemma: Influence of a Human-like Robot on Trust and Cooperation(Dennis Becker, Diana Rueda, Felix Beese, Brenda Scarleth Gutierrez Torres, Myriem Lafdili, Kyra Ahrens, Di Fu, Erik Strahl, Tom Weber, Stefan Wermter, 2023, ArXiv Preprint)
- Emotional Musical Prosody for the Enhancement of Trust in Robotic Arm Communication(Richard Savery, Lisa Zahray, Gil Weinberg, 2020, ArXiv Preprint)
- Can Gamification Foster Trust-Building in Human-Robot Collaboration? An Experiment in Virtual Reality(Marc Riar, Mareike Weber, J. Ebert, Benedikt Morschheuser, 2025, Information Systems Frontiers)
- How does anthropomorphism promote consumer responses to social chatbots: mind perception perspective(Baoku Li, Ruoxi Yao, Yafeng Nan, 2024, Internet Research)
- Robot Transparency and Employees’ Acceptance: The Roles of Trust and Anthropomorphism(Minghui Yao, Jiyu Li, Zhenyuan Wang, 2025, International Journal of Human–Computer Interaction)
- The Dynamics of Trust and Verbal Anthropomorphism in Human-Autonomy Teaming(Myke C. Cohen, Mustafa Demir, Erin K. Chiou, N. Cooke, 2021, 2021 IEEE 2nd International Conference on Human-Machine Systems (ICHMS))
- Effects of Autonomous Driving Context and Anthropomorphism of in-Vehicle Voice Agents on Intimacy, Trust, and Intention to Use(Dong Wook Park, Yushin Lee, Yong Min Kim, 2023, International Journal of Human–Computer Interaction)
- Anthropomorphism on Risk Perception: The Role of Trust and Domain Knowledge in Decision-Support AI(Manuele Reani, Xiangyang He, Zuolan Bao, 2026, ArXiv Preprint)
- Should I Follow AI-based Advice? Measuring Appropriate Reliance in Human-AI Decision-Making(Max Schemmer, Patrick Hemmer, Niklas Kühl, Carina Benz, Gerhard Satzger, 2022, ArXiv Preprint)
- The Effect of Anthropomorphism and Failure Comprehensibility on Human-Robot Trust(Eileen Roesler, L. Onnasch, 2020, Proceedings of the Human Factors and Ergonomics Society Annual Meeting)
- Fostering trust in human-robot interaction via perspective-taking and anthropomorphism: an empirical study in an industrial simulation game(M. Wittmann, Runjie Xie, Jeanine Kirchner-Krath, Benedikt Morschheuser, 2026, International Journal of Human-Computer Studies)
- Almost human: Anthropomorphism increases trust resilience in cognitive agents.(Ewart J. de Visser, Samuel S. Monfort, Ryan McKendrick, Melissa A. Smith, Patrick McKnight, F. Krueger, R. Parasuraman, 2016, Journal of Experimental Psychology: Applied)
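As a pointer for the "advice uptake" thread, here is a hedged sketch of an appropriate-reliance-style tally: how often users follow AI advice when it is correct versus override it when it is wrong. The metric definitions are simplified stand-ins for the formal measures in Schemmer et al. (2022).

```python
# Hedged sketch: tallying reliance appropriateness over advice-taking trials.
# A trial records whether the AI advice was correct and whether the user
# followed it. Definitions are simplified, not Schemmer et al.'s exact metrics.
from dataclasses import dataclass

@dataclass
class Trial:
    ai_correct: bool
    followed_ai: bool

def reliance_rates(trials: list[Trial]) -> dict[str, float]:
    correct = [t for t in trials if t.ai_correct]
    wrong = [t for t in trials if not t.ai_correct]
    follow_when_right = sum(t.followed_ai for t in correct) / max(len(correct), 1)
    override_when_wrong = sum(not t.followed_ai for t in wrong) / max(len(wrong), 1)
    return {"follow_when_right": follow_when_right,      # want high
            "override_when_wrong": override_when_wrong}  # want high

trials = [Trial(True, True), Trial(True, False), Trial(False, True),
          Trial(False, False), Trial(True, True), Trial(False, False)]
print(reliance_rates(trials))  # both rates ~0.67 for this toy sample
```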
Interaction and Social Relationships: Dialogue/Multimodality/Companionship and Human-AI Collaboration (Including Social Presence and Team Processes)
This group focuses on interaction- and relationship-level outcomes: through study designs involving dialogue, multimodality, companionship, employee-facing service, and collaborative games, it examines how anthropomorphism as an interaction cue affects social presence, intimacy and companionship, cooperative relationships, and team processes (e.g., delegation, adaptive AI teammates, collaborative performance, and misunderstanding chains). It leans toward HRI/HCI task-relationship empirical evidence, emphasizing the social experiences and changes in relational structure that anthropomorphism brings.
- Texting with Humanlike Conversational Agents: Designing for Anthropomorphism(Anna-Maria Seeger, Jella Pfeiffer, Armin Heinzl, 2021, Journal of the Association for Information Systems)
- Social Robots As Companions for Lonely Hearts: The Role of Anthropomorphism and Robot Appearance(Yoonwon Jung, Sowon Hahn, 2023, ArXiv Preprint)
- Advantages of Multimodal versus Verbal-Only Robot-to-Human Communication with an Anthropomorphic Robotic Mock Driver(Tim Schreiter, Lucas Morillo-Mendez, Ravi T. Chadalavada, Andrey Rudenko, Erik Billing, Martin Magnusson, Kai O. Arras, Achim J. Lilienthal, 2023, ArXiv Preprint)
- How Service Robot Anthropomorphism and Employee Self-Efficacy Shape Collaborative Performance.(Yaqin Cao, Xiangjun Hu, Ming Li, Wei Lyu, 2026, Cyberpsychology, Behavior, and Social Networking)
- Human-AI Collaboration in a Cooperative Game Setting(Zahra Ashktorab, Q. Liao, Casey Dugan, James M. Johnson, Qian Pan, Wei Zhang, Sadhana Kumaravel, Murray Campbell, 2020, Proceedings of the ACM on Human-Computer Interaction)
- New dyads? The effect of social robots' anthropomorphization on empathy towards human beings(Federica Spaccatini, Giulia Corlito, S. Sacchi, 2023, Computers in Human Behavior)
- Expedient Assistance and Consequential Misunderstanding: Envisioning an Operationalized Mutual Theory of Mind(Justin D. Weisz, Michael Muller, Arielle Goldberg, Dario Andres Silva Moran, 2024, ArXiv Preprint)
- Humanlike AI Design Increases Anthropomorphism but Yields Divergent Outcomes on Engagement and Trust Globally(Robin Schimmelpfennig, Mark Díaz, Vinodkumar Prabhakaran, Aida Davani, 2025, ArXiv Preprint)
- Anthropomorphism in social robotics: empirical results on human–robot interaction in hybrid production workplaces(A. Richert, S. Müller, Stefan Schröder, S. Jeschke, 2018, AI & SOCIETY)
- Impact of Anthropomorphic Robot Design on Trust and Attention in Industrial Human-Robot Interaction(L. Onnasch, C. Hildebrandt, 2021, ACM Transactions on Human-Robot Interaction)
- A meta-analysis on the effectiveness of anthropomorphism in human-robot interaction(Eileen Roesler, D. Manzey, L. Onnasch, 2021, Science Robotics)
- The dynamics of human-robot trust attitude and behavior - Exploring the effects of anthropomorphism and type of failure(Eileen Roesler, Meret Vollmann, D. Manzey, L. Onnasch, 2023, Computers in Human Behavior)
- Does Humanity Matter? Analyzing the Importance of Social Cues and Perceived Agency of a Computer System for the Emergence of Social Reactions during Human-Computer Interaction(Jana Appel, A. V. D. Pütten, N. Krämer, J. Gratch, 2012, Advances in Human-Computer Interaction)
- Effects of Autonomous Driving Context and Anthropomorphism of in-Vehicle Voice Agents on Intimacy, Trust, and Intention to Use(Dong Wook Park, Yushin Lee, Yong Min Kim, 2023, International Journal of Human–Computer Interaction)
- Beyond Anthropomorphism: Social Presence in Human–AI Collaboration Processes(Dominik Siemon, Edona Elshan, Triparna de Vreede, Philipp Ebel, G. de Vreede, 2025, Journal of Management Studies)
- AI as the Phantom Limb: The Asymmetry of Attribution in Human vs. AI Delegation(Yu-Sheng Chen, Yoyo Tsung-Yu Hou, Yu-Hsuan Lin, Joshua Mu-En Liu, WeiRong Chen, Yihsiu Chen, 2026, Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems)
- Adaptive AI as Collaborator: Examining the Impact of an AI’s Adaptability and Social Role on Individual Professional Efficacy and Credit Attribution in Human–AI Collaboration(Tianshuo Du, Xiaoqian Li, Naifei Jiang, Yichen Xu, Yushu Zhou, 2025, International Journal of Human–Computer Interaction)
- Emotional Musical Prosody for the Enhancement of Trust in Robotic Arm Communication(Richard Savery, Lisa Zahray, Gil Weinberg, 2020, ArXiv Preprint)
- How Linguistic Framing Affects Factory Workers' Initial Trust in Collaborative Robots: The Interplay Between Anthropomorphism and Technological Replacement(Tobias Kopp, M. Baumgartner, Steffen Kinkel, 2021, International Journal of Human-Computer Studies)
- Exploring Trust in Human-AI Collaboration in the Context of Multiplayer Online Games(Keke Hou, Tingting Hou, Lili Cai, 2023, Systems)
- Antecedents of trust in human-robot collaborations(Kristin E. Oleson, D. Billings, Vivien Kocsis, Jessie Y.C. Chen, P. Hancock, 2011, 2011 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA))
- Exploring Interactions Between Trust, Anthropomorphism, and Relationship Development in Voice Assistants(W. Seymour, M. V. Kleek, 2021, Proceedings of the ACM on Human-Computer Interaction)
Methodological Innovation: Computational/Formal and LLM-Assisted Approaches, Phenomenology, and Long-Term Experience Paradigms
A standalone group for methodological innovation and alternative paradigms, including phenomenological and long-term experience research (first-person, experience construction evolving over time), computational formalization (a multi-agent game-theoretic ToM framework), LLM-assisted qualitative methods (Focus Agent), and LLM-agent simulation (for replicating experiments or as behavioral models). What these works share is supplying anthropomorphism research with new paradigms that are reproducible, scalable, or closer to lived experience. A trust-game sketch follows the reference list below.
- AI Phenomenology for Understanding Human-AI Experiences Across Eras(Bhada Yun, Evgenia Taranova, Dana Feng, Renn Su, April Yi Wang, 2026, ArXiv Preprint)
- A Computable Game-Theoretic Framework for Multi-Agent Theory of Mind(Fengming Zhu, Yuxin Pan, Xiaomeng Zhu, Fangzhen Lin, 2025, ArXiv Preprint)
- Focus Agent: LLM-Powered Virtual Focus Group(Taiyu Zhang, Xuesong Zhang, Robbe Cools, Adalberto L. Simeone, 2024, ArXiv Preprint)
- Can Large Language Model Agents Simulate Human Trust Behavior?(Chengxing Xie, Canyu Chen, Feiran Jia, Ziyu Ye, Shiyang Lai, Kai Shu, Jindong Gu, Adel Bibi, Ziniu Hu, David Jurgens, James Evans, Philip Torr, Bernard Ghanem, Guohao Li, 2024, ArXiv Preprint)
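To ground the "LLM agents simulating trust behavior" entry, here is a minimal trust (investment) game harness with pluggable policies, so a heuristic can later be swapped for a prompted LLM agent. The payoff structure is the standard game; the policies shown are illustrative stand-ins, not the cited paper's implementation.

```python
# Minimal trust-game harness: the investor's sent amount is a standard
# behavioral proxy for trust. Policies are pluggable; an LLM-backed agent
# (as studied by Xie et al., 2024) would replace the heuristic lambdas.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TrustGameResult:
    sent: float              # amount investor sends (behavioral trust proxy)
    returned: float          # amount trustee sends back
    investor_payoff: float
    trustee_payoff: float

def play_trust_game(endowment: float, multiplier: float,
                    investor_policy: Callable[[float], float],
                    trustee_policy: Callable[[float], float]) -> TrustGameResult:
    sent = min(max(investor_policy(endowment), 0.0), endowment)
    pot = sent * multiplier
    returned = min(max(trustee_policy(pot), 0.0), pot)
    return TrustGameResult(sent=sent, returned=returned,
                           investor_payoff=endowment - sent + returned,
                           trustee_payoff=pot - returned)

# Heuristic stand-ins; an LLM agent would replace these with a prompted call.
half_investor = lambda endowment: endowment / 2  # sends half (moderate trust)
fair_trustee = lambda pot: pot / 2               # returns half the pot

print(play_trust_game(endowment=10.0, multiplier=3.0,
                      investor_policy=half_investor,
                      trustee_policy=fair_trustee))
```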
Anthropomorphism in Government/High-Stakes Settings: Trust Transfer and Psychological-Distance Mediation
This group focuses on evaluating anthropomorphism in government and other high-stakes public-service settings, emphasizing the psychological-distance and trust-transfer chain (indirect effects on trust in the chatbot and in the government) and boundary conditions (e.g., moderation by personality traits). Because its contextual goals and social consequences differ from general HRI, consumer, or collaboration settings, it is retained as an independent group. A serial-mediation sketch follows the entry below.
- Impact of anthropomorphic government chatbots on users' perceived trust in the government: based on the perspective of the government services interaction chain(Huawei Liu, Xiqing Han, Sihong Li, Wen Lin, Min Zhang, 2025, Aslib Journal of Information Management)
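For the trust-transfer chain (anthropomorphism to psychological distance, to chatbot trust, to government trust), here is a hedged sketch of estimating a serial indirect effect with plain OLS on simulated data. Variable names and effect sizes are invented, and a real analysis would bootstrap confidence intervals rather than rely on point estimates.

```python
# Hedged sketch: serial mediation X -> M1 -> M2 -> Y on simulated data.
# Effect sizes are invented; the cited study used SPSS-based analyses.
import numpy as np

rng = np.random.default_rng(0)
n = 500
anthro = rng.normal(size=n)                            # X: chatbot anthropomorphism
distance = -0.5 * anthro + rng.normal(size=n)          # M1: psychological distance
bot_trust = -0.6 * distance + rng.normal(size=n)       # M2: trust in the chatbot
gov_trust = 0.4 * bot_trust + 0.1 * anthro + rng.normal(size=n)  # Y: trust in government

def coefs(y, *xs):
    """OLS slopes for y ~ 1 + xs, in the order the predictors are given."""
    X = np.column_stack([np.ones(n), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a1 = coefs(distance, anthro)[0]                         # X -> M1
d21 = coefs(bot_trust, distance, anthro)[0]             # M1 -> M2 (controlling X)
b2 = coefs(gov_trust, bot_trust, distance, anthro)[0]   # M2 -> Y (controlling M1, X)
print(f"serial indirect effect a1*d21*b2 = {a1 * d21 * b2:.3f}")  # ~ -0.5*-0.6*0.4 = 0.12
```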
The merged grouping forms a parallel structure running from definition to mechanism, then to design and validation, and finally to methodology: first delimiting anthropomorphism and its operationalizable measurement (conceptual boundaries, triggering thresholds, experimental manipulations, and computational metrics); then, at the social-cognition level, discussing how mind attribution and ToM drive judgments of intent, morality, and responsibility; next, at the HCI/HRI level, studying how anthropomorphism affects trust (formation, calibration, repair) and broader contextual/task outcomes (emotion, load, risk, and advice uptake); then covering interaction and social relationships (dialogue, multimodality, companionship, and collaborative team processes), while retaining the government-specific trust-transfer chain. Finally, phenomenological and computational/LLM-assisted methodological innovations stand alone, supplying the field with reproducible and more experience-near research paradigms.
A total of 108 related references.
Selected abstracts:
Artificial intelligence (AI) systems, evolving from reactive tools to proactive collaborators, reshape team dynamics in today's digital workplaces. Text‐based collaboration now frequently involves AI participants that perform tasks traditionally handled by humans, such as creative problem‐solving and decision‐making. This transition has been linked to changes in group dynamics, particularly in relation to social presence, which appears to shape the patterns of productivity and collaboration. We conducted three empirical studies on human–AI teams to investigate the relationship between social presence and willingness to depend on teammates, team‐oriented commitment, and motivation to contribute. Drawing on social presence theory and theory of planned behaviour, our results show that while social presence has a direct association with motivation to contribute, an equally important indirect pathway is associated with human factors like team‐oriented commitment and team members' willingness to depend on each other. We show that while social presence is significantly associated with behavioural intentions, greater AI familiarity and understandability are associated with a stronger relationship, raising questions about the sufficiency of relying solely on anthropomorphic features. Our study contributes to the understanding of human–AI collaboration in social presence research, highlighting the importance of considering social and interpersonal processes in hybrid teams. Our findings have managerial implications for organizations looking to adopt AI‐based systems for collaboration.
Computer agents are increasingly endowed with anthropomorphic characteristics and autonomous behavior to improve their capabilities for problem-solving and make interactions with humans more natural. This poses new challenges for human users who need to make trust-based decisions in dynamic and complex environments. It remains unclear if people trust agents like other humans and thus apply the same social rules to human–computer interaction (HCI), or rather, if interactions with computers are characterized by idiosyncratic attributions and responses. To this ongoing and crucial debate we contribute an experiment on the impact of anthropomorphic cues on trust and trust-related attributions in a cooperative human–agent setting, permitting the investigation of interdependent, continued, and coordinated decision-making toward a joint goal. Our results reveal an incongruence between self-reported and behavioral trust measures. First, the varying degree of agent anthropomorphism (computer vs. virtual vs. human agent) did not affect people's decision to behaviorally trust the agent by adopting task-specific advice. Behavioral trust was affected by advice quality only. Second, subjective ratings indicate that anthropomorphism did increase self-reported trust.
Human–AI collaboration has attracted interest from both scholars and practitioners. However, the relationships in human–AI teamwork have not been fully investigated. This study aims to research the influencing factors of trust in AI teammates and the intention to cooperate with AI teammates. We conducted an empirical study by developing a research model of human–AI collaboration. The model presents the influencing mechanisms of interactive characteristics (i.e., perceived anthropomorphism, perceived rapport, and perceived enjoyment), environmental characteristics (i.e., peer influence and facilitating conditions), and personal characteristics (i.e., self-efficacy) on trust in teammates and cooperative intention. A total of 423 valid surveys were collected to test the research model and hypothesized relationships. The results show that perceived rapport, perceived enjoyment, peer influence, facilitating conditions, and self-efficacy positively affect trust in AI teammates. Moreover, self-efficacy and trust positively relate to the intention to cooperate with AI teammates. This study contributes to the teamwork and human–AI collaboration literature by investigating different antecedents of the trust relationship and cooperative intention.
Purpose: With the continuous improvement of artificial intelligence (AI) technology, intelligent personal assistants (IPAs) based on AI have seen unprecedented growth. The present study investigates the effect of anthropomorphism on cognitive and emotional trust and the role of interpersonal attraction in the relationship between anthropomorphism and trust. Design/methodology/approach: A structural equation modeling technique with a sample of 263 consumers was used to analyze the data and test the conceptual model. Findings: The findings illustrate that the anthropomorphism of IPAs did not directly induce trust. Anthropomorphism led users to assign greater social attraction and task attraction to IPAs, which in turn reinforced cognitive or emotional trust in these assistants. Compared with task attraction, social attraction was more powerful in strengthening both cognitive trust and emotional trust. The present study broadens the current knowledge about interpersonal attraction and its role in AI usage by examining two types of interpersonal attraction of IPAs. Originality/value: As trust plays an important role in the rapid development of human–computer interaction, it is imperative to understand how consumers perceive these intelligent agents and build or improve trust. Prior studies focused on the impact of anthropomorphism on overall trust in AI, and its underlying mechanism was underexplored. The findings can help marketers and designers better understand how to enhance users' trust in their anthropomorphic products, especially by increasing social interactive elements or promoting communication.
As AI agents act on behalf of users, designers increasingly combine explainability (XAI) and anthropomorphism to build trust. Yet, whether these cues create synergy or interference remains a critical, open question. Our online experiment (N=900) revealed a counterintuitive interference effect: anthropomorphism reduced trust in an explainable agent. A preregistered lab study with eye-tracking (N=57) reversed this finding: under controlled conditions, the combined design elicited the highest trust. Eye-tracking reveals the mechanism: XAI promotes deeper cognitive engagement (e.g., longer fixations), which primes users to allocate attention to social cues (e.g., avatars). Our findings show that trust depends on cognitive engagement moderating social cue processing, yielding a critical design insight: effectively pairing explanatory and anthropomorphic interfaces requires first securing the user’s cognitive engagement to avoid undermining trust.
Human-AI interaction is pervasive across many areas of our day to day lives. In this paper, we investigate human-AI collaboration in the context of a collaborative AI-driven word association game with partially observable information. In our experiments, we test various dimensions of subjective social perceptions (rapport, intelligence, creativity and likeability) of participants towards their partners when participants believe they are playing with an AI or with a human. We also test subjective social perceptions of participants towards their partners when participants are presented with a variety of confidence levels. We ran a large scale study on Mechanical Turk (n=164) of this collaborative game. Our results show that when participants believe their partners were human, they found their partners to be more likeable, intelligent, creative and having more rapport and use more positive words to describe their partner's attributes than when they believed they were interacting with an AI partner. We also found no differences in game outcome including win rate and turns to completion. Drawing on both quantitative and qualitative findings, we discuss AI agent transparency, include design implications for tools incorporating or supporting human-AI collaboration, and lay out directions for future research. Our findings lead to implications for other forms of human-AI interaction and communication.
As government services have become increasingly anthropomorphic, intelligent and comprehensive, digital governments are introducing anthropomorphic design methods commonly used in the commercial field to build chatbots. However, government services are different from commercial services in seriousness, authority and other characteristics; if the use of anthropomorphism is not good, it may strengthen the public's stereotype of the government and reduce its credibility. Therefore, the questions of whether and how anthropomorphism should be applied to government services must be answered. By constructing an interactive chain of government services, this study examined the impact of government chatbot anthropomorphism on users' perceived trust in the government from the perspectives of service providers, service receivers and service results. A research model was constructed with the anthropomorphic degree of the government chatbot as the independent variable, psychological distance and trust in the chatbot as serial mediators, trust in government as the dependent variable, and self-construction and service outcome titers as the moderating variables. Three formal studies were conducted using situational manipulation experiments. SPSS (version 26.0) was used for the statistical analysis of the experimental data to verify the research hypotheses. The results show that the degree of anthropomorphism of government chatbots has a positive impact on trust in government, psychological distance and trust in chatbots play a mediating role in the above process, and psychological distance plays a moderating role between the degree of anthropomorphism of chatbots and trust in chatbots. Authoritarian obedience personality (high versus low) moderates the relationship between psychological distance and chatbot trust. The findings demonstrate that conclusions about meeting user expectations through technology in the business sector can also be applied to government services due to the similarities between the two. This study defines the service interaction chain from the perspectives of service receivers, providers and outcomes, applying it to government services, thereby expanding and enriching service chain theory. Additionally, it uncovers the mechanisms behind trust in government, representing a novel application of trust transfer theory in the context of government affairs. The study contributes to existing theoretical research and offers practical recommendations for government management departments and digital government service providers.
Purpose: The purpose of this study was to investigate trust within human-AI teams. Trust is an essential mechanism for team success and effective human-AI collaboration. Design/methodology/approach: In an online experiment, the authors investigated whether trust perceptions and behaviours are different when introducing a new AI teammate than when introducing a new human teammate. A between-subjects design was used. A total of 127 subjects were presented with a hypothetical team scenario and randomly assigned to one of two conditions: new AI or new human teammate. Findings: As expected, perceived trustworthiness of the new team member and affective interpersonal trust were lower for an AI teammate than for a human teammate. No differences were found in cognitive interpersonal trust and trust behaviours. The findings suggest that humans can rationally trust an AI teammate when its competence and reliability are presumed, but the emotional aspect seems to be more difficult to develop. Originality/value: This study contributes to human–AI teamwork research by connecting trust research in human-only teams with trust insights in human–AI collaborations through an integration of the existing literature on teamwork and on trust in intelligent technologies with the first empirical findings on trust towards AI teammates.
Trust in autonomous teammates has been shown to be a key factor in human-autonomy team (HAT) performance, and anthropomorphism is a closely related construct that is underexplored in HAT literature. This study investigates whether perceived anthropomorphism can be measured from team communication behaviors in a simulated remotely piloted aircraft system task environment, in which two humans in unique roles were asked to team with a synthetic (i.e., autonomous) pilot agent. We compared verbal and self-reported measures of anthropomorphism with team error handling performance and trust in the synthetic pilot. Results for this study show that trends in verbal anthropomorphism follow the same patterns expected from self-reported measures of anthropomorphism, with respect to fluctuations in trust resulting from autonomy failures.
AI-enabled technology (AI) has a transformational role in our modern society because it is increasingly used as an interaction partner, making anthropomorphism (tendency to ascribe human features to non-human agents) a central mechanism shaping how people evaluate, accept or resist AI systems. Existing technology acceptance models and anthropomorphism frameworks, however, offer limited guidance on how human-like attributes of AI translate into perceptions of usefulness, perceived control, perceived opportunity or threats, particularly across different levels of AI autonomy. Building on the theory of planned behavior, the technology acceptance model and threat rigidity model, this paper develops a mid-range conceptual framework of AI anthropomorphism grounded in universal social perception dimensions of warmth and competence. We integrate fragmented research to derive three core propositions and four corollaries that specify how warmth and competence attributions shape evaluative cognitions in relation to AI. The framework further identifies AI autonomy as a boundary condition under which anthropomorphic cues may either facilitate acceptance or trigger perceptions of pseudo-empathy, cognitive superiority and identity threat. By offering a parsimonious, theoretically informed model, this paper clarifies when anthropomorphism fosters acceptance versus resistance in human–AI interaction and provides a structured agenda for future empirical research and AI design aimed at fostering synergies and resilience in human–AI ecosystems.
How do individuals perceive AI systems as responsible entities in everyday collaborations between humans and AI? Drawing on psychological literature from attribution theory, praise-blame asymmetries and negativity bias, this study investigated the effects of perspective (actor vs observer) and outcome favorability (positive vs negative) on how participants (N=321) attributed responsibility for outcomes resulting from shared human-AI decision-making. Both Bayesian modelling and reflexive thematic analysis of results revealed that, overall, participants were more likely to attribute greater responsibility to the AI systems. When the outcome was positive, participants were more likely to ascribe shared responsibility to both Human and AI systems, rather than either separately. When the outcome was negative, participants were more likely to attribute responsibility to a single entity, but not consistently towards the human or the AI. These results build on the understanding of how individuals cast blame and praise for shared interactions involving AI systems.
Human–AI collaboration has become increasingly prevalent, integrating sophisticated AI systems into various professional and personal domains. To explore how AI with trainability, dynamic participation, and real-time feedback, accompanied by different social role labels, promotes human–AI collaboration relationships, a 2 (static/adaptable AI) × 2 (expert/peer) experiment was conducted in a laboratory with 96 university students. The study found that collaboration with adaptable AI teammates can greatly enhance human professional efficacy, encourage individuals to attribute credit to themselves, and promote the establishment of directive and guided human–robot collaboration relationships. When collaborating with an AI expert, individuals give more credit to the expert; when collaborating with an AI peer, individuals take more credit for themselves. This study supplements the evaluation dimensions of human-computer collaboration with personal long-term well-being and teamwork relationships. It provides important theoretical and practical design implications for promoting positive, healthy, and sustainable human–AI collaborative development.
AI is reshaping workplace dynamics as people increasingly delegate tasks to intelligent assistants. Yet how AI delegates are perceived compared to human delegates—and how their performance and their received feedback shape perceptions—remains unclear. We conducted a 2×2×2 between-subject experiment where participants delegated a scheduling task to either a human or an AI agent, varying their competence (high vs. low) and valence of received feedback (positive vs. negative) toward their performance. Participants generally had higher trust in human assistants; yet a striking asymmetry emerged: when an AI assistant received negative feedback, participants felt the criticism as more self-directed—an “AI Phantom Limb” effect—whereas positive feedback transferred less. This asymmetry did not appear with human delegates. These findings highlight broader design implications, suggesting that AI delegation might blur the boundary between self and other. We also discuss how these findings extend theories of delegation and responsibility attribution to AI.
While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents [3]. In this paper, we explore one of the potential underlying reasons for robot blame, namely the folk's willingness to ascribe inculpating mental states or "mens rea" to robots. In a vignette-based experiment (N=513), we presented participants with a situation in which an agent knowingly runs the risk of bringing about substantial harm. We manipulated agent type (human v. group agent v. AI-driven robot) and outcome (neutral v. bad), and measured both moral judgment (wrongness of the action and blameworthiness of the agent) and mental states attributed to the agent (recklessness and the desire to inflict harm). We found that (i) judgments of wrongness and blame were relatively similar across agent types, possibly because (ii) attributions of mental states were, as suspected, similar across agent types. This raised the question - also explored in the experiment - whether people attribute knowledge and desire to robots in a merely metaphorical way (e.g., the robot "knew" rather than really knew). However, (iii), according to our data people were unwilling to downgrade to mens rea in a merely metaphorical sense when given the chance. Finally, (iv), we report a surprising and novel finding, which we call the inverse outcome effect on robot blame: People were less willing to blame artificial agents for bad outcomes than for neutral outcomes. This suggests that they are implicitly aware of the dangers of overattributing blame to robots when harm comes to pass, such as inappropriately letting the responsible human agent off the moral hook.
People tend to expect mental capabilities in a robot based on anthropomorphism and often attribute the cause and responsibility for a failure in human-robot interactions to the robot. This study investigated the relationship between mind perception, a psychological scale of anthropomorphism, and attribution of the cause and responsibility in human-robot interactions. Participants played a repeated noncooperative game with a human, robot, or computer agent, where their monetary rewards depended on the outcome. They completed questionnaires on mind perception regarding the agent and whether the participant’s own or the agent’s decisions resulted in the unexpectedly small reward. We extracted two factors of Experience (capacity to sense and feel) and Agency (capacity to plan and act) from the mind perception scores. Then, correlation and structural equation modeling (SEM) approaches were used to analyze the data. The findings showed that mind perception influenced attribution processes differently for each agent type. In the human condition, decreased Agency score during the game led to greater causal attribution to the human agent, consequently also increasing the degree of responsibility attribution to the human agent. In the robot condition, the post-game Agency score decreased the degree of causal attribution to the robot, and the post-game Experience score increased the degree of responsibility to the robot. These relationships were not observed in the computer condition. The study highlights the importance of considering mind perception in designing appropriate causal and responsibility attribution in human-robot interactions and developing socially acceptable robots.
The process of understanding the minds of other people, such as their emotions and intentions, is mimicked when individuals try to understand an artificial mind. The assumption is that anthropomorphism, attributing human‐like characteristics to non‐human agents and objects, is an analogue to theory‐of‐mind, the ability to infer mental states of other people. Here, we test to what extent these two constructs formally overlap. Specifically, using a multi‐method approach, we test if and how anthropomorphism is related to theory‐of‐mind using brain (Experiment 1) and behavioural (Experiment 2) measures. In a first exploratory experiment, we examine the relationship between dispositional anthropomorphism and activity within the theory‐of‐mind brain network (n = 108). Results from a Bayesian regression analysis showed no consistent relationship between dispositional anthropomorphism and activity in regions of the theory‐of‐mind network. In a follow‐up, pre‐registered experiment, we explored the relationship between theory‐of‐mind and situational and dispositional anthropomorphism in more depth. Participants (n = 311) watched a short movie while simultaneously completing situational anthropomorphism and theory‐of‐mind ratings, as well as measures of dispositional anthropomorphism and general theory‐of‐mind. Only situational anthropomorphism predicted the ability to understand and predict the behaviour of the film's characters. No relationship between situational or dispositional anthropomorphism and general theory‐of‐mind was observed. Together, these results suggest that while the constructs of anthropomorphism and theory‐of‐mind might overlap in certain situations, they remain separate and possibly unrelated at the personality level. These findings point to a possible dissociation between brain and behavioural measures when considering the relationship between theory‐of‐mind and anthropomorphism.
Purpose: Benefiting from the development and innovation of artificial intelligence and affective computing technology, social chatbots that integrate cognitive analysis and affective social services have flooded into the consumer market. For cognition and emotion-oriented tasks, social chatbots do not always receive positive consumer responses. In addition, consumers have a contradictory attitude toward the anthropomorphism of chatbots. Therefore, from the perspective of mind perception and the two dimensions of social judgment, this research explores the mechanism of consumer responses to anthropomorphic interaction styles when social chatbots complete different service tasks. Design/methodology/approach: This paper utilizes three behavior experimental designs and survey methods to collect data and the ANOVA, t-test and bootstrap analysis methods to verify the assumed hypotheses. Findings: The results indicate that when the service task type of a social chatbot is cognition-oriented, compared to a warm anthropomorphic interaction style, a competent anthropomorphic interaction style can improve consumer responses more effectively. During this process, agent-mind perception plays a mediating role. When the service task type of a social chatbot is emotion-oriented, compared with a competent anthropomorphic conversation style, a warm anthropomorphic conversation style can improve consumer responses. Experience-mind perception mediates this influencing relationship. Originality/value: The research results theoretically enrich the relevant research on the anthropomorphism of social chatbots and expand the application of the theory of mind perception in the fields of artificial intelligence and interactive marketing. Our findings provide theoretical guidance for the anthropomorphic development and design of social chatbots and the practical management of service task scenarios.
Artificial agents are on their way to interact with us daily. Thus, the design of embodied artificial agents that can easily cooperate with humans is crucial for their deployment in social scenarios. Endowing artificial agents with human-like behavior may boost individuals’ engagement during the interaction. We tested this hypothesis in two screen-based experiments. In the first one, we compared attentional engagement displayed by participants while they observed the same set of behaviors displayed by an avatar of a humanoid robot and a human. In the second experiment, we assessed the individuals’ tendency to attribute anthropomorphic traits towards the same agents displaying the same behaviors. The results of both experiments suggest that individuals need less effort to process and interpret an artificial agent’s behavior when it closely resembles one of a human being. Our results support the idea that including subtle hints of human-likeness in artificial agents’ behaviors would ease the communication between them and the human counterpart during interactive scenarios.
From computers to cars to cell phones, consumers interact with inanimate objects on a daily basis. Despite being mindless machines, consumers nevertheless routinely attribute humanlike mental capacities of intentions, beliefs, attitudes, and knowledge to them. This process of anthropomorphism has historically been treated as an exceptional belief, explained away as simply an inevitable outcome of human nature or as an occasional product of human stupidity. Recent scientific advances, however, have revealed the very ordinary processes of social cognition underlying anthropomorphism. These processes enable psychologists to predict variability in the magnitude of anthropomorphism across contexts and also connect it to the inverse phenomena of dehumanization whereby people treat other human beings as if they lack a humanlike mind. Consumer behavior researchers are uniquely equipped to study these processes, to identify the precise situational features that give rise to anthropomorphism, to understand implications for consumer welfare, and to predict important consequences for how people treat everything from machines to animals to other human beings.
Attributing mental states to others, such as feelings, beliefs, goals, desires, and attitudes, is an important interpersonal ability, necessary for adaptive relationships, which underlies the ability to mentalize. To evaluate the attribution of mental and sensory states, a new 23-item measure, the Attribution of Mental States Questionnaire (AMS-Q), has been developed. The present study aimed to investigate the dimensionality of the AMS-Q and its psychometric proprieties in two studies. Study 1 focused on the development of the questionnaire and its factorial structure in a sample of Italian adults (N = 378). Study 2 aimed to confirm the findings in a new sample (N = 271). Besides the AMS-Q, Study 2 included assessments of Theory of Mind (ToM), mentalization, and alexithymia. A Principal Components Analysis (PCA) and a Parallel Analysis (PA) of the data from Study 1 yielded three factors assessing mental states with positive or neutral valence (AMS-NP), mental states with negative valence (AMS-N), and sensory states (AMS-S). These showed satisfactory reliability indexes. AMS-Q’s whole-scale internal consistency was excellent. Multigroup Confirmatory Factor Analysis (CFA) further confirmed the three-factor structure. The AMS-Q subscales also showed a consistent pattern of correlation with associated constructs in the theoretically predicted ways, relating positively to ToM and mentalization and negatively to alexithymia. Thus, the questionnaire is considered suitable to be easily administered and sensitive for assessing the attribution of mental and sensory states to humans. The AMS-Q can also be administered with stimuli of nonhuman agents (e.g., animals, inanimate things, and even God); this allows the level of mental anthropomorphization of other agents to be assessed using the human as a term of comparison, providing important hints in the perception of nonhuman entities as more or less mentalistic compared to human beings, and identifying what factors are required for the attribution of human mental traits to nonhuman agents, further helping to delineate the perception of others’ minds.
Theory of Mind is crucial to understand and predict others’ behaviour, underpinning the ability to engage in complex social interactions. Many studies have evaluated a robot’s ability to attribute thoughts, beliefs, and emotions to humans during social interactions, but few studies have investigated human attribution to robots with such capabilities. This study contributes to this direction by evaluating how the cognitive and emotional capabilities attributed to the robot by humans may be influenced by some behavioural characteristics of robots during the interaction. For this reason, we used the Dimensions of Mind Perception questionnaire to measure participants’ perceptions of different robot behaviour styles, namely Friendly, Neutral, and Authoritarian, which we designed and validated in our previous works. The results obtained confirmed our hypotheses because people judged the robot’s mental capabilities differently depending on the interaction style. Particularly, the Friendly is considered more capable of experiencing positive emotions such as Pleasure, Desire, Consciousness, and Joy; conversely, the Authoritarian is considered more capable of experiencing negative emotions such as Fear, Pain, and Rage than the Friendly. Moreover, they confirmed that interaction styles differently impacted the perception of the participants on the Agency dimension, Communication, and Thought.
Ascribing mental states to non-human agents has been shown to increase their likeability and lead to better joint-task performance in human-robot interaction (HRI). However, it is currently unclear what physical features non-human agents need to possess in order to trigger mind attribution and whether different aspects of having a mind (e.g., feeling pain, being able to move) need different levels of human-likeness before they are readily ascribed to non-human agents. The current study addresses this issue by modeling how increasing the degree of human-like appearance (on a spectrum from mechanistic to humanoid to human) changes the likelihood with which mind is attributed to non-human agents. We also test whether different internal states (e.g., being hungry, being alive) need different degrees of humanness before they are ascribed to non-human agents. The results suggest that the relationship between physical appearance and the degree to which mind is attributed to non-human agents is best described as a two-linear model with no change in mind attribution on the spectrum from mechanistic to humanoid robot, but a significant increase in mind attribution as soon as human features are included in the image. There seems to be a qualitative difference in the perception of mindful versus mindless agents given that increasing human-like appearance alone does not increase mind attribution until a certain threshold is reached, that is: agents need to be classified as having a mind first before the addition of more human-like features significantly increases the degree to which mind is attributed to that agent.
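The "two-linear" model above amounts to a segmented regression: a flat segment from mechanistic to humanoid, then a linear rise once human features appear. Below is a sketch of that functional form on invented data, with the breakpoint estimated by nonlinear least squares; the actual study models attribution likelihoods, so treat this purely as an illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_linear(x, knot, intercept, slope):
    # Flat until the knot on the humanness spectrum, linear increase after it.
    return intercept + slope * np.clip(x - knot, 0, None)

rng = np.random.default_rng(1)
humanness = np.linspace(0, 1, 60)        # 0 = mechanistic, 1 = human
attribution = two_linear(humanness, 0.7, 2.0, 8.0) + rng.normal(0, 0.3, 60)

(knot, b0, b1), _ = curve_fit(two_linear, humanness, attribution,
                              p0=[0.5, 2.0, 5.0])
print(f"estimated breakpoint on the humanness spectrum: {knot:.2f}")
```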
… interact with other people. To fill this gap, we experimentally investigated whether the type of mind attributed to anthropomorphic … that anthropomorphism fosters the attribution of agency (…
… (1) external attributions: the tendency to ascribe anthropomorphic embodiment, humanlike … processes with the human mind and, thus, behavior during social interactions. Recently, likely …
… of inductive inference—the attribution of human characteristics … We end this review by describing interactions between the … theory of mind—appears critical for anthropomorphism to …
LLM-based conversational agents have become increasingly popular in recent years due to their novel capacity for natural, human-like dialogue interactions. However, mistrust in LLMs persists due to concerns about privacy, the potential for incorrect responses (often referred to as 'hallucinations'), and issues related to social bias. Previous AI research shows that anthropomorphic form positively influences users' perceptions. However, this aspect remains under-explored in LLM-based conversational agent research. Our research features two anthropomorphic forms: embodied and behavioral. Embodied Anthropomorphic Form (EA) encompasses chatbot, chatbot with text-to-speech (TTS), and embodied conversational agent (ECA) interface designs. Behavioral Anthropomorphic Form (BA) involves LLMs instructed with and without Theory of Mind (ToM) principles. In an empirical evaluation, we explored how the interplay between the BA and EA forms affects users' perceptions of LLM-based conversational agents on trust, anthropomorphism, presence, usability, and user experience. Our findings provide evidence of such effects, offering novel insight into the influence of both anthropomorphic forms on perceived anthropomorphism, presence, usability, and user experience, and their positive impact on user trust in LLM-based conversational agents. However, the combined highest (i.e., ECA with ToM behaviors) and lowest (i.e., chatbot without ToM behaviors) levels of both forms result in lower user trust, suggesting a complex relationship between embodiment and ToM behaviors that warrants further investigation.
People conceive of wrathful gods, fickle computers, and selfish genes, attributing human characteristics to a variety of supernatural, technological, and biological agents. This tendency to anthropomorphize nonhuman agents figures prominently in domains ranging from religion to marketing to computer science. Perceiving an agent to be humanlike has important implications for whether the agent is capable of social influence, accountable for its actions, and worthy of moral care and consideration. Three primary factors (elicited agent knowledge, sociality motivation, and effectance motivation) appear to account for a significant amount of variability in anthropomorphism. Identifying the factors that lead people to see nonhuman agents as humanlike also sheds light on the inverse process of dehumanization, whereby people treat human agents as animals or objects. Understanding anthropomorphism can contribute to a more expansive view of social cognition that applies social psychological theory to a wide variety of both human and nonhuman agents.
Anthropomorphism of computerized agents, avatars, and technologies has been the focus of a large body of research in human-computer interaction (HCI). Yet, operational definitions of anthropomorphism vary greatly, creating the potential for error when broad theoretical conclusions are drawn from operationalizations lacking in content validity. This scoping review aimed to identify and categorize the range of operationalizations of anthropomorphism in experimental studies of computerized agents, avatars, and technologies, adding needed clarity to a diverse area of inquiry. Using five selection criteria, this review categorized the operationalization(s) of anthropomorphism in 31 experiment-based articles published in academic research journals. Results showed a heavy dominance of manipulations of physical appearance as operationalizations of anthropomorphism, which threatens content validity and raises questions about the understanding of anthropomorphism in HCI.
… concepts of anthropomorphism and dehumanization in a social … What conditions increase or decrease anthropomorphism … the social and moral consequences of anthropomorphism and …
In this paper, we propose that experimental protocols involving artificial agents, in particular the embodied humanoid robots, provide insightful information regarding social cognitive mechanisms in the human brain. Using artificial agents allows for manipulation and control of various parameters of behaviour, appearance and expressiveness in one of the interaction partners (the artificial agent), and for examining effect of these parameters on the other interaction partner (the human). At the same time, using artificial agents means introducing the presence of artificial, yet human-like, systems into the human social sphere. This allows for testing in a controlled, but ecologically valid, manner human fundamental mechanisms of social cognition both at the behavioural and at the neural level. This paper will review existing literature that reports studies in which artificial embodied agents have been used to study social cognition and will address the question of whether various mechanisms of social cognition (ranging from lower- to higher-order cognitive processes) are evoked by artificial agents to the same extent as by natural agents, humans in particular. Increasing the understanding of how behavioural and neural mechanisms of social cognition respond to artificial anthropomorphic agents provides empirical answers to the conundrum ‘What is a social agent?’
Conversational agents (CAs) are natural language user interfaces that emulate human-to-human communication. Because of this emulation, research on CAs is inseparably linked to questions about anthropomorphism—the attribution of human qualities, including consciousness, intentions, and emotions, to nonhuman agents. Past research has demonstrated that anthropomorphism affects human perception and behavior in human-computer interactions by, for example, increasing trust and connectedness or stimulating social response behaviors. Based on the psychological theory of anthropomorphism and related research on computer interface design, we develop a theoretical framework for designing anthropomorphic CAs. We identify three groups of factors that stimulate anthropomorphism: technology design-related factors, task-related factors, and individual factors. Our findings from an online experiment support the derived framework but also reveal novel yet counterintuitive insights. In particular, we demonstrate that not all combinations of anthropomorphic technology design cues increase perceived anthropomorphism. For example, we find that using only nonverbal cues harms anthropomorphism; however, this effect becomes positive when nonverbal cues are complemented with verbal or human identity cues. We also find that CAs’ disposition to complete computerlike versus humanlike tasks and individuals’ disposition to anthropomorphize greatly affect perceived anthropomorphism. This work advances our understanding of anthropomorphism and contextualizes the theory of anthropomorphism within the IS discipline. We advise on the directions that research and practice should take to find the sweet spot for anthropomorphic CA design.
Advanced information technologies (ITs) are increasingly assuming tasks that have previously required human capabilities, such as learning and judgment. What drives this technology anthropomorphism (TA), or the attribution of humanlike characteristics to IT? What is it about users, IT, and their interactions that influences the extent to which people think of technology as humanlike? While TA can have positive effects, such as increasing user trust in technology, what are the negative consequences of TA? To provide a framework for addressing these questions, we advance a theory of TA that integrates the general three-factor anthropomorphism theory in social and cognitive psychology with the needs-affordances-features perspective from the information systems (IS) literature. The theory we construct helps to explain and predict which technological features and affordances are likely: (1) to satisfy users’ psychological needs, and (2) to lead to TA. More importantly, we problematize some negative consequences of TA. Technology features and affordances contributing to TA can intensify users’ anchoring with their elicited agent knowledge and psychological needs and also can weaken the adjustment process in TA under cognitive load. The intensified anchoring and weakened adjustment processes increase egocentric biases that lead to negative consequences. Finally, we propose a research agenda for TA and egocentric biases.
… evidence that smartphone users’ social disposition, including factors of … anthropomorphism. The findings corroborate and add to the theory of sociality determinant of anthropomorphism, …
… social rules in HCI, the current research evaluated two potential explanations for why people apply social heuristics toward computers: anthropomorphism … on social cognition and social …
… between perceived anthropomorphism, social presence, and … whether perceived anthropomorphism and social presence … the effects of anthropomorphism and social presence …
Empirical studies have repeatedly shown that autonomous artificial entities elicit social behavior on the part of the human interlocutor. Various theoretical approaches have tried to explain this phenomenon. The agency assumption states that the social influence of human interaction partners (represented by avatars) will always be higher than the influence of artificial entities (represented by embodied conversational agents). Conversely, the Ethopoeia concept predicts that automatic social reactions are triggered by situations as soon as they include social cues. Both theories have been challenged in a 2×2 between-subjects design with two levels of agency (low: agent, high: avatar) and two interfaces with different degrees of social cues (low: text chat, high: virtual human). The results show that participants in the virtual human condition reported a stronger sense of mutual awareness, imputed more positive characteristics, and allocated more attention to the virtual human than participants in the text chat conditions. Only one result supports the agency assumption: participants who believed they were interacting with a human reported a stronger feeling of social presence than participants who believed they were interacting with an artificial entity. It is discussed to what extent these results support the social cue assumption made in the Ethopoeia approach.
… To address this issue, the present study draws upon social … characteristics to refine anthropomorphic design into four core … anthropomorphism among older adults with mild cognitive …
… The studies presented here draw from social psychology and sociology. … This is a crucial step in HCI anthropomorphism practice, and will consequently be examined in more detail. …
Large Language Models (LLMs) increasingly exhibit anthropomorphic characteristics: human-like qualities portrayed across their outlook, language, behavior, and reasoning functions. Such characteristics enable more intuitive and engaging human-AI interactions. However, current research on anthropomorphism remains predominantly risk-focused, emphasizing over-trust and user deception while offering limited design guidance. We argue that anthropomorphism should instead be treated as a concept of design that can be intentionally tuned to support user goals. Drawing from multiple disciplines, we propose that the anthropomorphism of an LLM-based artifact should reflect the interaction between artifact designers and interpreters. This interaction is facilitated by cues embedded in the artifact by the designers and the (cognitive) responses of the interpreters to the cues. Cues are categorized into four dimensions: perceptive, linguistic, behavioral, and cognitive. By analyzing the manifestation and effectiveness of each cue, we provide a unified taxonomy with actionable levers for practitioners. Consequently, we advocate for function-oriented evaluations of anthropomorphic design.
The application of anthropomorphic features to robots is generally considered beneficial for human-robot interaction (HRI). Although previous research has mainly focused on social robots, the phenomenon gains increasing attention in industrial human-robot interaction as well. In this study, the impact of anthropomorphic design of a collaborative industrial robot on the dynamics of trust and visual attention allocation was examined. Participants interacted with a robot, which was either anthropomorphically or non-anthropomorphically designed. Unexpectedly, attribute-based trust measures revealed no beneficial effect of anthropomorphism but even a negative impact on the perceived reliability of the robot. Trust behavior was not significantly affected by an anthropomorphic robot design during faultless interactions, but showed a relatively steeper decrease after participants experienced a failure of the robot. With regard to attention allocation, the study clearly reveals a distracting effect of anthropomorphic robot design. The results emphasize that anthropomorphism might not be an appropriate feature in industrial HRI, as it not only failed to reveal positive effects on trust but also distracted participants from relevant task areas, which might be a significant drawback with regard to occupational safety in HRI.
… centered around trust dynamics in human–robot interaction, … of robots (ie, anthropomorphism and type of failure) and trust … relationship, participants collaborated with robots via voice …
This paper examines how people's trust and dependence on robot teammates providing decision support varies as a function of different attributes of the robot, such as perceived anthropomorphism, type of support provided by the robot, and its physical presence. We conduct a mixed-design user study with multiple robots to investigate trust, inappropriate reliance, and compliance measures in the context of a time-constrained game. We also examine how the effect of human accountability addresses errors due to over-compliance in the context of human-robot interaction (HRI). This study is novel as it involves examining multiple attributes at once, thus enabling us to perform multi-way comparisons between different attributes on trust and compliance with the agent. Results from the 4×4×2×2 study show that behavior and anthropomorphism of the agent are the most significant factors in predicting the trust and compliance with the robot. Furthermore, adding a coalition-building preface, where the agent provides context to why it might make errors while giving advice, leads to an increase in trust for specific behaviors of the agent.
The application of anthropomorphic features to robots is generally considered to be beneficial for human-robot interaction. Although previous research has mainly focused on social robots, the phenomenon gains increasing attention in industrial human-robot interaction as well. In this study, the impact of anthropomorphic design of a collaborative industrial robot on the dynamics of trust is examined. Participants interacted with a robot, which was either anthropomorphically or technically designed, and experienced either a comprehensible or an incomprehensible fault of the robot. Unexpectedly, the robot was perceived as less reliable in the anthropomorphic condition. Additionally, trust increased after faultless experience and decreased after failure experience independently of the type of error. Even though the manipulation of the design did not result in a different perception of the robot's anthropomorphism, it still influenced the formation of trust. The results emphasize that anthropomorphism is no universal remedy to increase trust, but highly context dependent.
This meta-analysis quantifies whether and under which circumstances anthropomorphic features of robots facilitate HRI. The application of anthropomorphic design features is widely assumed to facilitate human-robot interaction (HRI). However, a considerable number of study results point in the opposite direction. There is currently no comprehensive common ground on the circumstances under which anthropomorphism promotes interaction with robots. Our meta-analysis aims to close this gap. A total of 4856 abstracts were scanned. After an extensive evaluation, 78 studies involving around 6000 participants and 187 effect sizes were included in this meta-analysis. The majority of the studies addressed effects on perceptual aspects of robots. In addition, effects on attitudinal, affective, and behavioral aspects were also investigated. Overall, a medium positive effect size was found, indicating a beneficial effect of anthropomorphic design features on human-related outcomes. However, closer scrutiny of the lowest variable level revealed no positive effect for perceived safety, empathy, and task performance. Moreover, the analysis suggests that positive effects of anthropomorphism depend heavily on various moderators. For example, anthropomorphism was, in contrast to other fields of application, constantly facilitating social HRI. The results of this analysis provide insights into how design features can be used to improve the quality of HRI. Moreover, they reveal areas in which more research is needed before any clear conclusions about the effects of anthropomorphic robot design can be drawn.
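For readers unfamiliar with how 187 effect sizes get combined into one "medium positive effect", here is a minimal DerSimonian-Laird random-effects pooling sketch. The effect sizes and variances are invented, and real meta-analytic practice layers moderator (meta-regression) models on top.

```python
import numpy as np

def random_effects_pool(effects, variances):
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1.0 / variances                              # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)           # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)    # between-study variance
    w_star = 1.0 / (variances + tau2)                # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

d = [0.45, 0.10, 0.62, -0.05, 0.30]                  # toy standardized effects
v = [0.02, 0.03, 0.05, 0.04, 0.02]                   # their sampling variances
pooled, se, tau2 = random_effects_pool(d, v)
print(f"pooled d = {pooled:.2f} (SE {se:.2f}), tau^2 = {tau2:.3f}")
```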
… Collaborative robots (cobots) are increasingly … trust, hindering successful human-cobot collaboration. Concurrently, we witness people empathizing with and treating social robots as …
… induces human qualities, which allow people to accept robots as companions with trust … anthropomorphism level of robotic hands and their performance in hand-over-hand collaboration …
… -robot team. The current research systematically investigates trust in cooperative human-robot teams, where the robotic … In creating anthropomorphic robots, our goal is to design a robot …
… With collaborative robots (cobots) entering production sites and allowing factory workers … -robot trust at the workplace are gaining relevance and complexity. It is widely believed that trust …
This study aims to elucidate the mechanisms through which service robot anthropomorphism and employee self-efficacy influence human-robot collaborative performance in the service industry. A conceptual model was developed in which perceived usefulness, perceived ease of use, perceived competence, and collaborative intention operate as multiple mediators, clarifying how robot-assisted collaboration enhances performance. Using structural equation modeling, we analyzed data from 418 valid questionnaires collected from employees in the hospitality and catering sectors. The results indicate that both service robot anthropomorphism and employee self-efficacy significantly enhance employees' perceptions of usefulness, ease of use, and competence. These perceptions, in turn, positively affect collaborative intention, which subsequently improves collaborative performance. Moreover, perceived usefulness, ease of use, and competence jointly mediate the relationships between the antecedent variables (anthropomorphism and self-efficacy) and collaborative performance. In summary, the findings elucidate a sequential mediation pathway: robot anthropomorphism and employee self-efficacy boost key perceptions, thereby fostering collaborative intention and ultimately enhancing performance. The study provides theoretical insights into the psychological mechanisms through which anthropomorphic design features and employee self-efficacy shape effective human-robot collaboration and offers practical guidance for the successful integration of service robots into service operations.
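The sequential mediation pathway described above maps directly onto a structural equation model. Below is a sketch in lavaan-style syntax using the semopy package; the column names and input file are placeholders for the study's 418 questionnaires, and the exact model specification is an assumption rather than the authors' published model.

```python
import pandas as pd
import semopy

# Antecedents -> parallel perception mediators -> intention -> performance.
desc = """
usefulness ~ anthropomorphism + self_efficacy
ease_of_use ~ anthropomorphism + self_efficacy
competence ~ anthropomorphism + self_efficacy
intention ~ usefulness + ease_of_use + competence
performance ~ intention
"""

df = pd.read_csv("survey_responses.csv")  # hypothetical scale scores
model = semopy.Model(desc)
model.fit(df)
print(model.inspect())                    # path estimates and p-values
```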
With the increasing deployment of robots to support humans in various activities, a crucial factor that has surfaced as a precondition for successful human-robot interaction (HRI) is the human’s level of trust in the robotic companion. A phenomenon that has recently shifted into the foreground for its potential to influence cognitive and affective dimensions in humans is gamification. However, there is a dearth of knowledge whether and how gamification can be employed to effectively cultivate trust in HRI. The present study investigates and compares the effects of three design interventions (i.e., non-gamified vs. gameful design vs. playful design) on cognitive and affective trust between humans and an autonomous mobile collaborative robot (cobot) in a virtual reality (VR) training experiment. The results reveal that affective trust and specific trust antecedents (i.e., a robot’s likability and perceived intelligence) are most significantly developed via playful design, revealing the importance of incorporating playful elements into a robot’s appearance, demeanor, and interaction to establish an emotional connection and trust in HRI.
Anthropomorphism, acceptance and value co-creation with humanoid retail service robots: a moderated mediation model from cognitive and emotional trust perspective …
… a humanoid-looking robot would gain more trust and be better … robots shape perception and acceptance, too. The field of … robots) influence the performance of hybrid collaboration and …
Trust is vital to promoting human and robot collaboration, but like human teammates, robots make mistakes that undermine trust. As a result, a human's perception of his or her robot teammate's trustworthiness can dramatically decrease [1], [2], [3], [4]. Trustworthiness consists of three distinct dimensions: ability (i.e., competency), benevolence (i.e., concern for the trustor), and integrity (i.e., honesty) [5], [6]. Taken together, decreases in trustworthiness decrease trust in the robot [7]. To address this, we conducted a 2 (high vs. low anthropomorphism) x 4 (trust repair strategies) between-subjects experiment. Preliminary results of the first 164 participants (between 19 and 24 per cell) highlight which repair strategies are effective relative to ability, integrity, and benevolence and to the robot's anthropomorphism. Overall, this paper contributes to the HRI trust repair literature.
The transition from Industry 4.0 to Industry 5.0 highlights recent European efforts to design intelligent devices, systems, and automation that can work alongside human intelligence and enhance human capabilities. In this vision, human–machine interaction (HMI) goes beyond simply deploying machines, such as autonomous robots, for economic advantage. It requires societal and educational shifts toward a human-centric research vision, revising how we perceive technological advancements to improve the benefits and convenience for individuals. Furthermore, it also requires determining which priority is given to user preferences and needs to feel safe while collaborating with autonomous intelligent systems. This proposed human-centric vision aims to enhance human creativity and problem-solving abilities by leveraging machine precision and data processing, all while protecting human agency. Aligned with this perspective, we conducted a systematic literature review focusing on trust and trustworthiness in relation to characteristics of humans and systems in human–robot interaction (HRI). Our research explores the aspects that impact the potential for designing and fostering machine trustworthiness from a human-centered standpoint. A systematic analysis was conducted to review 34 articles in recent HRI-related studies. Then, through a standardized screening, we identified and categorized factors influencing trust in automation that can act as trust barriers and facilitators when implementing autonomous intelligent systems. Our study comments on the application areas in which trust is considered, how it is conceptualized, and how it is evaluated within the field. Our analysis underscores the significance of examining users’ trust and the related factors impacting it as foundational elements for promoting secure and trustworthy HRI.
… Will people trust robots to perform operations that the robots are capable of, without oversight? If things go wrong, will people take appropriate responsibility to correct the problem, or …
Robots are more and more widely used in work scenes, but few studies have discussed the influence of robots on front-line employees. This study aims to investigate whether the transparency and anthropomorphism of robots affect employees' acceptance of robots, and its influencing mechanism, from the perspective of social identity. We test our hypotheses in two studies. Study 1 examined the main effect of robot transparency on employees' acceptance and the mediating role of human-robot trust, while Study 2 further examined the main and mediating effects, as well as the moderating role of robot anthropomorphism. The findings revealed that robot transparency positively affects employees' acceptance, that cognition-based trust mediates the relationship, and that the mediating effect of affect-based trust is not significant. Robot anthropomorphism moderates the relationship between transparency and cognition-based trust, and it also moderates the mediating effect of cognition-based trust between transparency and employees' acceptance.
… agents should apologize to recover trust and how the effectiveness of the apology is different when the agent is … of the Computers-Are-Social-Actors paradigm and automation bias. A 2 (…
… computers, in order to effectively interact with those agents. … of weak anthropomorphism [4] tell us that anthropomorphism can … how anthropomorphism is similar and different from trust in …
… from low anthropomorphism (mostly an image of a computer, … increased trust only for reliable anthropomorphic agents, but … , anthropomorphic agents still exhibited lower trust resilience …
Modern conversational agents such as Alexa and Google Assistant represent significant progress in speech recognition, natural language processing, and speech synthesis. But as these agents have grown more realistic, concerns have been raised over how their social nature might unconsciously shape our interactions with them. Through a survey of 500 voice assistant users, we explore whether users' relationships with their voice assistants can be quantified using the same metrics as social, interpersonal relationships; as well as if this correlates with how much they trust their devices and the extent to which they anthropomorphise them. Using Knapp's staircase model of human relationships, we find that not only can human-device interactions be modelled in this way, but also that relationship development with voice assistants correlates with increased trust and anthropomorphism.
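The core analysis above reduces to correlating a relationship-stage score (after Knapp's staircase model) with trust and anthropomorphism ratings. A toy check with simulated numbers, since the actual scale construction is assumed here:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
stage = rng.integers(1, 11, size=500)        # 1-10 staircase stage score
trust = stage + rng.normal(0, 2, size=500)   # simulated trust ratings
anthro = 0.8 * stage + rng.normal(0, 2, size=500)

print(spearmanr(stage, trust))    # rank correlation: stage vs trust
print(spearmanr(stage, anthro))   # rank correlation: stage vs anthropomorphism
```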
… agents. Drawing upon the stimulus–organism–response (SOR) framework, focus is placed on how the anthropomorphic attributes of chatbots influence consumers’ perceived trust in …
Recommendation systems (RSs) leverage data and algorithms to generate a set of suggestions to reduce consumers’ efforts and assist their decisions. In this study, we examine how different framings of recommendations trigger people’s anthropomorphic perceptions of RSs and therefore affect users’ attitudes in an online experiment. Participants used and evaluated one of four versions of a web-based wine RS with different source framings (i.e. “recommendation by an algorithm,” “recommendation by an AI assistant,” “recommendation by knowledge generated from similar people,” no description). Results showed that different source framings generated different levels of perceived anthropomorphism. Participants indicated greater trust in the recommendations and greater confidence in making choices based on the recommendations when they perceived an RS as highly anthropomorphic; however, higher perceived anthropomorphism of an RS led to a lower willingness to disclose personal information to the RS.
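With four between-subjects framings, a first-pass test of the claim that "different source framings generated different levels of perceived anthropomorphism" is a one-way ANOVA. Group means and sizes below are invented for illustration:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(3)
framings = {                       # simulated perceived-anthropomorphism ratings
    "algorithm": rng.normal(3.0, 1.0, 50),
    "AI assistant": rng.normal(3.6, 1.0, 50),
    "similar people": rng.normal(3.3, 1.0, 50),
    "no description": rng.normal(2.9, 1.0, 50),
}
f_stat, p = f_oneway(*framings.values())
print(f"F = {f_stat:.2f}, p = {p:.4f}")
```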
Recently, intelligent in-vehicle voice agents (IVVAs) that use natural language processing (NLP) and are capable of holding a conversation have been introduced in autonomous vehicles (AVs). The IVVA is expanding its role not only to provide driving information and vehicle condition updates to ensure safety, but also to communicate and empathize with drivers like a friend in order to provide a more enjoyable driving experience. Accordingly, various anthropomorphic techniques have been applied to IVVAs and their effects evaluated. There is a tendency to focus on identifying whether anthropomorphic techniques are effective or not, but consideration of the autonomous driving contexts (ADCs) has been insufficient. Therefore, this study compares and evaluates the effects of the ADC (e.g., emergency stop, navigation, and casual conversation) on the interaction experience (specifically, intimacy, trust, and intention to use) in human-IVVA conversations by analyzing two IVVAs with different levels of anthropomorphism. As a result, the IVVA with the higher level of anthropomorphism encouraged greater intimacy. An interaction effect was confirmed based on the ADC and the level of anthropomorphism. In addition, regarding trust and intention to use, the IVVA with greater anthropomorphism was evaluated with a higher trust level and a stronger intention to use in an emergency stop situation. But the IVVAs did not always provide a positive experience in other driving contexts. The results of this study suggest that when designing an IVVA for AVs, it is necessary to use a conversation strategy appropriate to the situation by recognizing the ADC, rather than simply increasing anthropomorphism.
Inner speech is an essential but also elusive human psychological process which refers to an everyday covert internal conversation with oneself. We argue that programming a robot with an overt self-talk system, which simulates human inner speech, might enhance human trust by improving robot transparency and anthropomorphism. For this reason, this work aims to investigate whether a robot's inner speech, here intended as overt self-talk, affects human trust and anthropomorphism when human and robot cooperate. A group of participants was engaged in collaboration with the robot. During cooperation, the robot talks to itself. To evaluate whether the robot's inner speech influences human trust, two questionnaires were administered to each participant before (pre-test) and after (post-test) the cooperative session with the robot. Preliminary results evidenced differences between the answers of participants in the pre-test and post-test assessment, suggesting that the robot's inner speech influences human trust. Indeed, participants' levels of trust and perception of the robot's anthropomorphic features increased after the experimental interaction with the robot.
Anthropomorphic design is routinely used to make conversational agents more approachable and engaging. Yet its influence on users' perceptions remains poorly understood. Drawing on psychological theories, we propose that anthropomorphism influences risk perception via two complementary forms of trust, and that domain knowledge moderates these relationships. To test our model, we conducted a large-scale online experiment (N = 1,256) on a financial decision-support system implementing different anthropomorphic designs. We found that anthropomorphism indirectly reduces risk perception by increasing both cognitive and affective trust. Domain knowledge moderates these paths: participants with low financial knowledge experience a negative indirect effect of perceived anthropomorphism on risk perception via cognitive trust, whereas those with high financial knowledge exhibit a positive direct and indirect effect. We discuss theoretical contributions to human-AI interaction and design implications for calibrating trust in anthropomorphic decision-support systems for responsible AI.
As AI systems are increasingly involved in decision making, it also becomes important that they elicit appropriate levels of trust from their users. To achieve this, it is first important to understand which factors influence trust in AI. We identify that a research gap exists regarding the role of personal values in trust in AI. Therefore, this paper studies how human and agent Value Similarity (VS) influences a human's trust in that agent. To explore this, 89 participants teamed up with five different agents, which were designed with varying levels of value similarity to that of the participants. In a within-subjects, scenario-based experiment, agents gave suggestions on what to do when entering the building to save a hostage. We analyzed the agent's scores on subjective value similarity, trust and qualitative data from open-ended questions. Our results show that agents rated as having more similar values also scored higher on trust, indicating a positive effect between the two. With this result, we add to the existing understanding of human-agent trust by providing insight into the role of value-similarity.
Interfaces for interacting with large language models (LLMs) are often designed to mimic human conversations, typically presenting a single response to user queries. This design choice can obscure the probabilistic and predictive nature of these models, potentially fostering undue trust and over-anthropomorphization of the underlying model. In this paper, we investigate (i) the effect of displaying multiple responses simultaneously as a countermeasure to these issues, and (ii) how a cognitive support mechanism (highlighting structural and semantic similarities across responses) helps users deal with the increased cognitive load of that intervention. We conducted a within-subjects study in which participants inspected responses generated by an LLM under three conditions: one response, ten responses with cognitive support, and ten responses without cognitive support. Participants then answered questions about workload, trust and reliance, and anthropomorphization. We conclude by reporting the results of these studies and discussing future work and design opportunities for future LLM interfaces.
Over a billion users globally interact with AI systems engineered to mimic human traits. This development raises concerns that anthropomorphism, the attribution of human characteristics to AI, may foster over-reliance and misplaced trust. Yet, causal effects of humanlike AI design on users remain untested in ecologically valid, cross-cultural settings, leaving policy discussions to rely on theoretical assumptions derived largely from Western populations. Here we conducted two experiments (N=3,500) across ten countries representing a wide cultural spectrum, involving real-time, open-ended interactions with a state-of-the-art chatbot. We found that users evaluate human-likeness based on pragmatic interactional cues (conversation flow, response speed, perspective-taking) rather than abstract theory-driven attributes emphasized in academic discourse (e.g., sentience, consciousness). Furthermore, while experimentally increasing a chatbot's human-likeness reliably increased anthropomorphism across all sampled countries, it did not universally increase trust or engagement. Instead, effects were culturally contingent; design choices fostering engagement or trust in one country may reduce them in another. These findings challenge prevailing assumptions that humanlike AI poses uniform psychological risks and necessarily increases trust. Risk is not inherent to humanlike design but emerges from its interplay with cultural context. Consequently, governance frameworks must move beyond universalist approaches to account for this global heterogeneity.
Recent research has supported that system explainability improves user trust and willingness to use medical AI for diagnostic support. In this paper, we use chest disease diagnosis based on X-ray images as a case study to investigate user trust and reliance. Building on explainability, we propose a support system where users (radiologists) can view causal explanations for final decisions. After observing these causal explanations, users provided their opinions of the model predictions and could correct explanations if they did not agree. We measured user trust as the agreement between the model's and the radiologist's diagnosis, as well as the radiologists' feedback on the model explanations. Additionally, they reported their trust in the system. We tested our model on the CXR-Eye dataset and it achieved an overall accuracy of 74.1%. However, the experts in our user study agreed with the model for only 46.4% of the cases, indicating the necessity of improving the trust. The self-reported trust score was 3.2 on a scale of 1.0 to 5.0, showing that the users tended to trust the model but that trust still needs to be enhanced.
This paper investigates the influence of anthropomorphized descriptions of so-called "AI" (artificial intelligence) systems on people's self-assessment of trust in the system. Building on prior work, we define four categories of anthropomorphization (1. Properties of a cognizer, 2. Agency, 3. Biological metaphors, and 4. Properties of a communicator). We use a survey-based approach (n=954) to investigate whether participants are likely to trust one of two (fictitious) "AI" systems by randomly assigning people to see either an anthropomorphized or a de-anthropomorphized description of the systems. We find that participants are no more likely to trust anthropomorphized over de-anthropomorphized product descriptions overall. The type of product or system in combination with different anthropomorphic categories appears to exert greater influence on trust than anthropomorphizing language alone, and age is the only demographic factor that significantly correlates with people's preference for anthropomorphized or de-anthropomorphized descriptions. When elaborating on their choices, participants highlight factors such as the lesser of two evils, lower- or higher-stakes contexts, and human favoritism as driving motivations when choosing between product A and B, irrespective of whether they saw an anthropomorphized or a de-anthropomorphized description of the product. Our results suggest that "anthropomorphism" in "AI" descriptions is an aggregate concept that may influence different groups differently, and provide nuance to the discussion of whether anthropomorphization leads to higher trust and over-reliance by the general public in systems sold as "AI".
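The headline null result above ("no more likely to trust anthropomorphized over de-anthropomorphized descriptions") comes down to testing whether the trust choice is independent of the randomly assigned description type, for instance with a chi-square test of independence. The counts below are invented for illustration:

```python
from scipy.stats import chi2_contingency

# Rows: description seen (anthropomorphized / de-anthropomorphized).
# Columns: which fictitious system the participant said they would trust.
table = [[243, 237],
         [235, 239]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")  # large p: no detectable difference
```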
Robots are increasingly deployed in spaces shared with humans, including home settings and industrial environments. In these environments, the interaction between humans and robots (HRI) is crucial for safety, legibility, and efficiency. A key factor in HRI is trust, which modulates the acceptance of the system. Anthropomorphism has been shown to modulate trust development in a robot, but robots in industrial environments are not usually anthropomorphic. We designed a simple interaction in an industrial environment in which an anthropomorphic mock driver (ARMoD) robot simulates driving an autonomous guided vehicle (AGV). The task consisted of a human crossing paths with the AGV, with or without the ARMoD mounted on the top, in a narrow corridor. The human and the system needed to negotiate trajectories when crossing paths, meaning that the human had to attend to the trajectory of the robot to avoid a collision with it. There was a significant increase in the reported trust scores in the condition where the ARMoD was present, showing that the presence of an anthropomorphic robot is enough to modulate trust, even in limited interactions such as the one we present here.
Trust is not just a cognitive issue but also an emotional one, yet research on human-AI interactions has primarily focused on the cognitive route of trust development. Recent work has highlighted the importance of studying affective trust towards AI, especially in the context of emerging human-like LLM-powered conversational agents. However, there is a lack of validated and generalizable measures for the two-dimensional construct of trust in AI agents. To address this gap, we developed and validated a set of 27-item semantic differential scales for affective and cognitive trust through a scenario-based survey study. We then further validated and applied the scale through an experiment study. Our empirical findings showed how the emotional and cognitive aspects of trust interact with each other and collectively shape a person's overall trust in AI agents. Our study methodology and findings also provide insights into the capability of state-of-the-art LLMs to foster trust through different routes.
There is no 'ordinary' when it comes to AI. The human-AI experience is extraordinarily complex and specific to each person, yet dominant measures such as usability scales and engagement metrics flatten away nuance. We argue for AI phenomenology: a research stance that asks "How did it feel?" beyond the standard questions of "How well did it perform?" when interacting with AI systems. AI phenomenology acts as a paradigm for bidirectional human-AI alignment as it foregrounds users' first-person perceptions and interpretations of AI systems over time. We motivate AI phenomenology as a framework that captures how alignment is experienced, negotiated, and updated between users and AI systems. Tracing a lineage from Husserl through postphenomenology to Actor-Network Theory, and grounding our argument in three studies (two longitudinal studies with "Day", an AI companion, and a multi-method study of agentic AI in software engineering), we contribute a set of replicable methodological toolkits for conducting AI phenomenology research: instruments for capturing lived experience across personal and professional contexts, three design concepts (translucent design, agency-aware value alignment, temporal co-evolution tracking), and a concrete research agenda. We offer this toolkit not as a new paradigm but as a practical scaffold that researchers can adapt as AI systems, and the humans who live alongside them, continue to co-evolve.
Many important decisions in daily life are made with the help of advisors, e.g., decisions about medical treatments or financial investments. Whereas in the past, advice has often been received from human experts, friends, or family, advisors based on artificial intelligence (AI) have become more and more present nowadays. Typically, the advice generated by AI is judged by a human and either deemed reliable or rejected. However, recent work has shown that AI advice is not always beneficial, as humans have been shown to be unable to ignore incorrect AI advice, essentially representing an over-reliance on AI. Therefore, the aspired goal should be to enable humans not to rely on AI advice blindly but rather to distinguish its quality and act upon it to make better decisions. Specifically, that means that humans should rely on the AI in the presence of correct advice and self-rely when confronted with incorrect advice, i.e., establish appropriate reliance (AR) on AI advice on a case-by-case basis. Current research lacks a metric for AR. This prevents a rigorous evaluation of factors impacting AR and hinders further development of human-AI decision-making. Therefore, based on the literature, we derive a measurement concept of AR. We propose to view AR as a two-dimensional construct that measures the ability to discriminate advice quality and behave accordingly. In this article, we derive the measurement concept, illustrate its application, and outline potential future research.
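One plausible reading of the two-dimensional AR construct is sketched below: one dimension counts how often people switch to the AI when it is right and they were initially wrong, the other how often they stick with themselves when the AI is wrong. The field names and exact formulas are illustrative assumptions, not necessarily the authors' definitions.

```python
def appropriate_reliance(trials):
    """trials: dicts with keys initial_correct, ai_correct, final_correct."""
    switch_ops = [t for t in trials if not t["initial_correct"] and t["ai_correct"]]
    resist_ops = [t for t in trials if t["initial_correct"] and not t["ai_correct"]]
    rely = (sum(t["final_correct"] for t in switch_ops) / len(switch_ops)
            if switch_ops else None)       # took correct advice when needed
    self_rely = (sum(t["final_correct"] for t in resist_ops) / len(resist_ops)
                 if resist_ops else None)  # resisted incorrect advice
    return rely, self_rely

trials = [
    {"initial_correct": False, "ai_correct": True,  "final_correct": True},
    {"initial_correct": True,  "ai_correct": False, "final_correct": True},
    {"initial_correct": True,  "ai_correct": False, "final_correct": False},
]
print(appropriate_reliance(trials))        # -> (1.0, 0.5)
```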
AI systems powered by large language models can act as capable assistants for writing and editing. In these tasks, the AI system acts as a co-creative partner, making novel contributions to an artifact-under-creation alongside its human partner(s). One question that arises in these scenarios is the extent to which AI should be credited for its contributions. We examined knowledge workers' views of attribution through a survey study (N=155) and found that they assigned different levels of credit across different contribution types, amounts, and initiative. Compared to a human partner, we observed a consistent pattern in which AI was assigned less credit for equivalent contributions. Participants felt that disclosing AI involvement was important and used a variety of criteria to make attribution judgments, including the quality of contributions, personal values, and technology considerations. Our results motivate and inform new approaches for crediting AI contributions to co-created work.
Within the context of human-robot interaction (HRI), Theory of Mind (ToM) is intended to serve as a user-friendly backend to the interface of robotic systems, enabling robots to infer and respond to human mental states. When integrated into robots, ToM allows them to adapt their internal models to users' behaviors, enhancing the interpretability and predictability of their actions. Similarly, Explainable Artificial Intelligence (XAI) aims to make AI systems transparent and interpretable, allowing humans to understand and interact with them effectively. Since ToM in HRI serves related purposes, we propose to consider ToM as a form of XAI and evaluate it through the eValuation XAI (VXAI) framework and its seven desiderata. This paper identifies a critical gap in the application of ToM within HRI, as existing methods rarely assess the extent to which explanations correspond to the robot's actual internal reasoning. To address this limitation, we propose to integrate ToM within XAI frameworks. By embedding ToM principles inside XAI, we argue for a shift in perspective, as current XAI research focuses predominantly on the AI system itself and often lacks user-centered explanations. Incorporating ToM would enable a change in focus, prioritizing the user's informational needs and perspective.
In this preliminary work, we offer an initial disambiguation of the theoretical concepts of anthropomorphism and anthropomimesis in Human-Robot Interaction (HRI) and social robotics. We define anthropomorphism as users perceiving human-like qualities in robots, and anthropomimesis as robot developers designing human-like features into robots. This contribution aims to provide a clarification and exploration of these concepts for future HRI scholarship, particularly regarding the party responsible for human-like qualities: the robot perceiver for anthropomorphism, and the robot designer for anthropomimesis. We provide this contribution so that researchers can build on these disambiguated theoretical concepts for future robot design and evaluation.
Understanding each other is the key to success in collaboration. For humans, attributing mental states to others, the theory of mind, provides the crucial advantage. We argue for formulating human-AI interaction as a multi-agent problem, endowing AI with a computational theory of mind to understand and anticipate the user. To differentiate the approach from previous work, we introduce a categorisation of user modelling approaches based on the level of agency learnt in the interaction. We describe our recent work in using nested multi-agent modelling to formulate user models for multi-armed bandit based interactive AI systems, including a proof-of-concept user study.
Design fictions allow us to prototype the future. They enable us to interrogate emerging or non-existent technologies and examine their implications. We present three design fictions that probe the potential consequences of operationalizing a mutual theory of mind (MToM) between human users and one (or more) AI agents. We use these fictions to explore many aspects of MToM, including how models of the other party are shaped through interaction, how discrepancies between these models lead to breakdowns, and how models of a human's knowledge and skills enable AI agents to act in their stead. We examine these aspects through two lenses: a utopian lens in which MToM enhances human-human interactions and leads to synergistic human-AI collaborations, and a dystopian lens in which a faulty or misaligned MToM leads to problematic outcomes. Our work provides an aspirational vision for human-centered MToM research while simultaneously warning of the consequences when implemented incorrectly.
Large language models (LLMs) are transforming human-computer interaction and conceptions of artificial intelligence (AI) with their impressive capacities for conversing and reasoning in natural language. There is growing interest in whether LLMs have theory of mind (ToM): the ability to reason about the mental and emotional states of others that is core to human social intelligence. As LLMs are integrated into the fabric of our personal, professional and social lives and given greater agency to make decisions with real-world consequences, there is a critical need to understand how they can be aligned with human values. ToM seems to be a promising direction of inquiry in this regard. Following the literature on the role and impacts of human ToM, this paper identifies key areas in which LLM ToM will show up in human-LLM interactions at individual and group levels, and what opportunities and risks for alignment are raised in each. On the individual level, the paper considers how LLM ToM might manifest in goal specification, conversational adaptation, empathy and anthropomorphism. On the group level, it considers how LLM ToM might facilitate collective alignment, cooperation or competition, and moral judgement-making. The paper lays out a broad spectrum of potential implications and suggests the most pressing areas for future research.
With large language models (LLMs) becoming increasingly prevalent in daily life, so too has the tendency to attribute to them human-like minds and emotions, or anthropomorphize them. Here, we investigate dimensions people use to anthropomorphize and attribute trust toward LLMs across more than 2,000 human-LLM interactions. Participants (N=115) engaged with LLM chatbots systematically varied in warmth (friendliness), competence (capability, coherence), and empathy (cognitive and affective). Warmth and cognitive empathy significantly predicted perceptions on all outcomes (perceived anthropomorphism, trust, similarity, relational closeness, frustration, usefulness), while competence predicted all outcomes except for anthropomorphism. Affective empathy primarily predicted perceived relational measures, but did not predict the epistemic outcomes. Topic sub-analyses showed that more subjective, personally relevant topics (e.g., relationship advice) amplified these effects, producing greater human-likeness and relational connection with the LLM than did objective topics. Together, these findings reveal that warmth, competence, and empathy are key dimensions through which people attribute relational and epistemic perceptions to artificial agents.
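The prediction claims above correspond to regressing each outcome on the manipulated warmth, competence, and empathy levels. Here is a simplified OLS sketch with simulated data and placeholder column names; the study's 2,000+ repeated interactions would properly call for mixed-effects models with participant random effects:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 300
df = pd.DataFrame({
    "warmth": rng.integers(0, 2, n),              # manipulated low/high
    "competence": rng.integers(0, 2, n),
    "cognitive_empathy": rng.integers(0, 2, n),
})
df["trust"] = (0.6 * df["warmth"] + 0.8 * df["competence"]
               + 0.5 * df["cognitive_empathy"] + rng.normal(0, 1, n))

fit = smf.ols("trust ~ warmth + competence + cognitive_empathy", data=df).fit()
print(fit.params)   # which manipulated dimensions predict trust
```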
When users perceive AI systems as mindful, independent agents, they hold them responsible instead of the AI experts who created and designed these systems. So far, it has not been studied whether explanations support this shift in responsibility through the use of mind-attributing verbs like "to think". To better understand the prevalence of mind-attributing explanations, we analyse AI explanations in 3,533 explainable AI (XAI) research articles from the Semantic Scholar Open Research Corpus (S2ORC). Using methods from semantic shift detection, we identify three dominant types of mind attribution: (1) metaphorical (e.g. "to learn" or "to predict"), (2) awareness (e.g. "to consider"), and (3) agency (e.g. "to make decisions"). We then analyse the impact of mind-attributing explanations on awareness and responsibility in a vignette-based experiment with 199 participants. We find that participants who were given a mind-attributing explanation were more likely to rate the AI system as aware of the harm it caused. Moreover, the mind-attributing explanation had a responsibility-concealing effect: considering the AI experts' involvement led to reduced ratings of AI responsibility for participants who were given a non-mind-attributing or no explanation. In contrast, participants who read the mind-attributing explanation still held the AI system responsible despite considering the AI experts' involvement. Taken together, our work underlines the need to carefully phrase explanations about AI systems in scientific writing to reduce mind attribution and clearly communicate human responsibility.
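A deliberately simplified illustration of the three mind-attribution types just listed, flagged by keyword matching; the study itself used semantic shift detection over 3,533 articles, so the toy lexicon and crude stemming below are assumptions for demonstration only:

```python
import re

MIND_VERBS = {                       # toy lexicon per attribution type
    "metaphorical": {"learn", "predict"},
    "awareness": {"consider", "notice"},
    "agency": {"decide", "choose"},
}

def mind_attribution_counts(text):
    tokens = re.findall(r"[a-z]+", text.lower())
    return {kind: sum(tok.rstrip("s") in verbs for tok in tokens)
            for kind, verbs in MIND_VERBS.items()}

example = "The model considers context before it decides which label to predict."
print(mind_attribution_counts(example))
# -> {'metaphorical': 1, 'awareness': 1, 'agency': 1}
```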
Theory of Mind (ToM), the ability to attribute mental states to others and predict their behaviour, is fundamental to social intelligence. In this paper, we survey studies evaluating behavioural and representational ToM in Large Language Models (LLMs), identify important safety risks from advanced LLM ToM capabilities, and suggest several research directions for effective evaluation and mitigation of these risks.
Recent attention to anthropomorphism (the attribution of human-like qualities to non-human objects or entities) of language technologies like LLMs has sparked renewed discussions about the potential negative impacts of anthropomorphism. To productively discuss the impacts of this anthropomorphism and in what contexts it is appropriate, we need a shared vocabulary for the vast variety of ways that language can be anthropomorphic. In this work, we draw on existing literature and analyze empirical cases of user interactions with language technologies to develop a taxonomy of textual expressions that can contribute to anthropomorphism. We highlight challenges and tensions involved in understanding linguistic anthropomorphism, such as how all language is fundamentally human and how efforts to characterize and shift perceptions of humanness in machines can also dehumanize certain humans. We discuss ways that our taxonomy supports more precise and effective discussions of and decisions about anthropomorphism of language technologies.
Self-disclosure is important to help us feel better, yet is often difficult. This difficulty can arise from how we think people are going to react to our self-disclosure. In this workshop paper, we briefly discuss self-disclosure to conversational user interfaces (CUIs) in relation to various social cues. We then discuss how expressions of uncertainty or representations of a CUI's reasoning could help encourage self-disclosure, by making a CUI's intended "theory of mind" more transparent to users.
Originating in psychology, Theory of Mind (ToM) has attracted significant attention across multiple research communities, especially logic, economics, and robotics. Most psychological work does not aim at formalizing its central concepts, namely goals, intentions, and beliefs, to automate a ToM-based computational process, which, by contrast, has been extensively studied by logicians. In this paper, we offer a different perspective by proposing a computational framework viewed through the lens of game theory. On the one hand, the framework prescribes how to make boundedly rational decisions while maintaining a theory of mind about others (and, recursively, each of the others holding a theory of mind about the rest); on the other hand, it employs statistical techniques and approximate solutions to retain computability of the inherent computational problem.
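The recursive structure described here (each player modeling the others, who in turn model them, down to a bounded depth) can be sketched as level-k reasoning in a two-player normal-form game. The matching-pennies payoffs and the level-0 uniform-belief assumption below are illustrative choices, not the paper's exact formalism.

```python
import numpy as np

# Matching pennies (illustrative): the row player wants to match,
# the column player wants to mismatch.
ROW = np.array([[1, -1], [-1, 1]])  # row player's payoffs
COL = -ROW                           # zero-sum: column player's payoffs

def best_response(payoff: np.ndarray, belief: np.ndarray) -> int:
    """Action maximizing expected payoff under a belief over the opponent's actions."""
    return int(np.argmax(payoff @ belief))

def level_k_action(k: int, my_payoff: np.ndarray, their_payoff: np.ndarray) -> int:
    """my_payoff: rows = my actions, cols = theirs; their_payoff in THEIR perspective."""
    n_theirs = my_payoff.shape[1]
    if k == 0:
        # Level 0 holds no theory of mind: it assumes uniform opponent play.
        return best_response(my_payoff, np.ones(n_theirs) / n_theirs)
    # Recurse: model the other player as a level-(k-1) reasoner about me.
    their_action = level_k_action(k - 1, their_payoff, my_payoff)
    return best_response(my_payoff, np.eye(n_theirs)[their_action])

for k in range(4):  # deeper recursion flips the choice, as mutual modeling deepens
    print(f"level-{k} row player plays action {level_k_action(k, ROW, COL.T)}")
```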
Scientists have traditionally limited the mechanisms of social cognition to one brain, but recent approaches claim that interaction also realizes cognitive work. Experiments under constrained virtual settings revealed that interaction dynamics implicitly guide social cognition. Here we show that embodied social interaction can be constitutive of agency detection and of experiencing another's presence. Pairs of participants moved their "avatars" along an invisible virtual line and could make haptic contact with three identical objects, two of which embodied the other's motions, but only one, the other's avatar, also embodied the other's contact sensor and thereby enabled responsive interaction. Co-regulated interactions were significantly correlated with identifications of the other's avatar and reports of the clearest awareness of the other's presence. These results challenge folk psychological notions about the boundaries of mind, but make sense from evolutionary and developmental perspectives: an extendible mind can offload cognitive work into its environment.
Loneliness is a distressing personal experience and a growing social issue. Social robots could alleviate the pain of loneliness, particularly for those who lack in-person interaction. This paper investigated how the effect of loneliness on the anthropomorphism of social robots differs by robot appearance, and how it influences purchase intention. Participants viewed a video of one of three robots (machine-like, animal-like, and human-like) moving and interacting with a human counterpart. Bootstrapped multiple regression results revealed that although the unique effect of animal-likeness on anthropomorphism was higher than that of human-likeness, lonely individuals' tendency to anthropomorphize the animal-like robot was lower than their tendency to anthropomorphize the human-like one. This moderating effect remained significant after covariates were included. Bootstrapped mediation analysis showed that anthropomorphism had both a positive direct effect on purchase intent and a positive indirect effect mediated by likability. Our results suggest that lonely individuals' tendency to anthropomorphize social robots should not be reduced to one unified inclination. Moreover, by extending the effect of loneliness on anthropomorphism to likability and purchase intent, the current study explored the potential of social robots to be adopted as companions for lonely individuals in real life. Lastly, we discuss the practical implications of the current study for designing social robots.
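The bootstrapped mediation analysis reported here (anthropomorphism raising purchase intent partly via likability) follows a standard indirect-effect recipe, sketched below. The synthetic data, effect sizes, and variable names are illustrative assumptions; only the resample-and-recompute procedure itself is the point.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Purely synthetic data standing in for the paper's measures.
anthro = rng.normal(size=n)                                     # X: anthropomorphism
likability = 0.5 * anthro + rng.normal(size=n)                  # M: mediator
intent = 0.3 * anthro + 0.4 * likability + rng.normal(size=n)   # Y: purchase intent

def ols_slope(x, y):
    """Slope of y ~ x (with intercept), by least squares."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

indirect = []
for _ in range(5000):
    idx = rng.integers(0, n, n)                    # resample cases with replacement
    a = ols_slope(anthro[idx], likability[idx])    # a-path: X -> M
    # b-path: M -> Y, controlling for X
    X = np.column_stack([np.ones(n), anthro[idx], likability[idx]])
    b = np.linalg.lstsq(X, intent[idx], rcond=None)[0][2]
    indirect.append(a * b)

lo, hi = np.percentile(indirect, [2.5, 97.5])      # percentile bootstrap CI
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```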
Social robots have emerged as valuable contributors to individuals' well-being coaching. Notably, their integration into long-term human coaching trials shows particular promise, emphasizing a complementary role alongside human coaches rather than outright replacement. In this context, robots serve as supportive entities during coaching sessions, offering insights based on their knowledge about users' well-being and activity. Traditionally, such insights have been gathered through methods like written self-reports or wearable data visualizations. However, the disclosure of people's information by a robot raises concerns regarding privacy, appropriateness, and trust. To address this, we conducted an initial study with n = 22 participants to quantify their perceptions of privacy regarding disclosures made by a robot coaching assistant. The study was conducted online, presenting participants with six prerecorded scenarios illustrating various types of information disclosure and the robot's role, ranging from active on-demand to proactive communication conditions.
As robotic arms become prevalent in industry, it is crucial to improve levels of trust from human collaborators. Low levels of trust in human-robot interaction can reduce overall performance and prevent full robot utilization. We investigated the potential benefits of using emotional musical prosody to allow the robot to respond emotionally to the user's actions. We tested participants' responses to interacting with a virtual robot arm that acted as a decision agent, helping participants select the next number in a sequence. We compared results from three versions of the application in a between-group experiment, where the robot had different emotional reactions to the user's input depending on whether the user agreed with the robot and whether the user's choice was correct. In all versions, the robot reacted with emotional gestures. One version used prosody-based emotional audio phrases selected from our dataset of singer improvisations, the second version used audio consisting of a single pitch randomly assigned to each emotion, and the final version used no audio, only gestures. Our results showed no significant difference in the percentage of times users from each group agreed with the robot, and no difference in users' agreement with the robot after it made a mistake. However, participants also took a trust survey following the interaction, and we found that the reported trust ratings of the musical prosody group were significantly higher than those of both the single-pitch and no-audio groups.
Robots are increasingly used in shared environments with humans, making effective communication a necessity for successful human-robot interaction. In our work, we study a crucial component: active communication of robot intent. Here, we present an anthropomorphic solution where a humanoid robot communicates the intent of its host robot acting as an "Anthropomorphic Robotic Mock Driver" (ARMoD). We evaluate this approach in two experiments in which participants work alongside a mobile robot on various tasks, while the ARMoD communicates a need for human attention, when required, or gives instructions to collaborate on a joint task. The experiments feature two interaction styles of the ARMoD: a verbal-only mode using only speech and a multimodal mode, additionally including robotic gaze and pointing gestures to support communication and register intent in space. Our results show that the multimodal interaction style, including head movements and eye gaze as well as pointing gestures, leads to more natural fixation behavior. Participants naturally identified and fixated longer on the areas relevant for intent communication, and reacted faster to instructions in collaborative tasks. Our research further indicates that the ARMoD intent communication improves engagement and social interaction with mobile robots in workplace settings.
This paper presents an exploratory study of the relationship between a human's cognitive load, trust, and anthropomorphism during human-robot interaction. To understand the relationship, we created a "Matching the Pair" game that participants could play collaboratively with one of two robot types, Husky or Pepper. The goal was to understand whether humans would trust the robot as a teammate in a game-playing situation that demanded a high level of cognitive load. Using a humanoid vs. a technical robot, we also investigated the impact of physical anthropomorphism, and we furthermore tested the impact of robot error rate on subsequent judgments and behavior. Our results showed an inversely proportional relationship between trust and cognitive load: as participants' cognitive load increased, their ratings of trust decreased. We also found a three-way interaction between robot type, error rate, and participants' ratings of trust. Participants perceived Pepper as more trustworthy than the Husky robot after playing the game with both robots under the high error-rate condition; conversely, Husky was perceived as more trustworthy than Pepper when it was depicted as featuring a low error rate. Our results call for further investigation of the impact of physical anthropomorphism in combination with variable robot error rates.
Increasingly anthropomorphic robot behavioral design could affect trust and cooperation positively. However, studies have shown contradictory results and suggest a task-dependent relationship between robots that display emotions and trust. Therefore, this study analyzes the effect of robots that display human-like emotions on trust, cooperation, and participants' emotions. In the between-group study, participants play the coin entrustment game with an emotional and a non-emotional robot. The results show that the robot that displays emotions induces more anxiety than the neutral robot. Accordingly, the participants trust the emotional robot less and are less likely to cooperate. Furthermore, the perceived intelligence of a robot increases trust, while a desire to outcompete the robot can reduce trust and cooperation. Thus, the design of robots expressing emotions should be task-dependent to avoid adverse effects that reduce trust and cooperation.
This study investigated whether human trust in a social robot with anthropomorphic physicality resembles trust in an AI agent or in a human, in order to clarify how anthropomorphic physicality influences human trust in an agent. We conducted an online experiment using two types of cognitive tasks, calculation and emotion recognition, where participants answered after referring to the answers of an AI agent, a human, or a social robot. During the experiment, the participants rated their trust levels in their partners. As a result, trust in the social robot was similar neither to trust in the AI agent nor to trust in the human; instead, it settled between the two. The results suggest that manipulating anthropomorphic features could help human users appropriately calibrate their trust in an agent.
Large Language Model (LLM) agents have been increasingly adopted as simulation tools to model humans in social science and role-playing applications. However, one fundamental question remains: can LLM agents really simulate human behavior? In this paper, we focus on one critical and elemental behavior in human interactions, trust, and investigate whether LLM agents can simulate human trust behavior. We first find that LLM agents generally exhibit trust behavior, referred to as agent trust, under the framework of Trust Games, which are widely recognized in behavioral economics. Then, we discover that GPT-4 agents manifest high behavioral alignment with humans in terms of trust behavior, indicating the feasibility of simulating human trust behavior with LLM agents. In addition, we probe the biases of agent trust and differences in agent trust towards other LLM agents and humans. We also explore the intrinsic properties of agent trust under conditions including external manipulations and advanced reasoning strategies. Our study provides new insights into the behaviors of LLM agents and the fundamental analogy between LLMs and humans beyond value alignment. We further illustrate broader implications of our discoveries for applications where trust is paramount.
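The Trust Game framework this paper builds on has a simple protocol worth pinning down: the trustor sends part of an endowment, the amount is multiplied in transit, and the trustee returns some share. The sketch below shows that structure under illustrative defaults; llm_invest is a hypothetical stub standing in for an actual prompt-and-parse LLM call, and the endowment, multiplier, and return rule are assumptions, not the paper's exact parameters.

```python
ENDOWMENT = 10   # trustor's starting amount (illustrative)
MULTIPLIER = 3   # sent amount is tripled en route (standard Trust Game form)

def llm_invest(persona: str, endowment: int) -> int:
    """Hypothetical stand-in for prompting an LLM to choose how much to send."""
    # In the paper's setting this would be an LLM completion parsed to an int.
    return endowment // 2

def trust_game(invest_fn, return_rate: float = 0.5) -> dict:
    """One round: trustor sends, amount is multiplied, trustee returns a share."""
    sent = invest_fn("You are the trustor in an investment game.", ENDOWMENT)
    received = sent * MULTIPLIER
    returned = int(received * return_rate)
    return {
        "sent": sent,                               # behavioral measure of trust
        "trustor": ENDOWMENT - sent + returned,     # trustor's final payoff
        "trustee": received - returned,             # trustee's final payoff
    }

print(trust_game(llm_invest))  # {'sent': 5, 'trustor': 12, 'trustee': 8}
```

The "sent" amount is the behavioral trust measure the paper compares between LLM agents and human baselines.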
In the domain of Human-Computer Interaction, focus groups represent a widely utilised yet resource-intensive methodology, often demanding the expertise of skilled moderators and meticulous preparatory efforts. This study introduces the "Focus Agent," a Large Language Model (LLM) powered framework that both simulates a focus group (for data collection) and acts as a moderator in a focus group setting with human participants. To assess the data quality derived from the Focus Agent, we ran five focus group sessions with a total of 23 human participants and deployed the Focus Agent to simulate these discussions with AI participants. Quantitative analysis indicates that the Focus Agent can generate opinions similar to those of human participants. Furthermore, the research identifies some improvements associated with LLMs acting as moderators in focus group discussions that include human participants.
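A minimal sketch of the two roles such a framework plays, a moderator posing questions and simulated participants answering in turn, might look as follows. Here call_llm is a hypothetical stub for any chat-completion API; the prompts, personas, and loop structure are illustrative assumptions rather than the paper's implementation.

```python
def call_llm(system: str, history: list[str]) -> str:
    """Hypothetical LLM call; wire this to a real chat API in practice."""
    return f"[response generated as: {system[:40]}...]"

# Illustrative discussion guide and simulated-participant personas.
QUESTIONS = ["How do you use voice assistants?", "What frustrates you about them?"]
PARTICIPANTS = [f"Simulated participant {i} with a distinct persona." for i in range(3)]

transcript: list[str] = []
for question in QUESTIONS:
    # Moderator role: pose each guide question conversationally.
    prompt = f"Pose this focus-group question conversationally: {question}"
    transcript.append("MODERATOR: " + call_llm("You are a focus group moderator.", [prompt]))
    # Participant role: each persona answers with the running transcript as context.
    for persona in PARTICIPANTS:
        transcript.append("PARTICIPANT: " + call_llm(persona, transcript))

print("\n".join(transcript))
```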
Anthropomorphism, or the attribution of human-like characteristics to non-human entities, has shaped conversations about the impacts and possibilities of technology. We present AnthroScore, an automatic metric of implicit anthropomorphism in language. We use a masked language model to quantify how non-human entities are implicitly framed as human by the surrounding context. We show that AnthroScore corresponds with human judgments of anthropomorphism and dimensions of anthropomorphism described in social science literature. Motivated by concerns of misleading anthropomorphism in computer science discourse, we use AnthroScore to analyze 15 years of research papers and downstream news articles. In research papers, we find that anthropomorphism has steadily increased over time, and that papers related to language models have the most anthropomorphism. Within ACL papers, temporal increases in anthropomorphism are correlated with key neural advancements. Building upon concerns of scientific misinformation in mass media, we identify higher levels of anthropomorphism in news headlines compared to the research papers they cite. Since AnthroScore is lexicon-free, it can be directly applied to a wide range of text sources.
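The core measurement idea is concise enough to sketch: mask the entity mention and ask a masked language model whether human or non-human pronouns are more probable in that slot, scoring the log-ratio. The use of roberta-base, the pronoun sets, and the leading-space tokenization shortcut below are implementation assumptions for illustration, not the authors' released code.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

HUMAN, NONHUMAN = ["he", "she"], ["it"]  # illustrative pronoun sets

def anthro_score(sentence: str, entity: str) -> float:
    """Log-ratio of human vs. non-human pronoun probability at the entity slot."""
    masked = sentence.replace(entity, tokenizer.mask_token, 1)
    inputs = tokenizer(masked, return_tensors="pt")
    mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero()[0]
    with torch.no_grad():
        probs = model(**inputs).logits[0, mask_pos].softmax(dim=-1).squeeze(0)

    def mass(words):
        # Leading-space variants, since RoBERTa's BPE marks word-initial tokens.
        ids = [tokenizer.convert_tokens_to_ids(tokenizer.tokenize(" " + w)[0]) for w in words]
        return probs[ids].sum()

    return torch.log(mass(HUMAN) / mass(NONHUMAN)).item()

# Higher scores indicate more human framing of the masked entity by its context.
print(anthro_score("Because the model wants to be helpful, the model hedges.", "the model"))
```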
The merged, unified grouping forms a parallel structure running from definition to mechanism, then to design and validation, and finally to methodology. It first delimits anthropomorphism and its operationalizable measurement (covering conceptual boundaries, triggering thresholds, and experimental as well as computational measures of anthropomorphism); it then discusses, at the social-cognition level, how mind attribution and ToM drive judgments of intent, morality, and responsibility; next, at the HCI/HRI level, it examines how anthropomorphism affects trust (formation, calibration, and repair) and broader contextual/task outcomes (emotion, cognitive load, risk, and advice-taking); it further covers interaction and social relationships (conversation, multimodality, companionship, and collaborative team processes), while retaining the specific trust-transmission chain of high-stakes government settings. Finally, phenomenological and computational/LLM-assisted methodological innovations are listed separately, offering the field reproducible and more experience-near research paradigms.