Case Studies of AI Controversies in the Digital Society
Algorithmic Bias, Social Discrimination, and Fairness Assessment
Focuses on how AI systems in social decision-making, public policy, and big-data analytics amplify historical biases, producing unfair discrimination and digital divides, and proposes corresponding auditing and fairness-enhancement frameworks.
- AI in Everyday Life: How Algorithmic Systems Shape Social Relations, Opportunity, and Public Trust(O. B. Ayeni, Isabella Musinguzi-Karamukyo, Oluwakemi T. Onibalusi, Oluwajuwon M. Omigbodun, 2026, Societies)
- Artificial Intelligence and Ethical Dimensions of Automated Traffic Enforcement: Implications for Public Health, Healthcare Equity, and Social Justice(Patricia Haley, 2025, Health Economics and Management Review)
- Cultural Bias in Machine Learning Systems: A Philosophical and Empirical Study of Algorithmic Knowledge Production(Nabulongo Ali, Peter Both Goah Wiech, Katwesigye Collins, Specioza Asiimwe, 2026, International Journal of Research and Innovation in Social Science)
- Algorithmic tenancies and the ordinal tenant: digital risk-profiling in England’s private rented sector(Alison Wallace, David Beer, Roger Burrows, Alexandra Ciocănel, James Cussens, 2025, Housing Studies)
- Epistemologies of predictive policing: Mathematical social science, social physics and machine learning(Jens Hälterlein, 2021, Big Data & Society)
- On AI colourisation: algorithms, ancestry, and colour beyond the black box(Lida Zeitlin-Wu, 2025, Visual Studies)
- (Re)mediators of Epistemic Injustice: Generative AI and Hermeneutic Resource Provision in Intimate Partner Violence(Jasmine C Foriest, L. Ajmani, M. de Choudhury, 2026, Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems)
- Diversity as Ethical Infrastructure: Reimagining AI Governance for Justice and Accountability(Achi Iseko, 2025, International Journal of Science, Technology and Society)
- Generalizing Fairness to Generative Language Models via Reformulation of Non-discrimination Criteria(Sara Sterlie, Nina Weng, A. Feragen, 2024, Lecture Notes in Computer Science)
- T2IBias: Uncovering Societal Bias Encoded in the Latent Space of Text-to-Image Generative Models(Abu Sufian, C. Distante, Marco Leo, H. Salam, 2025, No journal)
- Algorithmic bias in anthropomorphic artificial intelligence: Critical perspectives through the practice of women media artists and designers(Caterina Antonopoulou, 2023, Technoetic Arts)
- Investigating What Factors Influence Users’ Rating of Harmful Algorithmic Bias and Discrimination(Sara Kingsley, Jiayin Zhi, Wesley Hanwen Deng, Jaimie Lee, Sizhe Zhang, Motahhare Eslami, Kenneth Holstein, Jason I. Hong, Tianshi Li, Hong Shen, 2024, Proceedings of the AAAI Conference on Human Computation and Crowdsourcing)
- What is the point of fairness?(Cynthia L. Bennett, O. Keyes, 2020, Interactions)
- Algorithmic Bias and Non-Discrimination in Argentina(F. Farinella, 2022, Lex Genetica)
- Teaching Parrots to See Red: Self-Audits of Generative Language Models Overlook Sociotechnical Harms(Evan Shieh, Thema Monroe-White, 2025, Proceedings of the AAAI Symposium Series)
- Algorithmic Bias in Media Content Distribution and Its Influence on Media Consumption: Implications for Diversity, Equity, and Inclusion (DEI)(Chizorom Ebosie Okoronkwo, 2024, International Journal of Social Sciences and Management Review)
- Research on the Impact of Social Media Information Dissemination Mechanism on the Formation and Dissolution of Female Stereotypes(Yijun Liang, He Wen, 2025, SHS Web of Conferences)
- The Gendered, Epistemic Injustices of Generative AI(Isobel Barry, Elise Stephenson, 2025, Australian Feminist Studies)
- A New Era of Artificial Intelligence to Examine Algorithmic Discrimination for Social, Healthcare and Legal System Among Transgender Individuals(Jessy k Jayanth, Thendral Com.L.L.B., R. Raman, S. B.Com.L.L.B., S. Hanushka, Srinivasan BBA.L.L.B., 2025, 2025 International Conference on Emerging Technologies and Innovation for Sustainability (EmergIN))
- Beyond Algorithms: A G.E.N.D.E.R. AI Framework for Advancing Workplace Equity in Automation(A. Maheswari, 2025, International Journal of Global Research Innovations & Technology)
- Racism in the Digital Age: The Impact of Social Media Algorithms on Public Discourse(Ángeles Solanes Corella, Nacho Hernández Moreno, 2025, The Age of Human Rights Journal)
- Human-AI Interactions and Societal Pitfalls(Francisco Castro, Jian Gao, Sébastien Martin, 2023, ACM Conference on Economics and Computation)
- How Do Generative Models Draw a Software Engineer? A Case Study on Stable Diffusion Bias(Tosin Fadahunsi, Giordano d'Aloisio, A. Marco, Federica Sarro, 2025, 2025 IEEE International Conference on Software Analysis, Evolution and Reengineering - Companion (SANER-C))
- Bias in Big Data, Machine Learning and AI: What Lessons for the Digital Humanities?(A. Prescott, 2023, Digital Humanities Quarterly)
- Exclusive Flux: A Review of Flux’s Generation of LGBTQ+ Couples(Lynn Vonderhaar, Kayla V. Taylor, Jennifer Wojton, Omar Ochoa, 2025, Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society)
- The Algorithmic Looking-Glass: A Predictive Model of How Digital Acculturation and Perceived Algorithmic Bias Impact the Well-being of Thailand’s Tai Dam Minority(Panchamaphorn Tamnanwan, Danupon Sangnak, 2025, Asian Journal of Arts and Culture)
- Racially Inclusive Approach to Facial Beauty Modeling Using Machine Learning(Erik Nguyen, Sampson E. Akwafuo, Doina Bein, Blessing Ojeme, 2024, 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM))
- Controlling Bias in Generative AI: Techniques for Fair and Equitable Data Generation in Socially Sensitive Applications(Nimit Bhardwaj, Anvit Bhardwaj, Lavanya Garg, 2025, 2025 3rd International Conference on Integrated Circuits and Communication Systems (ICICACS))
- The Algorithmic Bias in Recommendation Systems and Its Social Impact on User Behavior(2024, International Theory and Practice in Humanities and Social Sciences)
- Because the machine can discriminate: How machine learning serves and transforms biological explanations of human difference(J. W. Lockhart, 2023, Big Data & Society)
The Impact of Generative AI on Creative Industries, Copyright, and Labor Markets
Analyzes the application of generative AI in music, art, and film and television production, examining copyright attribution, structural transformation of the creative industries, the risk of labor displacement, and the protection of creators' labor rights.
- Big data analysis on the attitude of Chinese public toward AI art.(Xiaoyuan Jia, Q. Yin, Chunfeng Li, 2025, Acta Psychologica)
- Creative labor on streaming platforms: a case study of Spotify and Australian musicians(Yutong Li, 2025, Advances in Humanities Research)
- Emergence of a New Aesthetics: the Development of Contemporary Popular Culture in the United States in the Mirror of the Techno-Narratives of the Age of Artificial Intelligence(A. Pilkevych, 2025, Ethnic History of European Nations)
- Cultural Dimensions of Artificial Intelligence in the Ukrainian Musical Landscape(I. Antipina, 2025, Часопис Національної музичної академії України ім.П.І.Чайковського)
- The Misuse of Artificial Intelligence in Imitating the Voices of Public Figures in Songs on Social Media(Ni Kadek Pande Monica Cansica Dewi, Dr. I Nyoman Bagiastra SH MH, 2025, International Journal of Judicial Law)
- “A safe, responsible, and profitable ecosystem of music”: Analyzing perceptions and implementation of generative AI in the music industry(Raquel Campos Valverde, D. Kaye, 2026, New Media & Society)
- AI and Cultural Innovations in South Korea(S. Dongre, 2025, International Journal of Science, Architecture, Technology and Environment)
- Integration of Cognitive Intelligence and Cultural Industries: Current Status, Challenges, and Future Trajectories(Jiaqi Yu, Yonghui Song, Yuxiang Zhao, Lijun Zhang, 2025, Scientific and Social Research)
- Beyond the Black Box: An Analytical Study of AI-Generated Content's Impact on Consumer Engagement and Ethical Co-Creation Issues in Maharashtra, India(Swarnalata Bambhore, Ravindra Gharpure, Rahul Mohare, 2026, International Journal of Research and Innovation in Social Science)
- How Generative AI Went From Innovation to Risk: Discussions in the Korean Public Sphere(Sunghwan Kim, Jaemin Jung, 2025, Media and Communication)
- Do Generative AI Models Output Harm while Representing Non-Western Cultures: Evidence from A Community-Centered Approach(Sourojit Ghosh, Pranav Narayanan Venkit, Sanjana Gautam, Shomir Wilson, Aylin Caliskan, 2024, AAAI/ACM Conference on AI, Ethics, and Society)
- Governance of Generative AI in Creative Work: Consent, Credit, Compensation, and Beyond(Lin Kyi, Amruta Mahuli, M. S. Silberman, Reuben Binns, Jun Zhao, Asia J. Biega, 2025, Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems)
- Who Gets Paid (for) What? The Cultural Political Economy of News Content in Generative AI(Siho Nam, 2024, Emerging Media)
- Authorship, Ownership, and Ethical Issues in AI-generated Research: Implications for Nigerian Academia(Oluchi G. Maduka, Temiloluwa Oyundoyin, Adebayo A. Adejumo, 2025, Journal of Intellectual Property and Information Technology Law (JIPIT))
- Reclaiming the Sonic Archive: AI, Data Sovereignty, and the Future of Mijikenda Music(Linus Wechuli Odeke, A. Kirui, 2025, Journal of Visual and Performing Arts)
- Sociological Age of Technologies: Demurrers in Sierra Leone Intellectual Property Rights Law(Mohamed Bangura, 2026, EJSMT)
- From the Communal Music Making to Deep Learning: AI, Copyright, and the Soul of African Music(A. Kirui, Tolu Owoaje, 2026, African Musicology Online)
- Blurring boundaries: Piracy, algorithmic authorship and creativity among designers in Kenya(G. Gatere, 2025, South African Intellectual Property Law Journal)
- Gen-AI Chatbots as Agentic Moderators: Implications for Innovation, Diversity and Ethics Through Socio-Sustainability(Sarawana Kumar L Krishnasamy, 2025, 2025 International Conference on ICT for Smart Society (ICISS))
- Generative AI may create a socioeconomic tipping point through labour displacement(J. Occhipinti, William Hynes, Ante Prodan, Harris A. Eyre, Roy Green, Sharan Burrow, Marcel Tanner, John Buchanan, Goran Ujdur, Frederic Destrebecq, Christine Song, Steve Carnevale, I. Hickie, Mark Heffernan, 2025, Scientific Reports)
- Algorithmic Management. Theoretical Perspectives and Implications for Organizational Development(Lucian Sfetcu, 2024, Technium Social Sciences Journal)
- Factors Influencing the Usage Behavior of Generative AI Among Shanghai Residents(Qianning Zhang, Jiayu Zhang, Yifei Yang, Jingjing Wang, 2025, Advances in Humanities and Modern Education Research)
- Lay Stakeholder Centric Sociotechnical Mechanisms for Addressing the Impacts of Generative AI(Julia Barnett, 2025, Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society)
- Rethinking industrial relations: Policy ecologies, cultural work and artificial intelligence(Kate MacNeill, Amanda Coles, Gareth Fletcher, 2025, Journal of Industrial Relations)
- Art and artificial intelligence, a window into the future of the evolution of contemporary society(Alejandra Elena Marinaro, 2020, EAI Endorsed Transactions on Creative Technologies)
- A capitalist stranglehold on “artificial intelligence”: a gallop through piracy, privacy invasion, lock-in and a fever dream of democratisation(Aidan Cornelius-Bell, 2026, Fast Capitalism)
- From beats to bytes: electronic dance music, artificial intelligence and teen identity(Linto Thomas, V. V. Kumar, L. K. Jena, 2025, Young Consumers)
- Language, Identity, and Ethics in AI-Driven Art: Perspectives from Human Artists in Digital Environments(A. J. Torres, Jasper Mareece C. Alberto, Angel Pearl J. Guieb, Ayessa DR. Paray, Joseph A. Villarama, 2024, Language, Technology, and Social Media)
- The Illusion of “Authenticity”: Ethical Dilemmas and Aesthetic Imagination in Pop Music Creation in the Age of AI(Sitan Yang, 2025, Journal of Contemporary Art Criticism)
- Ethical dilemmas and cultural tensions: Socio-Psychological effects of AI image art — Value conflicts and adaptation mechanisms in the digital media ecosystem(Jie Liu, 2025, Environment and Social Psychology)
- Human-in-the-Loop Labor Architectures: Reinstating and Reconfiguring Work through Skills, Certification, and Human-Centered AI(V.M Benignus Grero, 2026, SSRN Electronic Journal)
- An Analysis of the Publishing Industry's "Labyrinth of Responsibility" and Governance Paths in the Era of Generative Artificial Intelligence: From the Perspective of Actor-Network Theory(Yuwen Cao, Jiahua Chen, 2025, Communications in Humanities Research)
Digital Public Opinion, Information Integrity, and Human-Machine Trust Mechanisms
Examines filter bubbles produced by algorithmic recommendation, bias amplification, information manipulation, and deepfakes, together with public trust in AI technologies and digital literacy.
- Artificial Intelligence and Hate Speech(Mariam Balavadze, 2025, Journal of Law)
- Impact of AI in Social Media: Addressing Cyber Crimes and Gender Dynamics(Shreyas Kumar, Anisha Menezes, Gaurish Agrawal, Nishika Bajaj, Meenakshi Naren, Sukrit Jindal, 2025, European Conference on Social Media)
- "This is a deepfake!": Celebrity scandals, parodic deepfakes, and a critically speculative ethics of care for fandom research in the age of artificial intelligence(Eva Cheuk-Yin Li, Ka-Wei Pang, 2025, Transformative Works and Cultures)
- Bias in Personalized Social Media Content: Impact on Romanian Generation Z Decision Making(M. Wolff, Cella Buciuman, 2025, European Conference on Social Media)
- Framing and Stigmatization: A Multidimensional Analysis of Anti-Chinese Narratives on Social Media during the COVID-19 Pandemic(Zhou Hui, Akmar Hayati Binti Ahmad Ghazali, Sharil Nizam Bin Sha'ri, 2025, International Journal of Academic Research in Economics and Management Sciences)
- Social media's impact on community building and social movements(Jin young Hwang, 2025, Magna Scientia Advanced Research and Reviews)
- AI-Driven Content Curation and Its Impact on Media Diversity in Social Networks(Gerr Mariia, 2025, Proceedings of the World Conference on Media and Communication)
- The Impact of the Application of Algorithms in Social Media Marketing on Society(Xinwei Wang, 2023, Lecture Notes in Education Psychology and Public Media)
- Exploring Trust and Literacy in Engagement With Generative AI and Science Information Behavior(Torben E. Agergaard, Kristian H. Nielsen, R. Labouriau, A. Fage-Butler, 2026, Media and Communication)
- Synthetic relationships and artificial intimacy: an ethical framework for evaluating the impact of generative-AI on community(Martin Jones, Matthew Schonewille, 2025, Journal of Ethics in Entrepreneurship and Technology)
- Identifying the Public's Beliefs About Generative Artificial Intelligence: A Big Data Approach(Ali B. Mahmoud, V. Kumar, S. Spyropoulou, 2025, IEEE Transactions on Engineering Management)
- "It's Not About Laziness, It's About Efficiency": Youth Perspectives on Generative AI in Higher Education Through the Lens of TikTok(Ioana Literat, Constance de Saint Laurent, Vlad P Glăveanu, Rhea Jaffer, Sonia S. Kim, Sophia Diplacido, 2026, AoIR Selected Papers of Internet Research)
- Dialogues Towards Sociologies of Generative AI(Patrick Baert, Robert Dorschel, M. Hall, I. Higgins, Ella McPherson, Shannon Philip, 2025, Social Science Computer Review)
- The fall and rise of Iruda: Reassembling AI through ethics-in-action(Yubeen Kwon, Sungook Hong, 2025, Social Studies of Science)
- Yet another Source of Dis- & Misinformation, Sociopathy, Hallucinations of AI, or the Case of Solaris? (From the Teacher’s Observations)(A. Korenkov, 2025, Al-Noor Journal for Digital Media Studies)
- Fake news propagates differently from real news even at early stages of spreading(Zilong Zhao, Jichang Zhao, Y. Sano, Orr Levy, H. Takayasu, M. Takayasu, Daqing Li, Junjie Wu, S. Havlin, 2020, EPJ Data Science)
- The Importance of AI-Generated Content Detection in the Future: Societal, Ethical, and Policy Implications(Bharath Kandati, 2026, Journal of Advances in Developmental Research)
Transparency, Governance, and Technical Accountability Frameworks for AI Systems
Focuses on the technological black-box effect and risk governance, exploring how explainable AI (XAI) design, legal regulation, and ethics-review mechanisms can ensure the transparency of AI systems and their compliance with social governance requirements.
- Breaking Boundaries through Collaboration: A Human-Centered Framework for Fair AI Design(Shaozeng Zhang, Ethan Copple, Ana Carolina de Assís Nunes, Ali Behnoudfar, Fuxin Li, 2025, Engineering Studies)
- Neural Network Nebulae: 'Black Boxes’ of Technologies and Object-Lessons from the Opacities of Algorithms(A. Kuznetsov, 2020, Sociology of Power)
- Explainability and Contestability for the Responsible Use of Public Sector AI(Timothée Schmude, 2025, Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems)
- In generative artificial intelligence we trust: unpacking determinants and outcomes for cognitive trust(Minh-Tay Huynh, T. Aichner, 2025, AI & SOCIETY)
- Picking Apart the Black Box: Sociotechnical Contours of Accessibility in AI/ML Software Engineering(Christine T. Wolf, 2020, Advances in Intelligent Systems and Computing)
- Do people believe in Artificial Intelligence? A cross-topic multicultural study(A. Kolasinska, Ivano Lauriola, Giacomo Quadrio, 2019, Proceedings of the 5th EAI International Conference on Smart Objects and Technologies for Social Good)
- Beyond opening up the black box: Investigating the role of algorithmic systems in Wikipedian organizational culture(R. Stuart Geiger, 2017, Big Data & Society)
- Explainable AI in Public Policy: Quantifying Trust and Distrust in Algorithmic Decision-Making across Marginalized Communities(Saad Tehreem, 2025, AI and Machine Learning Advances)
- Between doubt and deference: trust and harm in the risk society(Colin Strong, Tamara Ansons, 2025, Journal of Risk Research)
- Predictive Policing with Neural Networks: A Big Data Approach to Crime Forecasting in Sri Lanka(Hamza Nauzad, Dhanuka Dayawansa, N. Dias, P. Haddela, Samadhi Ratnayake, 2025, 2025 International Research Conference on Smart Computing and Systems Engineering (SCSE))
- Keynote: Surveillance, Power and Accountable AI Systems: Can We Craft an AI Future that Works for Everyone?(Jeanna Neefe Matthews, 2021, 2021 Eighth International Conference on eDemocracy & eGovernment (ICEDEG))
- Social and Ethical Challenges of Artificial Intelligence in Surveillance and Security Systems(Lokeshwari R, Saravanan C, 2025, 2025 9th International Conference on Computational System and Information Technology for Sustainable Solutions (CSITSS))
- Algorithmic criminology(R. Berk, 2012, Security Informatics)
- The Impact of Artificial Intelligence on Social Media Content(Elsir Ali Saad Mohamed, M. Osman, Badur Algasim Mohamed, 2024, Journal of Social Sciences)
- Artificial general intelligence policy: dignity over transparency(Liat Lavi, 2025, Transnational Legal Theory)
- Ethical Implications of AI Applications in Nonprofit and Charity Sectors(Я. Я. Кравчук, 2025, COMPUTER-INTEGRATED TECHNOLOGIES: EDUCATION, SCIENCE, PRODUCTION)
- Data Ethics in the Digital Age: Privacy, Ownership, and Governance(A. Kazanskaia, 2025, NEYA Global Journal of Non-Profit Studies)
- Echo Chambers and Algorithmic Bias: The Homogenization of Online Culture in a Smart Society(Salsa Della Guitara Putri, E. Purnomo, Tiara Khairunissa, 2024, SHS Web of Conferences)
- Research on the Challenges and Coping Strategies of News Authenticity in the Age of Social Media(Yuxin Yang, 2025, Communications in Humanities Research)
- The Default Billion: Google–Apple Search Payments, Platform Power, and the AI Turn in Digital Capitalism(Alex Lee, 2025, Unveiling seven continents yearbook journal)
- Reimagining Social Justice in the Age of AI and Automation: Ethical, Economic, and Cultural Perspectives(Dr. Nabila Qureshi, 2025, Contemporary Thought and Society International Journal)
- Social Science in the Age of AI: Unveiling Opportunities, Confronting Biases, and Charting Ethical Pathways(Tarik Mokadi, Osama Tawfiq Jarrar, Ayman Yousef, 2026, Philosophies)
- Beyond black boxes and the AI sublime: critically assessing the code behind commonly used machine learning models(Justin Grandinetti, 2026, AI & SOCIETY)
- HEV Generative Sandbox: A Framework for Assessing Domain-Specific Social Risks Through Human-LLM Simulation(Yiran Liu, Zhiyi Hou, Xiaoan Xu, Shuo Wang, Huijia Wu, Kaicheng Yu, Yang Yu, Chengxiang Zhai, 2026, Proceedings of the AAAI Conference on Artificial Intelligence)
- Exploring the Janus Face of Synthetic Images: From Privacy-secure Biometrics Applications to Deepfake Detection for Misinformation-Free Social Networks(Tanusree Ghosh, 2025, Proceedings of the 2025 ACM SIGSAC Conference on Computer and Communications Security)
- From Gaze to Data: Privacy and Societal Challenges of Using Eye-tracking Data to Inform GenAI Models(Yasmeen Abdrabou, Süleyman Özdel, Virmarie Maquiling, Efe Bozkir, Enkelejda Kasneci, 2025, Proceedings of the 2025 Symposium on Eye Tracking Research and Applications)
- The Transformative Impact of Generative AI on Society(Manoj Kumar Reddy Jaggavarapu, 2025, Journal of Computer Science and Technology Studies)
- Artificial intelligence in diabetes care: from predictive analytics to generative AI and implementation challenges(M. Deng, Ruiye Yang, Xiaoran Zheng, Yaoqi Deng, Junyi Jiang, 2025, Frontiers in Endocrinology)
- Social Risks in the Era of Generative AI(Xiaozhong Liu, Yu-Ru Lin, Zhuoren Jiang, Qunfang Wu, 2024, Proceedings of the Association for Information Science and Technology)
- Algorithmic Resistance as Understood via a Cross-Cultural Analysis of AI-Driven Workplace Surveillance in Eastern and Western Contexts(Phan-Nam Trinh, 2025, Journal of Social Science Studies)
- Artificial Intelligence, Human Rights and Sustainable Development: An African Perspective(J. Mubangizi, 2024, Perspectives of Law and Public Administration)
- The effect of algorithmic bias and network structure on coexistence, consensus, and polarization of opinions(A. F. Peralta, J'anos Kert'esz, G. Íñiguez, 2021, Physical Review E)
- Towards a Multidisciplinary Vision for Culturally Inclusive Generative AI (Dagstuhl Seminar 25022)(Joanna Biega, Georgina Born, F. Diaz, Mary L. Gray, Rida Qadri, 2025, Dagstuhl Reports (DagRep))
- A powerful potion for a potent problem: transformative justice for generative AI in healthcare(Nicole Gross, 2024, AI and Ethics)
- Theorizing the Impact of Generative Artificial Intelligence on Social Causation(A. Anisin, 2025, Journal of Posthuman Studies)
- The role of artificial intelligence in the transformation of social interactions(Artem A. Sergienko, Z. Denikina, 2025, Sociopolitical Sciences)
- Expropriated Minds: On Some Practical Problems of Generative AI, Beyond Our Cognitive Illusions(Fabio Paglieri, 2024, Philosophy & Technology)
Sociotechnical Integration and Inclusive Well-Being Practices
Examines how to build human-centered AI systems, using sociotechnical collaboration, inclusive design, and ethical governance to narrow the digital divide and promote equal participation in education and public services.
- Social Impacts, Risks, and Governance Framework of Generative Artificial Intelligence Applications(Liangcong Fan, Meng-Chen Wu, 2025, 2025 International Conference on Algorithms, Software and Network Security (ASNS))
- AI-powered public service and persons with disabilities (PWDs): questioning the commitment to bridging digital inclusivity gap in Ghana(A. Acquah, 2025, Transforming Government: People, Process and Policy)
- Cautious optimism: public voices on medical AI and sociotechnical harm(Filippo Gibelli, Alessia Maccaro, Dr. Nkosi Nkosi Botha, B. Townsend, Victoria J. Hodge, Hannah Richardson, R. Calinescu, T. Arvind, 2025, Frontiers in Digital Health)
- Ethical and Societal Impacts of Generative AI in Higher Computing Education: An ACM Task Force Working Group to Develop a Landscape Analysis - Perspectives from the Global Souths and Guidelines for CS1/CS2/CS3(Claudia Szabo, Nickolas J. G. Falkner, M. Munienge, Judithe Sheard, Mabberi Enock, Tony Clear, D. K. Dake, O. Ogunyemi, Oluwakemi Ola, T. Taukobong, Bimlesh Wadhwa, 2025, Proceedings of the ACM Global Computing Education Conference 2025 - Volume 2)
- Generative AI in Virtual Reality Communities: A Preliminary Analysis of the VRChat Discord Community(He Zhang, Siyu Zha, Jie Cai, D. Y. Wohn, John M. Carroll, 2025, Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems)
- Chatbot Programmes’ ‘Arms Race’: Africa and Artificial Intelligence (AI) Ethics(Tapiwa Chagonda, 2025, The Thinker)
- Contextualizing the privacy paradox—a risk–benefit analysis of generation z’s adoption intentions toward AI-based virtual try-on(Keren Mao, Rongrong Cui, Zhicheng Wang, 2026, Frontiers in Psychology)
- The Societal Impact of AI: Balancing Innovation with Public Trust(Tarini Prasad Samanta, 2025, Journal of Information Systems Engineering and Management)
- Ethical Implications of AI-Driven Recruitment: A Multi-Perspective Study on Bias and Transparency in Digital Hiring Platforms(Fadlul Musrifah, Ika Hasanah, 2025, Journal of Management and Informatics)
- The Societal Impacts of Generative AI: Policy, Ethics and the Future of Human-Machine Collaboration(Mohammad Quayes Bin Habib, MD Abdur Rahim, Md. Mahedi Hasan, Yeasir Fahim Sikder, Kazi Wasi Uddin Shad, 2026, International Journal of Science and Research Archive)
- Generative AI and human–robot interaction: implications and future agenda for business, society and ethics(Bojan Obrenovic, Xiaochuan Gu, Guoyu Wang, Danijela Godinic, Ilimdorjon J. Jakhongirov, 2024, AI & SOCIETY)
- Analysis of Value Conflicts between Efficiency and Responsibility in AI-based Airport Operations: A Comparative Study of AI Regulatory Approaches in Major Countries(H. Ahn, Jihyun Park, Seung Jae Park, 2025, Korean Production and Operations Management Society)
- Generative AI in Higher Education: Philosophical, Sociological, and Pedagogical Perspectives(Rich Yueh, 2025, International Journal of Arts, Humanities & Social Science)
- Tracing transformation by tension: A multidisciplinary perspective on German performing rights organizations navigating conflict and technological changes(Stephan Klingner, Malte Zill, G. Fischer, 2025, International Communication Gazette)
- Metaverse & Human Digital Twin: Digital Identity, Biometrics, and Privacy in the Future Virtual Worlds(P. Ruiu, Michele Nitti, Virginia Pilloni, M. Cadoni, Enrico Grosso, M. Fadda, 2024, Multimodal Technologies and Interaction)
- Perspectives of Generative AI in the Context of Digital Transformation of Society, Audio-Visual Media and Mass Communication: Instrumentalism, Ethics and Freedom(I. Pecheranskyi, O. Oliinyk, Alla Medvedieva, Volodymyr Danyliuk, Olena Hubernator, 2024, Indian Journal of Information Sources and Services)
- Bridging Educational Inequity in Nepal through Explainable AI and Social Theory Integration(A. Adhikari, Vivek Kumar Sinha, 2025, Asian Journal of Research in Computer Science)
- Designing with AI, Not Around It – Human-Centric Architecture in the Age of Intelligence(Tejasvi Nuthalapati, 2025, Journal of Computer Science and Technology Studies)
- Artificial Intelligence: A Creative Player in the Game of Copyright(Jan Zibner, 2019, European Journal of Law and Technology)
- Recoding Reality: A Case Study of YouTube Reactions to Generative AI Videos(Levent Çallı, B. Çallı, 2025, Systems)
- Inclusion Ethics in AI: Use Cases in African Fashion(Christelle Scharff, James Brusseau, K. Bathula, Kaleemunnisa Fnu, Samyak Meshram, Om Gaikhe, 2024, Proceedings of the AAAI Symposium Series)
- Artificial intelligence, racialization, and art resistance(Ruth Martinez-Yepes, 2024, Cuadernos de Música, Artes Visuales y Artes Escénicas)
This synthesis report divides the controversies surrounding AI in the digital society into five key areas: first, algorithmic bias and social fairness, focusing on discrimination in AI-assisted decision-making; second, generative AI's reshaping of creative copyright and labor markets; third, preserving information integrity and human-machine trust in social media environments; fourth, risk management grounded in transparency, explainability, and governance strategies; and fifth, achieving human-centered sociotechnical integration and inclusive well-being. Together, these studies reveal that AI is not merely a technical tool but a driving force deeply embedded in institutions, ethics, and the construction of social identity, underscoring the necessity of interdisciplinary governance for building a sustainable digital society.
A total of 140 related references.
2022 has been called the year of "generative AI," and the surge of interest in ChatGPT in 2023 proved to be not merely a sign of neural networks' popularity with a mass audience but a durable trend. The number and quality of neural network models are growing, accelerating the digital transformation of society and its subsystems towards a new paradigm: Society 5.0. In this study, the authors analyze the functional potential of generative AI in the fields of social development, mass communication, and audiovisual media and identify several serious ethical challenges and dilemmas directly related to the operationalization of this digital technology. Alongside the techno-optimistic concepts of "digital happiness" and "super smart society" and the excitement over the fast, efficient operation of technologies built on foundation models (FMs) and large language models (LLMs), even the most progressive adherents of Society 5.0 do not deny that AI generates a number of risks to society and humans. These include the threat of the rapid and convincing multiplication of fake news and narratives, excessive dependence, the risk of copyright infringement and its attendant ethical dilemmas, challenges to freedom of creativity, the instrumentalization of intellectual and artistic (including audiovisual) practices, digital manipulation of consciousness, and social exclusion associated with the deformation of the existential foundations of human life and communication.
This paper explores the societal impacts of generative AI by examining the intertwined dimensions of policy, ethics, and human-machine collaboration. It aims to highlight the ethical challenges and policy considerations necessary to foster responsible AI development while ensuring equitable benefits across society. Emphasizing the need for interdisciplinary approaches, this study investigates how generative AI reshapes human roles and interactions, advocating for frameworks that promote transparency, accountability, and trust in AI systems. Ultimately, it seeks to inform strategies that support sustainable and human-centered AI integration in the modern era.
The rise of generative artificial intelligence (GenAI) brings optimism for productivity, economic, and social progress, but also raises concerns about algorithmic bias and discrimination. Regulators and theorists face the urgent task of identifying potential harms and mitigating risks. This article applies Miranda Fricker's concepts of testimonial and hermeneutical injustice to explore how GenAI exacerbates or creates epistemic injustice from a feminist, epistemic perspective. Through three case studies, we reveal how gender-biased GenAI responses in leadership and workplace contexts reinforce stereotypes, leading to offline injustices. Moreover, we highlight how gender data gaps within the GenAI ecosystem contribute to hermeneutical injustice, marginalizing women's experiences within collective knowledge resources. These findings demonstrate how GenAI technologies perpetuate both testimonial and hermeneutical injustices. By integrating existing work on epistemology, AI ethics, and feminist theory, we propose a novel framework for understanding the risks and harms of GenAI. We argue that meta-blindness further obscures these gendered issues and their potential solutions. Our analysis not only sheds light on these critical challenges but also offers a pathway toward achieving gender equity within the GenAI landscape.
This article presents a sociological dialogue between six researchers who specialise in different sociological subfields. Each researcher explores the possible consequences of generative AI within their specific area of expertise. More concretely, the article develops insights around directions in social theory, the political economy of intellectual property, matters of identities and intimacies, evidence and evidentiary power, racial and reproductive inequalities, as well as work and social class. This is followed by a collective discussion on six interconnected themes across these areas: agency, authorship, identity, visibility, inequality, and hype. We also consider our role as cultural producers, understanding our reactions to generative AI as part of the empirical, theoretical, and methodological shifts this knowledge controversy engenders, as well as highlighting our duty as critical sociologists to keep the knowledge controversy about generative AI open.
As immersive social platforms like VRChat increasingly adopt generative AI (GenAI) technologies, it becomes critical to understand how community members perceive, negotiate, and utilize these tools. In this preliminary study, we conducted a qualitative analysis of VRChat-related Discord discussions, employing a deductive coding framework to identify key themes related to AI-assisted content creation, intellectual property disputes, and evolving community norms. Our findings offer preliminary insights into the complex interplay between the community’s enthusiasm for AI-driven creativity and deep-rooted ethical and legal concerns. Users weigh issues of fair use, data ethics, intellectual property, and the role of community governance in establishing trust. By highlighting the tensions and trade-offs as users embrace new creative opportunities while seeking transparency, fair attribution, and equitable policies, this research offers valuable insights for designers, platform administrators, and policymakers aiming to foster responsible, inclusive, and ethically sound AI integration in future immersive virtual environments.
This study aims to examine the ethical impact of generative artificial intelligence (AI) tools on human relationships and community life. It explores how AI-mediated interactions can reshape essential social practices, particularly in emotionally meaningful or developmentally formative spaces. Drawing on interdisciplinary research and moral philosophy, the article introduces the REAL Framework: Retained, Eroded, Atrophied and Leveraged. This model helps evaluate the relational consequences of emerging technologies. The purpose is to provide educators, institutional leaders and technology designers with a critical and practical tool for assessing whether generative AI tools support authentic human connection or subtly undermine it.

This article uses a conceptual and ethical analysis methodology, drawing from recent interdisciplinary literature in AI ethics, psychology and theology. Rather than presenting empirical findings, it offers a critical examination of how generative AI tools shape human relationships and community dynamics. The article synthesizes insights from scholarly research and cultural observation to develop the REAL Framework, a practical model for ethical evaluation. This approach allows for a reflective, theory-informed perspective that emphasizes relational integrity and communal well-being in the adoption and use of AI technologies.

The article finds that generative AI tools, while offering potential benefits, can also subtly distort or displace essential elements of human relationships. Through critical analysis, it identifies specific risks such as relational erosion, skill atrophy and the simulation of emotional intimacy without moral reciprocity. The REAL Framework serves as a practical tool to assess these relational impacts. Findings suggest that ethical evaluation of AI must move beyond technical concerns to consider the formation of individuals and communities. The framework helps users evaluate whether AI tools support or undermine authentic connections.

This article presents a conceptual framework rather than empirical research, which limits the generalizability of its conclusions. While grounded in interdisciplinary scholarship, its findings are interpretive and intended to guide ethical reflection rather than predict outcomes. Future studies could test the REAL Framework across various cultural and technological contexts to assess its practical utility. Despite these limitations, the article offers valuable implications for educators, developers and institutional leaders. It encourages proactive, community-centered evaluation of generative AI tools and highlights the need for ethical discernment that prioritizes relational integrity and long-term human development over short-term technological efficiency.

The article provides a usable framework for evaluating the relational impact of generative AI tools within educational, organizational and community settings. The REAL Framework equips practitioners to ask targeted questions about whether a tool preserves essential human connection, erodes relational depth, weakens emotional skills or can be used to support authentic community. This model is especially relevant for educators, institutional leaders and developers who are navigating the integration of AI into emotionally significant environments. By applying the framework, stakeholders can make more informed, ethically responsible decisions that prioritize the dignity of persons and the health of human relationships.

This article highlights the broader social implications of generative AI tools that increasingly shape human interaction, identity and community life. As AI systems mediate emotionally significant exchanges, there is a risk that relational authenticity may be replaced by simulation and convenience. The REAL Framework encourages reflection on how technology influences not only individual behavior but also collective values and social norms. Its application can help communities safeguard relational integrity, resist depersonalization and foster practices that strengthen human connection. The framework invites ongoing communal discernment about the kind of society being formed through the tools we choose to adopt.

This article offers an original contribution by introducing the REAL Framework as a practical tool for evaluating the relational and ethical impact of generative AI technologies. Unlike purely technical or utilitarian approaches, this model emphasizes the social and moral dimensions of AI use, particularly in emotionally formative and community-based contexts. The framework draws from interdisciplinary research and applies it to a timely cultural concern, offering a structured means of reflection for educators, leaders and designers. Its value lies in equipping stakeholders to move beyond efficiency-based assessments and instead prioritize the preservation of authentic human connection and communal well-being.
This study explores the influencing factors of generative AI usage behavior among residents in Shanghai. Based on 208 questionnaire responses and analyzed using SPSS, the findings reveal that users can be categorized into three types: technology-dependent, ethics-sensitive, and function-oriented. High-frequency users (80.19%) exhibit weaker awareness of privacy risks, while low-frequency users (19.8%) pay more attention to technological transparency. Age and education level significantly influence usage intention in specific scenarios. Recommendations are provided to optimize technology design and policy regulation for different user groups.
This paper argues that the AI revolution currently unfolding, fuelled by significant strides in generative AI-powered technologies, calls for an urgent response by the African continent to ensure that possible harms associated with this cutting-edge technology are mitigated. The 'arms race' among big technology companies such as Google and Meta to create chatbots that can rival OpenAI's ChatGPT-4.0 technology is not only hastening the pace of the AI revolution but is also bringing to the fore the double-edged nature of this technology. The benefits of generative AI technologies such as chatbots in fields such as the academy, health, agriculture, music, and art have been touted in recent times, but ethical concerns still linger around issues of bias; the possible proliferation of misinformation from algorithms trained on datasets that are not fully representative of the global South's realities, especially Africa's; privacy breaches; and threats of job losses. The fact that in March 2023 an Elon Musk-led petition for a six-month moratorium on AI chatbot innovations began circulating raises serious ethical concerns around the AI revolution, making it critical for a continent such as Africa, which has largely been a consumer of these technologies and not an innovator, to urgently draft measures that can protect it. The paper contends that even though Africa is not homogenous in nature, it needs to come up with an AI ethics-driven framework that protects the majority of its population, which is mired in poverty and likely to be on the receiving end of any cons associated with AI technologies. This framework should be largely anchored in the African philosophy of Ubuntu, but also pragmatic enough to include positive facets of global-North philosophical strands such as deontology, which largely places currency on ethical principles and rules above the outcomes they produce.
This study explores how youth discuss generative artificial intelligence (AI) in higher education contexts on TikTok. Through a qualitative analysis of 980 TikTok posts and their associated comments, we identify three key themes: (1) the commercialization of AI tools on TikTok through peer-to-peer marketing strategies, (2) platform-mediated moral contestations around AI ethics and the purpose of higher education, and (3) the emergence of new forms of community-building and identity exploration centered around AI. Our analysis reveals TikTok's significant role as a platform for peer-to-peer knowledge sharing around emerging technologies—yet one deeply marked by commercial dynamics and undisclosed promotional content. These findings contribute to emerging scholarship on how social media platforms shape youth technological imaginaries and provide insights into the complex entanglements of commercial interests, educational discourse, and identity practices in digital spaces.
Technological progress breeds both innovation and potential risks, a duality exemplified by the recent debate over generative artificial intelligence (GAI). This study examines how GAI has become a perceived risk in the Korean public sphere. To explore this, we analyzed news articles (N = 56,468) and public comments (N = 68,393) from early 2023 to mid-2024, a period marked by heightened interest in GAI. Our analysis focused on articles mentioning “generative artificial intelligence.” Using the social amplification of risk framework (Kasperson et al., 1988), we investigated how risks associated with GAI are amplified or attenuated. To identify key topics, we employed the bidirectional encoder representations from transformers model on news content and public comments, revealing distinct media and public agendas. The findings show a clear divergence in risk perception between news media and public discourse. While the media’s amplification of risk was evident, its influence remained largely confined to specific amplification stations. Moreover, the focus of public discussion is expected to shift from AI ethics and regulatory issues to the broader consequences of industrial change.
Recent breakthroughs in generative artificial intelligence (GenAI) have transformed higher education, offering new possibilities for personalized learning and assessment. This paper explores GenAI's impacts on education, focusing on business programs as early adopters while extending to broader humanities contexts. We examine GenAI's potential to enhance learning through adaptive systems and real-time feedback, while addressing ethical dilemmas including algorithmic biases, equity gaps, and academic integrity concerns. From philosophical and sociological perspectives, we investigate how GenAI challenges traditional notions of knowledge production, authenticity, and human agency in education. The paper proposes an integrative framework for responsible GenAI implementation that balances technological capabilities with human-centered pedagogy through contextual adoption, ethical reflexivity, and redefined evaluation metrics. We recommend assessment redesigns that validate authentic learning and encourage a posthuman perspective that reimagines AI as collaborator rather than tool, offering productive pathways for future educational practice while preserving essential human elements of interpretation, ethics, and relational capacity.
The rise of generative AI has brought a host of challenges for historically marginalized groups, including increased surveillance, AI-mediated racism, and algorithmic inequity. While stakeholders emphasize ethical and responsible AI that is safe, anti-discriminatory, and “protects human dignity,” the centrality of anti-Blackness in the design, development, and deployment of AI systems coupled with race-evasive approaches to defining and advancing ethical, equitable, and ‘human-centered’ technologies have exacerbated racial oppression. We present three case studies of speculative technologies designed by Black youth in a college bridge, summer course that examine ethical and responsible AI in their everyday lives. From a bottom-up approach, we infringe upon this broader discourse to provide an initial grounding of responsible and ethical AI as well as discuss the criticality of Black, historically anchored, culturally-situated lenses to offer justice-oriented design principles that can guide the teaching, learning, and design of technology.
One of the key controversies that generative artificial intelligence (AI) has recently stirred was whether compensation is due for the copyrighted materials used to train AI models. This article explores the logic, trajectories, and dynamics of content generation, including news, through generative AI in two distinctive yet intertwined domains. Guided by a cultural political economy approach, it examines how both the political context (validation/legitimation of AI-generated news content by established news media) and the economic context (use of unpaid and underpaid labor in the forms of freely scraped data and data annotation work) shape the deployment of news content on AI models. It further untangles how the space for serious, independent journalism may shrink, as big tech companies’ algorithmic technologies emerge as a solution to contemporary problems in journalism. A clear danger here is that AI companies’ proprietary algorithms, language training models, and value-laden parameters are incompatible with journalism's democratic obligations and responsibilities.
The integration of generative AI (Gen-AI) chatbots into contemporary workplaces has expanded beyond task automation, positioning these systems as influential social-behavioral actors. Far from merely assisting operational workflows, Gen-AI chatbots increasingly moderate human behavior, subtly shaping communication styles, decision-making processes, and collaborative norms. This paper investigates the critical role of Gen-AI chatbots in moderating workplace behavior and explores their profound implications for innovation, diversity, and ethical dynamics, viewed through the comprehensive lens of socio-sustainability. Drawing on interdisciplinary insights from the organizational sciences, we analyze how chatbot-mediated interactions can either enhance socio-sustainable outcomes, such as promoting inclusive communication, ethical awareness, and creative problem-solving, or, conversely, entrench biases, suppress critical thinking, and erode social trust. The paper's findings suggest that Gen-AI chatbots hold a dual potential: when thoughtfully designed and contextually deployed, they can foster socio-sustainable practices that drive innovation and inclusiveness; yet, without careful governance, they risk exacerbating ethical vulnerabilities and undermining diversity efforts. The paper proposes a strategic principle for agentic AI moderation aligned with socio-sustainability, offering actionable guidance for organizations seeking to harness AI responsibly in the future of work.
This paper examines the case of Iruda, an AI chatbot launched in December 2020 by the South Korean startup Scatter Lab. Iruda quickly became the center of a controversy, because of inappropriate remarks and sexual exchanges. As conversations between Iruda and users spread through online communities, the controversy expanded to other issues, including hate speech against minorities and privacy violations. Under public pressure, Scatter Lab quickly suspended Iruda on 12 January 2021. After implementing extensive changes, the company relaunched the chatbot as Iruda 2.0 in October 2022. Notably, this revised version has operated without any major incidents as of mid-2025. This study offers a symmetrical analysis of Iruda’s initial failure and subsequent success in terms of ‘folds’ connecting users, machines, algorithms, and other key elements. We introduce ‘configuration’ as a mode of folding and show how socio-material assemblages—whether harmful or safe—emerge as a result of different configurations. The success of Iruda 2.0 highlights the importance of placing ethics at the core of AI development and implementation strategies. In addition, we introduce the concept of ‘ethics-in-action’ to highlight the critical role of practical interventions and user engagement. By tracing Iruda’s evolution in detail, this study provides practical guidelines for the successful integration of AI systems into society.
This article analyzes music industry discourses about generative AI to understand competing and conflicting views across the industrial field. Our analysis mobilizes primary data from ethnographic fieldwork collected at music trade conferences between 2023 and 2024 and secondary data from trade press, corporate statements, reports published by governments, unions, and trade bodies. Our analysis illuminates tensions and contradictions among protectionist, liberalizing, and conciliatory views toward generative AI. Some corporate actors and public stakeholders advocate for protectionist business policies and “responsible” AI development that foregrounds potential harms of AI. Other corporate actors offer more liberalizing views, encouraging investment, experimentation, and adoption of generative AI systems to cut costs and increase profits. We also note conciliatory positions, mainly from musicians’ unions and trade bodies, trying to find compromises between these two poles. We argue that these contradictions reveal a fundamentally misunderstood notion of universal AI ethics in the music industry.
This paper addresses the ethics of inclusion in artificial intelligence in the context of African fashion. Despite the proliferation of fashion-related AI applications and datasets, global diversity remains limited, and African fashion is significantly underrepresented. This paper documents two use-cases that enhance AI's inclusivity by incorporating sub-Saharan fashion elements. The first case details the creation of a Senegalese fashion dataset and a model for classifying traditional apparel using transfer learning. The second case investigates African wax textile patterns generated through generative adversarial networks (GANs), specifically StyleGAN architectures, and machine learning diffusion models. Alongside the practical, technological advances, theoretical ethical progress is made in two directions. First, the cases are used to elaborate and define the ethics of inclusion, while also contributing to current debates about how inclusion differs from ethical fairness. Second, the cases engage with the ethical debate on whether AI innovation should be slowed to prevent ethical imbalances or accelerated to solve them.
Text-to-image (T2I) generative models are largely used in AI-powered real-world applications and value creation. However, their strategic deployment raises critical concerns for responsible AI management, particularly regarding the reproduction and amplification of race- and gender-related stereotypes that can undermine organizational ethics. In this work, we investigate whether such societal biases are systematically encoded within the pretrained latent spaces of state-of-the-art T2I models. We conduct an empirical study across the five most popular open-source models, using ten neutral, profession-related prompts to generate 100 images per profession, resulting in a dataset of 5,000 images evaluated by diverse human assessors representing different races and genders. We demonstrate that all five models encode and amplify pronounced societal skew: caregiving and nursing roles are consistently feminized, while high-status professions such as corporate CEO, politician, doctor, and lawyer are overwhelmingly represented by males and mostly White individuals. We further identify model-specific patterns, such as QWEN-Image's near-exclusive focus on East Asian outputs, Kandinsky's dominance of White individuals, and SDXL's comparatively broader but still biased distributions. These results provide critical insights for AI project managers and practitioners, enabling them to select equitable AI models and customized prompts that generate images in alignment with the principles of responsible AI. We conclude by discussing the risks of these biases and proposing actionable strategies for bias mitigation in building responsible GenAI systems. The code and Data Repository: https://github.com/Sufianlab/T2IBias
The rapid advancement of generative AI is reshaping the publishing industry, enhancing efficiency while triggering a severe responsibility and ethics crisis. It disrupts the traditional linear responsibility chain among authors, editors, and publishers, creating a "responsibility vacuum" and a "responsibility gap". This study employs actor-network theory and distributed responsibility theory to analyze publishing as a heterogeneous network of human and non-human actors. It reveals how responsibility is dynamically "translated" and "distributed" within this network, identifying three mechanisms behind the "responsibility maze": asymmetric human-AI dependence, responsibility shirking among actors, and AI's unexpected autonomy. A multi-layered governance path is proposed, centered on forward-looking, proportional, and traceable responsibility, advocating for standards across micro, meso, and macro levels to foster a trustworthy human-machine collaborative publishing ecosystem.
Fans are increasingly aware of deepfakes—believable AI-fabricated videos—and are therefore more skeptical of unverified information, even when visual evidence appears convincing. This article offers a methodological reflection on analyzing a deepfake event in which fans produced and circulated AI-generated disinformation to playfully undermine the credibility of a celebrity's video scandal. We explore the complex human-community-machine interactions (HCMI) between fans and AI-generated images, and we discuss how researchers can ethically (re)present their findings. We call for rethinking the "fans first" principle, a core tenet of ethical fandom research. Drawing on Puig de la Bellacasa's technoscientific theorization of care, we propose a critically speculative ethics of care in fandom research, guided by three principles: (1) thinking with fans, (2) thinking for fandom, and (3) thinking beyond fans and fandom. This approach is particularly relevant in a digital media ecology where generative AI and fan practices mutually transform each other. Our discussion also serves as a springboard for further explorations of ethics related to AI, including its impact on trust, social relations, and data governance.
In an era of digital proliferation, ethnic minorities face unique challenges in maintaining psychological well-being within algorithmically mediated environments that often reflect and amplify mainstream culture. This study addresses the urgent need to understand how interactions with technological systems, rather than just interpersonal contact, shape the mental health of minority individuals. This mixed-methods study examines the factors that influence the psychological well-being of Thailand’s Tai Dam ethnic minority. Drawing on acculturation and social identity theories, the research employed Partial Least Squares Structural Equation Modeling (PLS-SEM) to analyze survey data from 450 participants, recruited via purposive sampling, complemented by a one-year digital ethnography, to test a predictive model. The study introduces and validates a new construct, Perceived Algorithmic Bias (PAB), which refers to a user’s perception that a platform’s algorithm systemically suppresses or negatively represents their cultural content, measured using a newly developed scale based on preliminary qualitative work. The resulting model, which explained 53.8% of the variance in well-being, revealed a critical paradox. While a strong ethnic identity positively predicts psychological well-being (β = 0.258, p < 0.001), this benefit is significantly undermined by PAB, the model’s most potent negative predictor (β = -0.491, p < 0.001). The model demonstrates that a stronger orientation toward mainstream digital culture significantly increases PAB (β = 0.515, p < 0.001). This finding uncovers a “digital cost of engagement,” where participating in the dominant digital culture paradoxically exposes minorities to unique technological stressors, challenging traditional acculturation models. Qualitative findings of a “digital hearth” illustrate how online communities reinforce identity, while feelings of “algorithmic invisibility” confirm the distressing experience of PAB. The study presents a validated predictive framework for minority digital well-being, establishing PAB as a critical factor in global intercultural relations and providing clear implications for culturally aware platform design and mental health support.
The rise of smart societies, characterized by extensive use of technology and data-driven algorithms, promises to improve our lives. However, this very technology presents a potential threat to the richness and diversity of online culture. This thesis explores the phenomenon of echo chambers and algorithmic bias, examining how they contribute to the homogenization of online experiences. Social media algorithms personalize content feeds, presenting users with information that reinforces their existing beliefs. This creates echo chambers, where users are isolated from diverse viewpoints. Algorithmic bias, stemming from the data used to train these algorithms, can further exacerbate this issue. The main data in this study were sourced from previous studies (secondary data) focused on the homogenization of online culture. The thesis investigates the impact of echo chambers and algorithmic bias on online culture within smart societies. It explores how these factors limit exposure to a variety of ideas and perspectives, potentially leading to a homogenized online experience. By examining the interplay between echo chambers, algorithmic bias, and the homogenization of online culture in smart societies, this thesis aims to contribute to a more nuanced understanding of the impact of technology on our online experiences.
In today’s digital age, algorithms play a pivotal role in shaping media content distribution, which may influence what content individuals are exposed to. Consequently, this may have implications for diversity, equity, and inclusion (DEI). Hence, this review analyzes algorithmic bias in media content distribution, its impact on media consumption, and the implications for diversity, equity, and inclusion (DEI). The study concludes that algorithmic bias limits the visibility of underprivileged groups and perpetuates existing social injustices, posing serious problems for media distribution. Moreover, there are both risks and opportunities in applying artificial intelligence (AI) and machine learning to tackle algorithmic inequities. Furthermore, collaborative efforts among different stakeholders (engineers, policymakers, and media platforms) are needed to create more inclusive and equitable algorithms, so that media distribution systems promote fairness and diversity.
There has been growing recognition of the crucial role users, especially those from marginalized groups, play in uncovering harmful algorithmic biases. However, it remains unclear how users’ identities and experiences might impact their rating of harmful biases. We present an online experiment (N=2,197) examining these factors: demographics, discrimination experiences, and social and technical knowledge. Participants were shown examples of image search results, including ones that previous literature has identified as biased against marginalized racial, gender, or sexual orientation groups. We found participants from marginalized gender or sexual orientation groups were more likely to rate the examples as more severely harmful. Belonging to marginalized races did not have a similar pattern. Additional factors affecting users’ ratings included discrimination experiences, and having friends or family belonging to marginalized demographics. A qualitative analysis offers insights into users' bias recognition, and why they see biases the way they do. We provide guidance for designing future methods to support effective user-driven auditing.
In an era dominated by personalized digital experiences, social media platforms play an increasingly influential role in shaping young adults' perceptions and decisions. Because algorithms personalize social media feeds for individual users, concerns have arisen about the extent to which such personalization reinforces preexisting biases and influences user behavior, particularly biased decisions in contexts ranging from political opinions to lifestyle choices and shopping habits. This study investigates the impact of personalized social media content on the decision-making of Romanian Generation Z users. The research analyzes how personalized feeds shape perceptions, preferences, and decisions by examining algorithmic bias in content curation and exposure to diverse viewpoints. Furthermore, the study examines digital literacy and critical thinking by exploring how aware Generation Z is of algorithmic personalization and its potential biases, shedding light on their ability to engage with digital content critically. Throughout the article, we also present users' perspectives on how they perceive, feel, and think about the influence the media has on their daily lives. Given the lack of research on the willingness of Generation Z to share information on social networks, we addressed this issue as well. The data were collected using a questionnaire administered to a sample of Romanian university undergraduate students, all part of Generation Z. Based on the collected data, we present results on participants' awareness of personalized content, their perceptions of bias, and the influence this has on their decision-making. The findings contribute to understanding the implications of algorithmic personalization for young adults in the Romanian context, highlighting the importance of critical social media literacy and promoting informed decision-making in the digital age.
Current research in artificial intelligence (AI) sheds light on algorithmic bias embedded in AI systems. The underrepresentation of women in the AI design sector of the tech industry, as well as in training datasets, results in technological products that encode gender bias, reinforce stereotypes and reproduce normative notions of gender and femininity. Biased behaviour is notably reflected in anthropomorphic AI systems, such as personal intelligent assistants (PIAs) and chatbots, that are usually feminized through various design parameters, such as names, voices and traits. Gendering of AI entities, however, is often reduced to the encoding of stereotypical behavioural patterns that perpetuate normative assumptions about the role of women in society. The impact of this behaviour on social life increases, as human-to-(anthropomorphic)machine interactions are mirrored in human-to-human social interactions. This article presents current critical research on AI bias, focusing on anthropomorphic systems. Moreover, it discusses the significance of women’s engagement in AI design and programming, by presenting selected case studies of contemporary female media artists and designers. Finally, it suggests that women, through their creative practice, provide feminist and critical approaches to AI design which are essential for imagining alternative, inclusive, ethical and de-biased futures for anthropomorphic AIs.
Individuals of modern societies share ideas and participate in collective processes within a pervasive, variable, and mostly hidden ecosystem of content filtering technologies that determine what information we see online. Despite the impact of these algorithms on daily life and society, little is known about their effect on information transfer and opinion formation. It is thus unclear to what extent algorithmic bias has a harmful influence on collective decision-making, such as a tendency to polarize debate. Here we introduce a general theoretical framework to systematically link models of opinion dynamics, social network structure, and content filtering. We showcase the flexibility of our framework by exploring a family of binary-state opinion dynamics models where information exchange lies in a spectrum from pairwise to group interactions. All models show an opinion polarization regime driven by algorithmic bias and modular network structure. The role of content filtering is, however, surprisingly nuanced; for pairwise interactions it leads to polarization, while for group interactions it promotes coexistence of opinions. This allows us to pinpoint which social interactions are robust against algorithmic bias, and which ones are susceptible to bias-enhanced opinion polarization. Our framework gives theoretical ground for the development of heuristics to tackle harmful effects of online bias, such as information bottlenecks, echo chambers, and opinion radicalization.
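A minimal toy version of the pairwise (voter-model-style) regime described above can be simulated directly; the two-community network, the same-opinion filtering rule, and the bias strength below are illustrative assumptions, not the paper's exact specification:

```python
# Toy pairwise opinion dynamics with content filtering on a modular network.
# Assumptions: two communities, binary opinions, and a filter that shows a
# same-opinion neighbor with probability `bias` (a stand-in for curation).
import random
import networkx as nx

random.seed(1)
G = nx.stochastic_block_model([50, 50], [[0.2, 0.01], [0.01, 0.2]], seed=1)
opinion = {v: random.randint(0, 1) for v in G}

def filtered_neighbor(v, bias=0.8):
    """Pick a neighbor of v; with prob. `bias`, prefer same-opinion ones."""
    nbrs = list(G[v])
    if not nbrs:
        return v  # an isolated node keeps its opinion
    same = [u for u in nbrs if opinion[u] == opinion[v]]
    if same and random.random() < bias:
        return random.choice(same)
    return random.choice(nbrs)

nodes = list(G)
for _ in range(20000):  # asynchronous voter-model updates
    v = random.choice(nodes)
    opinion[v] = opinion[filtered_neighbor(v)]

for block, members in enumerate((range(50), range(50, 100))):
    share = sum(opinion[v] for v in members) / 50
    print(f"community {block}: share holding opinion 1 = {share:.2f}")
```

With strong filtering, each community typically locks into its own internal majority, reproducing in miniature the bias-plus-modularity polarization regime the framework describes for pairwise interactions.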
One of the major research problems related to artificial intelligence (AI) models at present is algorithmic bias. When an automated system “makes a decision” based on its training data, it can reveal biases similar to those inherent in the humans who provided the training data. Much of the data used to train the models comes from vector representations of words obtained from text corpora, which can transmit stereotypes and social prejudices. AI system design focused on optimising processes and improving prediction accuracy ignores the need for new standards to compensate for the negative impact of AI on the most vulnerable categories of people. An improved understanding of the relationship between algorithms, bias, and non-discrimination not only precedes any eventual solution, but also helps us to recognize how discrimination is created, maintained, and disseminated in the AI era, as well as how it could be projected into the future using various neurotechnologies. The opacity of the algorithmic decision-making process should be replaced by transparency in AI processes and models. The present work aims to reconcile the use of AI with algorithmic decision processes that respect the basic human rights of the individual, especially the principles of non-discrimination and positive discrimination. Argentine legislation serves as the legal basis of this work.
Algorithmic systems increasingly shape access to social services and healthcare, as well as legal outcomes. Transgender people face particular risks because of bias in data, in model design, and in how these systems are deployed. Biological and personal factors, such as limited stress tolerance or weak coping skills, can affect how someone responds to stress and to mental health problems. For example, when a person internalizes shame or self-hatred because of society's negative views and stereotypes about being LGBTQ+, this can cause mental health problems. However, transgender status is not always clearly recorded, or it may be collapsed into male or female categories, making such bias hard to detect and fix in AI systems. This paper examines how AI can harm transgender people in society, healthcare, and the law. It also suggests ways to measure these harms and remedy them without violating privacy or taking away people's ability to make decisions. One way to reduce unfair treatment is to apply bias-mitigation techniques, which can change how algorithms work before they are deployed. These methods might suppress sensitive attributes, add more data, or reweight the importance of certain data to make outcomes fairer.
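One standard instance of the reweighting idea mentioned above is reweighing (Kamiran and Calders, 2012), which assigns each (group, label) cell a weight that makes group membership statistically independent of the outcome in the training data; the toy dataframe and column names below are hypothetical:

```python
# Sketch of reweighing (Kamiran & Calders, 2012): weight each record by
# P(group) * P(label) / P(group, label), so group and outcome decouple.
# The dataframe is made up; column names are hypothetical placeholders.
import pandas as pd

df = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "b", "b", "a"],
    "label": [1, 0, 1, 0, 0, 1, 0, 1],
})

p_group = df["group"].value_counts(normalize=True)
p_label = df["label"].value_counts(normalize=True)
p_joint = df.groupby(["group", "label"]).size() / len(df)

df["weight"] = [
    p_group[g] * p_label[y] / p_joint[(g, y)]
    for g, y in zip(df["group"], df["label"])
]
print(df)  # df["weight"] can be passed as sample_weight to most classifiers
```

Because the correction happens before training, it matches the pre-deployment intervention the abstract calls for and does not require the sensitive attribute at prediction time.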
Machine learning systems are increasingly functioning as epistemic infrastructures in high-stakes domains such as criminal justice, healthcare, finance, and employment. Despite this, their outputs are frequently treated as objective and neutral forms of knowledge. This study advances a synthesis of empirical and philosophical inquiry into cultural bias in machine learning, arguing that algorithms operate as sociotechnical agents embedded within historically situated structures of power and representation. Using the COMPAS Recidivism dataset (N = 7,214), a quantitative experimental design was employed to examine predictive disparities across protected attributes, specifically race and sex. Logistic Regression and Random Forest models were implemented within a controlled preprocessing pipeline and evaluated using standard performance metrics (accuracy, precision, recall, and F1-score), alongside subgroup fairness measures including false positive rates (FPR), false negative rates (FNR), and disparate impact ratios. To ensure robustness, subgroup disparities were further assessed using statistical significance testing. While overall model performance was moderate in aggregate metrics, subgroup analysis revealed consistent and structured disparities: African-American defendants exhibited elevated false positive rates, whereas females and underrepresented racial groups experienced disproportionately high false negative rates. These patterns persisted across model architectures, indicating that bias is structurally embedded in the data rather than solely a function of model design. However, extreme subgroup values should be interpreted with caution due to potential sample size imbalances within certain demographic categories. The findings challenge the assumption of epistemic neutrality in algorithmic systems, demonstrating that machine learning models participate in the cultural production of knowledge by reproducing historically grounded classifications and power asymmetries. The study argues that algorithmic outputs should be evaluated not only in terms of predictive performance but also through fairness-aware and context-sensitive frameworks that account for their broader ethical and epistemological implications.
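The subgroup measures named in this abstract (false positive rates, false negative rates, disparate impact) can be computed from any classifier's predictions; the sketch below uses synthetic arrays and is not the study's code or the COMPAS data:

```python
# Generic subgroup fairness audit: per-group FPR, FNR, and disparate impact.
# Synthetic labels/predictions/groups; replace with real model outputs.
import numpy as np

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
group = rng.choice(["A", "B"], 1000)

def error_rates(y, yhat):
    fp = np.sum((yhat == 1) & (y == 0)); tn = np.sum((yhat == 0) & (y == 0))
    fn = np.sum((yhat == 0) & (y == 1)); tp = np.sum((yhat == 1) & (y == 1))
    return fp / (fp + tn), fn / (fn + tp)  # FPR, FNR

positive_rate = {}
for g in ("A", "B"):
    mask = group == g
    fpr, fnr = error_rates(y_true[mask], y_pred[mask])
    positive_rate[g] = y_pred[mask].mean()
    print(f"group {g}: FPR={fpr:.3f}  FNR={fnr:.3f}  P(pred=1)={positive_rate[g]:.3f}")

# Disparate impact: ratio of positive-prediction rates between groups.
print(f"disparate impact (B/A): {positive_rate['B'] / positive_rate['A']:.3f}")
```

The COMPAS-style finding quoted above corresponds to FPR gaps (over-prediction of risk for one group) coexisting with FNR gaps for others, which is why both error directions must be audited, not just aggregate accuracy.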
Integrating artificial intelligence (AI) in social media has transformed digital interactions, enhancing content moderation, user experience, and security. However, this evolution has also introduced significant cybersecurity risks, particularly gender-based cybercrimes, algorithmic bias, and privacy violations. This paper examines AI’s dual role in mitigating and exacerbating cybercrimes on social media, focusing on gender dynamics and ethical concerns. It explores AI-powered moderation tools, their effectiveness in detecting harmful content, and the unintended consequences of algorithmic bias. Additionally, it highlights how AI-driven misinformation and deepfake technology contribute to online exploitation. The study evaluates regulatory frameworks, ethical AI deployment, and policy interventions aimed at reducing algorithmic discrimination and strengthening digital safety. By analyzing both technological advancements and systemic vulnerabilities, this research proposes strategies for fostering a safer, more equitable online environment. Beyond content moderation, AI significantly impacts user behavior and information dissemination. Algorithmic personalization can reinforce echo chambers, exacerbate polarization, and contribute to the virality of harmful content. Cybercriminals leverage AI for advanced phishing attacks, automated disinformation campaigns, and deepfake-based fraud, requiring adaptive security measures. The paper also discusses emerging policy frameworks that balance AI innovation with accountability, advocating for an interdisciplinary approach involving policymakers, technologists, and civil society. The findings underscore the need for transparent AI governance, improved dataset diversity, and a hybrid human-AI approach to content moderation. Ultimately, this paper emphasizes the importance of ethical AI design and proactive intervention to ensure AI-driven social media platforms serve as tools for protection rather than harm.
Artificial Intelligence (AI) has become a powerful tool for creating and managing social media content. Social media platforms have integrated AI technology into their algorithms to optimize the user experience. However, the impact of AI on social media is a topic of debate. This research aims to explore the effects of AI and its implications for content creators and consumers. It is recommended that social media platforms ensure that the use of AI is transparent and ethical to maintain user trust. The impact of AI on social media content is significant and multifaceted, enabling personalized content recommendations, automated content generation, and real-time content analysis. However, there are also concerns about algorithmic bias and the potential for job displacement. As AI technology continues to evolve, it is essential that ethical considerations and social responsibility are prioritized in the development and use of AI in social media marketing. The impact of AI on social media content is thus a complex issue, with both positive and negative effects: while AI algorithms can enhance the user experience by providing personalized content, they may also contribute to the spread of misinformation and the creation of filter bubbles. To mitigate these potential negative effects, it is important to promote transparency, media literacy, and human moderation, in order to ensure that social media content is accurate, diverse, and unbiased.
The rapid advancement of generative artificial intelligence (AI) has made AI-driven content curation a dominant force in shaping public discourse on social media. Platforms including Facebook, YouTube, and TikTok employ recommendation algorithms to personalise content and increase user engagement. However, these systems also intensify concerns over media pluralism, algorithmic bias, and misinformation. By prioritising user preferences, they reinforce filter bubbles and restrict exposure to diverse viewpoints. As a result, democratic dialogue weakens, and public opinion formation becomes distorted. This study examines how AI-assisted content curation affects media diversity in European social networks, focusing on platform accountability and regulatory challenges. Special attention is given to recent policy interventions, including the DSA (2022), Germany’s Action Plan (2024), and the EU AI Act (2025) initiatives on AI-generated political content; these initiatives, however, also expose significant gaps in enforcement and oversight. To evaluate regulatory impact, this study analyses platform policies, legal frameworks, and AI content selection mechanisms. Although transparency is a main objective, the findings reveal that current regulations are unable to reduce algorithmic bias or achieve balanced content representation. In response, the study advocates for improved explainable AI (XAI) models, demands stronger regulatory oversight, and supports increased user control over content selection. By addressing these shortcomings, this research contributes to the wider debate on AI ethics, media governance, and digital policy in Europe.
This paper explores the intersection of social media and racism, focusing on how algorithms and their biases perpetuate racial inequalities in the digital public sphere. By examining the evolution of the public sphere, the role of algorithms, and the spread of racist narratives, the study highlights the ethical implications for democracy and digital governance. The findings underscore the need for regulatory interventions to mitigate algorithmic bias and ensure fair public discourse, promoting a more inclusive and equitable digital environment.
Social media provides opportunities for women to challenge traditional norms through self-expression and empowerment content, but algorithmic mechanisms and user behavior often perpetuate gender stereotypes. This study uses TikTok as a case to explore the complex relationship between Exposure to Diversified Content (EDC) and Gender Stereotype Reinforcement (GSR) on short video platforms. Through a quantitative survey of active TikTok users, the study examines how interaction frequency with female-related content is associated with awareness of gender stereotypes. The findings reveal a contradictory dynamic: while diversified content has the potential to break stereotypes, algorithmic bias and selective exposure may inadvertently reinforce them. The study suggests that technological safeguards should be strengthened to ensure the diversity and fairness of information dissemination on social media, avoiding the production and spread of stereotype-laden content. Additionally, the research emphasizes the necessity of raising individual awareness to cope with diversified content in the information age. In summary, the study highlights the need to promote balanced content distribution, enhance user media literacy, and conduct cross-cultural research to better understand these mechanisms.
This study explores the transformative role of social media in community building and social movements, focusing on platforms like Twitter and Instagram. Through a mixed-methods approach combining surveys, interviews, and content analysis, the research examines how social media facilitates collective action, fosters online communities, and amplifies marginalized voices. Key findings reveal that social media serves as a powerful tool for mobilization, with 55% of participants engaging in movements like #BlackLivesMatter and #MeToo. However, challenges such as misinformation, algorithmic bias, and performative activism ("slacktivism") undermine its potential. The study highlights the dual nature of social media as both an empowering platform for digital solidarity and a space fraught with ethical dilemmas. Practical recommendations are provided for activists, policymakers, and platform designers to enhance the effectiveness and inclusivity of digital activism.
As algorithms have evolved, they have been heavily used in social media, and more brands are using them as an effective tool for social media marketing. The use of algorithms in social media marketing has become increasingly prevalent as brands seek to maximize their impact and reach on these platforms. However, there is growing concern over the impact of algorithms on society, including issues such as filter bubbles, echo chambers, and algorithmic bias. This research is important because it provides insights into the social impact of algorithmic applications in social media marketing; the findings could help both brands and consumers make more informed decisions about their use of algorithms. This research aims to investigate the social impact of the use of algorithms in social media marketing. The study collected 135 responses through a qualitative research method using a questionnaire for social media users. The collected data were analyzed to identify both positive and negative impacts of algorithmic applications in social media marketing on society. The application of algorithms on social media can benefit society through efficient marketing and enhanced access to information. However, it can also have negative impacts, for example, potential for manipulation, loss of privacy, reinforcement of stereotypes and biases, and amplification of harmful content. The paper builds on previous research on algorithms and social media by exploring their potential impact in more depth. By doing so, the study intends to enable both brands and consumers to effectively avoid these negative effects.
Algorithmic management leverages data-driven algorithms and artificial intelligence to automate managerial functions traditionally executed by human managers. This paper provides a comprehensive overview of algorithmic management, exploring its definitions and emergence in gig economy platforms and traditional workplaces. It delves into key sociological and organizational theories—including Weber's bureaucracy, Critical Management Studies (CMS), and technological rationality—to frame the discussion. The impact of algorithmic management on employee autonomy, digital surveillance, and forms of worker resistance is examined, alongside its role in shaping organizational structures, enhancing efficiency, and driving innovation. Ethical implications, particularly concerning fairness, transparency, and bias, are critically analyzed. While algorithmic management offers potential benefits such as improved efficiency and decision-making, it also raises significant concerns about worker autonomy, power imbalances, and ethical considerations. The paper underscores the need for a nuanced understanding and responsible implementation of algorithmic management to harness its advantages while mitigating its drawbacks.
This study provides a critical examination of AI-integrated speed and red-light camera systems through the theoretical lenses of Surveillance Capitalism, the Panopticon Model, Social Control Theory, Technological Determinism, and Structural Violence Theory. While AI-powered speed safety cameras demonstrate efficacy in reducing traffic violations and fatalities, this research addresses a critical gap in the healthcare literature regarding their broader societal and ethical consequences, including algorithmic bias, data governance failures, and privacy violations that directly impact public trust and health equity. The analysis reveals how machine learning and predictive analytics in automated enforcement create disproportionate burdens on marginalized populations through three specific mechanisms: (1) biased algorithmic design that targets low-income neighborhoods more intensively, (2) punitive traffic fine structures that impose greater relative financial hardship on economically disadvantaged families, and (3) opaque implementation practices that limit community understanding and participation. These patterns perpetuate health disparities by increasing chronic stress, economic instability, and barriers to healthcare access among vulnerable populations. The work’s novel contribution lies in applying four foundational health equity principles to AI-powered traffic enforcement: distributive justice (fair allocation of enforcement across communities), procedural justice (transparent and accountable decision-making processes), recognition justice (acknowledgment of community voices and concerns), and the capabilities approach (ensuring enforcement practices do not undermine individuals’ fundamental capabilities for health and wellbeing). Additionally, the study examines three core social justice principles: substantive equality (addressing systemic disadvantages rather than treating all violations identically), participatory parity (ensuring affected communities can participate meaningfully in policy decisions), and non-domination (preventing the arbitrary exercise of state power through automated systems). The study advocates for the development of ethical artificial intelligence governance frameworks that incorporate transparent algorithmic auditing, community-driven design processes, and robust oversight mechanisms. These evidence-based recommendations support equitable and trustworthy applications of artificial intelligence that advance, rather than undermine, population health and social justice in traffic safety initiatives.
This study uses a narrative literature review to examine stigma in the Chinese community, particularly during the COVID-19 pandemic, focusing on media framing and structural stigma. Drawing on Framing Theory and the Health Stigma and Discrimination Framework (HSDF), this study examines how social media propagates stigmatizing discourses, including labeling language such as “Chinese virus,” while reinforcing dominant narratives through algorithmic bias and structural hierarchies. Reviewing literature from authoritative sources (primarily post-2020), this study integrates a historical perspective to analyze the construction, social impact, and policy implications of stigma. The findings provide valuable insights for promoting inclusive public discourse and guiding policymaking to counteract stigma.
Artificial Intelligence (AI) is revolutionizing surveillance and security systems by enabling real-time facial recognition, anomaly detection, behavior prediction, and predictive policing. These capabilities offer substantial improvements in public safety, operational efficiency, and national security. However, the rapid deployment of AI-driven surveillance technologies raises profound social and ethical concerns. Key issues include the erosion of privacy, algorithmic bias, lack of transparency, and the potential for misuse by state and corporate actors. Marginalized communities are often disproportionately affected by these systems due to biased data and flawed model training. This paper explores the dual impact of AI surveillance—its promise in safeguarding society and its risks to civil liberties and democratic values. It further analyzes normative frameworks and international case studies to highlight the need for responsible design, ethical regulation, and participatory governance. The goal is to encourage the development of surveillance systems that are both technologically effective and ethically sound.
With the development of network technology, social media has transformed the traditional way of news dissemination, but it has also brought new problems such as fake news, algorithmic bias, and deepfakes. These issues have affected people's trust in news, making it necessary to find effective solutions. This study explores the main problems faced by news authenticity in the age of social media and their solutions. The research reveals that the impact of the social media environment on news authenticity is multi-faceted: at the level of communication mechanisms, decentralized communication leads to confusion of information sources and difficulties in tracing responsibility, exacerbates cognitive one-sidedness, and enables the rapid spread of false information; at the level of content production, the unprofessionalism of user-generated content increases the risk of information distortion; at the level of social cognition, the "post-truth" phenomenon and confirmation bias reinforce each other, and the public's insufficient media literacy accelerates the secondary spread of false information. The social hazards of these issues not only mislead public judgment but also trigger crises of social trust and group conflicts. From an interdisciplinary perspective (communication studies, law, computer science), this study examines a coping strategy covering four dimensions—technology, institutions, actors, and education: in the technological dimension, it is necessary to promote algorithmic transparency, apply blockchain to enable news traceability, and establish a "human-machine collaboration" fact-checking system; in the institutional dimension, laws, regulations, and industry self-discipline norms should be improved; in the actor dimension, it is necessary to strengthen platforms' review responsibility, the in-depth reporting capabilities of professional media, and the public's enthusiasm for participating in governance; in the educational dimension, the public's ability to identify information should be enhanced through systematic media literacy training.
The advent of Artificial Intelligence (AI) and automation is reshaping labor markets, economic distribution, social hierarchies, and cultural norms worldwide. While these technologies offer unprecedented productivity gains, efficiency, and innovation, they also raise profound concerns regarding fairness, equity, employment displacement, algorithmic bias, and cultural homogenization. This research paper investigates how AI and automation impact social justice by examining ethical dilemmas, economic disparities, and cultural transformations. Using a mixed-methods approach, including surveys of 550 respondents, expert interviews, and policy analysis, the study highlights both opportunities and challenges for equitable and inclusive societies. Findings reveal that ethical deployment of AI requires transparent governance, accountability, and participatory decision-making. Economically, automation risks exacerbating inequality unless coupled with reskilling programs and social safety nets. Culturally, AI influences identity, norms, and representation, demanding culturally sensitive design and inclusive algorithms. The paper proposes a multi-dimensional framework for reimagining social justice that integrates ethical principles, economic policies, and cultural inclusivity.
No abstract available
In recent years, generative artificial intelligence has been applied in various industries at an explosive growth rate, bringing about an unprecedented reshaping of social forms. Meanwhile, the inherent dual nature of technology has also manifested itself in the application of generative artificial intelligence. Taking AIGC-related incident reports in the AI Incident Database (AIID) as its sample, this paper conducts a content analysis along three dimensions—the social impacts, risks, and governance of generative artificial intelligence applications—with further analysis at various levels under each dimension. The study finds that the social impacts of generative artificial intelligence appear at the micro, meso, and macro levels; its risks are mainly concentrated in deepfakes, algorithmic bias, privacy and data security, copyright infringement, and attribution of responsibility; and the interaction of multiple factors, including technology, institutions, and society, constitutes their main generating mechanism. The study holds that, given the diversity and extensiveness of the risks as well as the multi-level and complex nature of their causes, the governance of generative artificial intelligence should first address the various risks through full-process technical intervention, and then take into account technical characteristics, institutional flexibility, and social adaptability, so as to form a “technology empowerment - institutional adaptation - social collaboration” three-dimensional systematic governance framework.
Contemporary popular culture in the United States is at a critical juncture of transformation driven by the exponential development of artificial intelligence (AI). This study examines AI’s binary impact: it simultaneously serves as a powerful instrument for creativity and efficiency gains while generating fundamental challenges regarding authorship, originality, and employment. The primary focus is on transformations across key sectors of the creative industries. The analysis explores how AI technologies reconceptualize production processes, consumption models, and the public reception of cultural content. The legal and social implications of these changes are addressed separately, including issues of copyright, algorithmic bias, and economic instability. The findings indicate that generative AI is rapidly integrating into the creative industries, significantly expanding content-creation capabilities – from automated generation of musical works and visual effects to the emergence of digital actors in cinema. Personalization algorithms on social-media and streaming platforms now shape mass-audience information consumption, constructing individualized cultural experiences and new marketing paradigms such as virtual influencers. The implementation of AI has a controversial effect: on the one hand, it increases the efficiency of content production and stimulates innovation in popular culture; on the other, it gives rise to multiple challenges. Identified negative consequences include the blurring of boundaries of authorship and the authenticity of creative products, risks of displacing artists with algorithmic systems, unresolved questions of intellectual property (e.g., the contested legal status of AI-generated works), and ethical dilemmas associated with using digital likenesses of real individuals without proper consent. Notably, these problems have already elicited a socio-cultural response: professional communities in the United States are calling for rules governing AI use in creative domains. The article underscores the need to develop ethical-legal frameworks and new business models that ensure a balanced interaction between AI technologies and human creativity, safeguarding artists’ rights and the cultural value of their output. The scholarly contribution lies in a comprehensive conceptualization of generative AI’s impact across multiple dimensions of popular culture. The study synthesizes the latest manifestations of AI integration and outlines possible scenarios for the further development of U.S. popular culture under conditions of human–AI synergy. The conclusions suggest that the future of the creative industries will likely be defined by such synergy, necessitating the design of balanced ethical and legal frameworks.
This article explores how, in reshaping creative labour in Australia's cultural and creative industries, artificial intelligence is exposing the limitations of the current industrial relations framework. Using a policy ecology lens and drawing on submissions from creative industries stakeholder groups to two parliamentary inquiries, the article maps the fragmented governance of creative labour across industrial relations, copyright law, and cultural policy. As artificial intelligence disrupts attribution, income, and authorship in the cultural and creative industries, many freelance and contract-based workers fall outside the core workplace protections afforded to employees. The article argues that a reimagining of industrial relations is required to ensure that creative labour is adequately protected in an economy increasingly driven by artificial intelligence.
No abstract available
In this paper, I discuss the emergence of personal computing, the rise of platform-controlled smartphones and tablets, and the recent surge in artificial intelligence technologies. I explore how these technological advancements have often been shaped by the interests of capital, with recent trends towards increased platform lock-in, control, and exploitation of users (workers). I argue that without a strong push for open-source, democratized AI, these technologies risk being used to further the globalized colonial capitalist project. Through discussion of contemporary issues in corporate LLMs, I explore the corporate piracy of text, visual, and auditory data on the internet and the copyright and other ethical and human implications of this theft of work. I highlight the potential for open-source hardware and software to counter the proprietary and un-hackable future of AI, offering a radical alternative that empowers users and advances human, ecological, and labor rights alongside technology tools. Ultimately, I call for greater attention to the social, political, economic, and environmental implications of computing and AI technologies under capitalism.
Purpose The journey through beats and bytes of artificial intelligence (AI)-infused electronic dance music (EDM) production is more than just a technical exercise; it is a celebration of innovation. The recent EDM surge in India has sparked interest, particularly among teenagers. This contrasts with the country’s rich tradition of diverse musical genres. This study aims to explore the shift in behaviour of EDM artists toward AI and how it influences teenagers in India, including its potential benefits and risks, which remain underexplored. Design/methodology/approach A mixed-methods approach was used, comprising quantitative surveys (n = 275), qualitative focus group discussions with youngsters (n = 32) and semi-structured interviews with EDM professionals (n = 12). A combination of thematic analysis and statistical methods was employed to analyze the data, utilizing IBM SPSS version 28.0. Findings The findings of this study contribute to a deeper understanding of India’s evolving musical landscape, particularly the intersection of AI and EDM. The findings illuminate how EDM shapes teen identity, how AI is amalgamated into EDM and how the production and consumption of EDM influence both EDM artists and young consumers. Practical implications The integration of AI technologies enables EDM artists to enhance creativity, streamline production processes and elevate the quality of their work, ensuring a competitive edge in the evolving music industry. AI fosters innovative trends and immerses teenagers in new experiences; however, the industry must address both the positive and negative impacts on teenagers. The study also recommends increased AI involvement in music production alongside policies addressing ethical concerns such as copyright, ownership of AI-generated EDM tracks, educational policies and public awareness campaigns. Originality/value The study’s novelty lies in offering unique insights into India’s evolving musical landscape and providing valuable resources for policymakers, educators and music industry professionals.
Modern society faces serious ethical dilemmas regarding the prospects of its existence, which arise against the background of rapid progress in the field of information systems and machine learning. It cannot be denied that technologies based on artificial intelligence demonstrate tremendous capabilities, but their implementation takes place in an unstable social environment, which provokes the emergence of fundamentally new problems. This study provides an analysis of the main features of the use of artificial intelligence in the transformation of social interactions. The article analyzes specific developments based on artificial intelligence that can infringe on fundamental freedoms of citizens: tools for monitoring personnel, algorithms for predictive analytics and biometric identification. The authors emphasize that certain AI use cases are characterized by excessive intrusion into personal space, create prerequisites for manipulating human actions and threaten basic guarantees, including confidentiality of personal data, freedom of movement, the opportunity to participate in peaceful protests and other fundamental rights.
This article examines the cultural, aesthetic, and social dimensions of the integration of artificial intelligence (AI) technologies into the contemporary Ukrainian musical landscape. The study analyzes theoretical and methodological approaches to interpreting the interaction between human and algorithmic creativity and explores empirical cases that contribute to the emergence of new models of music production and reception. Particular attention is paid to the dynamics of generative technologies and their influence on Ukraine’s musical culture. The research identifies key areas of AI application, including pedagogical practices, analytical tools, industrial services, and independent AI-based music projects. It is argued that artificial intelligence functions not only as a technological innovation but also as a cultural agent capable of reshaping concepts of creativity, authorship, stylistic originality, and aesthetic value. The analysis draws on contemporary Ukrainian cases, such as the activities of the Harmix startup, the circulation of AI-generated music on streaming platforms, and experimental practices of independent artists. The study reveals an asymmetrical integration of AI across different musical domains, characterized by its active adoption within the music industry and its limited presence in academic music. The cultural consequences of these processes are examined, including the transformation of musicians’ professional identity, the emergence of hybrid “human–algorithm” co-creation models, shifts in listening practices toward personalized algorithmic scenarios, and the growing role of automated recommendation systems in shaping musical tastes. It is shown that the development of artificial intelligence raises issues of ethics, copyright, transparency of training data, and cultural responsibility in the digital environment. Ukrainian and international studies of generative music are analyzed, demonstrating the potential of neural networks to model complex compositional structures. The possibilities for their further application in national education, creative practice, and cultural policy are outlined.
Across the world, the rate of hate speech is increasing day by day. The development of technology has amplified hate speech expressed through social networks and other forms of communication, which incite hostility among different social groups. Hate speech tends to escalate rapidly and can easily develop into a hate crime. Therefore, it is crucial to respond promptly and effectively without infringing upon freedom of expression. With the ongoing development of the digital world, the application of artificial intelligence is becoming increasingly widespread. This circumstance has made it necessary to study the relationship between artificial intelligence and hate speech. Specifically, this study seeks to examine the impact of artificial intelligence on the dissemination and regulation of hate speech in digital environments. The paper analyzes the impact of artificial intelligence on the dynamics of hate speech and the limits of freedom of expression. The research has identified the legal measures necessary to adapt to emerging technological realities. This article examines the essence and meaning of hate speech, the international and domestic legal frameworks governing its use, and the legal regulation of artificial intelligence, including the principle of non-discrimination in its application. It also analyzes the decisions of the European Court of Human Rights concerning hate speech and the use of artificial intelligence.
The development of Artificial Intelligence (AI) technology has had a significant impact on various fields, including social media and the entertainment industry. One innovation that has sparked controversy is voice cloning, which is the digital imitation of public figures' voices without the consent of the voice owners. This phenomenon is becoming increasingly prevalent on social media in the form of songs or audio-visual content that resembles the voices of public figures, thereby posing legal, economic, social, and ethical risks. This study aims to identify the risks faced by public figures and content uploaders due to the use of AI-based voice cloning, as well as to analyze the legal protections available under Indonesia's legal system. The research method used is a normative legal approach, combining legislative analysis and literature review. The research findings indicate that for public figures, the risks include violations of moral and economic rights, defamation, and loss of control over their personal image. Meanwhile, for content uploaders, the risks include civil and criminal liability, administrative sanctions from digital platforms, loss of credibility, and ethical consequences. Current legal protection refers to the Copyright Law, the Electronic Information and Transactions Law, the Personal Data Protection Law, the Criminal Code, and other related regulations, which are applied both preventively and repressively. However, there are no explicit regulations regarding the right of publicity and the recognition of voice as part of legally protected identity, so more adaptive regulatory updates are needed to keep pace with technological developments in the digital age.
This article explores the opportunities and challenges posed by AI technologies in Africa. It examines the potential risks of AI exacerbating existing inequalities, infringing on privacy rights, and perpetuating digital colonialism. The article investigates the unique challenges that Africa faces in harnessing AI for human rights and sustainable development by examining the intersection of AI, human rights, and sustainable development from an African perspective. It highlights the importance of context-specific approaches that take Africa’s cultural and ethical considerations into account. Through case studies of a few African countries, this article provides insights into the existing policy and regulatory landscape. It emphasises the need for inclusive policymaking processes that involve diverse stakeholders, including civil society organisations, marginalised communities, and indigenous groups. The article concludes with recommendations on how AI can be ethically deployed to advance human rights and sustainable development goals on the African continent. A case is also made for a human rights-based approach to artificial intelligence and sustainable development.
INTRODUCTION: The present work deals with the incorporation of artificial intelligence in the process of artistic creation. OBJECTIVES: The objective of this article is to analyze the work developed at the Latin American Bio Art Laboratory in order to incorporate artificial intelligence into artistic works and its influence on the current social context. METHODS: This study is fundamentally based on the analysis of artificial intelligence applied to art, the consequences of its use for copyright, and the development achieved in these areas by the Latin American Bio Art Laboratory. RESULTS: The Latin American Bio Art Laboratory contributed to the development of the conceptually complex work Robotika by artist Joaquín Fargas, which connects a deep human feeling with the incorporation of artificial intelligence in its evolution. CONCLUSION: Art is always at the forefront of social change, offering the possibility of bringing the future closer to the present with a different outlook.
The rapid advancement of artificial intelligence technologies has propelled cognitive intelligence (CI) into becoming a pivotal force reshaping production modes, dissemination mechanisms, and business models within the cultural industry. Grounded in the theoretical framework of cognitive intelligence, this study employs multidimensional empirical analysis to demonstrate that while significant progress has been achieved in applications spanning cultural content generation, personalized dissemination, immersive experiences, and global cultural outreach, three-dimensional challenges persist across the technological, ethical, and institutional domains. These include: (1) the misalignment between algorithmic capabilities and the essence of humanistic creation, resulting in deficient emotional expression, (2) ambiguities in rights attribution triggering copyright disputes, and (3) existing regulatory frameworks lagging behind digital cultural production paradigms. To address these constraints, this research innovatively proposes a tripartite synergistic optimization pathway encompassing cognitive, cultural, and technological dimensions. By expanding cognitive paradigms, enhancing technology-enabled empowerment, ensuring institutional adaptability, and deepening stakeholder collaboration, this approach ultimately facilitates the cultural industry’s paradigm shift from efficiency-driven enhancement to humanistic value-led high-quality development. The study further emphasizes that advancing the intelligent transformation of the cultural industry necessitates strengthening technology-driven innovation, refining institutional support mechanisms, and constructing a collaborative and mutually beneficial ecosystem to realize the deep integration and sustainable innovation of cultural technologies.
This paper critically examines the impact of generative Artificial Intelligence (AI) on the legal, economic and cultural integrity of African music. It discusses the reciprocity between intellectual property (IP) functionality and the undermining of culture against the backdrop of algorithmic datafication. Based on postcolonial theory (more specifically, the data-colonialism framework), the analysis focuses on the process by which Global North technology corporations harvest the cultural information of Global South producers through the expansion of artificial intelligence. Through legal-ethnographic triangulation of copyright-regime analysis and extensive interviews with Kenyan creative professionals, the study reveals a structural imbalance between Western standards of IP law and African epistemologies of oral tradition. Although AI is a source of universal precarity in the global community of creators, the results show a particular danger of ontological obliteration of African idioms, occurring in the form of rhythmic flattening and digital orientalism. Additionally, the analysis documents the techniques that Kenyan counter-movements, also known as digital resistance, use to deconstruct algorithmic quantisation and demand creative sovereignty. The paper concludes with an urgent appeal for decolonized IP frameworks, such as communal data trusts, to safeguard the essence, or sovereign intentionality, of African music.
In recent years, Artificial Intelligence (AI) technology has been increasingly integrated into painting, video, and interactive installation art. At the same time, empirical studies have shown that humans are biased against AI art. However, the reasons behind this bias need further exploration. This study collects public comment data from Weibo to examine the Chinese public's attention to AI art and the reasons for differing attitudes, based on comment content, time, and gender. Using computational social science methods, the study analyzes discussions of "AI art" through LDA topic modeling and sentiment analysis. The results show that the Chinese public holds slightly more negative than positive attitudes toward AI art, primarily due to concerns over the copyright of original works, stylistic homogenization, lack of emotion, anxiety about career replacement, and the loss of artistic aura.
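The pipeline described here (topic modelling plus sentiment over scraped comments) can be sketched with off-the-shelf tools. The toy English comments, topic count, and vectorizer settings below are illustrative assumptions; real Weibo text would first require Chinese word segmentation (e.g., jieba) and a Chinese-language sentiment model (e.g., SnowNLP), both omitted from this sketch:

```python
# Sketch of an LDA-plus-sentiment pipeline on toy comments (not Weibo data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

comments = [
    "AI art steals from human artists and violates copyright",
    "AI paintings all look the same, homogenized and soulless",
    "great tool, AI art speeds up my illustration workflow",
    "worried AI will replace illustrators and their careers",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(comments)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

vocab = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_terms = [vocab[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}: {top_terms}")
# A sentiment pass (e.g., SnowNLP, which scores Chinese text in [0, 1]) could
# then be aggregated per topic, time window, and commenter gender.
```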
This study employs environmental psychology and social psychology theories to comprehensively examine the ethical dilemmas and cultural tensions arising from artificial intelligence image art within digital media ecosystems, focusing on the socio-psychological effects and value conflict resolution mechanisms. The research reveals that the emergence of AI art creates "normative disruption environments" that systematically challenge traditional creative authority, authenticity perceptions, and cultural value systems, generating multifaceted ethical dilemmas including copyright attribution, moral responsibility delineation, and algorithmic transparency concerns. Simultaneously, significant tensions arise between technological democratization processes and established cultural authorities, manifesting as intergenerational conflicts, cross-cultural adaptation disparities, and value system reconstructions. At the individual level, AI art catalyzes profound psychological adaptation processes among creators, including "distributed creative identity" reconstruction, "algorithmic agency negotiation," and "aesthetic schema rebuilding," while audiences confront fundamental adjustments to their aesthetic cognitive frameworks. The study further unveils multilayered value conflict resolution mechanisms, encompassing individual "ethical pluralism development," community "cultural hybridization formation," and institutional "adaptive norm emergence," demonstrating human society's remarkable cultural resilience and adaptive capacity under technological disruption. The research indicates that successfully addressing AI art challenges requires developing new forms of "digital cultural literacy" that encompasses not only technical competencies but also deep psychological understanding of how algorithmic systems influence cognition, emotion, and social relationships, ultimately pointing toward a future of human-AI collaborative creation that must be grounded in sophisticated balance between technological capabilities and human psychological needs.
This paper examines the ethical implications and risks of applying artificial intelligence (AI) to Indigenous musical heritage, with a specific focus on the Mijikenda communities on the coast of Kenya. Traditional AI tools, deeply rooted in Western systems of knowledge, carry a significant risk of perpetuating digital colonialism. This manifests through the decontextualisation of sacred musical practices and the infringement of crucial data sovereignty rights. The study aims to analyse the fundamental tensions emerging between conventional AI methodologies and the knowledge systems underpinning Mijikenda music. Furthermore, it seeks to identify cases of algorithmic bias and cultural appropriation that may arise from such applications. In response, the paper proposes a decolonial framework to guide the ethical development and deployment of AI within the field of African musicology. Employing a systematic review of existing literature, together with documented perspectives of Mijikenda practitioners and a comprehensive analysis of scholarly works, the research uncovers a significant divergence between AI's often extractive logic and the holistic principles of Indigenous Knowledge Systems (IKS). A primary concern among Mijikenda communities is the widespread threat of cultural appropriation, alongside a strong emphasis on understanding music's core value beyond mere data reliance. The paper concludes by advocating for a fundamental paradigm shift: moving away from AI as a tool for extraction towards its utilisation as a platform for community-led cultural care. This transformative approach necessitates grounding AI development firmly within the principles of IKS and decolonial theory, ultimately empowering Indigenous communities and ensuring the sustainable continuity of their living sonic heritage.
With the widespread application of artificial intelligence (AI) in music creation, the creative logic, aesthetic paradigms, and power structures of pop music are undergoing profound transformations. This paper takes AI-generated music as its research focus, examining the controversies surrounding its generative mechanisms, aesthetic presentation, copyright ethics, and social practices. Drawing on real-world international research findings and policy documents, it explores the future of human-machine collaboration. The study finds that while AI can enhance creative efficiency, it also poses serious challenges to originality, authorship, and emotional authenticity. Constructing an ethical framework and rights recognition system adapted to the AI context is an urgent issue that demands scholarly attention and practical resolution.
The increasing scope and public use of Generative Artificial Intelligence (GenAI) platforms, particularly image generation tools, have prompted questions about the safety and fairness of large Vision Language Models (VLMs), e.g., Flux and DALL-E. The ubiquity and convincing realism of AI-generated imagery injects significant challenges into modern digital literacy efforts because VLMs may unintentionally perpetuate historical stereotypes as a result of biases in training data scraped from the web. Because these VLMs are open-use and their synthetic images are not subject to copyright permissions, these model biases can have far-reaching effects that cement societal biases and reinforce exclusionary practices. Therefore, it is critical to explore and identify bias within these models and to cultivate an understanding of the cultural context in which these biases are echoed as a first step to mitigating these problems. This paper provides an in-depth study of bias against LGBTQ+ individuals in images generated by Flux, the leading image generator model. This work uses a One-Factor-at-a-Time (OFAT) approach to critique the heterosexism present in Flux’s generations, discusses the impact that biased GenAI imagery may have on society, and provides a survey of existing mitigation strategies. The results of these experiments highlight a lack of nuance in Flux’s training, leading to biased synthetic image generation.
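To make the OFAT protocol concrete, a minimal Python sketch follows in which a single prompt factor is varied while everything else is held fixed. `generate_images` and `classify_depiction` are hypothetical placeholders for the model under audit and an annotation step; they are not Flux's actual API.

```python
# Minimal One-Factor-at-a-Time (OFAT) prompt-audit sketch. The two callables
# below are hypothetical placeholders, not Flux's real API.
from collections import Counter

BASE_PROMPT = "a photo of a {qualifier}couple at their wedding"
QUALIFIERS = ["", "same-sex ", "opposite-sex "]  # the single factor varied

def generate_images(prompt: str, n: int) -> list:
    """Placeholder: sample n images from the generator under audit."""
    raise NotImplementedError

def classify_depiction(image) -> str:
    """Placeholder: label the depicted couple (human coding or a classifier)."""
    raise NotImplementedError

def ofat_audit(n_per_prompt: int = 50) -> dict:
    results = {}
    for q in QUALIFIERS:
        prompt = BASE_PROMPT.format(qualifier=q)
        labels = [classify_depiction(img)
                  for img in generate_images(prompt, n_per_prompt)]
        results[prompt] = Counter(labels)  # skew on the neutral prompt = bias signal
    return results
```

Because only one factor changes between prompts, any systematic shift in the label counts can be attributed to that factor rather than to incidental wording differences.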
This paper examines the evolving role of German Performing Rights Organizations (PROs) in the context of technological advancements and societal shifts, focusing on the tensions and transformations. Employing an interdisciplinary approach encompassing history, sociology, and computer science, the study analyzes three case studies spanning from the early 20th century to the present day. These cases explore tensions arising from monitoring practices in the 1930s, conflicts between the German PRO GEMA and actors of the underground electronic dance music scene, and the challenges posed by artificial intelligence (AI) to collective copyright management. The analysis highlights how PROs must adapt their core tasks – licensing, revenue collection, royalty distribution, and usage monitoring – to address technological disruptions and maintain cultural diversity. The paper concludes by advocating for increased transparency, technological openness, and interdisciplinary research to guide the future development of PRO business models in a rapidly changing media landscape.
As generative Artificial Intelligence (AI) systems continue to transform academic research, debates over their appropriateness within the academic community continue to garner global attention. These debates are exacerbated in the Global South, where limited access to AI infrastructure and slow adoption of ethical AI guidelines heighten vulnerabilities. Previous studies in Nigeria have largely examined the use of AI in academia from an empirical perspective, focusing on assessing students’ and academics’ levels of awareness, attitudes, and perceptions toward tools such as ChatGPT. While these studies provide valuable insights into patterns of use and acceptance, they pay little attention to the doctrinal interpretation of authorship and ownership under copyright law, issues that become increasingly complex when research outputs are generated with or by AI. Drawing on global contexts, this paper aims to fill this gap by critically analyzing how existing copyright principles of authorship and ownership apply to AI-generated academic works in the Nigerian context. The paper finds that Nigerian copyright law remains human-centric, recognizing only works demonstrating human creativity and originality. A distinction is emerging between AI-generated and AI-assisted works: while wholly AI-produced outputs lack protection, those involving meaningful human input—such as prompting or creative direction—may attract authorship and by extension ownership. Thus, students or researchers who apply intellectual effort in using AI tools can still be deemed authors. Ultimately, the challenge is not whether AI belongs in academia, but how to shape its presence in ways that uphold human creativity, accountability, and justice. Therefore, Nigerian universities and regulators must develop codes of conduct, establish AI ethics committees, and align with global authorship standards to ensure ethical use of AI while promoting equitable access to AI infrastructure.
The rise of digital streaming has reshaped the global music industry landscape. As the dominant platform, Spotify has not only changed the way music is distributed but has also profoundly affected the income structure and professional circumstances of creators. This article, set in Australia, explores the difficulties faced by creative workers in the streaming economy across three dimensions: economic, technological, and policy-related. It uses a case study approach, combining secondary data from Creative Australia, APRA AMCOS, and Spotify's "Loud & Clear" report with academic literature, to analyze the impact of platform mechanisms on local musicians. The research shows that although Spotify provides distribution channels for creators, its revenue-based model exacerbates income inequality and the instability of creative work. Its algorithmic recommendations tend to favor mainstream global content, reducing the exposure of local music. Music generated by artificial intelligence has also raised new copyright and fairness issues. This article argues that current cultural policies lag behind on algorithmic transparency, the application of artificial intelligence, and the promotion of local content, and that a balance between innovation and fairness should be achieved through measures such as stronger platform oversight.
The story of artificial intelligence (AI) in the workplace has been presented as one of relentless and uniform progress reshaping management practices and employee relations across the globe. Or so the narrative goes. This paper argues that the adoption of AI-driven management and surveillance tools is not a culturally neutral phenomenon but is understood differently in Eastern and Western contexts. Through a synthesis of existing literature, this analysis examines this divide by focusing on algorithmic management and workplace surveillance. Western contexts, prioritizing individual autonomy, tend to view algorithmic surveillance with caution, framing it as an infringement on rights. Many Eastern contexts, which emphasize collective harmony and national goals, demonstrate a higher degree of acceptance of these tools as instruments of efficiency and social order. This dichotomy is examined further through an analysis of China's Corporate Social Credit System as an apotheosis of the state-driven, collectivist model. The paper then provides a comparative analysis of worker pushback, and of how responses to algorithmic control are themselves culturally coded. The paper concludes that a one-size-fits-all approach to the deployment of workplace AI is untenable, and discusses implications for multinational corporations, global AI ethics, and the future of labor rights.
South Korea is at the forefront of both cultural innovation and artificial intelligence (AI), merging technological advancements with artistic expression. This article explores how AI is reshaping South Korea’s creative industries, including K-pop, gaming, film, and visual arts. AI is being used for music composition, virtual idols, adaptive gaming, AI-generated art, and film production, fundamentally altering traditional artistic and entertainment landscapes. While AI presents new creative possibilities, it also raises ethical concerns, copyright issues, and debates about authenticity. The discussion extends to future trends, emphasizing the potential for human-AI collaboration, immersive interactive content, and ethical governance. By maintaining a balance between innovation and artistic integrity, South Korea has the opportunity to lead the global AI-driven cultural revolution while preserving the human essence of creativity.
The traditional notions of authorship and copyright in the Kenyan design industry have been significantly disrupted by the proliferation of artificial intelligence (AI) technologies. There has been an exponential increase in visual data, such as photographs and typefaces, on digital platforms, enabled by the click, like, and share culture and providing fertile ground for AI developers to mine and train generative models. Designers generate creative outputs from this data, and adopting these innovations has raised difficult questions about authorship and originality. Consequently, this study explores AI's impact on the design process through the lens of copyright law, interrogating whether AI-generated photographs and typefaces can qualify for protection under existing legal structures. The analysis is situated in the lived experiences of designers who frequently use AI tools in their daily craft and the challenges they face. The research method is two-pronged, combining empirical analysis with qualitative data from interviews with practising designers. Two questions guide the study: 1) Is AI capable of independent creativity? 2) Who is considered an algorithmic author? The paper proposes considerations for reforming legal standards to address the significance of algorithmic authorship.
The digital age has ushered in a new cycle of cultural challenges for Intellectual Property Rights (IPR) in Sierra Leone, demanding a comprehensive re-examination of prevailing legal frameworks (Jiaxuan, 2024). This abstract delves into the multifaceted complexities the Sierra Leonean legal system encounters in protecting intellectual property online. From issues associated with online piracy and large-scale copyright infringement to the advent of artificial intelligence and its ramifications for patent law (Singh Guar and Chugh, 2026), it highlights the critical domains where Sierra Leonean IPR law demands adaptation and innovation. In particular, it explores the moral quandaries surrounding emerging inventions and the need to harmonize international standards with Sierra Leone's distinctive socio-cultural setting. Through a thorough inquiry into case studies and judicial precedents, this abstract delivers worthwhile insights into the challenges posed by the technological age and proposes strategies to safeguard intellectual property rights in Sierra Leone.
The rise of artificial intelligence (AI) in the creative industries has sparked significant debates on its ethical, economic, and sociocultural implications. This study delves into the narratives of human artists grappling with the advent of AI-generated art, focusing on its impact on creativity, cultural identity, and the artistic community. Employing a qualitative phenomenological approach, the research gathered insights from eight artists through in-depth semi-structured interviews. Thematic analysis revealed three key concerns: economic challenges such as job displacement and income instability, ethical dilemmas surrounding originality and copyright, and the devaluation of human creativity. Despite these challenges, artists expressed diverse responses to AI, ranging from fear of obsolescence to embracing AI as a tool for collaboration and innovation. Further, the study examines the role of AI in reshaping digital communication patterns and how it influences the sociocultural dimensions of art in digital media environments. Findings highlight the duality of AI as both a threat and a creative partner, underscoring the urgent need for ethical guidelines and regulatory frameworks to address these challenges. This research contributes to the broader discourse on AI’s role in shaping creative industries and cultural authenticity, advocating for a balanced integration of AI that preserves the irreplaceable value of human creativity and identity.
The rise of generative AI presents a profound duality, or a "Janus face," for digital society. On one hand, its ability to synthesize hyper-realistic faces offers a powerful solution to long-standing privacy and data-scarcity challenges in biometric systems, a promising but underexplored application. On the other hand, the same technology is weaponized to create "deepfakes" that fuel misinformation campaigns on Online Social Networks (OSNs), posing a significant threat to digital integrity. Countering this threat, however, is hampered by critical failures in existing deepfake detectors. They are often: (i) brittle in the wild, proving vulnerable to the compression and post-processing artifacts introduced by OSNs; (ii) poorly generalizable, failing to detect forgeries from new or unseen generative models; and (iii) computationally inefficient, with many state-of-the-art models too parameter-heavy for practical deployment on resource-constrained devices. This dissertation confronts this duality by addressing both sides of the coin. First, it examines the "substitutability" of synthetic face data, demonstrating that biometric classifiers (e.g., for age and gender) trained on AI-generated faces can match or even exceed the generalization performance of those trained on real face data. Second, to counter the malicious use of this technology, this dissertation develops a framework of deepfake detectors designed to be robust, generalizable, and efficient by construction. My work introduces novel, lightweight feature sets built on different cues (e.g., a colour-cue-based Relative Chrominance Difference, gradient features, and depth cues) that are inherently resilient to OSN transformations and improve generalization to unseen forgeries. Preliminary results confirm state-of-the-art performance, achieving high accuracy in challenging real-world scenarios with a significant reduction in model complexity.
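As one plausible reading of the colour-cue idea (the dissertation's exact Relative Chrominance Difference definition is not given here), the sketch below extracts cheap chrominance statistics in YCrCb space that a lightweight downstream classifier could consume.

```python
# A plausible chrominance-cue feature sketch for deepfake screening: summary
# statistics of the Cr/Cb channels and their difference in YCrCb space.
# This is an interpretation of a "relative chrominance difference" cue,
# not the dissertation's exact feature definition.
import cv2
import numpy as np

def chrominance_features(bgr_image: np.ndarray) -> np.ndarray:
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    diff = cr - cb  # relative chrominance difference map
    feats = []
    for chan in (cr, cb, diff):
        feats += [chan.mean(), chan.std(),
                  np.percentile(chan, 5), np.percentile(chan, 95)]
    return np.asarray(feats, dtype=np.float32)  # 12-dim, cheap to compute

# Usage: feed these features to any lightweight classifier
# (e.g. logistic regression) instead of a parameter-heavy deep network.
```

Because the features are coarse channel statistics rather than pixel-exact artifacts, they are plausibly more tolerant of the recompression OSNs apply to uploaded images, which is the robustness property the abstract emphasizes.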
This paper examines digital tenant risk-profiling tools in England's Private Rented Sector (PRS) and their influence on housing access and fairness. Based on qualitative data from 50 interviews and a survey of PRS landlords drawn from a larger project, the study analyses adoption patterns, algorithmic biases, and the implications for tenant rights. Issues such as data privacy, discrimination, and exclusionary practices affecting marginalised groups are highlighted. The research underscores how digital platforms reshape landlord-tenant relationships and broader housing market dynamics in light of recent, broader theorisations of what sociologists Marion Fourcade and Kieran Healy have conceptualised as an emerging ordinal society. In this article, we argue that the logic of such metrics and data-informed algorithmic systems has led to the emergence of an ordinal tenant.
With the rapid advancement of artificial intelligence (AI), AI-driven virtual try-on (AI-VTO) services are reshaping consumption patterns in fashion retail. At the same time, their reliance on sensitive personal data has intensified privacy-related concerns. As digital natives and a key consumer segment, Generation Z often exhibits a “privacy paradox” in AI-enabled contexts, expressing concern about privacy while continuing to use data-intensive services. To explain this phenomenon, this study integrates the Theory of Planned Behavior (TPB) and the Privacy Calculus Model (PCM) into a unified risk–benefit framework. Survey data were collected from 709 Generation Z consumers in the Yangtze River Delta region of China. The data were analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM) and Fuzzy-set Qualitative Comparative Analysis (fsQCA). The results show that perceived responsiveness, attitude, and perceived behavioral control positively influence intention to use AI-VTO services, whereas intrusiveness concerns exert a significant negative effect. In contrast, traditional privacy concerns do not have a direct effect on usage intention. Attitude mediates the effects of both perceived benefits and perceived risks on behavioral intention. In addition, the fsQCA results identify three distinct pathways leading to high adoption intention: an efficacy trust–driven pathway, an experience-driven pathway, and a control convenience–driven pathway. These findings suggest that the privacy paradox is more likely to emerge in experience-oriented contexts. This study clarifies how Generation Z evaluates data-intensive AI services by revealing both net effects and configurational pathways underlying AI-VTO adoption. It extends current understanding of the privacy paradox in AI-enabled consumption and offers practical implications for developing transparent, user-centered, and trustworthy AI-VTO systems.
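To make the fsQCA step concrete, a minimal sufficiency-consistency check on calibrated fuzzy-set memberships follows; the construct names and toy values are illustrative assumptions, not the study's survey items.

```python
# Minimal fsQCA-style sufficiency check, assuming survey constructs already
# calibrated to fuzzy-set memberships in [0, 1]. Column names are illustrative.
import numpy as np
import pandas as pd

def consistency(condition: np.ndarray, outcome: np.ndarray) -> float:
    """Consistency of 'condition is sufficient for outcome':
    sum(min(x, y)) / sum(x)."""
    return float(np.minimum(condition, outcome).sum() / condition.sum())

df = pd.DataFrame({            # toy calibrated memberships
    "trust":       [0.9, 0.7, 0.2, 0.8, 0.4],
    "convenience": [0.8, 0.9, 0.3, 0.6, 0.5],
    "intention":   [0.9, 0.8, 0.3, 0.7, 0.4],
})
# One configurational path, e.g. trust AND convenience -> adoption intention:
path = np.minimum(df["trust"], df["convenience"])        # fuzzy AND = min
print(consistency(path.values, df["intention"].values))  # ~0.8+ is conventional
```

Each of the three pathways the study reports (efficacy trust-driven, experience-driven, control convenience-driven) would correspond to a different fuzzy conjunction checked for consistency in this way.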
Driven by technological advances in various fields (AI, 5G, VR, IoT, etc.) together with the emergence of digital twin technologies (HDT, HAL, BIM, etc.), the Metaverse has attracted growing attention from scientific and industrial communities. This interest stems from its potential impact on people's lives in sectors such as education and medicine. Specific solutions can also improve inclusion for people with disabilities, for whom existing barriers are an impediment to a fulfilled life. However, security and privacy concerns remain the main obstacles to its development. In particular, the data involved in the Metaverse can be comprehensive enough, and of sufficient granularity, to build a highly detailed digital copy of the real world, including a Human Digital Twin (HDT) of a person. Existing security countermeasures are largely ineffective and lack adaptability to the specific needs of Metaverse applications. Furthermore, the virtual worlds in a large-scale Metaverse can be highly varied in terms of hardware implementation, communication interfaces, and software, which poses huge interoperability difficulties. This paper aims to analyse the risks and opportunities associated with adopting digital replicas of humans (HDTs) within the Metaverse, and the challenges related to managing digital identities in this context. By examining the current technological landscape, we identify several open technological challenges that currently limit the adoption of HDTs and the Metaverse. Additionally, the paper explores a range of promising technologies and methodologies to assess their suitability within the Metaverse context. Finally, two example scenarios are presented, in the medical and education fields.
This article examines the ethical dimensions of data in the digital age, focusing on the interplay of privacy, ownership, and governance. It highlights the risks posed by large-scale data collection, algorithmic applications, and weak regulatory environments. Drawing on case studies such as Facebook–Cambridge Analytica, India’s Aadhaar biometric system, and Smart City projects, the analysis demonstrates the consequences of unethical practices and the reforms that followed. Particular attention is given to the challenges faced by developing economies, where digital divides, limited literacy, and reliance on externally developed systems exacerbate vulnerabilities. Strategies such as privacy-by-design, stakeholder engagement, and regular audits are presented as mechanisms for embedding ethical governance into data systems. The article synthesizes theory, practice, and real-world lessons, offering pathways to build trust, equity, and sustainable digital development.
This article investigates the recent fusion of AI colourisation with genealogy and ancestry databases to offer a set of reflections on this nascent technology. Combining humanistic approaches from film and media studies, science and technology studies, critical race studies, and visual and material culture, it attempts to disentangle the political, technical, and aesthetic concerns that arise when the achromatic past becomes the colourised present through machine vision. After tracing the computational origins of colourisation, the article reveals how deep learning-based colourisation tools mark a rupture in the way the machine ‘senses’ colour, where the logic of pattern recognition and classification overrides epistemologies of sensory perception. The final part of the essay turns to the racialised role colourisation occupies on genealogy platforms, arguing that such databases naturalise the historically fraught relationship between colour as both race and hue. Across these three sections, colour is at the centre of these questions of subjectivity, personhood, and technologically mediated ways of seeing. It remains the vexed and ambivalent site to which meaning adheres.
Facial beauty perception is a complex area of study that has intrigued researchers across various disciplines. While some argue that it is subjective, influenced by personal and cultural factors, others propose that it is objective and rooted in evolutionary biology. This study explores the latter perspective, aiming to model facial beauty with an emphasis on racial fairness. Departing from black box convolutional deep learning approaches that are susceptible to racial biases, particularly arising from their holistic consideration of facial attributes such as skin tone, our focus lies solely on designing a more transparent machine learning model that integrates guardrails to prevent the introduction of such biases. By deliberately excluding skin tone and selecting specific features for the model to learn from, we aim to ensure a more equitable assessment across diverse racial and ethnic groups. Following rigorous training and evaluation, our hybrid model demonstrated impressive predictive performance, despite prioritizing transparency and racial fairness over complexity.
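To illustrate the guardrail-by-feature-selection idea, a minimal sketch follows in which a transparent linear model is fit on geometry-only features, with colour and skin-tone attributes excluded by construction; the feature names and data are illustrative assumptions, not the paper's actual feature set or hybrid model.

```python
# Sketch of the "guardrail" idea: restrict a transparent model to geometry-only
# facial features, deliberately excluding any colour/skin-tone channel.
# Feature names and data are illustrative, not the paper's actual inputs.
import numpy as np
from sklearn.linear_model import Ridge

GEOMETRY_FEATURES = ["eye_distance_ratio", "face_width_height_ratio",
                     "nose_mouth_ratio", "symmetry_score"]
# Colour-derived attributes such as mean skin tone never enter the model.

rng = np.random.default_rng(0)
X = rng.random((200, len(GEOMETRY_FEATURES)))                      # toy features
y = X @ np.array([0.4, 0.3, 0.1, 0.2]) + rng.normal(0, 0.05, 200)  # toy ratings

model = Ridge(alpha=1.0).fit(X, y)
for name, w in zip(GEOMETRY_FEATURES, model.coef_):
    print(f"{name}: {w:+.3f}")  # every learned weight is directly inspectable
```

The design choice is that fairness is enforced at the input boundary (the protected correlate simply cannot be learned from) rather than patched after training, and the linear form keeps each feature's influence auditable.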
Research on scientific/intellectual movements, and social movements generally, tends to focus on resources and conditions outside the substance of the movements, such as funding and publication opportunities or the prestige and networks of movement actors. Drawing on Pinch’s theory of technologies as institutions, I argue that research methods can also serve as resources for scientific movements by institutionalizing their ideas in research practice. I demonstrate the argument with the case of neuroscience, where the adoption of machine learning changed how scientists think about measurement and modeling of group difference. This provided an opportunity for members of the sex difference movement by offering a ‘truly categorical’ quantitative methodology that aligned more closely with their understanding of male and female brains and bodies as categorically distinct. The result was a flurry of publications and symbiotic relationships with other researchers that rescued a scientific movement which had been growing increasingly untenable under the prior methodological regime of univariate, frequentist analyses. I call for increased sociological attention to the inner workings of technologies that we typically black box in light of their potential consequences for the social world. I also suggest that machine learning in particular might have wide-reaching implications for how we conceive of human groups beyond sex, including race, sexuality, criminality, and political position, where scientists are just beginning to adopt its methods.
This article surveys the ways in which issues of race and gender bias emerge in projects involving the use of predictive analytics, big data and artificial intelligence (AI). It analyses some of the reasons biased results occur and argues for the importance of open documentation and explainability in combatting these inequities. Digital humanities can make a significant contribution in addressing these issues. This article was written in late 2020, and discussion and public debate about AI and bias has moved on enormously since the article was completed. Nevertheless, the fundamental proposition of this article has become even more important and pressing as debates around AI have progressed – namely, that as a result of the development of big data and AI, it is vital to foster critical and socially aware approaches to the construction and analysis of data. The greatest threat to humanity from AI comes not from autonomous killer robots but rather from the social dislocation and injustices caused by an overreliance on poorly designed and badly documented commercial black boxes to administer everything from health care to public order and crime.
Predictive policing has become a new panacea for crime prevention. However, we still know too little about the performance of computational methods in the context of predictive policing. The paper provides a detailed analysis of existing approaches to algorithmic crime forecasting. First, it is explained how predictive policing makes use of predictive models to generate crime forecasts. Afterwards, three epistemologies of predictive policing are distinguished: mathematical social science, social physics and machine learning. Finally, it is shown that these epistemologies have significant implications for the constitution of predictive knowledge in terms of its genesis, scope, intelligibility and accessibility. It is the different ways future crimes are rendered knowledgeable in order to act upon them that reaffirm or reconfigure the status of criminological knowledge within the criminal justice system, direct the attention of law enforcement agencies to particular types of crimes and criminals and blank out others, satisfy the claim for the meaningfulness of predictions or break with it and allow professionals to understand the algorithmic systems they shall rely on or turn them into a black box. By distinguishing epistemologies and analysing their implications, this analysis provides insight into the techno-scientific foundations of predictive policing and enables us to critically engage with the socio-technical practices of algorithmic crime forecasting.
Scholars and practitioners across domains are increasingly concerned with algorithmic transparency and opacity, interrogating the values and assumptions embedded in automated, black-boxed systems, particularly in user-generated content platforms. I report from an ethnography of infrastructure in Wikipedia to discuss an often understudied aspect of this topic: the local, contextual, learned expertise involved in participating in a highly automated social–technical environment. Today, the organizational culture of Wikipedia is deeply intertwined with various data-driven algorithmic systems, which Wikipedians rely on to help manage and govern the “anyone can edit” encyclopedia at a massive scale. These bots, scripts, tools, plugins, and dashboards make Wikipedia more efficient for those who know how to work with them, but like all organizational culture, newcomers must learn them if they want to fully participate. I illustrate how cultural and organizational expertise is enacted around algorithmic agents by discussing two autoethnographic vignettes, which relate my personal experience as a veteran in Wikipedia. I present thick descriptions of how governance and gatekeeping practices are articulated through and in alignment with these automated infrastructures. Over the past 15 years, Wikipedian veterans and administrators have made specific decisions to support administrative and editorial workflows with automation in particular ways and not others. I use these cases of Wikipedia’s bot-supported bureaucracy to discuss several issues in the fields of critical algorithms studies; critical data studies; and fairness, accountability, and transparency in machine learning—most principally arguing that scholarship and practice must go beyond trying to “open up the black box” of such systems and also examine sociocultural processes like newcomer socialization.
Artificial intelligence (AI) design traditionally prioritizes full automation and the broadest data coverage, yet often overlooks the contextual realities of pre-existing biases in datasets and of human agency in real-world applications. This article explores how cross-disciplinary collaboration can transform this paradigm by opening AI's 'black box' of full automation to public accountability, human operation, and real-world use. Based on anthropologists' direct collaboration with computer scientists in a machine learning (ML) AI design project, this project proposes cross-disciplinary methodological innovations for algorithm design and takes up fairness as an exemplary domain in which to experiment with our new ML model framework. We developed a new ML framework that moves beyond conventional accuracy-coverage trade-offs to incorporate a human-operable three-way balance between accuracy, fairness, and coverage. In this framework, we foreground real-world contexts and human agency at multiple stages of AI design, from data training and decision-making to interface design and model testing. The results demonstrate both the promise and the challenges of bridging academic disciplines and connecting lab-based AI development with real-world needs. While our collaboration has made AI design more accountable, it also highlights enduring tensions in balancing competing priorities like fairness, accuracy, and coverage.
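A minimal sketch of how such a human-operable three-way balance might be exposed to operators, assuming a reject-option setup in which low-confidence cases are deferred to humans; this illustrates the accuracy-fairness-coverage trade-off in general, not the authors' actual framework.

```python
# Reject-option sketch: sweeping a confidence threshold tau exposes the
# coverage / accuracy / fairness trade-off to a human operator. Data and the
# group variable are synthetic stand-ins; this is not the authors' framework.
import numpy as np

def evaluate_at_threshold(proba, y_true, group, tau):
    """Coverage, accuracy, and demographic-parity gap once low-confidence
    cases (confidence < tau) are deferred to a human."""
    conf = np.maximum(proba, 1 - proba)
    keep = conf >= tau                       # below tau -> human review
    pred = (proba >= 0.5).astype(int)
    coverage = keep.mean()
    accuracy = (pred[keep] == y_true[keep]).mean()
    rates = [pred[keep & (group == g)].mean() for g in np.unique(group)]
    return coverage, accuracy, max(rates) - min(rates)

rng = np.random.default_rng(0)
proba = rng.random(1000)
y_true = (proba + rng.normal(0, 0.2, 1000) > 0.5).astype(int)
group = rng.integers(0, 2, 1000)
for tau in (0.5, 0.7, 0.9):   # each tau is one point on the trade-off surface
    print(tau, evaluate_at_threshold(proba, y_true, group, tau))
```

Keeping the threshold as an explicit, operator-visible dial is one way to realize the article's point that humans, not the pipeline, should own the balance among the three objectives.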
The surge in crime rates, particularly in urban regions, has underscored the importance of predictive policing within law enforcement strategies. This research introduces a neural network-based crime prediction model, specifically tailored to address the complexities of Sri Lanka’s crime landscape. By combining big data analytics with advanced machine learning methods—including ensemble models such as Random Forest and Gradient Boosting, alongside Artificial Neural Networks (ANNs)—our study presents a robust framework to forecast crime incidents, locations, and time spans. While neural networks excel in predictive accuracy, their "black-box" nature can hinder practical applications in critical fields like law enforcement. To address this, our model integrates Explainable AI (XAI), making the decision-making process of the system transparent and interpretable for end-users. XAI helps break down complex neural network predictions, ensuring trust and clarity in the model’s insights. With a prediction accuracy rate of 85%, this approach demonstrates substantial potential to improve crime prevention efforts and optimize resource allocation. Our research not only highlights the predictive strengths of neural networks but also showcases the essential role of interpretability for deploying these models effectively in real-world policing.
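For readers unfamiliar with the ensemble-plus-XAI pairing this abstract describes, the following sketch fits a random forest on synthetic incident data and attaches SHAP attributions; the feature semantics are invented placeholders, not the study's Sri Lankan data or its exact pipeline.

```python
# Sketch of pairing an ensemble crime-forecast model with SHAP explanations.
# Features and labels are synthetic stand-ins, not the study's data.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 4))                    # stand-ins: hour, density, priors, distance
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)   # toy "incident occurs" label

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)       # decomposes each forecast per feature
shap_values = explainer.shap_values(X[:50]) # local attributions for 50 cases
```

The SHAP step is what turns the "black-box" forecast into a per-case breakdown an officer or analyst can interrogate, which is the trust-and-clarity role the abstract assigns to XAI.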
Artificial intelligence (AI) has become a significant methodological and epistemological paradigm in the social sciences. Machine learning (ML), natural language processing (NLP), and generative models enable researchers to work with big, multimodal datasets, identify complex patterns, and recreate events in the social world in ways that were previously not feasible. At the same time, these innovations raise ethical challenges related to algorithmic bias, black boxes, data extractivism, and reinforced structural inequalities in welfare, government services, education, and criminal justice. The article critically examines the social sciences in the light of AI along three inextricably linked dimensions: (1) the opportunities AI provides for social-scientific inquiry; (2) the biases and constraints generated through data, models, and institutional application; and (3) the ethical pathways necessary for the responsible governance of AI-facilitated research and decision support. The article is based on a scoping, critical thematic review of the recent literature and conceptualizes AI as a socio-technical infrastructure that produces knowledge and, at the same time, exercises power. It explains how AI practices are restructuring disciplines such as sociology, psychology, political science, and policy analysis, and how data practices, design choices, and governance arrangements can either preserve or dismantle existing hierarchies. The paper proposes an analytical framework synthesizing AI practices, social research practices, and governance structures within ethical frameworks. It argues that the emancipatory promise of AI in the social sciences depends on moving beyond principle-based claims of so-called ethical AI toward operational governance mechanisms that make systems visible, debatable, and responsible in their respective situations.
Contemporary culture is shaped by information technology, in particular artificial intelligence applications. One goal of this paper is to analyze how artistic practices can use machine learning algorithms as a form of racial resistance, and to open the black box of how these applications work by recounting the technical processes that artists confront. It analyzes the aesthetic and narrative perceptions around artificial intelligence, the racism embedded in the datasets created to train these algorithms, and the possibilities artificial intelligence opens for rethinking concepts such as intelligence and imagination. The research is framed by a posthumanist subjectivity that uses critical imagination to question the classic, Eurocentric definition of the human as the measure of what surrounds us. Finally, I describe the work of the contemporary artist Linda Dounia and her interest in incorporating her experience as a Senegalese woman into the training of Generative Adversarial Network models to reflect on her identity.
Social media can be a double-edged sword for society: a convenient channel for exchanging ideas, or an unexpected conduit circulating fake news through a large population. While existing studies of fake news focus on theoretical modeling of propagation or identification methods based on machine learning, it is important to understand the realistic propagation mechanisms that lie between theoretical models and black-box methods. Here we track large databases of fake news and real news on Weibo in China and Twitter in Japan, two platforms from different cultures, including their traces of re-postings. We find in both online social networks that fake news spreads distinctively from real news even at early stages of propagation, e.g. five hours after the first re-postings. Our finding demonstrates collective structural signals that help to understand the different propagation evolution of fake news and real news. Unlike earlier studies, identifying the topological properties of information propagation at early stages may offer novel features for the early detection of fake news in social media.
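The "collective structural signals" idea can be illustrated with standard cascade descriptors; the sketch below computes size, depth, breadth, and structural virality for a toy re-posting cascade with networkx, offered as one plausible reading rather than the paper's exact feature set.

```python
# Early-stage structural signals for a re-posting cascade (illustrative
# descriptors, not necessarily the paper's exact features).
import networkx as nx

def cascade_signals(edges, root):
    """edges: (parent, child) re-posting pairs observed in the first hours."""
    g = nx.DiGraph(edges)
    depths = nx.shortest_path_length(g, source=root)
    return {
        "size": g.number_of_nodes(),
        "depth": max(depths.values()),                # longest re-post chain
        "breadth": max(list(depths.values()).count(d)
                       for d in set(depths.values())),
        # structural virality = mean pairwise distance in the undirected tree
        "structural_virality":
            nx.average_shortest_path_length(g.to_undirected()),
    }

print(cascade_signals([("a", "b"), ("a", "c"), ("c", "d")], root="a"))
```

Computed five hours into a cascade, feature vectors like this could feed an early-detection classifier without waiting for the full propagation to unfold, which is the practical point the abstract makes.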
This article explores the evolving role of cloud data architects in developing human-centric AI systems where artificial intelligence enhances rather than replaces human capabilities. As AI becomes increasingly embedded in cloud-native architectures, a paradigm shift is occurring from viewing AI as isolated black boxes toward seeing them as collaborative partners in sociotechnical systems. The article examines fundamental principles of human-centric AI architecture: meaningful human control through tiered autonomy frameworks, transparency by design across multiple levels, and sophisticated feedback integration mechanisms. It details architectural patterns including human-in-the-loop workflows, explainable architecture with layered explanation services, and adaptive feedback systems that enable continuous learning. The article addresses implementation challenges such as balancing automation with human judgment, scaling oversight as systems grow, and effectively handling human-AI disagreements. Looking toward future directions, it explores emerging concepts of collaborative intelligence frameworks, adaptive interfaces, and embedded ethics mechanisms. Throughout, the article emphasizes that successful human-centric architecture creates systems where humans retain appropriate control while leveraging the complementary strengths of machine intelligence.
The paper deals with the quandary of the neutrality and transparency of technologies. First, I show how this problem is connected with the image of the opening of 'black boxes' that is pivotal to much of science and technology studies. Second, methodological and socio-political dimensions of the 'black box' metaphor are discussed. Third, I analyze three typical solutions to the problem of the neutrality of technologies outside and inside constructivist technology studies. It is demonstrated that despite their apparent differences, these solutions are similar in their logic of conceptualizing technology as a neutral intermediary. Fourth, I look for an alternative to this logic in the actor-network theory of Bruno Latour. Here technologies are conceived in terms of an eventful association of heterogeneous entities irreducible to its conditions of possibility. The construction of technologies is understood as mediation, or as a 'making-do' process where creators are surprised by their creations and vice versa. In Latour's actor-network, technologies are interpreted as opaque and non-neutral entities. Finally, I turn to some object-lessons from smart technologies powered by neural networks to demonstrate that these are empirical vindications of Latour's conception of technical mediation. Particular attention is paid to the opacity and (non)interpretability of machine learning algorithms.
In the era of Industry 4.0, artificial intelligence and machine learning are the undisputed protagonists on the world stage. In the last decade, expert and learning systems have been widely applied to a plethora of tasks that were once considered solvable only by a human, and machines are becoming more reliable and accurate than human experts. However, despite artificial intelligence permeating our everyday lives, people do not fully register this radical change and hold a warped perception of it. In this work, we aim to explore and analyze human perception of, and trust in, artificial intelligence. To this end, we collected personal opinions on salient topics in the artificial intelligence literature, namely the interpretability of black-box models and the reliability of autonomous systems. The analysis was conducted across different applications, including the medical and cybersecurity domains.
As machine learning becomes more ubiquitous, questions of AI and information ethics loom large. Much concern has been focused on promoting AI that results in more fair outcomes that do not discriminate against protected classes, such as those marginalized on the basis of gender and race. Yet little of that work has specifically investigated disability. Two notable exceptions, both from within the spaces of disability studies and assistive technology (AT), are Shari Trewin's statement on "AI Fairness for People with Disabilities" [1] and the World Institute on Disability's comments on AI. We argue that making disability explicit in discussions of AI and fairness is urgent, as the quick, black-boxed nature of automatic decision making exacerbates the disadvantages that people with disabilities already endure and creates new ones. Though low representation in datasets is blamed, increasing representation will be complex, given disability politics. For example, disabled people strategically choose whether and how to disclose their disabilities (if they even identify as having disabilities), likely leading to inconsistent datasets even when disability information is collected.
Tech leaders, and even those of us building software systems for governing our world, like to imagine the ways in which automated systems might enable a better future: curing diseases, lifting people from poverty, saving animals from extinction. However, there is plenty of evidence that platforms and algorithms are instead contributing directly to huge societal problems such as the spread of misinformation, political polarization, and rising inequality. I will talk about the role of microtargeting of individuals based on digital surveillance, machine learning trained on historical data, and black-box algorithms that lack explainability and accountability. If we want automated decision making that protects the rights of individuals and healthy societies, and that reduces bias rather than amplifying it, we are going to need to engage in real discussions of the trade-offs with stakeholders, not just empty promises that "this is going to be great for everyone". We need to disrupt the current levels of digital surveillance, shift control over surveillance data, require explanations of automated decisions, enable aggressive third-party testing, and create incentives for iterative improvement in systems where those being impacted by the automated systems are not the customers.
Computational criminology has been seen primarily as computer-intensive simulations of criminal wrongdoing. But there is a growing menu of computer-intensive applications in criminology that one might call "computational," which employ different methods and have different goals. This paper provides an introduction to computer-intensive, tree-based, machine learning as the method of choice, with the goal of forecasting criminal behavior. The approach is "black box," for which no apologies are made. There are now in the criminology literature several such applications that have been favorably evaluated with proper hold-out samples. Peeks into the black box indicate that conventional, causal modeling in criminology is missing significant features of crime etiology.
Explainable AI (XAI) attempts to provide explanations and thus increase trust, though the effect is influenced by demographic factors, cultural match, and perceived fairness. This study explores the role of trust and distrust with respect to sociodemographic data and perceived fairness in the SOC system's decision-making. It examines whether cultural match influences perceptions of fairness and whether the nature of the explanation (technical, plain language, or human) influences trust. The study employed a cross-sectional survey of 240 participants and used experimental vignettes in which participants received decisions from an AI with or without explanations of one of three types. Fairness perceptions and algorithmic trust and distrust were analyzed using regression, MANOVA, and mediation analysis. The study shows that human decisions are trusted most, followed by plain-language AI-generated reasons; technical reasons are trusted least. It also shows that perceived fairness regulates trust and that low-income users are more sensitive to fairness perceptions. Cultural match showed a strong association with fairness perception, emphasizing the need for a context-based approach to AI governance. Passive exposure to AI, however, does not imply trust, and transparency must therefore be perceived appropriately by the public. This study aims to expand knowledge in AI governance by applying both procedural justice and algorithmic accountability frameworks. It points to a lack of generalized public trust in AI and stresses the need for culturally sensitive and inclusive AI designs. The recommendations indicate that explainability matters more than the mere technical process in driving policy changes.
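A minimal product-of-coefficients mediation sketch in Python (statsmodels) of the kind of analysis the abstract names, with perceived fairness mediating between explanation type and trust; the variable names and simulated data are assumptions, not the study's actual instrument.

```python
# Product-of-coefficients mediation sketch: explanation type -> perceived
# fairness -> trust. Variable names and data are illustrative placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 240
df = pd.DataFrame({"plain_language": rng.integers(0, 2, n)})  # 0/1 condition
df["fairness"] = 0.5 * df["plain_language"] + rng.normal(size=n)
df["trust"] = 0.6 * df["fairness"] + 0.1 * df["plain_language"] \
              + rng.normal(size=n)

a = smf.ols("fairness ~ plain_language", df).fit().params["plain_language"]
b = smf.ols("trust ~ fairness + plain_language", df).fit().params["fairness"]
print("indirect effect a*b =", a * b)  # bootstrap the CI in a real analysis
```

The a*b product captures how much of the explanation-type effect on trust runs through fairness, which is the mediating role the study attributes to perceived fairness.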
This research seeks to address persistent socioeconomic disparities in Nepal's education system by integrating explainable artificial intelligence (XAI) with foundational social theories. While enrollment rates have improved, inequities in access, retention, and learning outcomes remain among communities marginalized by caste, gender, and geography. Existing research and policies often depend on outdated statistical approaches and fail to combine social theory with modern machine learning. To overcome this gap, we adopt a mixed-methods design that blends quantitative modeling with qualitative insights from educators, policymakers, and community stakeholders. Using national datasets (EMIS, NLSS), machine learning models such as Random Forest and XGBoost are applied to predict educational disparities. SHAP (SHapley Additive exPlanations) is employed to interpret results and highlight the most influential factors. These patterns are further contextualized using Sen's Capability Approach and Bourdieu's Cultural Capital Theory, ensuring that findings reflect both structural conditions and lived experiences. The study delivers several policy-relevant outcomes: a resource allocation framework to support equitable distribution, interactive dashboards for simulating policy scenarios, and early-warning indicators for student dropouts. Importantly, the qualitative component complements the quantitative models, capturing voices and perspectives often excluded from policy discussions. By linking XAI with equity-focused theories, this work contributes to academic debates on educational data science, while also providing actionable tools for policymakers. Ultimately, it supports evidence-based advocacy that empowers marginalized communities and advances a more inclusive education system in Nepal.
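A minimal sketch of the XGBoost-plus-SHAP step described here, extended with a group-level comparison of attributions to surface disparity drivers; the features, group variable, and data are synthetic assumptions, not the EMIS/NLSS datasets or the study's pipeline.

```python
# XGBoost + SHAP sketch: fit a dropout-risk model, then compare mean |SHAP|
# attributions between groups to see which factors drive predicted disparity.
# All features, the group variable, and the data are synthetic stand-ins.
import numpy as np
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 600
X = rng.random((n, 3))                 # e.g. travel_time, income, teacher_ratio
group = rng.integers(0, 2, n)          # e.g. marginalized community vs. not
y = ((X[:, 0] + 0.3 * group) > 0.8).astype(int)   # toy dropout label

X_full = np.column_stack([X, group])
model = XGBClassifier(n_estimators=100).fit(X_full, y)
sv = shap.TreeExplainer(model).shap_values(X_full)
for g in (0, 1):                       # mean |SHAP| per feature, per group
    print(g, np.abs(sv[group == g]).mean(axis=0))
```

Contrasting attribution profiles across groups is one way such a model's outputs could be read through Sen's or Bourdieu's lens: the dominant SHAP factors become candidates for structural, rather than individual, explanation.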
Public institutions have begun to use AI systems in areas that directly impact people’s lives, including labor, law, health, and migration. Explainability ensures that these systems are understandable to the involved stakeholders, while its emerging counterpart contestability enables them to challenge AI decisions. Both principles support the responsible use of AI systems, but their implementation needs to take into account the needs of people without technical background, AI novices. I conduct interviews and workshops to explore how explainable AI can be made suitable for AI novices, how explanations can support their agency by allowing them to contest decisions, and how this intersection is conceptualized. My research aims to inform policy and public institutions on how to implement responsible AI by designing for explainability and contestability. The Remote Doctoral Consortium would allow me to discuss with peers how these principles can be realized and account for human factors in their design.
AI-enabled automation is reshaping production, services, and knowledge work, bringing real concerns about displacement, deskilling, and the concentration of decision power in opaque systems. At the same time, research in human-AI interaction, human-in-the-loop (HITL) methods, and explainable AI (XAI) shows that AI can be designed to augment human capabilities and support work practices that are more equitable and trustworthy. Existing work, however, tends to focus on interaction patterns and system-level governance, while leaving the design of jobs, skills, and certification pathways under-specified. This paper introduces the concept of human-in-the-loop labor architectures: socio-technical configurations in which HITL and XAI capabilities are deliberately used to reinstate and reconfigure jobs, define new skill profiles, and anchor certification and policy interventions. We propose a four-layer framework (capabilities, roles, skills and certification, and governance) that connects AI affordances to occupational structures and institutional arrangements. Drawing on research on AI and the future of work, human-centered AI, and organisational fairness, we show how this architecture can reinstate and upgrade roles in automation, human resources, collaboration, and well-being, and how governments and institutions can use it to elevate human capabilities and quality of life in an evolving global economy.
Background: Medical-purpose software and Artificial Intelligence ("AI")-enabled technologies ("medical AI") raise important social, ethical, cultural, and regulatory challenges. To elucidate these important challenges, we present the findings of a qualitative study undertaken to elicit public perspectives and expectations around medical AI adoption and related sociotechnical harm. Sociotechnical harm refers to any adverse implications including, but not limited to, physical, psychological, social, and cultural impacts experienced by a person or broader society as a result of medical AI adoption. The work is intended to guide effective policy interventions to address, prioritise, and mitigate such harm. Methods: Using a qualitative design approach, twenty interviews and/or long-form questionnaires were completed between September and November 2024 with UK participants to explore their perspectives, expectations, and concerns around medical AI adoption and related sociotechnical harm. An emphasis was placed on diversity and inclusion, with study participants drawn from racially, ethnically, and linguistically diverse groups and from self-identified minority groups. A thematic analysis of interview transcripts and questionnaire responses was conducted to identify general medical AI perception and sociotechnical harm. Results: Our findings demonstrate that while participants are cautiously optimistic about medical AI adoption, all participants expressed concern about matters related to sociotechnical harm. This included potential harm to human autonomy, alienation and a reduction in standards of care, the lack of value alignment and integration, epistemic injustice, bias and discrimination, and issues around access and equity, explainability and transparency, and data privacy and data-related harm. While responsibility was seen to be shared, participants located responsibility for addressing sociotechnical harm primarily with the regulatory authorities. An identified concern was the risk of exclusion and inequitable access on account of practical barriers such as physical limitations, technical competency, language barriers, or financial constraints. Conclusion: We conclude that medical AI adoption can be better supported through identifying, prioritising, and addressing sociotechnical harm, including the development of clear impact and mitigation practices, embedding pro-social values within the system, and effective policy guidance intervention.
Purpose: As global awareness of digital inclusivity increases, governments are increasingly optimizing artificial intelligence (AI) technologies to bridge public service gaps and provide equitable access to essential services for persons with disabilities (PWDs). This study aims to explore how AI-powered public services for PWDs are bridging the digital inclusivity gap in Ghana. Design/methodology/approach: Using a qualitative research approach, interviews were conducted with the Ministry of Gender, Children and Social Protection, AI developers, PWD advocacy groups, and PWDs. The data collected were analyzed using thematic analysis. Findings: The study reveals that AI-powered public services in Ghana have the potential to reduce the exclusion gap for PWDs; however, poor policy design and implementation, lack of political commitment, financial constraints, and weak IT infrastructure hamper their effectiveness. Practical implications: The results indicate fundamental deficiencies in the design, execution, and accessibility of AI-powered public services for PWDs, highlighting the need to address these challenges to attain inclusive AI technologies. Social implications: AI-powered public services must be developed to meet local needs, ensuring that no category of PWDs is left out. Failure to do so will widen existing social inequality, in which PWDs continue to be marginalized even in the face of a growing digital world. Originality/value: To the best of the author's knowledge, the paper is among the first to deploy critical disability theory, digital justice, and political process theory to explain the use of AI-driven technologies in enhancing public service delivery for PWDs in sub-Saharan Africa.
This work argues that current approaches to mitigating the harms of generative AI (genAI) overlook the perspectives of lay stakeholders: people without professional or technical expertise relating to genAI, but who possess a contextual and situational experience about how the impacts of these systems affect them on the ground level. While expert-driven frameworks dominate risk assessment, this research introduces a participatory, sociotechnical framework that redistributes agency to lay stakeholders through three mechanisms addressing different scales of harm, each presented through a different case study. At the micro-scale, a technological mechanism empowers users of generative music tools to resist harms resulting from uninformed attribution, such as copyright infringement. At the macro-scale, a policy mechanism integrates lay stakeholders' perspectives into the policy pipeline to inform governance of AI in relation to media ecosystems. At the meso-scale, a judicial mechanism strengthens courts’ ability to evaluate genAI-related copyright disputes by incorporating lay perspectives into evidentiary processes. Together, these mechanisms propose actionable ways to balance relationships between technology developers, governing bodies, and lay stakeholders, ensuring that those most affected by genAI are meaningfully included in addressing the harms.
Algorithmic bias remains a persistent ethical challenge in the deployment of artificial intelligence (AI) systems, particularly where opaque decision-making intersects with entrenched social inequities. While technical solutions such as fairness-aware algorithms and explainability tools have proliferated, the governance dimensions of AI ethics, especially the role of diversity in shaping oversight structures, remain undertheorized. This article introduces the Diversity-Centric AI Governance Framework (DCAIGF), a novel model that integrates cognitive diversity, intersectionality ethics, and cross-cultural regulatory alignment as foundational elements of inclusive AI oversight. Grounded in 65 semi-structured expert interviews, comparative case studies (Google and IBM), and policy analysis of key global frameworks (e.g., EU AI Act, UNESCO Recommendation on AI Ethics, OECD AI Principles), this study finds that homogenous governance structures often reproduce epistemic blind spots and normative monocultures. In contrast, diverse institutional architectures foster reflexivity, accountability, and ethical robustness across contexts. By conceptualizing diversity as ethical infrastructure rather than symbolic representation, DCAIGF advances four innovations: mandated cognitive pluralism, embedded intersectionality, hybrid legal adaptability, and modular implementation pathways. These features enable practical translation across public, private, and multilateral governance ecosystems. The paper contributes to AI ethics by offering a socio-technical, globally relevant, and empirically grounded model for institutional reform. It further proposes a policy agenda that links epistemic justice to regulatory legitimacy, offering a pluralistic roadmap for addressing algorithmic bias beyond the limits of technical mitigation alone.
This study analytically investigates the impact of AI-Generated Content (AIGC) on consumer engagement and the ethical challenges arising from the human-AI co-creation process within the specific context of Maharashtra, India. Ever since marketers began deploying AIGC at scale, concerns have been raised about consumer trust, perceived authenticity, and moral responsibility, because generative algorithms still operate in an opaque, "black-box" way. It has been suggested that Explainable Artificial Intelligence (XAI) can address such concerns through the transparency and understandability of AI-generated outputs. Yet empirical evidence remains scarce on whether consumers in emerging, digitally evolving markets actually value such transparency cues, or whether the efficiency and personalization advantages of AIGC render explainability unnecessary. This paper tackles the issue head-on by experimentally comparing consumer reactions to disclosed and undisclosed AIGC in the Indian regional context. Taking the state, a major digital commerce hub with complex multilingual dynamics (Marathi–Hindi–English), as its setting, the research moves beyond global studies to fill a critical gap in understanding regional consumer responses to algorithmic marketing. The core theoretical mechanism tested is the Explainable Co-Creation (ECC) Model, which posits Perceived Authenticity (PA) as a mediator between AI Content Transparency (ACT) and Consumer Engagement (CE), with Consumer Trust (CT) and Ethical Awareness (EA) as two key moderators. A cross-sectional, mixed-methods survey with a stimuli-based component involving urban consumers in Mumbai, Pune, Nashik, and Nagpur was deployed to carry out the research, and six foundational hypotheses were tested using PLS-SEM. The preliminary results are anticipated to reveal an important Paradox of Transparency, in which a high level of Ethical Awareness could negatively moderate the acceptance of disclosed AIGC. This implies that for digitally conscious Indian consumers, transparency by itself may not be enough to restore authenticity. The study offers important managerial implications for building culturally sensitive, hybrid AI-human content strategies, and it supports policy interventions such as mandatory AI content labeling to facilitate consumer autonomy in emerging digital markets. The research extends consumer engagement theory to the non-sentient co-creation domain and provides a regional AI ethics lens for future research.
Artificial intelligence technologies are radically transforming modern society, with revolutionary applications in healthcare, education, economic systems, and government, while raising unprecedented moral dilemmas that demand inclusive policy responses. In healthcare, advanced diagnostic algorithms achieve superior accuracy in oncology imaging, and pharmaceutical development platforms shorten drug development timelines through molecular modeling. In education, adaptive learning platforms personalize instruction by adjusting content presentation to individual students' comprehension patterns and interaction metrics. At the same time, algorithmic bias manifests as systematic disparities in facial recognition technologies and criminal justice systems, perpetuating discrimination against marginalized groups. Privacy erodes through pervasive data collection systems that enable high-resolution behavioral profiling and the commercial exploitation of personal data in targeted advertising marketplaces. Economic disruption involves a workforce shift from experienced, skilled labor toward technology-enabled collaborative settings, best illustrated by changes in the construction sector that require comprehensive retraining programs combining digital fluency with domain-specific knowledge. Trust depends on responsive, explainable AI systems capable of adjusting their transparency dynamically to contextual needs and user expertise. Multi-stakeholder governance models perform better at balancing innovation with risk control, using cooperative regulatory arrangements that incorporate the perspectives of healthcare professionals, technology innovators, patients, ethicists, and policymakers to promote responsible AI deployment that safeguards human well-being while pursuing technological advancement.
Artificial intelligence is often framed as a neutral technical tool that enhances efficiency and consistency in institutional decision-making. This article challenges that framing by showing that automated systems now operate as social and institutional actors that reshape recognition, opportunity, and public trust in everyday life. Focusing on employment screening, welfare administration, and digital platforms, the study examines how algorithmic systems mediate social relations and reorganise how individuals are evaluated, classified, and legitimised. Drawing on regulatory and policy materials, platform governance documents, technical disclosures, and composite vignettes synthesised from publicly documented evidence, the article analyses how automated judgement acquires institutional authority. It advances three core contributions. First, it develops a sociological framework explaining how delegated authority, automated classification, and procedural opacity transform institutional power and individual standing. Second, it demonstrates a dual logic of inequality: automated systems both reproduce historical disadvantage through patterned data and generate new forms of exclusion through data abstraction and optimisation practices that detach individuals from familiar legal, social, and moral categories. Third, it shows that automation destabilises procedural justice by eroding relational recognition, producing trust deficits that cannot be resolved through technical fairness or explainability alone. The findings reveal that automated systems do not merely support institutional decisions; they redefine how institutions perceive individuals and how individuals interpret institutional legitimacy. The article concludes by outlining governance reforms aimed at restoring intelligibility, accountability, inclusion, and trust in an era where automated judgement increasingly structures social opportunity and public authority.
The article examines the anthropomorphic design of LLM-based chatbots and its role in promoting the ascription of subjectivity to algorithms. This cognitive, sociological, and cultural phenomenon is situated within the public discourse surrounding AI and the economic drivers shaping it. The article asks whether this development, which by now seems irreversible, should have been regulated differently. Two central frameworks are considered, transparency and dignity, and the article unpacks the implications of each path. The ‘post mortem’ analysis shows that the EU AI Act’s treatment of this phenomenon through transparency has proven ineffective, and it is argued that a more suitable and powerful regulatory framework could have been grounded in the right to dignity. The article further examines the question from a broader international perspective and illustrates its implications through cases currently unfolding in the US.
Integrating Artificial Intelligence (AI) into digital recruitment platforms has introduced significant enhancements in efficiency and decision-making, alongside complex ethical challenges regarding fairness, transparency, and accountability in candidate evaluation. This study investigates how leading AI-driven recruitment platforms articulate and operationalize ethical principles and whether these commitments are effectively translated into practice. Employing a qualitative exploratory design, the research analyzes official white papers, privacy policies, and AI ethics statements from LinkedIn, HireVue, Pymetrics, and ModernHire. Data was examined using AI-assisted text mining and thematic content analysis to identify ethical discourse patterns and assess the depth of implementation. The findings indicate that moral terms such as “fairness” and “bias” are cited frequently, with LinkedIn referencing them 27 times and HireVue 19 times. A comparative transparency assessment yielded scores of 8.5 out of 10 for LinkedIn, 7.2 for HireVue, 6.8 for Pymetrics, and 4.3 for ModernHire, while formal mechanisms for candidate appeals were absent on most platforms. This study contributes to the field by revealing a persistent gap between stated ethical ideals and operational practices in AI recruitment and by recommending the adoption of explainable AI, transparent auditing frameworks, and international regulatory standards. Such measures are essential to foster more accountable, equitable, and humane AI-based hiring processes.
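A minimal sketch of the kind of AI-assisted term-frequency pass that could produce counts like those reported above (e.g., 27 mentions for LinkedIn). The directory layout and file names are hypothetical; the study's actual text-mining pipeline is not public.

```python
# Minimal sketch of the term-frequency step behind the reported counts.
# File names are hypothetical; the study analysed official white papers,
# privacy policies, and ethics statements from each platform.
import re
from pathlib import Path

ETHICS_TERMS = ["fairness", "bias", "transparency", "accountability"]

def term_counts(text: str) -> dict[str, int]:
    """Count whole-word occurrences of each ethics term, case-insensitively."""
    return {t: len(re.findall(rf"\b{t}\b", text, flags=re.IGNORECASE))
            for t in ETHICS_TERMS}

for doc in Path("policies").glob("*.txt"):   # e.g. linkedin.txt, hirevue.txt
    counts = term_counts(doc.read_text(encoding="utf-8"))
    print(doc.stem, counts)
```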
Generative artificial intelligence (GenAI) is transforming public health and medicine, from disease surveillance and resource allocation to clinical decision-making. Efficiency-oriented interventions, such as multimodal predictive algorithms and federated learning platforms, expose the system's internal tensions: algorithmic efficiency versus fairness, the speed of technical innovation versus regulatory deficits, and borderless data flows versus locally rooted ethical values. We present a three-dimensional governance structure spanning the technical, institutional, and ethical domains. Technically, explainability solutions and culturally aware design reconcile transparency with cultural sensibility. Institutionally, privacy-protecting data platforms and risk-based regulation reconcile innovation with accountability. Ethically, incorporating local values and distributing AI dividends sustain equitable health outcomes. Challenges that demand the highest priority remain, including algorithmic prejudice, data imperialism, and opacity in medical AI decision-making. Future priorities include broader measurement tools that integrate clinical, equity, and societal impact; transnational governance institutions to address data sovereignty concerns; and participatory design involving developers, practitioners, and affected populations. Balancing technical creativity, visionary policy-making, and caring leadership in the service of human-centered healthcare can produce trusted AI ecosystems. Technical excellence alone cannot guarantee success unless fairness, accessibility, social responsiveness, and justice for future global health are guaranteed.
The article analyzes contemporary ethical aspects of the use of artificial intelligence (AI) in non-profit and charitable organizations. The study is motivated by the need to ensure transparency, social responsibility, and equal access to digital services as AI is increasingly used to automate processes and decision-making in humanitarian initiatives. The key ethical challenges identified are the opacity of algorithms, the risk of algorithmic bias, breaches of data confidentiality, and digital inequality among beneficiaries. The aim of the work is to develop recommendations for introducing ethical standards for AI use in the non-profit sector, drawing on international practice and ensuring social responsibility. The study used content analysis, systematic comparison, and a review of international experience to identify the key factors for ensuring the ethics and inclusiveness of digital solutions. The results indicate the need to implement the principle of explainable AI, which ensures the transparency and accountability of algorithms. The use of multilingual interfaces adapted for socially vulnerable groups, together with the integration of multi-layered data protection policies, is shown to increase trust in digital solutions. The article concludes that effective implementation of AI in humanitarian programs requires constant monitoring and independent auditing of algorithms to reduce "black box" risks. Prospects for further research include methods for adapting algorithms to local socio-cultural needs, innovations in personal data protection, and training programs to increase the digital literacy of beneficiaries.
Background: Artificial intelligence (AI) and automation are increasingly influencing workplace decision-making, particularly in recruitment, performance evaluations, and career progression. While AI is often perceived as neutral, research highlights that these systems frequently replicate and amplify historical gender biases, disproportionately disadvantaging women and marginalized groups. Existing AI fairness models primarily focus on generic algorithmic bias but fail to address gender-specific and intersectional discrimination. Additionally, corporate AI governance frameworks lack structured enforcement mechanisms, leading to reactive rather than proactive bias mitigation. Objective: This study aims to develop a structured framework for mitigating gender bias in AI-driven workplace automation. It seeks to bridge the gap between AI development and ethical workforce practices by integrating fairness, accountability, and inclusivity into algorithmic decision-making. Methodology: A conceptual research design is adopted, synthesizing insights from AI fairness literature, gender studies, and corporate governance frameworks. The study relies on secondary data sources, including peer-reviewed journal articles, industry reports, and case studies on AI-driven workplace discrimination. Theoretical models such as Gender Role Theory, Algorithmic Bias Theory, and Intersectionality Theory inform the framework’s development. Proposed Model: The study introduces the G.E.N.D.E.R. AI Framework as a structured approach to mitigating gender bias in AI-driven workplace automation. This framework integrates six core components to ensure fairness, accountability, and inclusivity in algorithmic decision-making. Governance and regulation serve as the foundation, establishing AI fairness policies and ensuring compliance with ethical and legal standards. Equitable data training addresses biases embedded in historical datasets by implementing strategies to eliminate discriminatory patterns and promote balanced representation. Neutrality in algorithm design emphasizes fairness-aware programming and model transparency, ensuring that AI-driven systems do not reinforce systemic inequalities. Diversity in AI development teams plays a crucial role in reducing bias by incorporating inclusive perspectives in the design and deployment of AI technologies. Evaluation and bias audits enable continuous monitoring of AI-driven decisions, facilitating early detection and correction of discriminatory patterns in hiring, performance assessments, and career progression. Lastly, responsible AI usage mandates human oversight in AI-powered employment decisions, ensuring that algorithmic recommendations are critically reviewed and do not replace human judgment in critical workplace determinations. By integrating these principles, the G.E.N.D.E.R. AI Framework provides a comprehensive, interdisciplinary model designed to promote gender-equitable AI governance and ethical automation in workforce management. Results: The framework provides a structured, interdisciplinary approach to embedding gender equity into AI decision-making. It highlights key challenges in existing AI fairness models and offers actionable solutions for AI developers, HR professionals, and policymakers. Conclusion: As AI continues to shape workforce dynamics, it is critical to ensure that automation fosters inclusivity rather than reinforcing historical inequalities. The G.E.N.D.E.R. AI Framework serves as a foundation for ethical AI governance, promoting gender fairness in workplace automation. Future research should focus on empirical validation, industry-specific adaptations, and the integration of explainable AI techniques to enhance fairness in AI-driven employment decisions.
This article explores the extensive social impacts of generative artificial intelligence as it transforms economic, social, and cultural systems. Drawing on insights from major organizations, it develops an overall perspective on how these technologies are reshaping work processes, labor markets, creative activity, and information flows. The article discusses how generative AI can increase productivity by accelerating content creation, software development, and data analysis, freeing human intellect for more demanding responsibilities. It examines labor market changes, noting the risk of job displacement as well as newly emerging career paths, with particular emphasis on disproportionate effects across populations and geographies. It also addresses pressing ethical dilemmas and governance challenges, ranging from the authenticity of shared information and intellectual property disputes to algorithmic discrimination, data privacy, and market consolidation risks. By tracing how these elements intertwine, the article offers a balanced analysis of both the striking opportunities and the serious threats of generative AI technologies, an analysis urgently needed by policy architects, business leaders, and technology experts navigating this shift.
This study investigates the value tensions between operational efficiency and social responsibility arising from AI adoption in the airport industry and analyzes how different countries adjust these tensions institutionally. Moving beyond prior discussions focused only on technical performance or normative ethics, the study explores how efficiency interacts with fairness, explainability, accountability, data privacy, and security in high-risk public infrastructure. A qualitative comparative analysis was conducted on the EU, US, China, and South Korea, focusing on their AI regulatory philosophies and representative airport cases. The findings reveal four distinct governance models: preemptive regulation (EU), market-driven self-regulation (US), state-controlled unilateralism (China), and balanced experimentation (Korea). These models demonstrate that the effectiveness of AI deployment depends less on technical performance alone and more on the institutional governance capacity and the maturity of operational systems to balance efficiency with social values. The study concludes by underscoring the necessity of clear regulatory standards and practical policy guidelines for the responsible implementation of AI in high-risk public infrastructure.
This article examines a pivotal feature of the contemporary digital economy: the multibillion-dollar payments made by Google to Apple to secure default search placement across Apple’s ecosystem and the mounting pressures created by the rapid diffusion of AI-mediated search. Treating the “default” not as a neutral technical setting but as a sociological institution that structures attention, value flows, and competitive outcomes, the paper mobilizes three analytical lenses—Bourdieu’s forms of capital, world-systems theory, and institutional isomorphism—to explain (1) why such payments persist, (2) why Apple has not simply launched (or fully productized) a rival general-purpose search engine, and (3) how generative-AI interfaces destabilize the legacy “pay-for-default” business model. The argument is threefold. First, default status functions as a conversion mechanism among economic, symbolic, and social capital, reproducing platform dominance through habituated user practices and entrenched field relations. Second, the Google–Apple arrangement exemplifies a core–periphery dynamic in digital capitalism: a small number of “core” firms capture outsized rents from control of device ecosystems, data, and ad distribution while peripheral actors confront structural barriers to entry. Third, organizational convergence—explained by institutional isomorphism—helps clarify Apple’s rational non-entry into general search at scale: pursuing search would entail costly capability building, regulatory exposure, and brand repositioning that undercuts its device-centric identity, while the default model already transforms installed-base power into services revenue. Finally, the analysis shows how the rise of answer-centric AI (on-device and cloud-assisted) represents an inflection point: if users increasingly bypass link lists in favor of synthesized responses, the marginal value of “default search” falls. Device makers may thus pivot from exclusive default deals toward plural AI partnerships, threatening search-ad business models premised on traffic intermediation. Policy, competition strategy, and academic research must, therefore, move beyond browser defaults to interrogate AI intermediaries, data access, and interface governance in the next regime of information discovery.
Our research investigates the impact of Generative Artificial Intelligence (GAI) models, specifically text-to-image generators (T2Is), on the representation of non-Western cultures, with a focus on Indian contexts. Despite the transformative potential of T2Is in content creation, concerns have arisen regarding biases that may lead to misrepresentations and marginalizations. Through a Non-Western community-centered approach and grounded theory analysis of 5 focus groups from diverse Indian subcultures, we explore how T2I outputs to English input prompts depict Indian culture and its subcultures, uncovering novel representational harms such as exoticism and cultural misappropriation. These findings highlight the urgent need for inclusive and culturally sensitive T2I systems. We propose design guidelines informed by a sociotechnical perspective, contributing to the development of more equitable and representative GAI technologies globally. Our work underscores the necessity of adopting a community-centered approach to comprehend the sociotechnical dynamics of these models, complementing existing work in this space while identifying and addressing the potential negative repercussions and harms that may arise as these models are deployed on a global scale.
Generative AI has wide-ranging impacts on how we access and use information, particularly as educational settings and perspectives differ greatly across locations. These impacts extend to society at large, including effects on intellectual and creative works and potential infringements of authorship. Differences in institutional GenAI policies (and in funding) may create unequal access to AI tools and disparities in students' knowledge of those tools: their responsible use, the ethical questions they raise, and their benefits and limitations. Generative AI also introduces questions concerning academic integrity, bias, and data provenance, since the source, reliability, veracity, and trustworthiness of training data may be in doubt, raising broader societal concerns about model outputs. This working group will conduct a landscape analysis of Global South ethical questions related to the use of generative AI tools in higher education, identifying promising principles, challenges, and ways to navigate the implementation of generative AI in ethical and principled ways.
Since the emergence of generative AI, creative workers have spoken up about the career-based harms they have experienced arising from this new technology. A common theme in these accounts of harm is that generative AI models are trained on workers’ creative output without their consent and without giving credit or compensation to the original creators. This paper reports findings from 20 interviews with creative workers in three domains: visual art and design, writing, and programming. We investigate the gaps between current AI governance strategies, what creative workers want out of generative AI governance, and the nuanced role of creative workers’ consent, compensation and credit for training AI models on their work. Finally, we make recommendations for how generative AI can be governed and how operators of generative AI systems might more ethically train models on creative output in the future.
Generative models are nowadays widely used to generate graphical content used for multiple purposes, e.g. web, art, advertisement. However, it has been shown that the images generated by these models could reinforce societal biases already existing in specific contexts. In this paper, we focus on understanding if this is the case when one generates images related to various software engineering tasks. In fact, the Software Engineering (SE) community is not immune from gender and ethnicity disparities, which could be amplified by the use of these models. Hence, if used without consciousness, artificially generated images could reinforce these biases in the SE domain. Specifically, we perform an extensive empirical evaluation of the gender and ethnicity bias exposed by three versions of the Stable Diffusion (SD) model (a very popular open-source text-to-image model) - SD 2, SD XL, and SD 3 - towards SE tasks. We obtain 6,720 images by feeding each model with two sets of prompts describing different software-related tasks: one set includes the Software Engineer keyword, and one set does not include any specification of the person performing the task. Next, we evaluate the gender and ethnicity disparities in the generated images. Results show how all models are significantly biased towards male figures when representing software engineers. On the contrary, while SD 2 and SD XL are strongly biased towards White figures, SD 3 is slightly more biased towards Asian figures. Nevertheless, all models significantly under-represent Black and Arab figures, regardless of the prompt style used. The results of our analysis highlight severe concerns about adopting those models to generate content for SE tasks and open the field for future research on bias mitigation in this context.
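To make the evaluation step concrete, here is a minimal sketch of the disparity check one could run once each generated image has been labelled with a perceived gender (by annotators or a classifier). The counts are invented; the paper's own protocol covers 6,720 images, two prompt styles, and ethnicity as well as gender.

```python
# Sketch of the disparity check applied to generated images, assuming each
# image has already been labelled with a perceived gender by an external
# classifier or annotator (counts below are invented for illustration).
from collections import Counter
from scipy.stats import chisquare

labels = ["man"] * 52 + ["woman"] * 8   # hypothetical labels for 60 images
observed = Counter(labels)

counts = [observed["man"], observed["woman"]]
expected = [sum(counts) / 2] * 2         # parity baseline: 50/50
stat, p = chisquare(counts, f_exp=expected)

print(f"male share: {counts[0] / sum(counts):.0%}")
print(f"chi-square vs parity: stat={stat:.1f}, p={p:.2e}")
```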
Work is fundamental to societal prosperity and mental health, providing financial security, a sense of identity and purpose, and social integration. Job insecurity, underemployment and unemployment are well-documented risk factors for mental health issues and suicide. The emergence of generative artificial intelligence (AI) has catalysed debate on job displacement and its corollary impacts on individual and social wellbeing. Some argue that many new jobs and industries will emerge to offset the displacement, while others foresee a widespread decoupling of economic productivity from human input threatening jobs on an unprecedented scale. This study explores the conditions under which both may be true and examines the potential for a self-reinforcing cycle of recessionary pressures that would necessitate sustained government intervention to maintain job security and economic stability. A system dynamics model was developed to undertake ex ante analysis of the effect of AI-capital deepening on labour underutilisation and demand in the economy using Australian data as a case study. Results indicate that even a moderate increase in the AI-capital-to-labour ratio could increase labour underutilisation to double its current level, decrease per capita disposable income by 26% (95% interval, 20.6–31.8%), and decrease the consumption index by 21% (95% interval, 13.6–28.3%) by mid-2050. To prevent a reduction in per capita disposable income due to the estimated increase in underutilisation, at least a 10.8-fold increase in the new job creation rate would be necessary. Results demonstrate the feasibility of an AI-capital-to-labour ratio threshold beyond which even high rates of new job creation cannot prevent declines in consumption. The precise threshold will vary across economies, emphasising the urgent need for empirical research tailored to specific contexts. This study underscores the need for cross-sectoral government measures to ensure a smooth transition to an AI-dominated economy to safeguard the Mental Wealth of nations.
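As a toy illustration of the system-dynamics approach (not the paper's calibrated Australian model), the sketch below iterates a pair of difference equations in which AI-capital deepening pushes up labour underutilisation, which in turn depresses consumption. All parameters are invented.

```python
# Toy stock-and-flow sketch of the feedback the study models: AI-capital
# deepening raises labour underutilisation, which lowers disposable income
# and consumption, feeding back into the economy. All parameters are
# invented for illustration and are not the paper's calibrated values.
import numpy as np

years = np.arange(2025, 2051)
ai_capital_ratio = 0.10 * 1.04 ** (years - 2025)   # assumed moderate deepening
underutil = np.empty(len(years))
consumption = np.empty(len(years))

u, c = 0.10, 1.00                                  # initial stocks
for i, k in enumerate(ai_capital_ratio):
    u += 0.5 * k * u * (1 - u) - 0.02 * u          # displacement vs re-absorption
    c *= 1 - 0.8 * (u - 0.10)                      # income channel into consumption
    underutil[i], consumption[i] = u, c

print(f"underutilisation 2050: {underutil[-1]:.0%} (started at 10%)")
print(f"consumption index 2050: {consumption[-1]:.2f} (started at 1.00)")
```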
Amid the pervasive integration of AI technologies across societal and industrial domains, understanding users’ trust in these systems becomes increasingly crucial. This study addresses the growing need to understand users’ trust in Generative Artificial Intelligence (GenAI) and explores the societal implications of this type of trust. Based on socio-technical systems theory, this work employs the FAT (Fairness, Accountability, Transparency) framework and the humanness factors of AI, namely anthropomorphism, social presence, and emotions, as antecedents of users’ human-like trust, which is proposed to influence users’ attitudes, perceived performance, and behavioral intentions. Structural equation modeling analysis (N = 244) reveals that fairness significantly enhances trust, while accountability and transparency do not. Social presence and emotions positively impact trust, whereas anthropomorphism shows no significant effect. Furthermore, trust shapes users’ attitudes, perceived performance, and behavioral intentions toward GenAI systems. This study contributes to the AI adoption and user trust literature by illuminating the main antecedents of human-like trust and showing its impact on user acceptance from a socio-technical perspective. Beyond the academic contribution, this research highlights the broader societal relevance of user trust in GenAI, particularly regarding public concerns over black-box issues and the humanness features of GenAI systems.
This paper discusses some societal implications of the most recent and publicly discussed application of advanced machine learning techniques: generative AI models, such as ChatGPT (text generation) and DALL-E (text-to-image generation). The aim is to shift attention away from conceptual disputes, e.g. regarding their level of intelligence and similarities/differences with human performance, to focus instead on practical problems pertaining to the impact that these technologies might have (and already have) on human societies. After a preliminary clarification of how generative AI works (Sect. 1), the paper discusses what kind of transparency ought to be required for such technologies and for the business model behind their commercial exploitation (Sect. 2), the role of user-generated data in determining their performance and how it should inform the redistribution of the resulting benefits (Sect. 3), the best way of integrating generative AI systems in the creative job market and how to properly negotiate their role in it (Sect. 4), and what kind of “cognitive extension” offered by these technologies we ought to embrace, and what type we should instead resist and monitor (Sect. 5). The last part of the paper summarizes the main conclusions of this analysis, also marking its distance from other, more apocalyptic approaches to the dangers of AI for human society.
In an era where generative AI (GenAI) is reshaping industries, public understanding of this phenomenon remains limited. This study addresses this gap by analyzing public beliefs about GenAI using the Technology Acceptance Model and Diffusion of Innovations Theory as frameworks. We adopted a big-data approach, utilizing machine-learning techniques to analyze 21,817 public comments extracted from an initial set of 32,707 on 44 YouTube videos discussing GenAI. Our investigation surfaced six pivotal themes: concerns over job and economic impacts, GenAI's potential to revolutionize problem-solving, its perceived shortcomings in creativity and emotional intelligence, the proliferation of misinformation, existential risks, and privacy decay. Emotion analysis showed that negative emotions dominated at 58.46%, including anger (22.85%) and disgust (17.26%). Sentiment analysis echoed this negativity, with 70% negative. The triangulation of thematic, emotional, and sentiment analyses highlighted a polarized public stance: recognition of GenAI's transformative potential is tempered by significant concerns about its implications. The findings offer actionable insights for engineering managers and policymakers. Strategies such as awareness-building, transparency, public engagement, balanced communication, governance, and human-centered development can address polarization and build trust. Ongoing research into public opinion remains essential for aligning technological advancements with societal expectations and acceptance.
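For readers unfamiliar with this kind of pipeline, here is a minimal sentiment pass over comments using NLTK's VADER analyzer. The study used its own machine-learning emotion and sentiment models, so this stands in for, rather than reproduces, the authors' method.

```python
# Sketch of one simple sentiment pass over comments; the study used
# machine-learning emotion and sentiment models, so VADER here is a
# stand-in, not the authors' pipeline.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

comments = [
    "GenAI will take our jobs and nobody seems to care.",
    "Incredible tool, it solved in seconds what took me days.",
]
for c in comments:
    scores = sia.polarity_scores(c)        # keys: neg, neu, pos, compound
    label = "negative" if scores["compound"] < 0 else "positive"
    print(f"{label:8s} {scores['compound']:+.2f}  {c}")
```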
As generative AI (GenAI) becomes increasingly embedded in everyday information environments, understanding how citizens engage with this technology is critical for science communication. This study examines public engagement with GenAI in Denmark, focusing on trust, AI literacy, experience with GenAI tools, and exposure to science-related information. Denmark provides a relevant case due to its high levels of institutional and scientific trust. Using data from a nationally representative survey conducted in 2024 (n = 514) as part of the cross-national ScI-AI project, we analyze how respondents encounter GenAI, assess its trustworthiness, understand its technical and epistemic features, and engage with science-related information across platforms. Descriptive results show moderate trust in GenAI, uneven AI and GenAI literacy, and concentrated experience centered primarily on ChatGPT, alongside pronounced concerns about misinformation and societal risks. To examine how these dimensions relate, we apply a probabilistic graphical model to 29 variables spanning trust, literacy, experience, science-related information exposure, and demographics. The analysis reveals that trust occupies a central position, mediating between technical understanding of GenAI’s functioning and epistemic beliefs about the reliability and truthfulness of its outputs. Science-related information exposure is largely disconnected from trust and GenAI literacy and links to general AI literacy primarily through gender. Overall, the findings highlight the importance of treating trust and literacy as multidimensional and context-sensitive constructs for understanding how GenAI reshapes science-related information encounters.
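One common way to estimate such a probabilistic graphical model is a Gaussian graphical model fitted with the graphical lasso; the sketch below shows the idea on simulated data standing in for the 29 standardized survey variables. This names a general technique, not the study's exact estimator.

```python
# Sketch of one way to estimate the kind of probabilistic graphical model the
# study applies: a Gaussian graphical model fitted with the graphical lasso.
# X stands in for the 29 standardized survey variables (simulated here).
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(42)
X = rng.normal(size=(514, 29))             # n respondents x 29 variables

model = GraphicalLassoCV().fit(X)
precision = model.precision_               # sparse inverse covariance

# Non-zero off-diagonal entries correspond to edges (conditional dependencies).
edges = np.argwhere(np.triu(np.abs(precision) > 1e-4, k=1))
print(f"estimated edges: {len(edges)} of {29 * 28 // 2} possible")
```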
Deploying Large Language Models (LLMs) in specialized domains introduces significant societal and compliance risks, including bias amplification, misinformation propagation, and privacy violations. These risks predominantly emerge from the dynamic interactions between LLMs and humans in specific contexts. Different domains face unique distributions of hazards, and varying interaction modalities introduce distinct levels of exposure and vulnerability. However, current risk assessment frameworks lack a systematic methodology to capture this dynamic interplay. In this work, we introduce the HEV Generative Sandbox, a novel risk evaluation framework that simulates human-LLM behavior to quantify domain-contextual risks across three interdependent dimensions: 1) Hazard (H): domain-specific threats inherent to a given context; 2) Exposure (E): the extent to which the LLM and its users are subjected to hazardous scenarios; 3) Vulnerability (V): the susceptibility of the system to risk due to human interaction or model weaknesses. Our approach pioneers "domain-rooted scenario generation", wherein we sample contextual distributions from domain-specific corpora and simulate diverse inputs. By unifying dynamic scenario simulation, causal risk decomposition, and closed-loop evaluation, the HEV Generative Sandbox provides a scalable, domain-sensitive methodology for responsible LLM deployment. This work contributes to advancing the safe deployment of LLMs by providing a comprehensive and automated risk evaluation framework.
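A minimal reading of the H × E × V decomposition, with placeholder scoring: each simulated scenario carries hazard, exposure, and vulnerability scores, and domain risk aggregates their products. The scenario values and the worst-case aggregation rule are illustrative assumptions, not the framework's actual estimators.

```python
# One plausible reading of the H x E x V decomposition: each simulated
# scenario carries hazard, exposure, and vulnerability scores in [0, 1],
# and domain risk aggregates their products. The scoring values here are
# placeholders, not the framework's actual estimators.
from dataclasses import dataclass

@dataclass
class Scenario:
    hazard: float        # H: severity of the domain-specific threat
    exposure: float      # E: how often users meet this scenario
    vulnerability: float # V: susceptibility of model plus users

    def risk(self) -> float:
        return self.hazard * self.exposure * self.vulnerability

scenarios = [                      # hypothetical medical-domain scenarios
    Scenario(hazard=0.9, exposure=0.2, vulnerability=0.5),
    Scenario(hazard=0.4, exposure=0.7, vulnerability=0.3),
]
domain_risk = max(s.risk() for s in scenarios)   # worst-case aggregation
print(f"domain risk (worst case): {domain_risk:.2f}")
```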
From Gaze to Data: Privacy and Societal Challenges of Using Eye-tracking Data to Inform GenAI Models
Eye-tracking technology is increasingly integrated into smart glasses and wearable devices, becoming more prevalent in daily life. Meanwhile, generative artificial intelligence (GenAI) has the potential to transform user experiences through personalization and adaptive interactions. The integration of these technologies offers a novel opportunity to refine GenAI models by leveraging human gaze data in adaptive interfaces, personalized content generation, and human-computer interaction. However, gaze data is highly sensitive and can reveal several user attributes, such as cognitive states, emotions, and even medical conditions. The use of gaze data to inform GenAI models therefore raises significant privacy concerns. In this paper, we highlight the privacy and societal implications of using eye gaze information in GenAI models and discuss strategies to mitigate potential privacy violations. By addressing these issues, we can strike a balance between technological advancement and privacy protection.
The mainstream launch of generative AI video platforms represents a major change to the socio-technical system of digital media, raising critical questions about public perception and societal impact. While research has explored isolated technical or ethical facets, a holistic understanding of the user experience of AI-generated videos—as an interrelated set of perceptions, emotions, and behaviors—remains underdeveloped. This study addresses this gap by conceptualizing public discourse as a complex system of interconnected themes. We apply a mixed-methods approach that combines quantitative LDA topic modeling with qualitative interpretation to analyze 11,418 YouTube comments reacting to AI-generated videos. The study’s primary contribution is the development of a novel, three-tiered framework that models user experience. This framework organizes 15 empirically derived topics into three interdependent layers: (1) Socio-Technical Systems and Platforms (the enabling infrastructure), (2) AI-Generated Content and Esthetics (the direct user-artifact interaction), and (3) Societal and Ethical Implications (the emergent macro-level consequences). Interpreting this systemic structure through the lens of the ABC model of attitudes, our analysis reveals the distinct Affective (e.g., the “uncanny valley”), Behavioral (e.g., memetic participation), and Cognitive (e.g., epistemic anxiety) dimensions that constitute the major elements of user experience. This empirically grounded model provides a holistic map of public discourse, offering actionable insights for managing the complex interplay between technological innovation and societal adaptation within this evolving digital system.
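To illustrate the quantitative half of the mixed-methods pipeline, the sketch below runs LDA topic modelling with scikit-learn on a stand-in corpus; the study applied the same technique to 11,418 comments and interpreted 15 topics qualitatively.

```python
# Sketch of the quantitative half of the pipeline: LDA topic modelling over
# comments with scikit-learn. The study extracted 15 topics from 11,418
# YouTube comments; the corpus below is a stand-in.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

comments = [
    "this looks almost real but the hands are wrong, so creepy",
    "sora level video generation will change film production",
    "deepfakes like this will destroy trust in anything online",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(comments)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```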
Abstract This commentary builds on Osman’s (2025) analysis of psychological harm in consumer products with internet connectivity (CPIC) by examining how AI and digital platforms are crucial in reshaping the epistemic environment, driven by changing notions of societal trust and perceptions of expertise. Drawing on evidence from studies of trust in digital environments and emerging patterns of online collective action, we argue that the rise of AI-enabled platforms has created a paradigm shift: from vertical, institution-centred models of trust and harm prevention to horizontal, network-based systems of knowledge validation and risk assessment. This transformation demands new frameworks for understanding psychological harm—ones that can account for both direct technological impacts and the broader reconfiguration of how society negotiates truth, expertise, and responsibility in an AI-enabled world. It also sets out the challenge for traditional, vertically structured institutions to more effectively engage with the public.
Generative AI models have transformed content creation by enabling high-quality image and data synthesis. However, these models often inherit and amplify societal biases present in their training datasets, leading to concerns about fairness in their applications. This paper addresses this issue by proposing a comprehensive framework for identifying, measuring and reducing bias in generative AI systems. The framework uses data reweighting, fairness-constrained training, and post-processing to enhance fairness across demographics like age, gender, and ethnicity. Using the FFHQ dataset, the framework demonstrates substantial improvements in fairness metrics, with demographic parity improving from 0.42 to 0.15 and equalized odds from 0.35 to 0.12, while maintaining competitive image quality, as indicated by a slight increase in FID from 7.21 to 8.12. Qualitative analysis confirms the generated outputs closely align with ideal demographics, highlighting the framework's effectiveness in addressing fairness without sacrificing performance, thus offering a practical solution for socially responsible AI development.
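The two reported metrics can be read as between-group gaps, where smaller is fairer (matching the paper's improvements: demographic parity 0.42 to 0.15, equalized odds 0.35 to 0.12). A minimal sketch on synthetic binary predictions:

```python
# Sketch of the two reported fairness metrics as gaps between groups;
# smaller is fairer. Data is synthetic and deliberately biased.
import numpy as np

def demographic_parity_diff(y_pred, group):
    """|P(yhat=1 | a=0) - P(yhat=1 | a=1)|"""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_diff(y_true, y_pred, group):
    """Max gap in TPR and FPR between the two groups."""
    gaps = []
    for y in (1, 0):  # TPR gap, then FPR gap
        r0 = y_pred[(group == 0) & (y_true == y)].mean()
        r1 = y_pred[(group == 1) & (y_true == y)].mean()
        gaps.append(abs(r0 - r1))
    return max(gaps)

rng = np.random.default_rng(1)
group = rng.integers(0, 2, 1000)
y_true = rng.integers(0, 2, 1000)
y_pred = (rng.random(1000) < 0.5 + 0.1 * group).astype(int)  # biased toward group 1

print(f"demographic parity gap: {demographic_parity_diff(y_pred, group):.3f}")
print(f"equalized odds gap:     {equalized_odds_diff(y_true, y_pred, group):.3f}")
```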
As a non-human agent that is capable of bringing about and influencing social outcomes, AI challenges the hegemonic anthropocentric foundations of social science. The latter’s human-centered causality models are inept when it comes to conceptualizing and explaining empirical interactions between AI and human agency. Contemporary societies are already experiencing heterogeneous and intricate ways of human and non-human agents interacting throughout digital and social spheres. This study identifies several pathways through which AI-driven processes are impacting social outcomes by placing focus on generative AI and deepfakes, which exhibit complex and often hidden impacts on human behavior. As the distinction between the phenomenal and the noumenal continues to blur, new theoretical and epistemological approaches will need to be developed, and integrating the phenomena–noumena distinction into discussions on technological development and posthumanism can enhance our understanding of societal change.
When working with generative artificial intelligence (AI), users may see productivity gains, but content generated with the help of AI may not match their preferences exactly. The boost in productivity may come at the expense of users' idiosyncrasies, such as personal style and tastes, preferences we would naturally express without AI. To let users express their preferences, many AI systems let users edit their prompt (e.g., Midjourney) or allow more natural interactions (e.g., ChatGPT), and users can always review and edit the AI-generated output themselves. However, aligning a user's intentions with an AI's output can take time and may not always be worth it if the AI's first or default output "does the job." In short, users face a trade-off between AI output fidelity and communication cost. The purpose of this work is to examine the impact of this human-AI interaction on the AI-generated content we produce as a society. We propose a Bayesian model to study the societal consequences of human-AI interactions. For a given task, rational users can exchange information with the AI to align its output with their heterogeneous preferences. The AI has a knowledge of the distribution of preferences in the population and uses a Bayesian update to create the optimal output with maximal expected fidelity given the information shared by the user. Users choose the amount of information they share to maximize their utility, balancing the cost of communication with the fidelity of the output. We show that the interplay between individual users and AI may lead to societal challenges. Outputs may become more homogenized. The AI-generated output distribution has a lower variance than the users' preference distribution. And this phenomenon is exacerbated when AI-generated content is used to train the next generation of AI: we show numerically that the users' rational decisions and the AI's training process can mutually reinforce each other, leading to a homogenization "death spiral." We also study the effects of AI bias, identifying who benefits or loses when using an AI model that does not accurately reflect the population preference distribution. At the population level, the censoring type of bias (e.g., biasing against the more unique preferences) negatively impacts the population utility as a whole, especially users with uncommon preferences who rely on AI interactivity the most. On the other hand, directional biases (e.g., a slightly left-leaning AI) will influence the users' chosen output, leading to a societal bias. Nonetheless, our research also demonstrates that creating models that facilitate human-AI interactions can limit these risks and preserve the population preference diversity. A full version of this paper can be found at https://arxiv.org/abs/2309.10448.
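The homogenization claim can be verified numerically in a few lines: if preferences are Gaussian and the AI returns the posterior mean given a noisy signal, the outputs provably have lower variance than the preferences. The parameters below are invented; the full model (communication costs, retraining feedback, bias) is in the linked paper.

```python
# Numerical sketch of the homogenization result: users hold preferences
# theta ~ N(0, sigma^2), share a noisy signal at some communication cost,
# and the AI returns the posterior mean. Posterior means have lower
# variance than the preferences themselves. Parameters are invented.
import numpy as np

rng = np.random.default_rng(7)
n, sigma2, noise2 = 100_000, 1.0, 0.5   # preference and signal-noise variances

theta = rng.normal(0.0, np.sqrt(sigma2), n)           # true user preferences
signal = theta + rng.normal(0.0, np.sqrt(noise2), n)  # what users communicate

# Bayesian update with prior N(0, sigma2): the posterior mean shrinks the signal.
weight = sigma2 / (sigma2 + noise2)
output = weight * signal            # AI's expected-fidelity-optimal output

print(f"Var(preferences): {theta.var():.3f}")
print(f"Var(AI outputs):  {output.var():.3f}")   # < Var(preferences): homogenization
```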
Teaching Parrots to See Red: Self-Audits of Generative Language Models Overlook Sociotechnical Harms
The release of ChatGPT as a “low-key research preview” and its viral growth spurred a gold rush among tech companies marketing generative AI (GenAI) as a universal tool. In 2023, the U.S. secured voluntary commitments from top AI developers, including OpenAI, Google, Meta, and Anthropic, to conduct self-audits ensuring model safety before release. However, these models exhibit widespread biases, including by race and gender, unjustly discriminating against users. To inspect this contradiction, we review ten corporate self-audits, finding a notable absence of real-world use cases in sectors like education, creative works, and public policy. Instead, audits focus on thwarting adversarial consumers in hypothetical scenarios and rely on GenAI models to approximate human impacts. This approach places consumers at risk by impairing the mitigation of representational, allocational, and quality-of-service harms. We conclude with recommendations to address audit gaps and protect GenAI consumers.
Intimate partner violence (IPV) is defined as “abuse or aggression that occurs in a romantic relationship.” IPV survivors face barriers when help-seeking, such as epistemic injustice – secondary victimization from dismissal and indifference when disclosing, misdirection, or inappropriate interventions. Survivors may leverage generative AI to make sensitive disclosures and access hermeneutic resources. However, these tools mediate outcomes for IPV survivors through novel manifestations of epistemic injustice. Using mixed methods, we investigated hermeneutic resource provision by large language models (LLMs). We evaluated LLM responses to IPV disclosures on three axes: hermeneutic resource provision, readability, and risk. Prompts were derived from a content analysis of IPV and generative AI discussions in 5 abuse subreddits. We contribute a taxonomy of 7 uses of generative AI in the experience of IPV, empirical illustration of epistemic inequity, and considerations for evaluating epistemic harm in generative AI. Content Warning: This study contains descriptions of abuse and violence.
Generative Artificial Intelligence (AI), as a transformative technology, holds significant promise for applications in healthcare. At the same time, the datafication, AI integration, and commodification of health have opened the floodgates for ethical issues, including those related to fairness, access, beneficence, democracy, solidarity, inclusion, and societal harms. As further digitalization, innovation, and disruption of healthcare are inevitable, the paper maps out how power, equity, access, identity, participation, and knowledge contribute to creating social injustice. It also argues that current justice approaches—distributive justice, representational justice, restorative justice, and capabilities-centered justice—lack the impact needed to prevent or remedy the many harms and injustices that AI has already created in healthcare and will continue to create. The paper proposes that a transformative justice approach is needed for generative AI as a transformative technology, focused on (1) peace, emancipation, and eliminating the root causes of injustice, (2) holistic conflict resolution, (3) human rights-based approaches, and (4) the empowerment of agency and actors.
Generative AI (GAI) technologies have demonstrated human‐level performance on a vast spectrum of tasks. However, recent studies have also delved into the potential threats and vulnerabilities posed by GAI, particularly as they become increasingly prevalent in sensitive domains such as elections and education. Their use in politics raises concerns about manipulation and misinformation. Further exploration is imperative to comprehend the social risks associated with GAI across diverse societal contexts. In this panel, we aim to dissect the impact and risks posed by GAI on our social fabric, examining both technological and societal perspectives. Additionally, we will present our latest investigations, including the manipulation of ideologies using large language models (LLMs), the potential risk of AI self‐consciousness, the application of Explainable AI (XAI) to identify patterns of misinformation and mitigate their dissemination, as well as the influence of GAI on the quality of public discourse. These insights will serve as catalysts for stimulating discussions among the audience on this crucial subject matter, and contribute to fostering a deeper understanding of the importance of responsible development and deployment of GAI technologies.
Generalizing Fairness to Generative Language Models via Reformulation of Non-discrimination Criteria
Generative AI, such as large language models, has undergone rapid development within recent years. As these models become increasingly available to the public, concerns arise about the perpetuation and amplification of harmful biases in applications. Gender stereotypes can be harmful and limiting for the individuals they target, whether they consist of misrepresentation or discrimination. Recognizing gender bias as a pervasive societal construct, this paper studies how to uncover and quantify the presence of gender biases in generative language models. In particular, we derive generative AI analogues of three well-known non-discrimination criteria from classification, namely independence, separation and sufficiency. To demonstrate these criteria in action, we design prompts for each criterion with a focus on occupational gender stereotypes, specifically utilizing a medical test scenario to introduce ground truth in the generative AI context. Our results address the presence of occupational gender bias within such conversational language models.
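As a concrete reading of the independence criterion in the generative setting, the sketch below compares outcome rates across gendered prompt variants; independence holds when the rates match. The tallies are invented, and the threshold is an illustrative choice rather than the paper's procedure.

```python
# Sketch of the independence criterion carried over to a generative model:
# independence asks that the model's predicted outcome be statistically
# independent of the protected attribute. Counts below are invented; in the
# paper's setup they would come from prompting the model with occupational
# scenarios that vary only the gendered subject.
counts = {                       # hypothetical tallies over model completions
    "female": {"positive_test": 68, "negative_test": 32},
    "male":   {"positive_test": 41, "negative_test": 59},
}

def outcome_rate(c: dict[str, int]) -> float:
    return c["positive_test"] / (c["positive_test"] + c["negative_test"])

gap = abs(outcome_rate(counts["female"]) - outcome_rate(counts["male"]))
print(f"|P(outcome | female) - P(outcome | male)| = {gap:.2f}")
print("independence satisfied" if gap < 0.05 else "independence violated")
```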
Generative artificial intelligence has advanced to a point where machine-generated content increasingly approximates human-created text, images, audio, and video. This paper argues that the absence of reliable mechanisms for identifying AI-generated content constitutes a systemic risk to societal trust, academic integrity, creative economies, and legal accountability. As generative systems improve, impersonation of authoritative figures and large-scale dissemination of synthetic misinformation become more scalable, credible, and difficult to mitigate. This work critically examines prevailing AI content detection strategies, including classifier-based detection and watermarking mechanisms, and demonstrates their structural insufficiencies. Detection models trained on synthetic data are inherently reactive and struggle to generalize as generative systems converge toward human-level expressiveness. Watermarking, while valuable, remains inconsistently adopted and vulnerable to circumvention in the absence of enforceable regulatory standards. The paper advances a coordinated framework integrating policy enforcement, technical safeguards, and sustained interdisciplinary research into independent content authentication mechanisms. Without proactive intervention, the distinction between authentic human expression and synthetic media risks becoming increasingly untenable, with far-reaching implications for information integrity and social trust.
This synthesis report organizes AI controversies in digital society into five key areas: first, algorithmic bias and social fairness, focusing on discrimination in AI-assisted decision-making; second, generative AI's reshaping of creative copyright and labor markets; third, maintaining information integrity and human-machine trust in social media environments; fourth, risk management grounded in transparency, explainability, and governance strategies; and fifth, achieving human-centered socio-technical integration and inclusive well-being. Together, these studies reveal that AI is not merely a technical tool but a force deeply embedded in the construction of institutions, ethics, and social identity, underscoring the necessity of interdisciplinary governance for building a sustainable digital society.