AI Ethics and Governance Practice from a Platform Perspective
Enterprise AI Governance Frameworks, Maturity Models, and Full-Lifecycle Management
This group of studies addresses the top-level design of AI governance, spanning macro strategy, organizational maturity models (e.g., TOAST and RAI maturity), and their integration into concrete development processes (e.g., MLOps and responsible-AI design). The research emphasizes translating ethical principles into executable internal processes while balancing profitability and social responsibility.
- A Brief Overview of AI Governance for Responsible Machine Learning Systems(Navdeep Gill, Abhishek Mathur, Marcos V. Conde, 2022, ArXiv Preprint)
- AI Governance and Ethics Framework for Sustainable AI and Sustainability(Mahendra Samarawickrama, 2022, ArXiv Preprint)
- Towards a Framework for Supporting the Ethical and Regulatory Certification of AI Systems(Fabian Kovac, Sebastian Neumaier, Timea Pahi, Torsten Priebe, Rafael Rodrigues, Dimitrios Christodoulou, Maxime Cordy, Sylvain Kubler, Ali Kordia, Georgios Pitsiladis, John Soldatos, Petros Zervoudakis, 2025, ArXiv Preprint)
- Embedding Responsible AI into MLOps Pipelines: Ensuring Fairness, Explainability, and Governance in KYC and FinTech Decisioning(Joseph Oduro-Gyan, Ifeoma Eleweke, Samuel Ajuwon, Ahmed Bello, Abdul-Lateef Arotayo, 2025, Journal of Scientific Research and Reports)
- RAISE: leveraging responsible AI for service excellence(Linda Alkire, Anil Bilgihan, M. Bui, Alexander John Buoye, Seden Doğan, Seoyoung Kim, 2024, Journal of Service Management)
- GENXAI A RESPONSIBLE AI GOVERNANCE FRAMEWORK FOR GENERATIVE MODELS IN REGULATED INDUSTRIES(Elangovan Sivalingam, 2026, INTERNATIONAL JOURNAL OF COMPUTER SCIENCE AND BUSINESS SYSTEMS)
- Responsible AI Implementation: A Human-centered Framework for Accelerating the Innovation Process(D. Tjondronegoro, E. Yuwono, Brent Richards, D. Green, Siiri Hatakka, 2022, ArXiv)
- Implementation of Ethically Aligned Design with Ethical User stories in SMART terminal Digitalization project: Use case Passenger Flow(Erika Halme, Mamia Agbese, Hanna-Kaisa Alanen, Jani Antikainen, Marianna Jantunen, Arif Ali Khan, Kai-Kristian Kemell, Ville Vakkuri, Pekka Abrahamsson, 2021, ArXiv Preprint)
- AI Ethics in Industry: A Research Framework(Ville Vakkuri, Kai-Kristian Kemell, Pekka Abrahamsson, 2019, ArXiv Preprint)
- Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing(Inioluwa Deborah Raji, Andrew Smart, Rebecca N. White, Margaret Mitchell, Timnit Gebru, Ben Hutchinson, Jamila Smith-Loud, Daniel Theron, Parker Barnes, 2020, ArXiv Preprint)
- Ethical Considerations of AI Implementation in Business Planning: Ensuring Fairness and Transparency(Dr. Atul Bansal, Dasarathy A K Professor, Dr.S. Gangadharan, M. Ismail, N. Kareem, Assi Halaf, 2023, 2023 International Conference on Innovative Computing, Intelligent Communication and Smart Electrical Systems (ICSES))
- Ethical Implications of Generative AI in Business Decision-Making: A Governance Perspective(Aljwhrh Almtrf, 2025, International Journal of Business and Applied Social Science)
- Making Responsible AI the Norm rather than the Exception(Abhishek Gupta, 2021, ArXiv Preprint)
- Responsible AI Implementation for Societal Benefit: A Leadership Congruence Model(Mohammad Tajvarpour, Barry A. Friedman, Ann-Kathrin Harms, Saeed Shekari, 2024, The BRC Academy Journal of Business)
- Responsible AI: The Good, The Bad, The AI(Akbar Anbar Jafari, C. Ozcinar, G. Anbarjafari, 2026, ArXiv)
- Responsible AI Monetization: Ethical and Economic Implications of Algorithmic Business Models(Durga Krishnamoorthy, Raghu Para, 2025, 2025 International Conference on Responsible, Generative and Explainable AI (ResGenXAI))
- 从伦理风险到韧性重构:人工智能治理的四维驱动机制与实践进路(宋雁宾, 2025, 现代管理)
- Leverage zones in Responsible AI: towards a systems thinking conceptualization(E. Nabavi, Chris Browne, 2023, Humanities & Social Sciences Communications)
- AI and Ethics -- Operationalising Responsible AI(Liming Zhu, Xiwei Xu, Qinghua Lu, Guido Governatori, Jon Whittle, 2021, ArXiv Preprint)
- Ethical AI Implementation in Corporate Decision-Making(Dr. Syed Tabrez Hassan, Imam Gardezi, 2025, International Journal of Research in Engineering and Management Sciences)
- Impact of ethical consideration on AI-Driven recruitment and hiring moderated through green HRM may lead to more ethical AI-Implementation(K. L, P. D, 2025, International Journal of Research in Management)
- An Adaptive Responsible AI Governance Framework for Decentralized Organizations(K. Meimandi, Anka Reuel, Gabriela Aranguiz-Dias, Hatim Rahama, A. Ayadi, Xavier Boullier, J'er'emy Verdo, Louis Montanie, Mykel J. Kochenderfer, 2025, ArXiv)
- 负责任创新视角下人工智能的伦理困境及对策(仇裴如, 2024, 哲学进展)
- Ethical AI adoption in family firms: A values-based framework integrating legacy, governance, and responsible innovation(Nidhi Suthar, Dinesh Kumar, 2025, Journal of the International Council for Small Business)
- A five-layer framework for AI governance: integrating regulation, standards, and certification(Avinash Agarwal, Manisha J. Nene, 2025, ArXiv Preprint)
- Towards a Responsible AI Organizational Maturity Model(Amy K. Heger, Samir Passi, Shipi Dhanorkar, Zoe Kahn, Ruotong Wang, Mihaela Vorvoreanu, 2025, Proceedings of the ACM on Human-Computer Interaction)
- TOAST Framework: A Multidimensional Approach to Ethical and Sustainable AI Integration in Organizations(Dian Tjondronegoro, 2025, ArXiv)
- Ethical theories, governance models, and strategic frameworks for responsible AI adoption and organizational success(Mitra Madanchian, Hamed Taherdoost, 2025, Frontiers in Artificial Intelligence)
- Ethical practices of artificial intelligence: a management framework for responsible AI deployment in businesses(Ajay Tripathi, Vinod Kumar, 2025, AI and Ethics)
- A multilevel framework for AI governance(Hyesun Choung, Prabu David, John S. Seberger, 2023, ArXiv Preprint)
- Tailoring Requirements Engineering for Responsible AI(Walid Maalej, Yen Dieu Pham, Larissa Chazette, 2023, ArXiv Preprint)
- Developing a Framework for Trustworthy AI-Supported Knowledge Management in the Governance of Risk and Change(Rebecca Vining, N. McDonald, Lucy McKenna, M. Ward, Brian Doyle, Junli Liang, J. Hernandez, J. Guilfoyle, Arwa Shuhaiber, U. Geary, Mary Fogarty, Rob Brennan, 2022, No journal)
Algorithmic Auditing, Technical Evaluation, and Explainability-Based Accountability
This group focuses on the technical operationalization of governance: how algorithmic audits, computable assessment, traceability standards, and white-box (XAI) models enable technical accountability. It covers specific audit legislation (e.g., New York City's Local Law 144) and quantitative metrics for evaluating bias and fairness; a minimal fairness-metric sketch follows the reference list below.
- Measurement as governance in and for responsible AI(Abigail Z. Jacobs, 2021, ArXiv Preprint)
- Evidence-based explanation to promote fairness in AI systems(Juliana Jansen Ferreira, Mateus de Souza Monteiro, 2020, ArXiv Preprint)
- Auditing Work: Exploring the New York City algorithmic bias audit regime(Lara Groves, Jacob Metcalf, Alayna Kennedy, Briana Vecchione, Andrew Strait, 2024, Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency)
- AUDITOR 4.0: THE HYBRID AUDITOR - INTEGRATING ARTIFICIAL INTELLIGENCE, PROFESSIONAL JUDGMENT AND ETHICAL ACCOUNTABILITY(Alexandra-Gabriela Marina, D. M. Sitea, I. Bogoslov, 2025, Revista Economica)
- Audit Trails for Accountability in Large Language Models(Victor Ojewale, Harini Suresh, Suresh Venkatasubramanian, 2026, ArXiv Preprint)
- Unravelling Responsibility for AI(Zoe Porter, Philippa Ryan, Phillip Morgan, Joanna Al-Qaddoumi, Bernard Twomey, Paul Noordhof, John McDermid, Ibrahim Habli, 2023, ArXiv Preprint)
- White-Box AI Model: Next Frontier of Wireless Communications(Jiayao Yang, Jiayi Zhang, Bokai Xu, Jiakang Zheng, Zhilong Liu, Ziheng Liu, Dusit Niyato, Mérouane Debbah, Zhu Han, Bo Ai, 2025, ArXiv Preprint)
- Algorithmic Auditing and Social Justice: Lessons from the History of Audit Studies(Briana Vecchione, Solon Barocas, Karen Levy, 2021, ArXiv Preprint)
- Who Audits the Auditors? Recommendations from a field scan of the algorithmic auditing ecosystem(Sasha Costanza-Chock, Inioluwa Deborah Raji, Joy Buolamwini, 2022, Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency)
- A Framework for Assurance Audits of Algorithmic Systems(Khoa Lam, Benjamin Lange, Borhane Blili-Hamelin, Jovana Davidović, S. Brown, A. Hasan, 2024, Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency)
- Outlining Traceability: A Principle for Operationalizing Accountability in Computing Systems(Joshua A. Kroll, 2021, ArXiv Preprint)
- The Quest for Reliable Metrics of Responsible AI(Theresia Veronika Rampisela, Maria Maistro, Tuukka Ruotsalo, Christina Lioma, 2025, ArXiv Preprint)
- Actionable Auditing Revisited(Inioluwa Deborah Raji, Joy Buolamwini, 2022, Communications of the ACM)
- Audited Skill-Graph Self-Improvement for Agentic LLMs via Verifiable Rewards, Experience Synthesis, and Continual Memory(Ken Huang, Jerry Huang, 2025, ArXiv Preprint)
- Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems(Jennifer Cobbe, Michelle Seng Ah Lee, Jatinder Singh, 2021, ArXiv Preprint)
- Reproducibility: The New Frontier in AI Governance(Israel Mason-Williams, Gabryel Mason-Williams, 2025, ArXiv Preprint)
- TranspareGov-AI: A Multi-Stakeholder Framework for Auditable Algorithmic Decision-Making in Business Processes(Jasmin Jarsania, Jeshwanth Challagundla, Fnu Harsh, 2025, 2025 IEEE International Conference on Artificial Intelligence for Learning and Optimization (ICoAILO))
- Towards AI Accountability Infrastructure: Gaps and Opportunities in AI Audit Tooling(Victor Ojewale, Ryan Steed, Briana Vecchione, Abeba Birhane, Inioluwa Deborah Raji, 2024, Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems)
- Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising(Michelle S. Lam, Ayush Pandit, Colin H. Kalicki, Rachit Gupta, Poonam Sahoo, Danaë Metaxa, 2023, ArXiv Preprint)
- Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning(A. Feder Cooper, Emanuel Moss, Benjamin Laufer, Helen Nissenbaum, 2022, ArXiv Preprint)
- Auditing Political Exposure Bias: Algorithmic Amplification on Twitter/X During the 2024 U.S. Presidential Election(Jinyi Ye, Luca Luceri, Emilio Ferrara, 2024, ArXiv Preprint)
- Computable Gap Assessment of Artificial Intelligence Governance in Children's Centres: Evidence-Mechanism-Governance-Indicator Modelling of UNICEF's Guidance on AI and Children 3.0 Based on the Graph-GAP Framework(Wei Meng, 2025, ArXiv Preprint)
- Auditing for Bias in Ad Delivery Using Inferred Demographic Attributes(Basileal Imana, Aleksandra Korolova, John Heidemann, 2024, ArXiv Preprint)
- Algorithmic audits of algorithms, and the law(Erwan Le Merrer, Ronan Pons, Gilles Trédan, 2022, ArXiv Preprint)
- AI auditing: The Broken Bus on the Road to AI Accountability(Abeba Birhane, Ryan Steed, Victor Ojewale, Briana Vecchione, Inioluwa Deborah Raji, 2024, ArXiv Preprint)
- An Audit Logic for Accountability(J. G. Cederquist, R. Corin, M. A. C. Dekker, S. Etalle, J. I. den Hartog, 2005, ArXiv Preprint)
- Ethical AI Governance: Methods for Evaluating Trustworthy AI(Louise McCormack, Malika Bendechache, 2024, ArXiv Preprint)
- Designing Accountable Systems(Severin Kacianka, Alexander Pretschner, 2021, ArXiv Preprint)
- Meaningful human control: actionable properties for AI system development(Luciano Cavalcante Siebert, Maria Luce Lupetti, Evgeni Aizenberg, Niek Beckers, Arkady Zgonnikov, Herman Veluwenkamp, David Abbink, Elisa Giaccardi, Geert-Jan Houben, Catholijn M. Jonker, Jeroen van den Hoven, Deborah Forster, Reginald L. Lagendijk, 2021, ArXiv Preprint)
- Enhanced well-being assessment as basis for the practical implementation of ethical and rights-based normative principles for AI(Marek Havrda, B. Rakova, 2020, 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC))
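Several of the audits above, including the NYC Local Law 144 work, rest on selection-rate comparisons across demographic groups. The sketch below is a generic illustration of two such metrics, demographic-parity difference and impact ratio, not the method of any single paper; the synthetic data, group labels, and the 0.8 "four-fifths" threshold are all assumptions made for demonstration.

```python
import numpy as np

# Toy audit data: binary hiring decisions for two demographic groups.
# Group labels and outcome rates are fabricated for illustration only.
group = np.array(["A"] * 100 + ["B"] * 100)
selected = np.concatenate([np.random.default_rng(0).binomial(1, 0.30, 100),
                           np.random.default_rng(1).binomial(1, 0.18, 100)])

rates = {g: selected[group == g].mean() for g in ("A", "B")}
parity_diff = rates["A"] - rates["B"]      # demographic parity difference
impact_ratio = rates["B"] / rates["A"]     # disadvantaged vs. favored rate

print(f"selection rates: {rates}")
print(f"demographic parity difference: {parity_diff:.3f}")
# The common "four-fifths rule" flags ratios below 0.8 as potential adverse impact.
print(f"impact ratio: {impact_ratio:.3f} -> {'flag' if impact_ratio < 0.8 else 'ok'}")
```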
Risk Governance, Transparency, and Compliance in FinTech
This group examines AI applications in credit assessment, anti-fraud, internal audit, and high-frequency trading. The core challenges are mitigating credit bias, managing third-party model risk, and ensuring algorithmic transparency under strict financial regulation.
- Mitigating AI bias in financial decision-making: A DEI perspective(Omogbeme Angela, Oyindamola Modupe Odewuyi, 2024, World Journal of Advanced Research and Reviews)
- The Risk-Adjusted Intelligence Dividend: A Quantitative Framework for Measuring AI Return on Investment Integrating ISO 42001 and Regulatory Exposure(Hernan Huwyler, 2025, ArXiv Preprint)
- Ethical leadership and AI-driven financial management in Bangladesh: toward sustainable governance in the FinTech sector(Md. Abu Hasnat, Khandakar Kamrul Hasan, H. Khandakar, 2026, Leadership & Organization Development Journal)
- AI算法黑箱与直播电商供应链金融风险传导机制研究(张曦鹏, 2025, 电子商务评论)
- Quality Assurance Frameworks for AI Algorithms in High-Stakes Financial Risk Assessment(Arun Kuna, 2025, World Journal of Advanced Engineering Technology and Sciences)
- AI-Driven Governance Systems for Proactive Regulatory Compliance and Fraud Risk Management in Financial Service Environments(Iboro Akpan Essien, Joshua Oluwagbenga Ajayi, Eseoghene Daniel Erigha, Ehimah Obuse, Noah Ayanbode, 2025, Engineering and Technology Journal)
- Impact of data analytics on fraud prevention and public trust in the United States(Judith Tekyi Mensa, Owolabi Babatunde Akinsanya, 2026, Finance & Accounting Research Journal)
- Blockchain and AI Integration for Transparent ESG (Environmental, Social, Governance) Reporting in Corporate Finance(Namita Rajput, Kamna Chopra, Sonia Lohia, K. Kumar, 2025, 2025 2nd International Conference on Integration of Computational Intelligent System (ICICIS))
- Ethical implications of artificial intelligence in accounting: A framework for responsible ai adoption in multinational corporations in Jordan(A. Ahmad, 2024, International Journal of Data and Network Science)
- Artificial Intelligence and Internal Audit Effectiveness in Islamic Financial Institutions: A Conceptual Paper(Z. Baharom, 2025, International Journal of Research and Innovation in Social Science)
- Measuring Google Cloud Platform Capabilities Against NDMO Governance-Foundation Tool-Dependent Specifications: Coverage, Gaps, and Adoption Challenges for Saudi Financial Institution(M. Asaad, 2026, Arab Journal for Scientific Publishing)
- Thinking Responsibly About Responsible AI in Risk Management: The Darkside of AI in RM(A. Metwally, S. Ali, Abdelnasser T.I. Mohamed, 2024, 2024 ASU International Conference in Emerging Technologies for Sustainability and Intelligent Systems (ICETSIS))
- Navigating the ethical and governance challenges of ai deployment in AML practices within the financial industry(Vivian Ofure Eghaghe, Olajide Soji Osundare, Chikezie Paul-Mikki Ewim, Ifeanyi Chukwunonso Okeke, 2024, International Journal of Scholarly Research and Reviews)
- Digital Finance Transformation Model: Designing Risk and Control in Artificial Intelligence-Driven Accounting Systems(Temilola Aderonke Onalaja, Priscilla Samuel Nwachukwu, Folake Ajoke Bankole, Tewogbade Lateefat, 2025, Engineering and Technology Journal)
- Algorithmic accountability and ethical AI frameworks for regulatory governance in financial technologies(M. Arshad, Chandan Kumar Tripathi, 2025, International Journal of Science and Research Archive)
- Double discrimination: Algorithmic amplification of gender bias in African fintech credit scoring—a 10-algorithm audit reveals 37% underfunding penalty against women-led SMEs(S. Dzreke, S. Dzreke, 2025, Advanced Research Journal)
- AI-driven revolution in credit underwriting: Technical implementation and impact analysis(Aditya Arora, 2025, Global Journal of Engineering and Technology Advances)
- On the current and emerging challenges of developing fair and ethical AI solutions in financial services(Eren Kurshan, Jiahao Chen, Victor Storchan, Hongda Shen, 2021, Proceedings of the Second ACM International Conference on AI in Finance)
- Responsible AI in Financial Risk Systems the Transparent Equity Framework (TEF): An Auditable Governance Methodology for Equitable Financial Decision-Making(Pankaj Kumar, 2025, Journal of Economics, Finance And Management Studies)
- Reimagining Financial Planning and Analysis: AI-Driven Innovations in Forecasting, Scenario Modeling, and Governance(Ashitosh Chitnis, 2025, Journal of Artificial Intelligence General science (JAIGS) ISSN:3006-4023)
- Third-Party Model Risk: Advanced Due Diligence and Contractual Oversight for Embedded AI/ML Solutions in SaaS Core Banking and Risk-as-a-Service Platforms -- A Model Governance and Regulatory Risk Framework for Financial Institutions.(Puneet Redu, 2026, International Journal of Artificial Intelligence, Data Science, and Machine Learning)
- Ai Paradox in Nepalese Banking: Operational efficiency vs. Ethical and Regulatory Risks(Lal Mani Pokhrel, 2025, KDU Journal of Multidisciplinary Studies)
- The artificial intelligence governance framework for finance: A control-by-design approach to algorithmic decision-making in accounting(Priscilla Samuel Nwachukwu, Onyeka Kelvin Chima, Chinelo Harriet Okolo, 2025, Finance & Accounting Research Journal)
E-Commerce Platforms, Content Recommendation, and Consumer Protection Regulation
This group focuses on recommendation algorithms, dynamic pricing, and cross-border trade on e-commerce platforms and social media. It examines problems such as "information cocoons" (filter bubbles), big-data price discrimination against existing customers, and imbalances of algorithmic power, and proposes coordinated legal and technical governance paths.
- 生成式人工智能与电子商务平台治理(江程华, 2026, 电子商务评论)
- 平台企业算法歧视形成动因及治理对策研究(张睿宇, 2024, 电子商务评论)
- 电商平台推荐算法的法律风险及规制路径研究(林泓妤, 2025, 电子商务评论)
- 论算法推荐机制下的“信息茧房”现象及其法律规制(唐佳佳, 2025, 法学)
- 算法权力的博弈场域:电商平台治理的理论重构与范式演进(翁赟赟, 尤甜甜, 2026, 电子商务评论)
- 算法推荐中的消费者信息弱势地位及其法律救济(刘 瑜, 2025, 电子商务评论)
- 人工智能技术赋能电商的方式与算法歧视问题研究(魏天天, 2025, 电子商务评论)
- When the Umpire is also a Player: Bias in Private Label Product Recommendations on E-commerce Marketplaces(A. Dash, Abhijnan Chakraborty, Saptarshi Ghosh, Animesh Mukherjee, K. Gummadi, 2021, Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency)
- Research on the impact of e-commerce platform’s AI resources on seller opportunism: a cultivational governance mechanism(Guangkuan Deng, Jianyu Zhang, Lijuan He, Ying Xu, 2023, Nankai Business Review International)
- Intelligent Supply Chain Governance: Integrating AI for Sustainable, Resilient, and ESG-Compliant Procurement(Yuliia Zorina, 2025, Journal of Procurement and Supply Chain Management)
- 电商外卖平台算法治理与骑手权益保障(纪 雪, 2025, 电子商务评论)
- 算法何以成“算计”?——“大数据杀熟”的生成机理、多维影响与治理之道(王秋梅, 2026, 电子商务评论)
- 人工智能赋能电商平台用户信息分析与管理研究(陈琳涓, 2025, 电子商务评论)
- “十五五”时期数字平台治理驱动数字中国建设的机理、挑战与实践路径(王乐瑶, 2026, 社会科学前沿)
- 数智时代电子商务的演进路径与技术治理研究(皮吉祥, 2026, 电子商务评论)
- 电商平台人工智能体的责任分配困境与规则重构(王 澍, 2025, 电子商务评论)
- 个人信息保护视域下电商平台算法推荐策略的法律规制研究(原欣雅, 2025, 电子商务评论)
- 算法推送下消费者权益法律保护问题研究(焦禹涵, 张 煖, 李晓晔, 2024, 法学)
- 人工智能在电商个性化推荐中的应用、挑战及治理路径(陈英豪, 周蕾蕾, 2025, 电子商务评论)
- 算法推荐下电商消费者选择权的治理路径研究(曹佳莹, 2025, 电子商务评论)
- 算法推荐对国际贸易流程的影响研究(张 琪, 2025, 电子商务评论)
- 电子商务领域中算法歧视的法律规制探析(李茂星, 2024, 电子商务评论)
- TikTok Search Recommendations: Governance and Research Challenges(Taylor Annabell, Robert Gorwa, Rebecca Scharlach, Jacob van de Kerkhof, Thales Bertaglia, 2025, ArXiv Preprint)
Human Resource Management, Algorithmic Management, and Gig-Economy Ethics
This group studies AI's impact on hiring fairness, talent acquisition, and platform employment. It examines the protection of workers' rights (e.g., delivery riders and female employees) under "algorithmic management", and how participatory design and cooperative models can improve human-machine collaboration.
- Ethical Implications of AI-Driven Recruitment: A Multi-Perspective Study on Bias and Transparency in Digital Hiring Platforms(Fadlul Musrifah, Ika Hasanah, 2025, Journal of Management and Informatics)
- 算法性别歧视的法律规制路径(崔泽姝, 2026, 争议解决)
- Generative AI for Responsible Job Design in SAP SuccessFactors: A Framework for Dynamic, Inclusive, and Data-Driven Role Architecture(Manoj Parasa, Prameela Durga Bhavani Katari, 2025, FMDB Transactions on Sustainable Management Letters)
- Embedding Intelligent Personalization in SAP SuccessFactors LMS Through a Responsible AI Framework(Manoj Parasa, 2025, FMDB Transactions on Sustainable Intelligent Networks)
- Platform employment and algorithmic management – a new level of work and management in the modern labor market(G. Eremicheva, Galina A. Menshikova, 2025, Semiotic studies)
- AI’s Impact on Talent Acquisition Strategies and Employee Engagement Methodologies: Ethical Considerations for Trustworthy AI-HRM Integration(Sharmina Akter, 2025, Journal of Humanities and Social Sciences Studies)
- AI Driven HR Transformation:opportunities,challenges,and Ethical Implications in Talent Management(V. Naik, Bhargavi Pandurangi, Suvarna Nimbagal, 2025, International Journal For Multidisciplinary Research)
- AI-Driven Platform Cooperatives: Redefining the Gig Economy through Decentralized Business Models(Aarav Gaba, 2025, International Journal For Multidisciplinary Research)
- 算法背景下智能招聘歧视的法律规制(蒋 浦, 2023, 法学)
- Ethical and Legal Implications of AI on Business and Employment: Privacy, Bias, and Accountability(K. Reddy, M. Kethan, S. Mahabub Basha, Arti Singh, Praveen Kumar, D. Ashalatha, 2024, 2024 International Conference on Knowledge Engineering and Communication Systems (ICKECS))
Legal Regulation of Generative AI, Agents, and Algorithmic Black Boxes
This group analyzes the new risks posed by LLMs, generative AI, and agentic AI, such as copyright, privacy, and deepfakes. It examines penetrating oversight of algorithmic black boxes, compliance boundaries, and legal-regulation paths under agile-governance models.
- 生成式人工智能嵌入公共服务治理的实际效能与风险防范(吕泽州, 2025, 现代管理)
- With Great Capabilities Come Great Responsibilities: Introducing the Agentic Risk & Capability Framework for Governing Agentic AI Systems(Shaun Khoo, Jessica Foo, Roy Ka-Wei Lee, 2025, ArXiv Preprint)
- Beyond Automation: Rethinking Work, Creativity, and Governance in the Age of Generative AI(Haocheng Lin, 2025, ArXiv Preprint)
- 趋利避害:生成式人工智能嵌入高校思政教育的伦理风险与规制路径(杨婉婷, 2025, 社会科学前沿)
- 生成式AI在个性化学习资源智能化推荐中的伦理风险与治理框架(邹 柳, 2025, 教育进展)
- 生成式人工智能驱动下跨境贸易安全管理的机遇、挑战与治理路径(王雪珂, 2026, 现代管理)
- 生成式AI在内容创作中的规制现状与伦理困境的研究(葛榆洋, 2025, 法学)
- 人工智能时代电子商务平台内容审核的合规边界与法律挑战(丁子涵, 2025, 电子商务评论)
- The Malicious Technical Ecosystem: Exposing Limitations in Technical Governance of AI-Generated Non-Consensual Intimate Images of Adults(Michelle L. Ding, Harini Suresh, 2025, ArXiv Preprint)
- 论算法黑箱的安全伦理风险及其规制路径(邵凡翔, 许胜晴, 2025, 交叉科学快报)
- 人工智能算法伦理风险的适应性治理研究——基于浙江实践与欧美经验的整合框架(徐 丽, 徐 翌, 2025, 人工智能与机器人研究)
- 算法歧视的法律规制与数字公平平台之构建(张 豪, 李亮锦, 田顺帅, 2025, 法学)
- 算法黑箱之规制——基于金融法视角(李阳桂, 2022, 法学)
Macro Political Economy, Digital Government, and Ecosystem Governance
This group takes a macro view of AI governance, covering the distribution of data power, regulatory capture, national security, and the ethics of public-health surveillance. It also examines AI's role and social impact in digital government, megacity governance, and open-source ecosystems.
- AI Ethics Needs Good Data(Angela Daly, S Kate Devitt, Monique Mann, 2021, ArXiv Preprint)
- Competing Visions of Ethical AI: A Case Study of OpenAI(Melissa Wilfley, Mengting Ai, Madelyn Rose Sanfilippo, 2026, ArXiv Preprint)
- An Overview of the Risk-based Model of AI Governance(Veve Fry, 2025, ArXiv Preprint)
- Law and the Emerging Political Economy of Algorithmic Audits(Petros Terzis, Michael Veale, Noëlle Gaumann, 2024, ArXiv Preprint)
- How Do AI Companies "Fine-Tune" Policy? Examining Regulatory Capture in AI Governance(Kevin Wei, Carson Ezell, Nick Gabrieli, Chinmay Deshpande, 2024, ArXiv Preprint)
- Report prepared by the Montreal AI Ethics Institute (MAIEI) on Publication Norms for Responsible AI(Abhishek Gupta, Camylle Lanteigne, Victoria Heath, 2020, ArXiv Preprint)
- AI Surveillance during Pandemics: Ethical Implementation Imperatives(C. Shachar, S. Gerke, E. Adashi, 2020, The Hastings Center Report)
- Australia's Approach to AI Governance in Security and Defence(Susannah Kate Devitt, Damian Copeland, 2021, ArXiv Preprint)
- 人工智能时代算法歧视风险及治理研究(皮道坤, 黄清枫, 2025, 社会科学前沿)
- Integrating enterprise risk management to address AI‐related risks in healthcare: Strategies for effective risk mitigation and implementation(G. Di Palma, R. Scendoni, V. Tambone, Rossana Alloni, F. De Micco, 2025, Journal of Healthcare Risk Management)
- Expose Uncertainty, Instill Distrust, Avoid Explanations: Towards Ethical Guidelines for AI(Claudio S. Pinhanez, 2021, ArXiv Preprint)
- The Global Majority in International AI Governance(Chinasa T. Okolo, Mubarak Raji, 2026, ArXiv Preprint)
- 数字政府建设政企合作模式的追责机制研究(龙成秀, 2023, 法学)
- 数据驱动与人工智能赋能:重庆三级治理中心的精准治理能力构建研究(刘 黎, 2026, 现代管理)
- 平台经济数字治理体系构建及实施路径研究(梅秀兰, 姚梦迪, 2025, 电子商务评论)
- Algorithms as Social-Ecological-Technological Systems: an Environmental Justice Lens on Algorithmic Audits(Bogdana Rakova, Roel Dobbe, 2023, ArXiv Preprint)
- Openness in AI and downstream governance: A global value chain approach(C. Foster, 2025, ArXiv)
- Ethical Implications and Challenges of AI Implementation in Business Operations(Ahmad Nur Ihsan Purwanto, M. Fauzan, Tiara Widya, Nabiel Syarof Azzaky, 2024, TechComp Innovations: Journal of Computer Science and Technology)
- Ethical Implications of Artificial Intelligence in Business Decision-making: A Framework for Responsible AI Adoption(Venkata Ramaiah Turlapati, P. Vichitra, Dr. Khaja Mohinuddeen J., Dr Navjyot Raval, Dr. Khaja Mohinuddeen J., Dr. Biswo Ranjan Mishra, 2024, Journal of Informatics Education and Research)
- The role of procurement frameworks in responsible AI innovation in the National Health Service: a multi-stakeholder perspective(T. Evans, Omer F. Ahmad, Joseph E. Alderman, Georgia Bailey, P. Bannister, N. Barlow, Natalie Davison, Amanda Isaac, A. Kale, Trystan Macdonald, Qasim Malik, S. Shelmerdine, H. D. J. Hogg, A. Denniston, 2025, Frontiers in Health Services)
Organizational Operations, Ethical Leadership, and the Evolution of Sociotechnical Systems
This group addresses the "principles-to-practice" translation problem, analyzing corporate leadership, the particular adaptation needs of small and medium-sized enterprises, internal-audit effectiveness, and employees' trust in and collaboration habits with AI systems.
- Principles to Practices for Responsible AI: Closing the Gap(Daniel S. Schiff, B. Rakova, A. Ayesh, Anat Fanti, M. Lennon, 2020, ArXiv)
- Principles alone cannot guarantee ethical AI(Brent Mittelstadt, 2019, ArXiv Preprint)
- The Different Faces of AI Ethics Across the World: A Principle-Implementation Gap Analysis(Lionel Nganyewou Tidjon, Foutse Khomh, 2022, ArXiv Preprint)
- Ethical Leadership and Management of Small- and Medium-Sized Enterprises: The Role of AI in Decision Making(Tjaša Štrukelj, Petya Dankova, 2025, Administrative Sciences)
- Ethical AI Integration in Salesforce: A Framework for Privacy, Fairness, and Accountable Implementation(Ketankumar Hasmukhbhai Patel, 2025, Journal of Computer Science and Technology Studies)
- Implementing AI Ethics: Making Sense of the Ethical Requirements(Mamia Agbese, Rahul Mohanani, Arif Ali Khan, Pekka Abrahamsson, 2023, ArXiv Preprint)
- AI Chatbots in Enterprise Solutions: Transforming Customer Support, Industry-Specific Challenges and Ethical Considerations(Shivali Naik, Praneeth Aitharaju, Sai Santosh Goud Bandari, 2025, Glovento Journal of Integrated Studies)
- Accountability and managerial advice-taking: comparing human and algorithmic advisers(Peter Kotzian, Kai A. Bauch, Barbara E. Weißenberger, 2025, Management Decision)
- Understanding accountability in algorithmic supply chains(Jennifer Cobbe, Michael Veale, Jatinder Singh, 2023, ArXiv Preprint)
- Enterprise API & Platform Strategy in the era of Agentic AI(Ashay Satav, 2025, Journal of Computer Science and Technology Studies)
- Managing Transparency Fairness Accountability in AI for Sustainable Human Resource Management(Rini Setiawati, Ardi Kusmara, Taavi Kuusk, 2025, 2025 4th International Conference on Creative Communication and Innovative Technology (ICCIT))
- Ethical AI Frameworks for Responsible Internal Auditing Practices: A Conceptual and Theoretical Framework(Dr. Mintu Gogoi, 2025, International Journal of Latest Technology in Engineering Management & Applied Science)
- A Framework for Auditing Chatbots for Dialect-Based Quality-of-Service Harms(Emma Harvey, Rene F. Kizilcec, Allison Koenecke, 2025, ArXiv Preprint)
- Enterprise AI Agents Observability and Evaluation: A Conceptual Framework for Responsible Deployment in Business Ecosystems.(Venkataramana Chowdary Vemana, 2025, International Journal of Novel Research and Development)
- ADAPTING EXISTING PRACTICES FOR ISMS-CERTIFIED ORGANIZATIONS IN SUPPORT OF RESPONSIBLE AI(David Lau Keat Jin, Ganthan Narayana Samy, Fiza Abdul Rahim, Mahiswaran Selvananthan, N. Maarop, Mugilraj Radha Krishnan, Sundresan Perumal, 2026, ASEAN Engineering Journal)
- Exploring Ethical Implications: Unraveling Factors Influencing Data Governance Awareness Behavior in Generative AI Chatbot(Roger Amendi, Erwin Halim, Hendry Hartono, 2024, 2024 2nd International Conference on Technology Innovation and Its Applications (ICTIIA))
- CONNECT Framework for AI-Augmented Enterprise Alignment and Platform Governance(Anand Kumar Vedantham, 2025, Journal of Computer Science and Technology Studies)
- Empowering Responsible AI Adoption: A Human-in-the-Loop Framework for Small and Medium Enterprises (SMEs)(Himanshu Joshi, Sahaj Vaidya, 2024, International Journal of Management and Organizational Research)
- Expansive Participatory AI: Supporting Dreaming within Inequitable Institutions(Michael Alan Chang, Shiran Dudy, 2022, ArXiv Preprint)
- Where Responsible AI meets Reality(B. Rakova, Jingying Yang, H. Cramer, Rumman Chowdhury, 2020, Proceedings of the ACM on Human-Computer Interaction)
- Perceptions of Agentic AI in Organizations: Implications for Responsible AI and ROI(L. Ackerman, 2025, ArXiv)
- The Role of Artificial Intelligence in Business Model Innovation of Digital Platform Enterprises(Zhengang Zhang, Yichen Kang, Yushu Lu, Peilun Li, 2025, Syst.)
- Bridging Innovation and Integrity: A Multi-Stakeholder Approach to Ethical AI Implementation in Human Capital Management(Dr. Priyanka Wandhe, 2025, SSRN Electronic Journal)
- Ethical Leadership Challenges in the Age of Artificial Intelligence: An In-depth Analysis(2025, Pan-African Journal of Education and Social Sciences)
Taken together, the merged groups cover the core areas of platform AI governance. The research starts from low-level tools for algorithmic auditing and technical evaluation, builds a systems view of enterprise governance frameworks and lifecycle management, and conducts in-depth practical analysis in high-risk vertical platforms such as finance, e-commerce, and human resources. It also proposes new legal-regulation paradigms for frontier technologies such as generative AI, and finally extends to the deeper social implications of macro political economy and organizational ethical leadership, forming a multi-layered body of governance knowledge that runs from technical tools through organizational practice to legal regulation.
A total of 198 related references.
With the wide adoption of generative AI on e-commerce platforms, platform operating logic and data-use practices are changing profoundly. Through continuous learning and content generation, generative AI is deeply embedded in platform content production, transaction matching, and decision-making; while significantly improving operational efficiency and service quality, it also raises new governance risks: blurred boundaries of data ownership, difficulty in assigning algorithmic responsibility, and further concentration of platform power. Taking e-commerce platforms as the research object and combining platform-governance and data-risk perspectives, this paper systematically analyzes the data risks introduced by embedding generative AI into platform operations and their impact on existing governance mechanisms. Methodologically, it uses literature review and theoretical analysis to characterize generative-AI applications on e-commerce platforms, identify the principal risks concerning data ownership, algorithmic responsibility, and market structure, and analyze the institutional mismatch that existing platform-governance mechanisms face in the generative-AI context. The study finds that a governance logic centered on static data compliance and ex post accountability can no longer adequately respond to the generative, dynamic, and complex character of generative AI. On this basis, the paper constructs a comprehensive governance framework for generative AI on e-commerce platforms along three dimensions: governance actors, governance objects, and governance mechanisms. The conclusions deepen theoretical understanding of how generative AI reshapes platform-governance logic and offer a reference for platform-governance practice and regulatory policy.
The complexity of megacity governance poses a severe challenge to traditional governance models and calls for a shift toward precision and intelligence. This paper studies the three-tier digital city operation and governance centers built by Chongqing at the municipal, district/county, and sub-district levels (the "three-tier governance centers"), and examines how they build precision-governance capability through data-driven operations and AI empowerment. The study finds that Chongqing's practice takes the "1361" overall architecture as its top-level design and achieves organizational restructuring of the governance system through a three-tier, end-to-end operational platform: a "city brain", an "operations hub", and an "execution end". Its core mechanisms are: building a "super brain" with city-wide sensing and data aggregation for real-time monitoring of city operations; restructuring business processes around data flows into a closed loop of sensing and early warning, decision and disposal, supervision and evaluation, and review and improvement, shifting governance from passive response to proactive intervention; and deeply developing AI application scenarios, using large models and agents for intelligent risk warning, incident dispatch, and decision support. The paper argues that Chongqing's exploration not only realizes a paradigm shift from experience-based to data-based governance and from fragmented silos to holistic smart governance, but also, through deep integration of technology, institutions, and people, offers a "Chongqing solution" for modern megacity governance with precision as the core capability. Blind spots in digital-infrastructure coverage, the "digital divide" among grassroots cadres, and data security and ethics remain directions for continued improvement.
The deep application of AI is reshaping the allocation of rights and responsibilities on e-commerce platforms, creating a systemic conflict between "technical controllability" and legal regulation. Technically, the autonomous optimization and data dependence of AI decision-making fit poorly into the traditional binary structure of legal subjects; legally, algorithmic complexity prevents the existing legal system from effectively enforcing platform supervision duties, leaving the use of AI agents on e-commerce platforms with unclear responsible parties and difficult liability determination. Drawing on the EU's AI-governance experience, the paper proposes tiered regulation for high-risk scenarios with explicit algorithmic-transparency requirements and human-intervention mechanisms; full-lifecycle compliance obligations for data use, strengthening privacy protection and cross-border data compliance; a collaborative regulatory model that ties platform responsibility to technical controllability; and legislation that adapts dynamically to technological innovation. These measures can protect consumer rights while promoting the orderly development of e-commerce platforms, providing an institutional template for rebuilding platform-liability rules in the AI era.
The platform economy's salient features, such as two-sided markets and network externalities, easily produce market failures such as monopoly and negative externalities, and render traditional governance ineffective. After reviewing the current difficulties of platform-economy governance, this paper constructs a digital governance system for the platform economy along the dimensions of digital-governance principles, governance actors, governance goals, and governance instruments. Aiming at a balanced trinity of efficiency, fairness, and security, it then proposes, across the three logics of technology, organization, and institutions, an implementation path in which infrastructure supports the execution of digital-governance policy, multi-actor coordination builds pluralistic co-governance, and an institutional system safeguards digital-governance action.
In the digital era, AI-driven personalized recommendation systems have become the core engine of the e-commerce ecosystem, significantly improving commercial efficiency by precisely matching supply and demand. This paper systematically examines the technical applications, challenges, and governance paths of AI in recommender systems. It shows that, through dynamic user profiling and behavior prediction, AI enables precise targeting that substantially increases user engagement and purchase intent. However, this technical leap carries deep ethical risks: the erosion of data-privacy boundaries and algorithmic bias undermining fairness both expose the threat that technical rationality poses to human dignity and social equity. In response, the paper proposes a collaborative governance framework of technological innovation, institutional constraint, and public participation, emphasizing the reconstruction of the intelligent commerce ecosystem around human agency, embedding ethical awareness in innovation, and ultimately shifting the paradigm from "traffic extraction" to "value co-creation".
With the rapid development of the digital economy, the wide application of AI on e-commerce platforms has transformed content moderation. Automated, algorithm-driven moderation systems greatly improve processing efficiency but raise many legal-compliance challenges. This paper examines the compliance boundaries of AI-era content moderation on e-commerce platforms, focusing on the legality of algorithmic decision-making, the conflict between algorithmic bias and anti-discrimination principles, and the tension between lack of transparency and users' right to know. It also discusses conflicting cross-border compliance standards and possible solutions. To address these challenges, it proposes building a dynamic full-lifecycle algorithm-assessment mechanism, reconstructing platform-liability determination, and adopting algorithmic sandboxes and penetrating regulation, offering a reference for the compliance and fairness of platform content-moderation systems.
The rapid development of generative AI is profoundly reshaping the technical paradigm and value logic of public governance. Within a technology-governance theoretical framework and along the core analytical dimension of "technical embeddedness vs. governance adaptability", this paper systematically explains the operational effectiveness, risk challenges, and government-regulation paths of generative AI in public governance. It finds that generative AI markedly improves the precision, coordination, and dynamic tuning of public services by creating new governance scenarios, enabling new governance models, and reshaping governance evaluation. However, its technical dynamism also breeds risks such as training distortion, content misconduct, and misuse, which clash with the linear thinking, fragmented management, and bureaucratic lag of traditional governance, highlighting the Collingridge dilemma. The paper proposes an "agile governance" response that combines policy tools, clarifying ethical responsibility, coordinating compute and data, and institutionalizing pluralistic co-governance, to achieve a dynamic balance between technological empowerment and governance adaptation.
The rapid development of generative AI is driving the technical upgrading of cross-border digital trade; while improving trade efficiency and data management, it also brings complex security and governance challenges. This paper systematically analyzes generative AI's technical advantages in cross-border trade, including intelligent data classification, anonymization, and dynamic access control, showing its role in improving data security and trade efficiency. It also explores the potential risks from the perspectives of user rights, ethical norms, regulatory compliance, and technical governance. The paper proposes a "technology-rules-governance" collaborative framework covering algorithm-audit mechanisms, adaptive technical standards, developer responsibility and credit systems, and innovative dynamic risk-response mechanisms, offering enterprises and platforms a practical technical and governance reference for a secure, controllable, and efficient data-flow environment.
Against the backdrop of AI broadly empowering e-commerce, this paper studies user-information analysis and management, systematically examining AI applications in user-data collection, profiling, intelligent recommendation, and full-lifecycle management. After reviewing key problems such as privacy leakage, algorithmic bias, data silos, and regulatory compliance, it proposes systematic responses spanning data protection, algorithm optimization, platform integration, and ethical governance, to provide theoretical support and practical reference for intelligent and compliant e-commerce operations.
Driven by digital technology, e-commerce is undergoing a fundamental leap from a transaction-matching platform to an intelligent commercial ecosystem. By analyzing the three-stage evolution of technology-enabled e-commerce (connection, insight, restructuring), the paper reveals the resulting governance paradoxes: personalization vs. fairness, expanding platform power, data exploitation vs. privacy protection, and algorithmic automation vs. explainability. Traditional single-regulator models cannot meet these systemic challenges; a multi-actor collaborative framework of government, platforms, and society is needed, in which government shifts to smart regulation and bottom-line rules, platforms practice proactive governance with embedded ethical values, and diverse social forces participate to maintain a dynamic balance, ultimately producing a new digital-commerce ecology that fosters innovation and safeguards fairness.
Ethical risks of AI algorithms are becoming a key bottleneck constraining technology-enabled growth in the real economy and inclusive development. Taking Zhejiang's digital-economy practice as the research field, focusing on high-frequency application scenarios such as autonomous driving and AIGC, and using a three-dimensional "technology-society-subject" framework, the paper systematically reveals how algorithmic ethical risks arise and manifest. It finds that, although Zhejiang's governance practice shows multi-actor coordination and agile governance, it still faces "systemic lag": a governance-technology gap, missing standards, and coordination blockages. Comparing the EU's rigid regulation with the US's flexible governance, the paper constructs a "risk-based adaptive governance" framework with regulatory sandboxes as the core engine. Resting on the three pillars of regulation, technology, and markets, and running a closed loop of risk identification, dynamic regulation, effect evaluation, and rule iteration, the framework seeks a dynamic balance between technological innovation and ethical norms, offering both a systematic theoretical lens on algorithmic ethical risk and a forward-looking, operational path for improving the AI-governance system.
In the digital economy, AI is widely used in e-commerce recommendation, marketing, customer service, and risk control, significantly improving efficiency and service quality. However, algorithmic opacity and platforms' profit-seeking logic have produced "algorithmic discrimination" such as price discrimination, label-based recommendation, and unfair credit scoring, seriously harming consumer rights. Building on an analysis of its manifestations and causes, and drawing on the governance experience of the EU's GDPR, AI Act, and DSA, the paper argues that China should adopt a gradual, tiered governance path, strengthen platform transparency obligations and consumer-remedy mechanisms, and promote pluralistic co-governance to balance technological development and social fairness.
This paper focuses on algorithmic gender discrimination in the AI era and its legal regulation. Algorithmic gender discrimination is the unreasonable differential treatment of women by employers or platforms using algorithmic technology, which in essence infringes women's personal and property rights. The problem stems from biased underlying data, the lack of oversight in algorithm design, and vague legal rules on tort liability. The study argues for strengthening algorithmic data review and personal-information protection to remove gender-biased factors from algorithmic data, establishing sound algorithm-review and supervision systems, and constructing liability channels for algorithmic gender-discrimination torts, so as to achieve algorithmic fairness and gender justice, promote the healthy development of AI, and safeguard women's equal right to employment.
In the algorithmic era, discrimination sometimes comes not from a person acting deliberately but from an automated system that, through how it was programmed, discriminates intentionally or unintentionally. Such discrimination arises in AI-driven algorithmic decisions: although the computer system itself has no consciousness, the decisions it produces can still be discriminatory. When hiring procedures use algorithmic decision-making, they simplify recruitment and improve efficiency, but in some circumstances may discriminate against applicants. This paper addresses these problems and proposes establishing an algorithmic-discrimination review mechanism, with a dedicated algorithm-review department to regulate algorithmic issues.
Against the backdrop of digital-government construction, the fusion of digital technology and administrative power creates new types of algorithmic harm and data-security risk. Because digital-government governance involves multiple coexisting actors, opposing public and private interests, and technical obstacles that make it hard to pinpoint responsible parties, "digital blame-avoidance" arises and ex post accountability loses effectiveness. To make accountability mechanisms work, the rights and obligations of government and enterprises should be clarified on the basis of the principles of unity of powers and responsibilities and of consistency between rights and obligations; the internal and external responsibilities of government and enterprises should be properly divided; the accountability mechanism for public-private cooperation in digital-government construction should be made explicit; and the rule of law in digital-government construction should be strengthened.
In digital society, the legal regulation of algorithmic discrimination faces the twin difficulties of a failing traditional anti-discrimination paradigm and fragmented technical governance. Centering on the legal regulation of algorithmic discrimination and using case analyses such as the discriminatory use of the COMPAS algorithm in US criminal justice, the paper deconstructs how algorithmic discrimination arises, revealing its essence as a chain of rights infringement composed of data bias, algorithmic black boxes, and the alienation of power. It proposes a governance path of technical compliance, legal embedding, and social co-governance: elevating technical standards into a "soft constitution" of digital society through the codification of law, and designing a full-process governance platform for compliance standards, thereby reconstructing the realization of digital justice, moving algorithmic governance from ex post remedy to full-process rule of law, and offering a solution to the Collingridge dilemma of algorithmic governance.
Algorithmic recommendation is now pervasive in modern information distribution; while it improves efficiency, its "information cocoon" (filter-bubble) effect raises serious legal problems. Approaching the issue from the regulation of private power, this paper frames the information cocoon as a new type of cognitive risk and a failure of the information market. It first analyzes how information cocoons infringe users' rights to know and to choose, attributing this to the "black box" nature of algorithms and platforms' quasi-public power. It then argues that legal regulation should aim at algorithmic fairness and individual controllability, and constructs a theoretical framework of an algorithmic-explainability principle, an information-balance obligation, and a user "reverse recommendation right", to support the regulation of platform power in the digital age.
In the AI era, algorithms are a key foundation of technological productivity and an important driver of AI progress. However, algorithmic black boxes, arising from specialization, complexity, uncertainty, and the confidentiality and intellectual-property needs of governments or firms, create many security and ethical risks. Building on the basic concept and types of algorithmic black boxes, this paper analyzes the principal risks they raise: data- and privacy-protection risks, algorithmic bias and abuse, and application-security risks. Drawing on governance theory and practical needs, it recommends strengthening data and privacy protection in algorithm applications, tightening regulation of algorithm design and use, and improving preventive mechanisms for application security.
In e-commerce, the deep application of algorithms improves efficiency but, because decision processes are opaque, creates an "algorithmic black box" that poses systemic threats to consumer rights and breeds new risks such as price discrimination, information cocoons, and search manipulation. Analyzing how these black-box risks manifest, the paper notes that China's current legal framework faces outdated regulatory tools and evidentiary difficulties under information asymmetry. It therefore constructs a new regulatory path resting on the twin pillars of "process transparency" and "outcome fairness": strengthening algorithmic transparency through a full chain of ex ante filing and assessment, in-process tiered explanation, and ex post independent audit; and establishing substantive algorithmic fairness by clarifying anti-discrimination principles, guaranteeing consumers' right to human intervention, and exploring shifts in the burden of proof. The aim is a feasible legal response that dynamically balances technological innovation and consumer protection.
In China's current digital economy, the regulation of algorithmic discrimination faces many difficulties. This paper analyzes the harms that algorithmic discrimination causes in e-commerce and reviews responses in foreign legal systems, chiefly the EU model of regulating and protecting data at the source and the US model that centers on algorithmic accountability. It then proposes a regulatory path for China and, combining national conditions, examines big-data algorithmic discrimination in depth, with the aim of improving e-commerce-platform algorithms to match the rapid development of the digital economy.
As the digital economy deepens, e-commerce recommendation algorithms improve transaction efficiency but also trigger complex legal risks. The paper analyzes the evolution of recommendation algorithms from technical tools into a form of quasi-public power, and examines the legal risks they raise for fairness, transparency, and rights protection in two scenarios: consumer transactions and judicial activity. On this basis, it proposes a comprehensive regulatory path centered on multi-party collaborative governance, comprising a legal-liability system with clear allocation of rights and duties, a technology-enabled regulatory system, and a mechanism of dynamic optimization and adjustment, so as to balance innovation with rights protection and safeguard both individual interests and public order.
As algorithmic technology is embedded in ever more scenarios, drawbacks such as big-data price discrimination and information cocoons have surfaced. Although algorithms are technically neutral, their use by platforms creates conflicts between algorithmic push and consumers' rights to know and to fair trading. This paper analyzes the relationship between algorithms and platforms to lift the "algorithmic veil" and clarify the division of responsibility. Facing the difficulties of protecting consumer rights under algorithmic push in China, it then draws on existing remedies in Europe, the US, and other developed jurisdictions to propose top-down improvements at the policy, platform, and individual levels.
As platform enterprises grow rapidly, both market forces and government regulation have limits in curbing algorithmic discrimination on digital platforms: profit-driven firms easily lose self-restraint, industry self-regulation remains immature, algorithm review and accountability mechanisms are incomplete, and technical constraints blunt government oversight. Effective governance of algorithmic discrimination bears directly on whether consumers are treated fairly on platforms and on the stable development of the market economy. Using fuzzy-set qualitative comparative analysis (fsQCA) combined with grounded theory, this paper studies the drivers of algorithmic discrimination by platform enterprises and its countermeasures, identifies six key factors, conducts fsQCA variable analysis, derives four configurational paths, and discusses conclusions and countermeasures.
E-commerce recommendation algorithms improve transaction efficiency but raise legal challenges for personal-information protection. The current legal framework visibly lags in algorithmic governance: fragmented norms, insufficient regulatory tools, and difficult judicial remedies. The technical complexity of recommendation systems substantively weakens users' rights to know and to choose, while platforms' data collection and processing often exceed reasonable boundaries, aggravating privacy risks. Addressing these challenges requires a multi-level regulatory system. In legislation, dedicated norms should establish tiered and categorized algorithm management, with filing and review requirements for high-risk algorithms; in regulation, technical governance tools such as algorithm audits and dynamic risk assessment should strengthen regulators' professional capacity; at the platform level, compliance duties should be reinforced, implementing limited disclosure and improving internal accountability; and for user remedies, reversed burdens of proof and diversified channels such as public-interest litigation can be explored. Improving e-commerce algorithm governance matters not only for consumer protection but for the healthy development of the digital economy: coordinating legal norms with technical governance can balance innovation and rights protection and foster a fair, transparent digital market order.
As a core component of AI, algorithms keep raising AI's levels of intelligence, automation, and autonomy, deepening applications in online shopping, entertainment, and financial services. But the algorithmic black box, one of algorithms' negative effects, not only infringes financial consumers' right to know and facilitates improper gains by financial institutions, but also seriously weakens financial regulators' supervisory functions, disrupts financial order, and creates systemic risk in financial markets. The paper therefore proposes, from the perspectives of legislatures, regulators, and financial institutions, and with the goal of protecting financial consumers' lawful rights, a response system for the algorithmic black box that is multi-party, systematically coordinated, effectively supervised, and well safeguarded.
The rapid development of AI improves industrial efficiency but also triggers ethical risks such as privacy leakage and algorithmic discrimination; how ethical governance can strengthen industrial resilience has become an important question of the digital-economy era. Using literature review and logical reasoning, this study builds a dynamic "risk identification, resilience response, governance iteration" model and, with an interdisciplinary framework and industrial-cluster case evidence, analyzes three paths by which AI ethical risks erode industrial resilience: collapse of social trust from technology abuse, crowding-out of innovation resources from surging compliance costs, and path lock-in from blocked technical iteration. It finds that ethical governance forms a "governance investment, resilience gain" value loop through four mechanisms: rebuilding trust via technical transparency, improving risk immunity via agile governance, driving innovation-paradigm transformation via embedded ethics, and building a resilient network ecology via pluralistic co-governance. The study reveals the interaction mechanism between ethical risk and industrial resilience, proposes layered governance tools for differentiated scenarios, and, by developing technology-adapted tools and building discourse power in international standards, contributes a theoretical paradigm and practical path to global AI governance.
This paper studies generative AI in personalized learning-resource recommendation, examining the ethical risks it raises and possible governance paths. As generative AI rapidly penetrates education, problems such as data-privacy leakage, algorithmic bias, and academic-integrity crises have become prominent, calling urgently for a systematic governance framework. Based on case analysis and literature review, the study proposes a multi-dimensional governance scheme: technically, combining federated learning with differential privacy to protect data; at the policy level, building a tiered, categorized regulatory system; in education, strengthening responsibility through teacher AI-literacy training and student digital-citizenship education; and at the industry level, promoting algorithmic-fairness optimization and open-source community co-construction. The framework offers a practical reference for balancing educational AI innovation against ethical risk and has real significance for the healthy development of intelligent education and for educational equity.
Since its birth, AI has achieved unprecedented breakthroughs and profoundly influenced human society, while also generating a series of ethical dilemmas, such as privacy leakage, safety accidents, and imbalances in human development, that the whole world urgently needs to confront. In recent years, responsible innovation has emerged in European and American research on science-and-technology ethics as a new concept aimed at resolving the moral and ethical problems of technological innovation. To address AI's ethical dilemmas, the four elements of the responsible-innovation framework (anticipation, reflection, deliberation, and feedback) can be embedded in AI innovation to guide it in a correct, scientific direction, enabling AI and human society to coexist in harmony and create a better future.
Generative AI's wide application in content creation brings convenience and innovation but is accompanied by ethical dilemmas over data privacy, copyright ownership, content authenticity, and bias, as well as questions about regulatory responses and pathways. Reviewing domestic and foreign policies and regulations, the paper proposes improving the legal system, strengthening technical oversight, and raising ethical awareness, so as to balance technological innovation against ethical and legal norms and promote the healthy, sustainable development of generative AI in content creation.
In human-machine dialogue on social-media platforms, chatbots, with their strong sociality, voice, and intrusiveness, have quickly become an important part of the platform ecosystem. While this technical leap extends the boundaries of human-machine interaction, it also triggers complex ethical issues in human-machine dialogue. As an emerging form of human communication, whether human-machine dialogue on social media follows and fits the framework of communication ethics concerns not only the legality and legitimacy of communicative acts but also the health and stability of the communication ecosystem and the forward-looking construction of a future society of human-machine coexistence.
The deepening application of generative AI is rapidly reshaping the ecosystem of ideological and political education in universities. With technical strengths such as cross-modal learning, scalable models, and reinforcement-learning mechanisms, it can process massive data and generate human-like communication, greatly advancing the digital transformation of such education. Yet because of technological endogeneity, instrumental functionality, and regulatory lag, its practical application faces potential risks: alienated ethical relations, deviant ethical values, missing ethical norms, and improper ethical conduct. To regulate these risks effectively, a system should be built on the principle of seeking benefit and avoiding harm, combining internal educational governance, platform ethical norms, and coordinated institutional prevention into a trinity of "human, technology, institution" governance, so that generative AI better empowers high-quality development of ideological and political education while its ethical risks are controlled, resolving in dynamic balance the tension between human values and the instrumental nature of technology, and truly realizing technology in the service of education and of cultivating people.
With the rapid development of AI, the news industry is undergoing profound change. Automated writing and personalized recommendation improve production efficiency but raise ethical challenges for authenticity, impartiality, information homogenization, and privacy protection. This paper analyzes these ethical problems and their roots, including algorithmic bias, data errors, information overload, and privacy leaks, and on this basis proposes strategies: strengthening algorithmic ethics and transparency, reinforcing mechanisms for news authenticity and quality, improving journalists' ethical literacy and competence, strengthening privacy protection and data-security governance, and promoting industry self-regulation and international cooperation, so as to address AI-driven journalism ethics and preserve the authenticity, impartiality, and social influence of news.
Algorithmic recommendation, a product of the digital era, optimizes e-commerce efficiency, but the cognitive bias, decision interference, and value distortion it induces objectively restrict consumers' right to choose. From the perspective of governance modernization and addressing current defects (lagging legal regulation, vague platform responsibility, and weak coordination), this study builds a "technology-institution-regulation" collaborative governance framework: at the technical level, dual constraints of transparency and accountability; in rights protection, a composite bundle of rights including consumers' right to know and platform duties; and in implementation, a government-led regulatory network with participation by social forces. Through the interaction of these institutions, the study aims at a dynamic balance between technology application and rights protection.
Against the accelerating digital economy, platform algorithms have become foundational to the e-commerce ecosystem, systematically collecting, processing, and allocating data factors and participating deeply in transaction matching, information distribution, and price formation. While raising transaction efficiency, this also creates governance problems: insufficient transparency of algorithmic decisions, concentration of platform power, and weakened fairness of transactions. To analyze how the intensity of algorithmic governance affects transaction efficiency through its influence on data-factor allocation, this paper builds a three-party game model of platform, merchants, and consumers around the market-based allocation of data factors. The results show an inverted-U relationship between governance intensity and transaction efficiency: moderate governance eases information asymmetry and improves the structure of data allocation, raising efficiency, while excessive governance increases merchants' compliance burden and suppresses innovation, lowering efficiency. The platform's data-control capability and consumers' information sensitivity significantly moderate this mechanism. The study theoretically clarifies the efficiency boundary of algorithmic governance and provides an academic basis for differentiated, dynamic algorithmic-governance policies.
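The abstract reports an inverted-U relation between governance intensity and transaction efficiency but, in this excerpt, no functional form. The sketch below assumes a simple quadratic reduced form, E(g) = base + b·g − c·g², purely to illustrate how a linear information-asymmetry benefit plus a quadratic compliance cost yields an interior optimum; the functional form and coefficients are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Hypothetical reduced-form efficiency: a linear benefit of governance g
# (eased information asymmetry) minus a quadratic compliance cost.
def efficiency(g, base=1.0, b=0.8, c=0.5):
    return base + b * g - c * g ** 2

g = np.linspace(0.0, 1.5, 301)   # grid of governance intensities
E = efficiency(g)
g_star = g[np.argmax(E)]         # interior optimum; analytically b/(2c) = 0.8
print(f"optimal governance intensity ~= {g_star:.2f}, peak efficiency ~= {E.max():.3f}")
```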
"Big-data price discrimination against existing customers" (大数据杀熟), a typical form of algorithmic-era price discrimination, has become a focus and a difficulty of platform-economy governance. This paper systematically reviews the literature and defines the phenomenon's content and features; reveals its formation mechanism from the perspectives of economic motives, technical conditions, and market environment; assesses its compound impact along three dimensions (consumer rights, competitive market order, and the foundations of social trust); and examines the difficulties of current legal regulation in identification and attribution as well as the high cost of consumer remedies. It proposes a "technology for good, legal constraint, pluralistic co-governance" collaborative framework: transparency and accountability at the algorithm level; typed identification and refined liability at the legal level; and, at the governance level, a linked mechanism of government regulation, platform self-discipline, consumer empowerment, and social oversight, toward benevolent algorithms and a healthy, orderly platform economy.
This paper studies the fundamental challenges that the deep application of AI algorithms poses to existing governance models on e-commerce platforms. The traditional governance architecture, built on a binary "platform vs. user" opposition, fails because algorithms are simultaneously an "efficiency engine" and a "hidden center of power". The study constructs an integrated analytical framework combining game theory and institutional theory, modeling the platform ecosystem as a multi-party dynamic game among the platform, the algorithmic system (treated as a quasi-agent participant), merchants, consumers, and regulators. The theoretical analysis shows that the "black box" character of algorithmic operation goes beyond the purely technical: it reshapes all participants' strategy sets and produces "asymmetric strategic interaction". On this basis, the paper argues for the necessity of a paradigm shift from rigid rule-based governance to dynamically adaptive collaborative governance, whose essence is an institutionalized coupling of algorithmic accountability, agile responsive regulation, and multi-stakeholder deliberation, contributing a new theoretical lens on platform governance in the intelligent era.
Amid the digital-economy wave, algorithmic recommendation is widely used on e-commerce platforms. Leveraging data monopoly and algorithmic black boxes, platforms trap consumers in "information cocoons", exposing them to big-data price discrimination and harm to their rights to know and to choose. This paper analyzes the causes of consumers' informational disadvantage, including technology- and capital-driven data monopoly and algorithmic manipulation, and the resulting passivity of consumers in algorithmic decision-making. It also examines the shortcomings of the current legal system in responding to algorithmic recommendation: vague rules, weak remedies, and technical-governance difficulties. It proposes a two-way "empowerment + co-governance" remedy system that balances information asymmetry and power imbalance between platforms and consumers through legal empowerment and technical governance. Concrete measures include refining the right to algorithmic explanation, introducing data portability, requiring platforms to file core algorithms, implementing dynamic monitoring, using privacy-preserving computation to optimize models, and developing algorithm-transparency tools, while stressing pluralistic co-governance, consumer participation in rights protection, and the combination of platform self-discipline with policy incentives.
The 15th Five-Year Plan period is a key stage in which the construction of Digital China moves toward comprehensive deepening and high-quality development; digital platforms, as key nodes aggregating data, technology, and markets, have governance effectiveness that profoundly shapes Digital China. This paper explains the mechanism by which digital-platform governance drives the construction of Digital China, analyzes the structural risks and core challenges of current governance from four angles (lagging governance logic, failing regulatory constraints, spreading technical risks, and imbalanced pluralistic values), and proposes coordinated paths: deepening institutional innovation, building collaborative systems, strengthening technical enablement, and guiding ecological co-prosperity, to promote the standardized, healthy, and sustainable development of platforms, achieve a qualitative leap in China's platform-governance capability, and support the construction of Digital China.
Food-delivery platform algorithms, the core management tool of instant retail within the e-commerce economy, have reshaped employment through automated decision-making and monitoring while also triggering governance crises such as algorithmic hegemony and rights violations. Algorithmic transparency is seen as the key to freeing riders "trapped in the system", but its advancement faces multiple legal difficulties under platform trade-secret protection and technical complexity. Starting from the operating logic of algorithmic power, the paper analyzes the link between algorithmic transparency and rider-rights protection, reviews the normative status quo and practical difficulties of China's e-commerce platform-algorithm governance, and proposes legal boundaries for algorithmic governance along three dimensions (defining the limits of transparency, constructing a responsibility system, and improving rights remedies), to support a balance between platform efficiency and riders' rights.
This study focuses on how AI algorithms transmit risk in live-streaming e-commerce supply-chain finance, revealing three core problems caused by algorithmic inexplicability: failing risk control, worsening information asymmetry, and accumulating systemic risk. Mechanism analysis shows that algorithmic black boxes, by obscuring decision logic, intensifying data monopoly, and amplifying goal conflicts, push risk from local to global, forming a networked systemic threat to financial stability. The study proposes a systematic governance framework (building explainable algorithms, restructuring data governance, and implementing cross-platform collaborative regulation) to enhance transparency, improve verification mechanisms, and block risk transmission. Limitations remain: the governance strategies' empirical effects are unverified, the impact of algorithmic heterogeneity on the framework's generality is underexplored, and simulation of dynamic, nonlinear risk features is insufficient. Future work should combine case simulation with interdisciplinary methods to provide fuller theoretical and practical support for the intelligent governance of supply-chain finance on live-streaming platforms.
Recommendation systems, a key technology of cross-border e-commerce, are continuously restructuring international-trade processes. This paper examines their applications in demand forecasting, supply-chain coordination, and marketing optimization, and analyzes the mechanisms by which they raise efficiency; the results show clear advantages for intelligent decision systems. It dissects technical challenges including data privacy, algorithmic bias, and technology dependence, and builds a framework of targeted solutions. The study shows that a compliant data-governance framework helps balance innovation and ethical constraints; greater algorithmic transparency and fairness improves system resilience; cross-cultural adaptation mechanisms promote global governance collaboration; and a responsibility-tracing system safeguards sustainable trade. Future research should extend to the integration of recommendation algorithms with blockchain and the Internet of Things and refine interdisciplinary governance frameworks, building an ecosystem in which technical enablement and social benefit develop in concert and releasing the value of intelligent technology in global trade.
In an era of deep integration between AI and big data, algorithms achieve multiplied growth on the back of diverse data; behind the convenience lie varying degrees of algorithmic discrimination. As a digital extension of traditional social discrimination, algorithmic discrimination stems from: inexplicable, opaque decisions caused by algorithmic black boxes; data biases that inherit and amplify cultural and historical prejudice; and the subjective thinking embedded by designers and users. Its formation not only worsens the information imbalance of the digital era and suppresses the endogenous momentum of innovation, but also strikes deeply at the ethical order and the foundations of the rule of law. Regulating it requires multi-party coordination: quantifying discrimination risk, refining legal provisions, building a clearer division of responsibility, raising designers' and users' digital literacy and algorithmic awareness, and joint corporate and social oversight of algorithm use, forming a shared-responsibility governance mechanism that safeguards fairness, justice, and sustainable development in digital society.
In the algorithmic era, the legal regulation of e-commerce credit-rating mechanisms has become a pressing problem. Examining these legal issues and seeking sound countermeasures matters, in theory and in practice, for fair competition in e-commerce markets, the lawful rights of consumers and merchants, and the industry's healthy, sustainable development. Addressing the three core problems of fairness, data security, and rights remedies in credit-rating mechanisms, the paper proposes transparent algorithmic governance, full-lifecycle data management, and diversified dispute resolution. These countermeasures aim, through improved legal institutions, regulatory innovation, and supporting technical means, to discipline the operation of algorithmic power, protect the lawful rights of users and merchants, and maintain a healthy e-commerce ecosystem.
Organizations pursuing digital transformation frequently encounter systemic fragmentation despite aspirations for unified enterprise connectivity. Common manifestations include siloed development practices where teams independently construct similar systems, resulting in redundant solutions and integration complexities. Tool sprawl emerges as teams introduce disparate technologies without strategic alignment, creating fragmented workflows and operational inefficiencies. Legacy system accumulation compounds these challenges as innovative solutions become entrenched and costly to maintain. Additionally, teams often revert to waterfall methodologies when lacking proper alignment mechanisms. The CONNECT Framework for AI-Augmented Enterprise Alignment and Platform Governance addresses these persistent challenges through structured principles enhanced by intelligent automation: Collaborate to promote cross-functional alignment, Orchestrate integrated workflows, Normalize architectural standards, Navigate agile practices, Enable user adoption, Consolidate redundant systems, and Transform toward platform-driven capabilities. Implementation involves establishing platform governance councils, adopting API-first design principles, conducting intelligent discovery phases, maintaining automated tool registries, and deploying adaptive change enablement programs supported by machine learning algorithms and natural language processing capabilities. Expected outcomes include reduced duplication and technical debt, enhanced market agility, improved security compliance, consistent engineering practices, elevated team empowerment, and development of scalable, cohesive digital enterprises that support sustained innovation through data-driven decision-making and automated optimization capabilities.
Purpose: Drawing on the wisdom of ancient Chinese philosopher Xunzi, this paper aims to present a novel mechanism for governing opportunism, referred to as "cultivational governance." By examining the role of artificial intelligence (AI) resources possessed by e-commerce platforms, the authors explore how these resources contribute to mitigating seller opportunism. The central hypothesis of this study posits that two distinct types of AI resources, namely, AI technology resources and AI human resources, serve as crucial factors in curbing seller opportunism. Furthermore, the authors propose that platform digital empowerment and value cocreation act as mediating variables linking AI resources to opportunism. Design/methodology/approach: Based on the resource-based view and resource orchestration theory, the authors developed a framework and tested it using survey data from sellers. This framework encompasses five key variables: e-commerce platform's AI technology resources, AI human resources, platform digital empowerment, value cocreation and seller opportunism. Regression analysis was used for data analysis. Findings: The empirical results validate the effectiveness of cultivational governance mechanisms, as both AI resources effectively suppress seller opportunism through digital empowerment and value cocreation. Specifically, e-commerce platforms' AI technology resources significantly promote value cocreation and platform digital empowerment, while AI human resources primarily contribute to platform digital empowerment. Although platform digital empowerment encourages value cocreation, its direct impact on reducing seller opportunism was not supported. Notably, value cocreation negatively affects seller opportunism. Originality/value: The present research mainly contributes to the marketing channel governance literature by introducing a new approach to inhibit opportunism, namely, the cultivational governance mechanism.
This research paper investigates the critical importance of robust API and platform strategies for enterprises adapting to the proliferation of agentic AI, wherein AI systems autonomously execute tasks with limited human intervention. It addresses the imperative of facilitating seamless communication among AI agents, enterprise data systems, and external applications. The research examines the architectural and performance considerations essential for organizations to maintain competitiveness in this rapidly growing technological landscape of agentic AI projected to expand from $5.1 billion in 2024 to $47.1 billion by 2030. Key elements explored include unified data layer APIs, zero-trust authorization models, event-driven orchestration, and latency-sensitive design. Furthermore, the study considers emerging trends such as AI-powered SDKs, self-optimizing API gateways, autonomous API discovery, and ethical AI governance APIs. The findings emphasize that the adoption of modern API and platform architectures, optimization of performance metrics, and adherence to regulatory mandates are paramount for organizations to fully capitalize on the transformative potential of agentic AI. It is posited that enterprises embracing this paradigm shift will achieve a demonstrable competitive advantage, fostering innovation and operational excellence in the AI-driven future.
The discipline of Financial Planning and Analysis (FP&A) is undergoing a fundamental transformation, driven by the integration of artificial intelligence (AI) into core planning functions. This article explores how AI is reshaping forecasting accuracy, scenario planning agility, and governance frameworks within FP&A. It highlights the growing use of machine learning for predictive forecasting, natural language processing (NLP) tools to streamline analyst workflows, and real-time scenario modeling to enhance strategic responsiveness. The paper also addresses critical issues of data integrity, regulatory compliance, and explainability, underscoring the importance of AI governance in financial systems. Drawing from cross-industry applications and platform innovations across SAP, Oracle, Anaplan, and Microsoft Azure, this study presents a forward-looking perspective on how finance leaders can leverage AI to drive intelligent, agile, and compliant financial decision-making in an increasingly volatile business environment.
Aim: This study aims to evaluate how artificial intelligence (AI) can enhance ESG-oriented supplier risk assessment and strengthen sustainability compliance in global supply chains. Methods: Theoretical foundations of ESG monitoring were analyzed, the main risks were categorized into Environmental, Social, and Governance domains, and the role of AI technologies in enhancing the adaptability, transparency, and efficiency of supply chains was substantiated. The study employed a case study approach, utilizing secondary data from the Prewave platform, and applied comparative analysis to evaluate improvements in ESG risk detection efficiency. Particular attention was given to the algorithmic structure of modern intelligent platforms capable of real-time cognitive analysis of textual and numerical data. Results: The advantages of employing AI models in reducing ESG incident detection time, automating counterparty evaluation, and ensuring compliance with international regulatory frameworks (CSRD, LkSG) were empirically demonstrated. Furthermore, a conceptual structural model was proposed for implementing an AI-oriented ESG supplier assessment system, covering all stages - from data source formation to managerial decision-making. Conclusion: The study concludes that AI-based ESG monitoring systems significantly enhance transparency, operational resilience, and sustainable procurement practices, ultimately marking a paradigm shift in global supply chain governance. Recommendations: Key implementation barriers in emerging market contexts included fragmented data infrastructure, low digital literacy within public sectors, and inconsistent regulatory compatibility with international ESG standards. Future research should focus on developing and localizing AI models for ESG monitoring, specifically addressing the unique data, infrastructure, and policy environments of developing economies.
The gig economy has expanded rapidly through digital platforms that rely on artificial intelligence to manage labor processes. While these systems create efficiency and flexibility, they also deepen precarity, opacity, and unequal power relations for workers. This paper examines AI-driven platform cooperatives as a democratic alternative to investor-owned platforms. Grounded in cooperative economics, AI ethics, and sociotechnical systems theory, the study explores how decentralized governance, blockchain-enabled accountability, and privacy-preserving data practices can transform gig work. Drawing on secondary research and global case studies, it highlights the opportunities of AI-enabled cooperatives in promoting fairness, transparency, and worker participation, while also identifying challenges of scale, funding, and regulatory adaptation. The findings suggest that AI-driven cooperatives can not only redistribute economic value more equitably but also set normative benchmarks for digital labor markets, offering a sustainable model that integrates technological innovation with democratic governance and social justice.
Corporate sustainability and responsible investing are now evaluated based on Environmental, Social, and Governance (ESG) metrics. Yet, there is a lack of openness, agreed-upon standards, and instant traceability in ESG reporting. Here, I discuss an innovative way to combine Blockchain and AI to improve how ESG is used in finance. Blockchain ensures that data stays unchanged, can be tracked, and inspires trust, while AI streamlines data retrieval, reads text in qualitative reports, and helps predict whether ESG standards are followed. Because of this model, investment decisions and oversight can be based on more reliable and easier-to-compare data. The paper further presents an architecture for a Blockchain-AI-driven ESG reporting platform, supported by a case simulation and performance benchmarks. It demonstrates its potential to transform corporate accountability in innovative financial environments.
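The abstract above credits blockchain with keeping ESG data unchanged and traceable. The following is a minimal append-only hash chain that illustrates only this tamper-evidence property; it is a toy sketch with invented record fields, not the architecture proposed in the paper.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash an ESG record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

# Append-only chain of (record, hash) pairs; editing any record breaks
# every later hash, so tampering is detectable.
chain, prev = [], "genesis"
for rec in [{"metric": "co2_tonnes", "value": 120},
            {"metric": "co2_tonnes", "value": 118}]:
    prev = record_hash(rec, prev)
    chain.append((rec, prev))

def verify(chain) -> bool:
    prev = "genesis"
    for rec, h in chain:
        if record_hash(rec, prev) != h:
            return False
        prev = h
    return True

print(verify(chain))          # True
chain[0][0]["value"] = 95     # tamper with an early record
print(verify(chain))          # False
```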
Financial entities operating in distributed cloud ecosystems are grappling with significant challenges. As they attempt to maintain consistent data governance and comply with often complex and jurisdictional-based regulatory requirements, they endure diminished results when attempting to comply with regulatory obligations simultaneously across multiple cloud implementations governed by standards such as SOX, GDPR, Basel III, and CCPA. The article introduces an intelligent data governance framework that leverages machine learning and artificial intelligence technologies to coordinate compliance efforts, increase risk assessment capacity, and provide unified oversight of heterogeneous cloud environments. The framework has a federated architecture with orchestration, intelligence, and enforcement layers that allow for governance consistency but leverage unique capabilities of each platform. AI-based data discovery and classification methods can effectively and consistently identify and classify data assets within distributed ecosystems without manual indexing methods and automatically adapt and modify with changing regulatory requirements using adaptive learning algorithms. Automated compliance monitoring systems include intelligent rule engines that can interpret regulatory instances and translate them into actionable policies by monitoring activity against the policy. In addition, predictive analytics identify possible risk, allowing the system to monitor and alert as necessary before regulatory violations occur. Implementation strategy emphasizes phased approach implementations that generate the least disruption to current operations while ensuring seamless integration across the current enterprise infrastructure under API-first architecture principles. Business impact assessment confirms significantly increased compliance effectiveness, reduced manual monitoring obligation, and improved audit execution that, combined, provide a highly attractive return on investment to organizations adopting intelligent governance capabilities.
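The framework above describes intelligent rule engines that translate regulatory requirements into actionable, machine-checkable policies. As a deliberately simplified sketch, with invented asset fields and two hard-coded rules standing in for policies that a real system would derive from SOX/GDPR/CCPA analysis, such a check might look like:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    classification: str   # e.g. "pii", "financial", "public"
    region: str           # where the data is stored
    encrypted: bool

# Hypothetical policies distilled from regulatory text; a production rule
# engine would load and update these dynamically rather than hard-code them.
POLICIES = [
    ("pii-must-be-encrypted", lambda a: a.classification != "pii" or a.encrypted),
    ("pii-stays-in-eu",       lambda a: a.classification != "pii" or a.region == "eu"),
]

def check(asset: Asset) -> list[str]:
    """Return the names of the policies this asset violates."""
    return [name for name, rule in POLICIES if not rule(asset)]

violations = check(Asset("customer_db", "pii", "us-east", encrypted=True))
print(violations)  # -> ['pii-stays-in-eu']
```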
The rise of AI has been rapid, becoming a leading sector for investment and promising disruptive impacts across the economy. Within the critical analysis of the economic impacts, AI has been aligned to the critical literature on data power and platform capitalism - further concentrating power and value capture amongst a small number of "big tech" leaders. The equally rapid rise of openness in AI (here taken to be claims made by AI firms about openness, "open source" and free provision) signals an interesting development. It highlights an emerging ecosystem of open AI models, datasets and toolchains, involving massive capital investment. It poses questions as to whether open resources can support technological transfer and the ability for catch-up, even in the face of AI industry power. This work seeks to add conceptual clarity to these debates by conceptualising openness in AI as a unique type of interfirm relation and therefore amenable to value chain analysis. This approach then allows consideration of the capitalist dynamics of "outsourcing" of foundational firms in value chains, and consequently the types of governance and control that might emerge downstream as AI is adopted. This work, therefore, extends previous mapping of AI value chains to build a framework which links foundational AI with downstream value chains. Overall, this work extends our understanding of AI as a productive sector. While the work remains critical of the power of leading AI firms, openness in AI may lead to potential spillovers stemming from the intense competition for global technological leadership in AI.
For employees, generative AI chatbots can encourage productivity and optimize work processes with the specific insights provided. The problem is the ethical implications for the privacy and confidentiality of company data that employees carelessly enter into a generative AI chatbot platform. This research aims to study how employees behave when entering company data using a generative AI chatbot based on behavioral intentions on data governance awareness. This research uses quantitative methods like the Partial Least Squares Structural Equation Modeling (PLS-SEM) statistical method. Purposive sampling determines the number of samples, with data collection from February to April 2024. Based on the data collection results, 402 employees in Indonesia who used generative AI chatbots became valid respondents who answered the online questionnaire using Google Forms. The research results show a significant picture of the factors influencing awareness of data governance behavior and company data usage behavior in generative AI chatbots. The test used nine hypotheses, where the results found that six hypotheses had a significant effect and three hypotheses were not significant. The findings of this research have implications for increasing awareness and data governance practices regarding the role of organizational support and employee training to mitigate risks in the use of generative AI chatbots in corporate environments.
Saudi financial institutions increasingly adopt Google Cloud Platform (GCP) to modernize analytics and AI, while also facing audit expectations under the National Data Management Office (NDMO) Data Management and Personal Data Protection Standards issued by SDAIA. Some NDMO specifications are largely organizational (policies, roles, committees), while others explicitly require tool-generated evidence such as catalog records, metadata attributes, lineage graphs, profiling scans, data quality rules and results, and operational dashboards and alerts. This paper focuses on those tool-dependent evidence expectations and evaluates how far a GCP-based governance foundation can satisfy them, and where additional governance automation or third-party tooling remains necessary.
This study explores how sustainable leadership and financial technology (FinTech) adoption influence ethical artificial intelligence (AI) governance outcomes in Bangladesh’s FinTech sector. It further examines the mediating role of ethical AI skills development and the moderating influence of organizational capacity, offering an integrated view of responsible digital innovation in emerging economies. A qualitative research design was employed, utilizing semi-structured interviews with six stakeholder groups, including FinTech leaders, compliance officers, regulators, platform users and advocates for financial inclusion. Thematic analysis was conducted using a structured six-phase approach, supported by qualitative data analysis software. A dual-stage literature review guided the conceptual framework and interpretation of findings. The study finds that ethical AI governance is strengthened when sustainable leadership is supported by strategic FinTech adoption, embedded ethics training and institutional capacity. Ethical AI skills development plays a mediating role by operationalizing leadership vision, while organizational capacity moderates the effectiveness of leadership and technology strategies on governance outcomes. A moderated-mediation model is proposed to explain these interactions. The findings offer actionable insights for FinTech firms and policymakers. Organizations should align AI deployment with ethical mandates, invest in workforce capacity-building and establish cross-functional governance mechanisms. Policymakers are encouraged to support national guidelines, certification schemes and capacity-building grants to promote inclusive and transparent digital finance ecosystems. This study makes a novel contribution by integrating leadership, technology, training and institutional capacity into a unified model of ethical AI governance. It offers original insights into how internal and external enablers can jointly support responsible AI innovation in the context of a rapidly growing FinTech sector.
The transformation of financial institutions toward modular, platform-based operating models has altered how quantitative and algorithmic decision systems are developed, deployed, and governed. Software-as-a-Service (SaaS) core banking platforms and Risk-as-a-Service (RaaS) providers increasingly embed externally developed Artificial Intelligence (AI), Machine Learning (ML), and quantitative models into functions such as credit underwriting, fraud detection, capital estimation, and regulatory reporting. While this shift offers meaningful efficiency and analytical benefits, it also introduces a structurally distinct form of model risk driven by external control, limited transparency, continuous vendor-managed change, and increasing concentration on a small number of technology providers. Existing regulatory frameworks, including the Federal Reserve’s SR 11-7, the UK Prudential Regulation Authority’s SS1/23, the EU’s Digital Operational Resilience Act (DORA), and the Monetary Authority of Singapore’s Technology Risk Management (TRM) Guidelines, establish that institutions retain responsibility for the governance and risk management of third-party models [1–4]. However, these frameworks are intentionally principle-based and provide limited operational guidance on how to govern, validate, and evidence effective challenge over opaque, externally operated models in practice.
The healthcare industry is going through major changes as organizations start combining business process tools with artificial intelligence (AI). This paper looks at how the Appian platform helps manage and automate important tasks in healthcare settings. Appian brings together low-code development tools, connected data systems, and smart document processing. These features help hospitals and healthcare providers deal with common problems such as scattered data, too much paperwork, and staff burnout. By connecting systems and automating routine work, healthcare teams can save time and focus more on patient care. The paper also explains how AI can support tasks like patient triage, where cases are prioritized based on urgency, and revenue cycle management, which handles billing and payments. It discusses how healthcare systems use standard technologies like HL7 FHIR® to make sure different platforms can share information smoothly. Furthermore, the paper outlines five key AI design approaches that help organizations use AI responsibly. These approaches make sure patient data stays protected and that systems follow regulations such as HIPAA. Overall, the study shows that combining AI with business process management helps healthcare organizations move from simply reacting to problems toward planning ahead and delivering more patient-centered care.
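Since the interoperability claim rests on HL7 FHIR®, a concrete resource makes it tangible: the sketch below constructs an illustrative FHIR R4 ServiceRequest carrying a triage priority (FHIR defines the priority codes routine, urgent, asap, and stat). All identifiers and clinical details are invented, and the human-in-the-loop note reflects the paper's responsible-AI design stance rather than any Appian-specific API.

```python
import json

# Illustrative FHIR R4 ServiceRequest for AI-assisted triage.
# FHIR's defined priority codes: routine | urgent | asap | stat.
# All identifiers and references below are invented placeholders.
triage_request = {
    "resourceType": "ServiceRequest",
    "status": "active",
    "intent": "order",
    "priority": "urgent",  # e.g. suggested by a triage model, reviewed by staff
    "subject": {"reference": "Patient/example-123"},
    "reasonCode": [{"text": "Chest pain, onset 2 hours ago"}],
    "note": [{"text": "Priority suggested by triage model v1.2; "
                      "pending clinician confirmation (human-in-the-loop)."}],
}

print(json.dumps(triage_request, indent=2))
```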
Digital advertising ecosystems increasingly rely on large-scale artificial intelligence infrastructures that personalize marketing messages, optimize bidding strategies, and allocate attention across millions of users and advertisers. Traditional advertising architectures depend heavily on centralized data aggregation, where behavioral logs from multiple platforms are combined to train large predictive models. While this approach enables highly accurate personalization, it also raises significant concerns related to privacy protection, regulatory compliance, data governance, and systemic concentration of informational power. As privacy regulations expand globally and user expectations regarding data protection intensify, the advertising industry faces increasing pressure to develop new system architectures capable of preserving personalization capabilities while minimizing direct data collection and centralized storage. This paper proposes a privacy-aware advertising framework based on federated learning for cross-platform personalization. Rather than treating federated learning solely as a distributed optimization technique, the framework conceptualizes it as a socio-technical infrastructure that redistributes data custody, computational responsibilities, and governance accountability across multiple actors in the advertising ecosystem. The study examines how decentralized model training can enable collaborative personalization across advertisers, publishers, and devices without requiring raw behavioral data to leave local environments. Particular attention is given to system-level design challenges including heterogeneous data distributions, delayed feedback signals, adversarial manipulation risks, fairness constraints, and cross-jurisdictional regulatory compliance. The paper develops a multi-layer architectural model integrating local representation learning, secure aggregation protocols, differential privacy mechanisms, and policy-aware governance structures. It further explores the implications of federated advertising systems for market competition, algorithmic fairness, and institutional accountability. The analysis demonstrates that federated learning can significantly reduce centralized data risks while maintaining effective personalization performance when combined with robust coordination protocols and transparent governance frameworks. The paper concludes that privacy-aware federated infrastructures represent a promising direction for the future evolution of digital advertising ecosystems.
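The architectural core of such a framework, local training with only aggregated updates ever leaving the device, can be sketched compactly. The following numpy toy runs federated averaging over synthetic, non-IID client data, with per-client gradient clipping and Gaussian noise at aggregation standing in for the secure-aggregation and differential-privacy layers; it is a minimal illustration of the technique, not the paper's system.

```python
import numpy as np

rng = np.random.default_rng(42)
DIM, CLIENTS, ROUNDS = 8, 5, 30
CLIP, NOISE_STD, LR = 1.0, 0.05, 0.5

# Synthetic, heterogeneous (non-IID) per-client click data.
true_w = rng.normal(size=DIM)
clients = []
for _ in range(CLIENTS):
    X = rng.normal(loc=rng.normal(), size=(200, DIM))
    y = (X @ true_w + rng.normal(size=200) > 0).astype(float)
    clients.append((X, y))

def local_gradient(w, X, y):
    """Logistic-regression gradient computed on-device; raw data never leaves."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return X.T @ (p - y) / len(y)

w = np.zeros(DIM)
for _ in range(ROUNDS):
    updates = []
    for X, y in clients:
        g = local_gradient(w, X, y)
        g *= min(1.0, CLIP / (np.linalg.norm(g) + 1e-12))  # per-client clipping
        updates.append(g)
    # The server only ever sees a noisy aggregate -- a stand-in for
    # secure aggregation plus differential-privacy noise.
    agg = np.mean(updates, axis=0)
    agg += rng.normal(scale=NOISE_STD * CLIP / CLIENTS, size=DIM)
    w -= LR * agg

cos = w @ true_w / (np.linalg.norm(w) * np.linalg.norm(true_w))
print(f"cosine similarity with ground-truth weights: {cos:.2f}")
```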
No abstract available
The advancement of artificial intelligence (AI) offers new opportunities for business model innovation in digital platform enterprises. Despite growing interest in AI applications, the specific mechanisms through which digital platform firms leverage AI to drive business model innovation remain insufficiently explored, particularly from the integrated perspective of resource mobilization and organizational capability reconfiguration. To address this gap, this study conducts a single-case analysis of a highly successful digital platform enterprise in China. This study explores how digital platform enterprises can effectively utilize AI technologies to support business model innovation. The findings reveal that AI technologies enable digital platform enterprises to develop organizational capabilities in intelligent connectivity, intelligent development, and intelligent governance. AI-enabled organizational capabilities in digital platform enterprises evolve through three progressive stages: AI-assisted, AI-augmented, and AI-integrated. At each stage, these capabilities are shaped through different types of resource actions—namely, entry-oriented resource patchwork, depth-oriented resource arrangements, and coordination-oriented resource orchestration. This study offers practical insights for digital platform enterprises seeking to leverage AI technologies for business model innovation. By integrating the concepts of resource actions and organizational capabilities, it provides a dynamic explanation of how AI drives innovation in digital platform business models. The research contributes to the theoretical advancement of human-AI integration and resource action frameworks, offering actionable intelligence for the broader industry.
The increasing reliance on third-party vendors for payment processing, cloud services, and digital operations has significantly expanded organizational exposure to cyber, financial, and regulatory risks. This study presents an Automated Third-Party Risk Management (TPRM) platform that integrates artificial intelligence–driven vendor scoring with dynamic Payment Card Industry Data Security Standard (PCI DSS) compliance mapping to enhance risk visibility, assessment accuracy, and governance efficiency. The platform is designed to replace manual, questionnaire-heavy vendor reviews with continuous, data-driven risk intelligence across the vendor lifecycle. The proposed system employs machine learning algorithms to aggregate and analyze multi-source vendor data, including security posture indicators, historical incidents, compliance attestations, financial stability signals, and operational dependencies. These inputs are processed through weighted risk models to generate real-time vendor risk scores that adapt as conditions change. Natural language processing is applied to vendor documentation, audit reports, and contractual clauses to identify latent control gaps and emerging compliance concerns. Explainable AI techniques are incorporated to ensure transparency and regulatory defensibility of scoring outcomes. A core innovation of the platform is automated PCI DSS compliance mapping, which aligns vendor controls and evidence directly to applicable PCI DSS requirements. The system continuously tracks compliance status, highlights deviations, and quantifies residual risk associated with each vendor’s role in the cardholder data environment. This enables organizations to prioritize remediation efforts, enforce proportionate controls, and demonstrate compliance readiness during audits. The platform architecture supports workflow automation, risk escalation triggers, and executive-level dashboards, enabling informed decision-making by risk, compliance, and procurement stakeholders. By unifying vendor risk scoring and PCI DSS control mapping within a single intelligent platform, the solution reduces assessment fatigue, improves response times, and strengthens organizational resilience. The study concludes that AI-enabled TPRM platforms represent a scalable and proactive approach to managing third-party risk in increasingly complex and regulated digital ecosystems. The findings provide practical implications for financial institutions, merchants, and service providers seeking to operationalize continuous compliance, reduce audit costs, and align third-party governance with evolving cybersecurity regulations while maintaining trust, transparency, and accountability across interconnected payment and digital service ecosystems.
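At its core, the weighted risk model described above is a normalized score over heterogeneous signals with policy thresholds on top. The sketch below shows one hedged way to compose such a score; the signal names, weights, and escalation thresholds are all invented for illustration and are not the platform's actual model.

```python
from dataclasses import dataclass

# Hypothetical signal weights; a real platform would calibrate these
# against incident history and regulatory materiality.
WEIGHTS = {
    "security_posture": 0.30,        # e.g. external scan rating, 0 (bad) .. 1 (good)
    "incident_history": 0.25,        # decays with time since last incident
    "compliance_attestation": 0.20,  # PCI DSS AOC freshness / scope coverage
    "financial_stability": 0.15,
    "operational_dependency": 0.10,  # criticality of the vendor's role
}

@dataclass
class Vendor:
    name: str
    signals: dict  # signal name -> value in [0, 1], higher = lower risk

def risk_score(v: Vendor) -> float:
    """0 = lowest risk, 100 = highest. Missing signals count as worst case."""
    goodness = sum(w * v.signals.get(k, 0.0) for k, w in WEIGHTS.items())
    return round(100 * (1 - goodness), 1)

acme = Vendor("AcmePay", {"security_posture": 0.8, "incident_history": 0.6,
                          "compliance_attestation": 0.9, "financial_stability": 0.7,
                          "operational_dependency": 0.3})
score = risk_score(acme)
tier = "escalate" if score > 60 else "monitor" if score > 30 else "accept"
print(acme.name, score, tier)
```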
Platform employment (PE) is a new segment of the labor market in which traditional labor relations are giving way to flexible, short-term contracts. It is characterized by non-traditional tripartite labor relations (employee, client, and digital platform). The appeal of PE rests on its ability to reduce transaction costs, create conditions of relative independence for the contractor from the employer, and minimize entry requirements to the labor market. Digital platforms are shaping a new type of governance (algorithmic management) using Artificial Intelligence (AI) technologies: algorithms, rules, instructions, ratings (rankings), and a special control mechanism. In assessing the social significance of algorithmic management, particular attention is paid to the conflict between its technological and humanistic dimensions. An analysis of pilot surveys (2023–2025) of delivery couriers revealed the main barriers and violations couriers face, as well as the practices they use to maintain a balance between the effectiveness of algorithms and human participation, data protection, interaction, and trust. Young people seek autonomy and self-expression, along with independence in earning income and managing their free time, which blurs the boundaries between work and leisure. Courier services are often drawn into informal employment and exposed to criminal activity, prompting state and city authorities to take concrete steps toward legislative and regulatory oversight of the platform economy. Despite the high level of youth involvement in the courier industry, it remains uncertain whether courier services will help address the labor shortage in the traditional labor market.
Tourism applications are no longer limited to providing booking services; they must serve as intelligent companions for tourists that guide, personalize, and build trust. This paper introduces Yatra360, a tourism platform designed for India that blends AI-powered personalization and cultural intelligence, underpinned by privacy-aware governance. Using a structured review of academic work from 2019–2025, the study synthesizes contributions from both international research and Indian scholars in areas such as usability, recommendation systems, multilingual integration, and tourism marketing. These insights are translated into a conceptual framework for Yatra360, with design implications addressing accessibility-first principles, explainable recommendations, multilingual pipelines, cultural representation, and trust-building through real-time alerts. By combining local and global perspectives, Yatra360 is positioned as a culturally grounded and technically robust model for the next generation of tourism platforms.
In today’s enterprise environment, Learning Management Systems (LMS) must evolve from static repositories into intelligent platforms that enable dynamic, individualised development. This study presents a scalable, ethically governed AI framework integrated into SAP SuccessFactors LMS to enable hyper-personalised learning journeys that align employee development with strategic business goals. By leveraging SAP Business Technology Platform (BTP), SAP AI Core, the Learning Recommendation Service, and the Talent Intelligence Hub, the proposed solution delivers real-time, AI-powered learning content recommendations tailored to user behaviour, competency gaps, and aspirational roles. Using a mixed-methods approach that includes SAP architecture modelling, semi-structured expert interviews, and simulation with synthetic workforce data, the framework demonstrates improvements in content relevance, time-to-skill efficiency, and internal mobility outcomes. Key findings reveal that personalised learning journeys not only increase learner engagement and course completion rates but also support organisational agility by enabling cross-functional upskilling and career pathing. The model also incorporates responsible AI practices, including explainable recommendations, opt-in personalisation, and fairness-aware logic to ensure transparency and trust. This research fills a critical gap in enterprise learning analytics by offering a validated, SAP-native blueprint for embedding intelligent personalisation within LMS environments. The implications reinforce the strategic role of learning in enabling adaptive, future-ready workforces while delivering measurable business alignment.
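The SAP services named above are proprietary, but the recommendation logic the framework describes, ranking content by its coverage of the learner's competency gap toward an aspirational role, with an explanation attached and personalisation gated on opt-in, can be sketched generically. Every field name, level, and course below is hypothetical.

```python
# Hypothetical, explainable course-ranking sketch: score = overlap between
# a course's competency gains and the learner's gap toward a target role.
learner = {
    "current": {"sql": 3, "python": 2, "stakeholder_mgmt": 4},
    "target_role": {"sql": 4, "python": 4, "ml_basics": 3, "stakeholder_mgmt": 4},
    "opted_in": True,  # opt-in personalisation, per the framework
}
courses = {
    "Intro to ML": {"ml_basics": 2},
    "Advanced Python": {"python": 2},
    "Negotiation Skills": {"stakeholder_mgmt": 1},
}

def gap(learner):
    cur, tgt = learner["current"], learner["target_role"]
    return {k: max(0, v - cur.get(k, 0)) for k, v in tgt.items()}

def recommend(learner, courses):
    if not learner["opted_in"]:
        return []  # no personalisation without consent
    g = gap(learner)
    scored = []
    for name, gains in courses.items():
        covered = {k: min(gains[k], g.get(k, 0)) for k in gains}
        score = sum(covered.values())
        # Explanation string supports the framework's transparency goal.
        why = ", ".join(f"closes {v} level(s) of {k}" for k, v in covered.items() if v)
        scored.append((score, name, why or "no gap coverage"))
    return sorted(scored, reverse=True)

for score, name, why in recommend(learner, courses):
    print(f"{score}  {name}: {why}")
```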
No abstract available
This paper examines the assessment challenges of Responsible AI (RAI) governance efforts in globally decentralized organizations through a case study collaboration between a leading research university and a multinational enterprise. While there are many proposed frameworks for RAI, their application in complex organizational settings with distributed decision-making authority remains underexplored. Our RAI assessment, conducted across multiple business units and AI use cases, reveals four key patterns that shape RAI implementation: (1) complex interplay between group-level guidance and local interpretation, (2) challenges translating abstract principles into operational practices, (3) regional and functional variation in implementation approaches, and (4) inconsistent accountability in risk oversight. Based on these findings, we propose an Adaptive RAI Governance (ARGO) Framework that balances central coordination with local autonomy through three interdependent layers: shared foundation standards, central advisory resources, and contextual local implementation. We contribute insights from academic-industry collaboration for RAI assessments, highlighting the importance of modular governance approaches that accommodate organizational complexity while maintaining alignment with responsible AI principles. These lessons offer practical guidance for organizations navigating the transition from RAI principles to operational practice within decentralized structures.
The accelerated progress of Artificial Intelligence (AI) within the accounting field has resulted in heightened use of this technology in international enterprises, generating noteworthy ethical concerns. This research investigates the ethical implications that arise from the use of AI in accounting practices, focusing on international corporations operating in Jordan. The objective is to provide a comprehensive framework for the ethical and responsible integration of AI within the accounting domain. The research used a survey-based approach, with 379 respondents selected through cluster and proportional sampling. The qualitative component investigates participants' viewpoints and concerns regarding the use of AI. The results contribute significantly to the development of a context-specific paradigm for AI ethics that prioritizes transparency, fairness, and accountability. The findings have substantial value for multinational corporations operating in Jordan and similar regions, equipping organizations with the tools to address the ethical dilemmas that emerge from using artificial intelligence in accounting procedures.
Responsible AI in Education: A Cross-Cultural Business Model and Ethical Framework for K-12 Adoption
The rapid integration of Artificial Intelligence (AI) into K-12 education has outpaced the development of essential governance, creating significant risks in data privacy, algorithmic bias, and equitable access. This study addresses the critical lack of a holistic business model that aligns ethical imperatives with sustainable school operations. Using a qualitative multiple-case study across five schools in the United Arab Emirates and India, this research investigates AI adoption in contrasting technological and regulatory contexts. It is guided by a novel socio-technical framework: the People-Process-Platform-Purpose (4P) model. The primary outcomes are two-fold: a validated, cross-cultural business model presented as a modular toolkit, and a functional prototype of a Student Early Support System (SESS). The SESS is a generative AI application designed for proactive, non-punitive student welfare interventions. This research contributes to educational technology and business ethics by providing a prescriptive, evidence-based roadmap for institutionalizing Responsible AI (RAI). The findings offer schools a practical method to mitigate risk, safeguard student welfare, and harness pedagogical innovation effectively.
Financial institutions face escalating regulatory scrutiny and stakeholder pressure to ensure their automated decision systems—including artificial intelligence and algorithmic systems—operate fairly and equitably across all demographic groups. This paper introduces the Transparent Equity Framework (TEF), a comprehensive governance methodology designed to embed fairness principles throughout the entire lifecycle of financial risk management systems, from initial concept through production deployment and continuous monitoring. Unlike retrospective bias-checking approaches, TEF integrates equity considerations proactively at each stage, creating verifiable audit trails and establishing clear organizational accountability structures. The framework provides concrete implementation guidance including quantitative fairness thresholds, statistical validation methodologies, documentation standards, governance structures, and escalation procedures. TEF has been explicitly designed to align with Equal Credit Opportunity Act (ECOA), Fair Credit Reporting Act (FCRA), and Fair Housing Act requirements while addressing emerging regulatory guidance on algorithmic accountability from the Consumer Financial Protection Bureau (CFPB), Federal Reserve, and Office of the Comptroller of the Currency (OCC). The methodology emphasizes the inseparability of technical validation and organizational governance, recognizing that effective fairness assurance requires both sophisticated monitoring procedures and clear accountability mechanisms. Financial institutions implementing this framework can demonstrate to regulators, auditors, and stakeholders that their automated decision-making systems produce equitable outcomes while maintaining operational efficiency and regulatory compliance.
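TEF's quantitative fairness thresholds are not spelled out in the abstract, but a representative check in US fair-lending practice is the four-fifths (80%) rule on selection-rate parity. The sketch below computes that adverse-impact ratio on synthetic decisions; the threshold and group labels are conventional illustrations, not TEF's actual specification.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic lending decisions: 1 = approved, 0 = denied.
groups = np.array(["A"] * 500 + ["B"] * 500)
approved = np.concatenate([rng.random(500) < 0.62, rng.random(500) < 0.48])

def adverse_impact_ratio(groups, approved, reference="A"):
    """Selection-rate ratio of each group relative to the reference group."""
    rates = {g: approved[groups == g].mean() for g in np.unique(groups)}
    return {g: r / rates[reference] for g, r in rates.items()}, rates

ratios, rates = adverse_impact_ratio(groups, approved)
for g in ratios:
    flag = "REVIEW" if ratios[g] < 0.80 else "ok"  # four-fifths rule
    print(f"group {g}: rate={rates[g]:.2f} ratio={ratios[g]:.2f} [{flag}]")
```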
Small and Medium Enterprises (SMEs) are increasingly recognizing the potential of Artificial Intelligence (AI) to boost efficiency and innovation. However, responsible AI adoption remains a challenge due to limited resources and technical expertise. This research paper addresses this gap by proposing a novel, human-in-the-loop (HiTL) framework specifically designed for SMEs. It builds upon existing literature on AI governance, identifies key challenges faced by SMEs, and presents a practical framework with tangible outputs that can be readily implemented.
This study explores barriers to AI adoption in automated organizational decision-making. Through qualitative interviews with 13 senior managers in South Africa, the study identified human social dynamics, restrictive regulations, creative work environments, lack of trust, dynamic business environments, loss of power, and ethical considerations as the main barriers. The study applied the adaptive structuration theory (AST) model to AI decision-making adoption, providing recommendations to overcome these barriers. AST offers a deeper understanding of the dynamic interaction between technological and social dimensions.
No abstract available
No abstract available
There is still a significant gap between expectations and the successful adoption of AI to innovate and improve businesses. With the emergence of deep learning, AI adoption has become more complex as it often incorporates big data and the Internet of Things (IoT), affecting data privacy. Existing frameworks have identified the need to focus on human-centered design, combining technical and business/organizational perspectives. However, trust remains a critical issue that must be designed in from the beginning. The proposed framework is the first to expand on the human-centered design approach by emphasizing and maintaining the trust that underpins the whole process. This paper proposes a new theoretical framework for responsible artificial intelligence (AI) implementation. The proposed framework emphasizes a synergistic business-technology approach for the agile co-creation process. The aim is to streamline the adoption of AI to innovate and improve business by involving all stakeholders throughout the project, so that the AI technology is designed, developed, and deployed in conjunction with people rather than in isolation. The framework presents a fresh viewpoint on responsible AI implementation based on an analytical literature review, conceptual framework design, and practitioners' mediating expertise. It emphasizes establishing and maintaining trust throughout the human-centered design and agile development of AI. This human-centered approach is aligned with and enabled by the "privacy-by-design" principle. The creators of the technology and the end-users work together to tailor the AI solution to the business requirements and human characteristics. An illustrative case study on adopting AI to assist planning in a hospital demonstrates that the proposed framework applies to real-life applications.
Procurement carries legal requirements across public services in the UK but, for stakeholders in clinical Artificial Intelligence (AI) innovation, it is often poorly understood. This perspective piece summarises insights from a cross-sector workshop exploring the role of procurement frameworks in supporting AI innovation in the National Health Service (NHS). The significant characteristics of AI from a procurement perspective are identified and their consequences are explored. The workshop identified challenges including visibility of AI procurement processes, uncertainty in the value in AI products, process inefficiencies, sustainability and framework design. Opportunities relating to AI procurement were also identified. These insights highlight the potential for procurement frameworks to enable responsible AI innovation in healthcare but acknowledge the need for collaborative efforts from a range of stakeholders to overcome the difficulties experienced by many to date.
Artificial intelligence (AI) holds tremendous potential but also poses consequential risks. Regulatory frameworks like the EU AI Act aim to mitigate these risks, yet organizations struggle to understand and operationalize Responsible AI (RAI). We introduce the RAI Organizational Maturity (RAI-OM) framework as an initial step towards a RAI maturity model, highlighting the many factors that influence an organization's RAI maturity. Developed through in-depth qualitative interviews and co-design sessions with 90 RAI experts, the RAI-OM framework consists of 24 dimensions grouped into three main categories: Organizational Foundations, Team Approach, and RAI Practices. Our findings also provide further evidence of the interdependent nature of RAI's organizational factors, the importance of collaboration for mature RAI, and the need to start RAI early in the AI lifecycle. Researchers and practitioners can use the RAI-OM framework and our research findings not only to understand the different moving parts in RAI's complex organizational machinery, but also to address organizational barriers to RAI, unpack the different types of collaboration needed for mature RAI, and support RAI's articulation work and process.
As artificial intelligence (AI) systems rapidly gain autonomy, the need for robust responsible AI frameworks becomes paramount. This paper investigates how organizations perceive and adapt such frameworks amidst the emerging landscape of increasingly sophisticated agentic AI. Employing an interpretive qualitative approach, the study explores the lived experiences of AI professionals. Findings highlight that the inherent complexity of agentic AI systems and of their responsible implementation, rooted in the intricate interconnectedness of the responsible AI dimensions captured in the thematic framework (an analytical structure developed from the data), combines with the novelty of agentic AI to create significant challenges in organizational adaptation, characterized by knowledge gaps, limited emphasis on stakeholder engagement, and a strong focus on control. By hindering effective adaptation and implementation, these factors ultimately compromise the potential for responsible AI and the realization of ROI.
As businesses push to monetize AI, from powering social media feeds to driving financial decision tools, they are unlocking major economic opportunities but also surfacing complex ethical dilemmas. This paper explores how organizations can balance profit with principles of fairness, transparency, and accountability. It highlights real-world cases where AI-driven monetization has led to bias, misinformation, and loss of trust, underscoring the risks of a short-term, revenue-first approach. To address this, we introduce the Responsible AI Monetization (RAIM) framework, a practical guide designed to help businesses innovate while avoiding ethical pitfalls. The findings suggest that embedding ethics into AI strategy not only safeguards against regulatory and reputational risks but also builds consumer confidence and supports sustainable growth. Through case studies and actionable guidance, the paper outlines how enterprises can create profitable AI solutions while ensuring that long-term value and responsibility remain at the core.
Purpose: This article introduces the Responsible AI for Service Excellence (RAISE) framework. RAISE is a strategic framework for responsibly integrating AI into service industries. It emphasizes collaborative AI design and deployment that aligns with evolving global standards and societal well-being while promoting business success and sustainable development. Design/methodology/approach: This multidisciplinary conceptual article draws upon the United Nations' Sustainable Development Goals (SDGs) and AI ethics guidelines to lay out three principles for practicing RAISE: (1) embrace AI to serve the greater good, (2) design and deploy responsible AI, and (3) practice transformative collaboration with different service organizations to implement responsible AI. Findings: By acknowledging the potential risks and challenges associated with AI usage, this article provides practical recommendations for service entities (i.e. service organizations, policymakers, AI developers, customers and researchers) to strengthen their commitment to responsible and sustainable service practices. Originality/value: This is the first service research article to discuss and provide specific practices for leveraging responsible AI for service excellence.
This paper explores the complex ethical dilemmas associated with AI-driven decision-making, providing a robust framework for the responsible and transparent use of AI. Through comprehensive case studies, it investigates the practical implementations of techniques and best practices for tackling significant concerns such as deepfake manipulation, algorithmic bias, fairness, accountability, transparency, and data protection. These case studies elucidate how firms effectively execute ethical AI governance, emphasizing actionable strategies for risk mitigation and trust enhancement. The study highlights the essential function of corporate leadership in fostering ethical AI cultures and offers evidence-based recommendations for companies operating in this evolving environment. This work addresses significant gaps in existing research, therefore enhancing academic debate and outlining a prospective direction for future study. Ultimately, it enables stakeholders to develop and execute AI systems that protect human values, enhance societal trust, and foster sustainable innovation.
Building on the findings of Obioha-Val (2024), which reveal both the transformative potential of AI and the risks associated with unregulated or opaque implementation, this study investigates the responsible deployment of AI-driven cybersecurity systems in e-commerce by examining structural, ethical, and governance challenges. Using four open-source datasets, including the IEEE-CIS Fraud Detection Dataset, OECD AI Readiness indicators, the Stanford AI Governance Index, and the Global AI Ethics Guidelines Dataset, the study applies Principal Component Analysis, Logistic Regression, Weighted Scoring Models, and K-means clustering to evaluate adoption barriers and framework adaptability. Results show that only 40% of e-commerce firms are AI integration-ready, with SMEs particularly hindered by outdated infrastructure and limited workforce capacity. Algorithmic fairness testing revealed zero transaction flags under the applied threshold, raising concerns of underfitting and potential hidden biases. Ethical risks such as privacy violations, consent ambiguity, and algorithmic discrimination, particularly in pricing and service delivery, are highlighted as critical threats. Governance analysis ranked the UK highest (8.00/10), while 95% of firms lacked formal AI oversight structures. Cluster analysis indicated that only 30% of international AI frameworks sufficiently incorporate operational principles like security and human oversight. This study adapts the Obioha-Val framework originally applied in U.S. public school systems to the commercial e-commerce context, offering a recalibrated, sector-specific model of responsible AI governance. Recommendations include developing AI-specific cybersecurity protocols, integrating fairness auditing tools, harmonizing global standards, and investing in infrastructure and AI literacy for SMEs.
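The study's analytical pipeline (dimensionality reduction, readiness classification, and clustering of governance frameworks) maps directly onto standard scikit-learn components. The sketch below reproduces only the shape of that pipeline on random data; it uses none of the study's datasets and implies nothing about its results.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 12))              # e.g. firm-level readiness indicators
y = (X[:, :3].sum(axis=1) > 0).astype(int)  # synthetic "integration-ready" label

# Readiness classification: standardize -> PCA -> logistic regression.
clf = make_pipeline(StandardScaler(), PCA(n_components=5), LogisticRegression())
clf.fit(X, y)
print("readiness accuracy (in-sample):", clf.score(X, y).round(2))

# Clustering frameworks by how well they cover operational principles.
frameworks = rng.random((40, 6))            # e.g. coverage scores per principle
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(frameworks)
print("framework cluster sizes:", np.bincount(labels))
```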
This article presents a comprehensive framework for implementing responsible artificial intelligence in revenue lifecycle automation systems. As organizations increasingly deploy AI to enhance revenue operations through contract analysis, pricing optimization, and approval workflows, they face complex ethical considerations and compliance challenges. The framework addresses these challenges through five interconnected domains: fairness in algorithmic decision-making, explainability and transparency, data governance and privacy, human-in-the-loop controls, and compliance and auditability. Drawing from real-world implementations across financial services, technology, and regulated industries, the article outlines practical design patterns that balance innovation with ethical considerations. Case studies demonstrate how organizations have successfully applied these principles to contract intelligence and dynamic pricing systems, achieving both business value and ethical implementation. The article provides a phased implementation roadmap and explores current challenges and future research directions. By embedding responsible AI principles into revenue operations, organizations can mitigate risks while maximizing business value, ensuring systems operate equitably, transparently, and in alignment with organizational values and regulatory requirements.
Artificial intelligence (AI) has enhanced efficiency, scalability, and personalization in FinTech applications such as Know-Your-Customer (KYC) and credit decisioning. However, reliance on complex models introduces risks related to bias, opacity, and regulatory gaps. This paper presents an integrative review of 25 academic articles published between 2015 and 2025, synthesizing current knowledge on fairness, explainability, and governance (FEG) in MLOps pipelines. The review’s main contribution is a conceptual framework that highlights persistent implementation gaps and the limited operational integration of FEG principles in practice. While fairness interventions, explainability mechanisms, and governance structures can be embedded throughout the MLOps lifecycle, empirical evidence from production-grade deployments remains sparse. Key recommendations include monitoring bias across multiple stages, applying XAI tools such as SHAP, LIME, and counterfactuals, and strengthening governance through automated dashboards and audit trails. Overall, the findings emphasize that responsible AI must function not only as an ethical aspiration but as an operational imperative that fosters transparency and trust among stakeholders.
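Among the XAI tools the review recommends, SHAP is the one most often wired into MLOps pipelines. A minimal usage sketch on a synthetic credit-decisioning model follows; the feature names and data are invented, the `shap` package is assumed to be installed, and the handling of its version-dependent return shape is defensive rather than canonical.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X = pd.DataFrame(rng.normal(size=(500, 4)),
                 columns=["income", "debt_ratio", "tenure", "inquiries"])
y = (X["income"] - X["debt_ratio"] + rng.normal(size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Per-decision attributions that can be logged next to each prediction,
# supporting the audit trails the review calls for.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X.iloc[:5])
# shap's return shape varies by version: a per-class list or a 3-D array.
if isinstance(sv, list):
    sv = sv[1]
elif getattr(sv, "ndim", 2) == 3:
    sv = sv[..., 1]
print(np.round(sv, 3))  # per-feature contribution toward the positive class
```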
Artificial Intelligence (AI) holds great potential for enhancing Risk Management (RM) through automated data integration and analysis. While the positive impact of AI in RM is acknowledged, concerns are rising about unintended consequences. This study explores factors like opacity, technology and security risks, revealing potential operational inefficiencies and inaccurate risk assessments. Through archival research and stakeholder interviews, including chief risk officers and credit managers, findings highlight the risks stemming from the absence of AI regulations, operational opacity, and information overload. These risks encompass cybersecurity threats, data manipulation uncertainties, monitoring challenges, and biases in algorithms. The study emphasizes the need for a responsible AI framework to address these emerging risks and enhance the effectiveness of RM processes. By advocating for such a framework, the authors provide practical insights for risk managers and identify avenues for future research in this evolving field.
The proliferation of artificial intelligence (AI) technologies, such as ChatGPT, is revolutionizing society, significantly impacting the economy, and reshaping how work is conducted. Effective and prudent leadership is paramount for society to fully benefit from the responsible adoption of AI within organizations. To this end, we employ the Congruence Model for Organizational Change, which provides a comprehensive framework for effective leadership to ensure that the integration of new technologies yields societal benefits. Our model delineates three core goals for responsible AI adoption: organizational well-being, societal well-being, and ecological well-being. Within the framework of the Congruence Model, we examine how four critical elements—task engineering, informal organization, formal organization, and individual empowerment—should be strategically aligned to achieve these goals. This alignment ensures that AI adoption not only enhances organizational performance but also contributes positively to society and the natural environment. Finally, implications for future research and ethical leadership practices are discussed, highlighting the need for continued exploration of the ethical dimensions of AI and the development of leadership strategies that support sustainable and responsible technology adoption.
There is a growing debate amongst academics and practitioners on whether interventions made, thus far, towards Responsible AI have been enough to engage with the root causes of AI problems. Failure to effect meaningful changes in this system could see these initiatives not reach their potential and lead to the concept becoming another buzzword for companies to use in their marketing campaigns. Systems thinking is often touted as a methodology to manage and effect change; however, there is little practical advice available for decision-makers to include systems thinking insights to work towards Responsible AI. Using the notion of ‘leverage zones’ adapted from the systems thinking literature, we suggest a novel approach to plan for and experiment with potential initiatives and interventions. This paper presents a conceptual framework called the Five Ps to help practitioners construct and identify holistic interventions that may work towards Responsible AI, from lower-order interventions such as short-term fixes, tweaking algorithms and updating parameters, through to higher-order interventions such as redefining the system’s foundational structures that govern those parameters, or challenging the underlying purpose upon which those structures are built and developed in the first place. Finally, we reflect on the framework as a scaffold for transdisciplinary question-asking to improve outcomes towards Responsible AI.
The accelerating integration of Artificial Intelligence (AI) into internal auditing is reshaping assurance practices through continuous auditing, advanced analytics, and predictive risk assessment. While AI enhances audit efficiency, coverage, and timeliness, it also introduces significant ethical, accountability, and governance challenges, including algorithmic opacity, data bias, privacy risks, and the potential erosion of professional judgment. Addressing these issues is essential for preserving audit integrity and stakeholder trust in increasingly automated audit environments. This study adopts a conceptual and theory-driven approach, drawing on Stakeholder Theory, Ethical Decision-Making Theory, and Technology Governance Theory. Through a systematic synthesis of academic literature, professional auditing standards, and global ethical AI guidelines, the paper develops the Ethical AI Audit Framework (EAAF). The framework embeds core ethical principles—transparency, fairness, accountability, explainability, privacy, and integrity—across the internal audit lifecycle, from planning to follow-up. The EAAF emphasizes “human-in-command” oversight, highlights governance enablers such as ethical review mechanisms and bias audits, and identifies auditor competence as central to ethical AI outcomes. This study contributes a domain-specific ethical governance model to support responsible, transparent, and trustworthy AI-enabled internal auditing.
The rapid evolution of workforce technologies has accelerated the need for intelligent, ethical, and scalable approaches to job architecture design. This research presents a governance-oriented framework for integrating Generative Artificial Intelligence (AI) into the SAP SuccessFactors ecosystem to automate and optimise job description creation. The proposed model employs Natural Language Generation (NLG) within the Job Profile Builder and Talent Intelligence Hub to produce competency-driven, bias-mitigated, and contextually adaptive job content. A mixed-methods design was applied, encompassing system prototyping, linguistic analysis, and expert validation across global HR and compliance teams. Empirical evaluation demonstrates that AI-generated job profiles achieve higher accuracy in competency alignment, a 38% reduction in biased or exclusionary language, and a 70% improvement in content development speed compared to traditional authoring methods. Beyond operational gains, the research highlights the importance of human-in-the-loop governance, explainable AI logic, and transparent ethical checkpoints to sustain trust and regulatory compliance. By framing Generative AI as a co-creative partner within SAP SuccessFactors rather than a fully autonomous author, this study establishes a replicable blueprint for responsible AI deployment in Human Capital Management (HCM). The findings position responsible job design as a critical enabler of digital transformation, workforce inclusivity, and organisational resilience in the era of intelligent enterprise systems.
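The reported 38% reduction in biased or exclusionary language presupposes some detector of such phrasing. One transparent baseline, not necessarily the study's method, is wordlist-based flagging of gender-coded and exclusionary terms, sketched below with deliberately tiny invented lexicons.

```python
import re

# Tiny illustrative lexicons; production systems would use validated,
# much larger lists plus contextual language models.
MASCULINE_CODED = {"ninja", "rockstar", "dominant", "aggressive", "fearless"}
EXCLUSIONARY = {"young", "recent graduate", "able-bodied"}

def flag_language(text: str) -> dict:
    tokens = set(re.findall(r"[a-z\-]+", text.lower()))
    phrases = text.lower()
    return {
        "masculine_coded": sorted(tokens & MASCULINE_CODED),
        "exclusionary": sorted(t for t in EXCLUSIONARY if t in phrases),
    }

draft = ("We need an aggressive sales ninja, ideally a recent graduate, "
         "to dominate the regional market.")
print(flag_language(draft))
# A human-in-the-loop reviewer would see these flags before publishing.
```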
No abstract available
No abstract available
The rapid proliferation of artificial intelligence across organizational contexts has generated profound strategic opportunities while introducing significant ethical and operational risks. Despite growing scholarly attention to responsible AI, the extant literature remains fragmented, often adopting either an optimistic stance emphasizing value creation or an excessively cautious perspective fixated on potential harms. This paper addresses this gap by presenting a comprehensive examination of AI's dual nature through the lens of strategic information systems. Drawing upon a systematic synthesis of the responsible AI literature and grounded in paradox theory, we develop the Paradox-based Responsible AI Governance (PRAIG) framework, which articulates: (1) the strategic benefits of AI adoption, (2) the inherent risks and unintended consequences, and (3) governance mechanisms that enable organizations to navigate these tensions. Our framework advances theoretical understanding by conceptualizing responsible AI governance as the dynamic management of paradoxical tensions between value creation and risk mitigation. We provide formal propositions demonstrating that trade-off approaches amplify rather than resolve these tensions, and we develop a taxonomy of paradox management strategies with specified contingency conditions. For practitioners, we offer actionable guidance for developing governance structures that neither stifle innovation nor expose organizations to unacceptable risks. The paper concludes with a research agenda for advancing responsible AI governance scholarship.
Prior to adopting Artificial Intelligence (AI), organizations may be required to comply with certain industry standards to ensure customer confidence and the interoperability of their products, which demands resource allocation and designated responsibilities. For Malaysian public offices certified under the Information Security Management System (ISMS), compliance with a new standard in support of Responsible AI would entail further resources and new reporting structures. Hence, this study proposes adapting these organizations' current practices at the early stages of AI adoption. Ten sources, chosen for authenticity, credibility, representativeness, and meaning, provide the basis for the proposals, which cover context establishment, risk identification, risk prioritization, and focus areas for each control in Annex A of ISO/IEC 27001:2022. The results outline key actions to support Responsible AI, with future research focusing on validating this framework in ISMS-certified settings.
Companies have considered adoption of various high-level artificial intelligence (AI) principles for responsible AI, but there is less clarity on how to implement these principles as organizational practices. This paper reviews the principles-to-practices gap. We outline five explanations for this gap ranging from a disciplinary divide to an overabundance of tools. In turn, we argue that an impact assessment framework which is broad, operationalizable, flexible, iterative, guided, and participatory is a promising approach to close the principles-to-practices gap. Finally, to help practitioners with applying these recommendations, we review a case study of AI's use in forest ecosystem restoration, demonstrating how an impact assessment framework can translate into effective and responsible AI practices.
Large and ever-evolving technology companies continue to invest more time and resources in incorporating responsible Artificial Intelligence (AI) into production-ready systems to increase algorithmic accountability. This paper offers a framework for analyzing how organizational culture and structure impact the effectiveness of responsible AI initiatives in practice. We present the results of semi-structured qualitative interviews with practitioners working in industry, investigating common challenges, ethical tensions, and effective enablers for responsible AI initiatives. Focusing on major companies developing or utilizing AI, we map which organizational structures currently support or hinder responsible AI initiatives, what aspirational future processes and structures would best enable effective initiatives, and what key elements comprise the transition from current work practices to that aspirational future.
The integration of Artificial Intelligence (AI) into financial technologies (FinTech) is reshaping the global financial landscape by enhancing efficiency, enabling predictive risk management, and automating regulatory compliance. Despite these advances, the growing reliance on opaque, data-driven algorithms raises fundamental concerns about accountability, fairness, and consumer protection. The absence of transparent mechanisms to explain or audit algorithmic decisions has generated skepticism among regulators and the public, particularly in sensitive areas such as credit scoring, fraud detection, and investment recommendations. This study examines how principles of algorithmic accountability and ethical AI can be systematically embedded into the governance of FinTech systems. It adopts an interdisciplinary approach, drawing on perspectives from computer science, financial regulation, and legal scholarship, to analyze existing ethical frameworks and identify their limitations. The paper proposes a lifecycle governance model that integrates continuous monitoring, bias mitigation, and explainability into the design and deployment of financial algorithms. The framework emphasizes regulatory tools such as adaptive oversight, algorithmic auditing, and regulatory sandboxes, while also highlighting the importance of stakeholder engagement and cross-disciplinary collaboration. By aligning technological innovation with ethical safeguards, the proposed model addresses the challenges of systemic risk, discrimination, and regulatory fragmentation. Ultimately, the study contributes a practical blueprint for balancing innovation with accountability, ensuring that AI in finance evolves in ways that are trustworthy, transparent, and socially responsible.
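Continuous monitoring in a lifecycle governance model of this kind typically includes input-drift checks; the population stability index (PSI) is a common choice in credit scoring. The sketch below computes PSI between a validation-time baseline and a live sample; the 0.10/0.25 alert bands are industry rules of thumb, not thresholds mandated by the paper or any regulator.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline and a live sample."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(5)
baseline = rng.normal(600, 50, 10_000)  # scores at model validation
live = rng.normal(585, 55, 2_000)       # scores in production this month

value = psi(baseline, live)
status = "stable" if value < 0.10 else "investigate" if value < 0.25 else "retrain/review"
print(f"PSI = {value:.3f} -> {status}")
```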
In July 2023, New York City (NYC) implemented the first attempt to create an algorithm auditing regime for commercial machine-learning systems. Local Law 144 (LL 144) requires NYC-based employers using automated employment decision-making tools (AEDTs) in hiring to be subject to annual bias audits by an independent auditor. In this paper, we analyse what lessons can be learned from LL 144 for other national attempts to create algorithm auditing regimes. Using qualitative interviews with 17 experts and practitioners working within the regime, we find LL 144 has failed to create an effective auditing regime: the law fails to clearly define key aspects like AEDTs and what constitutes an independent auditor, leaving auditors, vendors who create AEDTs, and companies using AEDTs to define the law’s practical implementation in ways that fail to protect job applicants. Several factors contribute to this: first, the law was premised on a faulty transparency-driven theory of change that fails to stop biased AEDTs from being used by employers. Second, industry lobbying narrowed the definition of what constitutes an AEDT to the point where most companies considered their tools exempt. Third, auditors face enormous practical and cultural challenges in gaining access to data from employers and vendors building these tools. Fourth, we find wide disagreement over what constitutes a legitimate auditor and identify four different kinds of ‘auditor roles’ that serve different functions and offer different kinds of services. We conclude with four recommendations for policymakers seeking to create similar bias auditing regimes, using clearer definitions and metrics and stronger accountability. By exploring LL 144 through the lens of auditors, our paper advances the evidence base around audit as an accountability mechanism and can provide guidance for policymakers seeking to create similar regimes.
The rapid integration of artificial intelligence (AI) into financial and accounting systems has redefined decision-making processes, creating both opportunities for efficiency and risks related to transparency, bias, and regulatory compliance. Traditional governance mechanisms often lag behind technological innovation, resulting in accountability gaps in algorithmic decision-making. This paper introduces the Artificial Intelligence Governance Framework for Finance (AIGF-F), a control-by-design model aimed at embedding governance, risk management, and ethical oversight directly into AI-driven accounting systems. The framework emphasizes proactive governance through three core dimensions: algorithmic transparency, embedded control mechanisms, and adaptive regulatory alignment. It incorporates auditability features such as algorithmic audit trails, explainability protocols, and fairness metrics to ensure that AI outputs remain accountable to stakeholders. By adopting a control-by-design philosophy, the AIGF-F moves governance from a reactive supervisory function to an integral component of system architecture, minimizing risks before they materialize. The framework also highlights the role of augmented human oversight, ensuring that accountants and auditors remain central in interpreting AI-driven insights and validating ethical boundaries. Furthermore, the model demonstrates how financial institutions can balance innovation with compliance by integrating dynamic monitoring tools and continuous feedback loops that adjust controls in response to evolving data environments and regulatory landscapes. For practitioners, the AIGF-F offers a structured approach to implementing AI responsibly in areas such as financial reporting, auditing, fraud detection, and compliance monitoring. For regulators, it provides a scalable framework for establishing adaptable supervisory structures capable of keeping pace with algorithmic complexity. Ultimately, this study positions AI governance not as a barrier but as an enabler of sustainable financial transformation. By embedding governance into design, the AIGF-F enhances trust, accountability, and resilience in AI-enabled accounting systems, contributing to the broader discourse on ethical and responsible digital finance.
Keywords: Artificial Intelligence Governance, Control-by-Design, Algorithmic Decision-Making, AI in Accounting, Financial Reporting, Auditability, Explainability, Fairness Metrics, Compliance Monitoring, Ethical AI.
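An "algorithmic audit trail" of the kind AIGF-F embeds can be approximated with an append-only, hash-chained log, making tampering with any past decision record detectable on verification. The sketch below shows only the chaining idea; the event fields are illustrative inventions, not the framework's schema.

```python
import hashlib
import json
import time

class AuditTrail:
    """Append-only log where each entry commits to the previous entry's hash."""

    def __init__(self):
        self.entries = []
        self._last_hash = "GENESIS"

    def record(self, event: dict) -> str:
        entry = {"ts": time.time(), "prev": self._last_hash, "event": event}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self.entries.append(entry)
        self._last_hash = digest
        return digest

    def verify(self) -> bool:
        prev = "GENESIS"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True

trail = AuditTrail()
trail.record({"model": "fraud-v3", "input_id": "txn-001", "decision": "flag",
              "explanation": "amount z-score 4.1"})
print("chain intact:", trail.verify())
trail.entries[0]["event"]["decision"] = "pass"  # simulated tampering
print("chain intact after tampering:", trail.verify())
```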
An increasing number of regulations propose ‘AI audits’ as a mechanism for achieving transparency and accountability for artificial intelligence (AI) systems. Despite some converging norms around various forms of AI auditing, auditing for the purpose of compliance and assurance currently lacks agreed-upon practices, procedures, taxonomies, and standards. We propose the ‘criterion audit’ as an operationalizable compliance and assurance external audit framework. We model elements of this approach after financial auditing practices, and argue that AI audits should similarly provide assurance to their stakeholders about AI organizations’ ability to govern their algorithms in ways that mitigate harms and uphold human values. We discuss the necessary conditions for the criterion audit and provide a procedural blueprint for performing an audit engagement in practice. We illustrate how this framework can be adapted to current regulations by deriving the criteria on which ‘bias audits’ can be performed for in-scope hiring algorithms, as required by the recently effective New York City Local Law 144 of 2021. We conclude by offering a critical discussion on the benefits, inherent limitations, and implementation challenges of applying practices of the more mature financial auditing industry to AI auditing where robust guardrails against quality assurance issues are only starting to emerge. Our discussion—informed by experiences in performing these audits in practice—highlights the critical role that an audit ecosystem plays in ensuring the effectiveness of audits.
The proliferation of Artificial Intelligence (AI) in Human Resource Management (HRM) offers significant efficiencies but concurrently introduces critical ethical challenges regarding transparency, fairness, and accountability, thereby impacting sustainable workforce management. The novelty of this study lies in its exploration of strategies for effectively managing these ethical dimensions in AI-driven HRM, with a focus on identifying practical solutions to mitigate algorithmic bias and enhance transparency. The research focuses on identifying ethical issues, such as algorithmic bias and lack of transparency in AI-assisted recruitment, performance evaluation, and overall employee management. Adopting a Systematic Literature Review (SLR) methodology, this paper analyzes recent publications (2019-2024) from Scopus to synthesize existing knowledge on AI ethics in HRM. Key findings reveal a significant lack of standardized best practices and audit trails, with opaque AI-driven decisions often undermining trust and fairness. If not properly designed, AI algorithms can perpetuate biases embedded in historical data, leading to discriminatory outcomes, as evidenced by cases at companies like Amazon and HireVue. The study concludes that organizations must proactively implement robust governance frameworks, including ethics training, the development of transparent and fair-by-design algorithms, regular audits, and mechanisms for human oversight. Integrating frameworks such as Fairness, Accountability, and Transparency (FAT) and Ethical AI by Design is crucial for ensuring that AI applications in HRM are ethically sound, legally compliant, and contribute to sustainable and equitable workforce management.
The auditing profession is currently undergoing its most significant transformation, evolving into Auditor 4.0 through the extensive incorporation of AI and ML. This paper provides a conceptual-exploratory investigation into this paradigm shift, moving the practice from traditional, sample-based, retrospective verification toward full-population testing, continuous auditing, and proactive risk management. Our analysis addresses four critical objectives for navigating the digital age of assurance. First, we examine how AI technologies, such as predictive analytics and continuous monitoring, significantly improve audit quality by broadening the audit scope, making it more efficient, and enhancing the ability to detect material misstatements and fraud (O1). Second, the study examines how the auditor's role has evolved into Auditor 4.0, the "Hybrid Auditor", explaining that new technical skills (such as data science and algorithmic understanding) and a higher level of professional skepticism are needed in AI-enabled environments (O2). Third, we examine the complex world of governance and ethics, focusing on the "black box" problem, algorithmic bias, data integrity, transparency, and accountability (O3). We suggest ways to reduce these risks and ensure AI is used responsibly while upholding the profession's core values of honesty and independence. Finally, we identify the main technical, organizational, cultural, and regulatory obstacles to AI adoption, and suggest structured pathways and implementation frameworks to ensure these tools are successfully integrated into the global profession (O4). This paper provides an essential framework for auditors, companies, and regulators to leverage AI's capabilities while maintaining public confidence in the accuracy and integrity of financial reporting.
The integration of Artificial Intelligence (AI) into core business processes promises significant efficiencies and innovations, but also introduces challenges related to transparency, accountability, and fairness. Effective AI governance is crucial to mitigate risks and build trust among stakeholders. This paper introduces TranspareGov-AI, a novel multi-stakeholder framework designed to enable transparent and auditable algorithmic decision-making within organizational business processes. TranspareGov-AI establishes a structured approach encompassing AI system registration, impact assessment, continuous monitoring with auditable logging (extending concepts like EBBL), stakeholder engagement portals, and independent audit protocols. We detail the framework's architecture, its key components including a Governance Dashboard and Audit Trail Repository, and an implementation within a simulated customer relationship management (CRM) process involving AI-driven lead scoring. Our evaluation demonstrates how TranspareGov-AI facilitates stakeholder understanding, supports compliance with regulatory requirements, and enhances the auditability of AI-driven decisions. The framework provides a practical pathway for organizations to implement responsible AI governance, fostering trust and ensuring alignment with ethical and business objectives.
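The abstract describes the Audit Trail Repository only at the architecture level. One common way to make such decision logs tamper-evident is hash chaining; the sketch below is an illustrative assumption on our part, not the framework's actual schema or its EBBL extension.

import hashlib, json, time

def append_entry(log, decision_record):
    # Chain each entry to the previous entry's hash so edits are detectable.
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "timestamp": time.time(),
        "record": decision_record,  # e.g. model id, inputs digest, score, outcome
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

log = []
append_entry(log, {"model": "lead_scoring_v3", "lead_id": 42, "score": 0.87})
append_entry(log, {"model": "lead_scoring_v3", "lead_id": 43, "score": 0.12})
# Any retroactive edit to an earlier entry breaks every subsequent hash.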
Artificial Intelligence (AI) is transforming internal audit functions in financial institutions, offering capabilities such as predictive analytics, automated risk assessment, and real-time monitoring. In Islamic financial institutions (IFIs), integrating AI presents unique challenges and opportunities, particularly in aligning technological innovations with Shariah governance and ethical principles. Despite growing interest in AI applications, research on integrating AI into internal auditing for IFIs remains limited. This study proposes a conceptual framework illustrating the role of AI in enhancing internal audit effectiveness while ensuring compliance with Maqasid al-Shariah. The framework addresses key opportunities, including improved efficiency, fraud detection, and compliance monitoring, as well as conceptual challenges such as algorithmic bias, data privacy, and ethical accountability. The paper contributes to theory by extending audit and governance literature through the lens of digital transformation in Islamic finance. Practical implications are offered for regulators, Shariah boards, and internal auditors seeking to adopt AI responsibly. Future research directions include empirical validation of the proposed framework and cross-comparative studies with conventional financial institutions.
Algorithmic audits (or ‘AI audits’) are an increasingly popular mechanism for algorithmic accountability; however, they remain poorly defined. Without a clear understanding of audit practices, let alone widely used standards or regulatory guidance, claims that an AI product or system has been audited, whether by first-, second-, or third-party auditors, are difficult to verify and may potentially exacerbate, rather than mitigate, bias and harm. To address this knowledge gap, we provide the first comprehensive field scan of the AI audit ecosystem. We share a catalog of individuals (N=438) and organizations (N=189) who engage in algorithmic audits or whose work is directly relevant to algorithmic audits; conduct an anonymous survey of the group (N=152); and interview industry leaders (N=10). We identify emerging best practices as well as methods and tools that are becoming commonplace, and enumerate common barriers to leveraging algorithmic audits as effective accountability mechanisms. We outline policy recommendations to improve the quality and impact of these audits, and highlight proposals with wide support from algorithmic auditors as well as areas of debate. Our recommendations have implications for lawmakers, regulators, internal company policymakers, and standards-setting bodies, as well as for auditors. They are: 1) require the owners and operators of AI systems to engage in independent algorithmic audits against clearly defined standards; 2) notify individuals when they are subject to algorithmic decision-making systems; 3) mandate disclosure of key components of audit findings for peer review; 4) consider real-world harm in the audit process, including through standardized harm incident reporting and response mechanisms; 5) directly involve the stakeholders most likely to be harmed by AI systems in the algorithmic audit process; and 6) formalize evaluation and, potentially, accreditation of algorithmic auditors.
Artificial intelligence (AI) is rapidly transforming decision-making across various sectors, introducing both opportunities and ethical challenges for leadership. While AI enhances efficiency and innovation, concerns such as algorithmic bias, transparency deficits, and accountability gaps pose significant risks to governance. This study examines these ethical dilemmas through real-world cases, including Amazon’s recruiting tool, Olay’s algorithmic audit, IBM Watson for Oncology, and predictive policing via COMPAS, to assess their impact on leadership frameworks and the necessity for proactive ethical oversight. Through a comprehensive interdisciplinary analysis, this paper explores traditional ethical leadership models alongside emerging AI governance frameworks, notably the Ethical Management of Artificial Intelligence (EMMA) model. By synthesizing research across ethics, psychology, and management, this study demonstrates how leaders must integrate technical expertise with ethical sensitivity to align AI adoption with organizational values and societal expectations. These findings underscore the crucial need for explainable AI (XAI), bias audits, and transparent accountability structures to promote trust in AI systems. To address these challenges, this study recommends a multi-stakeholder approach that prioritizes interdisciplinary collaboration, continuous ethical monitoring, and enforceable AI governance policies. Ethical AI leadership necessitates adaptive oversight to ensure that AI innovation benefits humanity without perpetuating systemic biases or ethical blind spots.
The accelerating adoption of artificial intelligence (AI) in accounting and financial management has introduced both transformative opportunities and complex risks. While AI-driven systems promise enhanced efficiency, predictive analytics, and real-time reporting, they simultaneously create vulnerabilities related to data integrity, model bias, algorithmic opacity, and regulatory compliance. This paper proposes the Digital Finance Transformation Model (DFTM), a conceptual framework designed to integrate risk management and control mechanisms into AI-enabled accounting systems. The model emphasizes a layered architecture of governance, technological safeguards, and adaptive controls that align financial innovation with trust and accountability. Key elements include algorithmic audit trails, explainability protocols, embedded compliance monitoring, and dynamic risk assessment tools capable of adjusting to evolving data environments. By positioning risk and control as integral to system design rather than afterthoughts, the DFTM provides organizations with a blueprint for embedding resilience, transparency, and ethical assurance into digital finance infrastructures. The framework also highlights the role of human oversight through augmented decision-making, where finance professionals complement AI outputs with contextual judgment and ethical considerations. Application of the DFTM ensures that AI-driven accounting systems not only enhance efficiency but also preserve the fundamental principles of accuracy, reliability, and stakeholder trust. For regulators, the model offers insights into creating adaptable supervisory frameworks capable of keeping pace with technological innovation. For practitioners, it serves as a guide for integrating AI responsibly into core accounting functions such as auditing, reporting, fraud detection, and compliance monitoring. Ultimately, the DFTM positions risk and control as enablers rather than inhibitors of digital finance transformation, offering a structured pathway for reconciling innovation with accountability. By advancing a holistic approach that combines technology, governance, and human judgment, this study contributes to the evolving discourse on sustainable and trustworthy AI adoption in financial systems.
The integration of Artificial Intelligence (AI) into regulatory compliance frameworks has transformed the financial services sector by enabling more adaptive, predictive, and proactive governance systems. This review examines the current landscape of AI-driven regulatory technologies (RegTech), emphasizing how machine learning, natural language processing, and anomaly detection algorithms are being leveraged to monitor compliance, assess risk, and prevent fraud in real-time. The paper explores the evolution of regulatory requirements, such as Basel III, GDPR, and AML directives, and evaluates how AI tools can streamline compliance reporting and enhance audit readiness. It also assesses the challenges of algorithmic accountability, regulatory uncertainty, data privacy, and explainability in deploying AI for compliance management. Case studies from leading financial institutions and fintech firms illustrate practical applications and emerging best practices. This study concludes by identifying strategic frameworks that integrate AI ethics, legal compliance, and real-time fraud analytics to support resilient and transparent financial ecosystems.
The proliferation of artificial intelligence systems in high-stakes financial risk assessment has created unprecedented challenges related to algorithmic transparency, accountability, and regulatory compliance. Financial institutions increasingly rely on complex machine learning models for credit scoring, fraud detection, and portfolio risk evaluation, yet existing quality assurance frameworks prove inadequate for managing black-box AI systems. The Explainability-Driven Quality Assurance framework addresses these critical gaps by establishing systematic protocols for bias detection, regulatory compliance verification, real-time performance monitoring, and continuous model validation. Implementation across multiple financial institutions demonstrates substantial improvements in audit readiness, compliance verification effectiveness, and operational efficiency while maintaining rigorous quality standards. The framework integrates automated testing modules, fairness assessment protocols, and explainability mechanisms within existing development workflows, enabling seamless adoption across diverse institutional environments. Comparative evaluation reveals superior performance characteristics relative to traditional quality assurance methodologies, particularly in addressing dynamic model behavior, algorithmic fairness requirements, and regulatory transparency mandates. The framework establishes industry benchmarking standards for measuring AI system accountability and provides scalable solutions adaptable to various financial applications and regulatory jurisdictions.
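The abstract does not detail the monitoring mechanics. A standard building block for the kind of real-time performance monitoring it describes is the population stability index (PSI), which compares a model score's live distribution against its training baseline; a minimal sketch with synthetic data:

import numpy as np

def psi(expected, actual, bins=10):
    # Population stability index between a baseline sample and a live sample.
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)  # values outside the baseline range are ignored here
    # Floor empty bins at a small epsilon to avoid log(0).
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # training-time score distribution
live = rng.normal(0.3, 1.0, 10_000)      # shifted production scores
print(psi(baseline, live))  # common rule of thumb: > 0.25 signals material drift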
Financial fraud poses a growing threat to the stability and public trust of United States financial institutions, yet existing detection systems frequently fail to balance predictive accuracy with the transparency and accountability that regulatory frameworks demand. This study evaluates how standardized algorithmic governance frameworks and explainable artificial intelligence architectures can substantiate public trust in automated fraud detection systems operating within the U.S. financial infrastructure. A mixed-methods research design was employed, combining systematic literature review, comparative policy analysis, and empirical data synthesis from real-time monitoring environments to assess how formal verification workflows, drift monitoring protocols, and third-party audit mechanisms translate into measurable institutional accountability. The findings demonstrate that hybrid ensemble models augmented with post-hoc interpretability tools such as SHAP and LIME achieve F1-scores of 92.2% and AUC-ROC values of 0.97 while maintaining high regulatory alignment, outperforming opaque deep learning architectures across both predictive and compliance dimensions. Institutions deploying layered governance architectures incorporating federated learning, cryptographic audit trails, and continuous drift monitoring record compliance scores of 94%, compared to 41% for systems with no formal governance structure. Furthermore, the integration of contestable AI mechanisms and stakeholder inclusion protocols significantly reduces perceived institutional bias and strengthens procedural fairness. This study concludes that the convergence of explainable AI, proactive bias mitigation, and rigorous governance architecture is essential for reconciling fraud detection efficacy with the democratic accountability that modern financial oversight requires. Keywords: Financial Fraud Detection, Explainable Artificial Intelligence, Algorithmic Governance, Public Trust, Regulatory Compliance, Institutional Accountability.
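For readers unfamiliar with the interpretability tools named here, the sketch below shows a typical SHAP workflow on a tree-ensemble classifier. The data and model are synthetic stand-ins; the study's actual models are not described in this abstract.

import shap  # pip install shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for a fraud dataset.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])

# Per-transaction attributions: shap_values[i, j] is feature j's contribution
# to pushing transaction i's score away from the base rate.
print(shap_values.shape)  # (100, 10)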
The growing reliance on Artificial Intelligence (AI) in financial decision-making offers significant potential for increased efficiency and innovation but also raises critical concerns about perpetuating existing inequities. As financial services adopt AI for tasks like credit scoring, loan approvals, and investment analysis, the risk of algorithmic bias impacting marginalized groups has garnered significant attention. Biases in training data, model design, and deployment processes can lead to discriminatory outcomes, undermining efforts to promote Diversity, Equity, and Inclusion (DEI). Addressing these challenges requires a comprehensive framework for aligning AI systems with DEI principles. This study analyses the risks of AI reinforcing systemic inequities in financial decision-making and identifies key strategies for mitigating these biases. Transparent algorithmic processes are essential to enable stakeholders to understand decision-making logic and detect potential biases. Accountability mechanisms, such as regular audits and independent evaluations, ensure that AI systems comply with ethical standards and DEI objectives. Inclusivity must be prioritized in both data collection and model design, ensuring diverse representation to reduce inherent biases. By implementing these strategies, financial institutions can leverage AI to drive equitable decision-making, fostering trust among stakeholders while addressing regulatory and societal demands. This paper proposes a roadmap for developing and deploying bias-mitigating AI systems in financial services, with a focus on creating fair, inclusive, and accountable models. Through case studies and best practices, it highlights actionable solutions to bridge the gap between technological innovation and ethical imperatives, ensuring AI serves as a tool for equity and inclusion rather than a perpetuator of inequality.
Algorithmic recommendations mediate interactions between millions of customers and products (in turn, their producers and sellers) on large e-commerce marketplaces like Amazon. In recent years, the producers and sellers have raised concerns about the fairness of black-box recommendation algorithms deployed on these marketplaces. Many complaints are centered around marketplaces biasing the algorithms to preferentially favor their own 'private label' products over competitors. These concerns are exacerbated as marketplaces increasingly de-emphasize or replace 'organic' recommendations with ad-driven 'sponsored' recommendations, which include their own private labels. While these concerns have been covered in popular press and have spawned regulatory investigations, to our knowledge, there has not been any public audit of these marketplace algorithms. In this study, we bridge this gap by performing an end-to-end systematic audit of related item recommendations on Amazon. We propose a network-centric framework to quantify and compare the biases across organic and sponsored related item recommendations. Along a number of our proposed bias measures, we find that the sponsored recommendations are significantly more biased toward Amazon private label products compared to organic recommendations. While our findings are primarily interesting to producers and sellers on Amazon, our proposed bias measures are generally useful for measuring link formation bias in any social or content networks.
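As a toy version of the kind of link-formation bias measure the authors propose (their actual network-centric metrics are more elaborate), one can compare the share of recommendation edges that point to private-label products against those products' share of the catalog:

# Hypothetical recommendation graph: product -> list of recommended products.
recs = {
    "p1": ["pl1", "p2", "p3"],
    "p2": ["pl1", "pl2", "p4"],
    "p3": ["p1", "p4", "pl2"],
}
private_label = {"pl1", "pl2"}
catalog = {"p1", "p2", "p3", "p4", "pl1", "pl2"}

edges = [(src, dst) for src, targets in recs.items() for dst in targets]
pl_edge_share = sum(dst in private_label for _, dst in edges) / len(edges)
pl_catalog_share = len(private_label) / len(catalog)

# A ratio far above 1 suggests recommendations over-represent private labels;
# comparing the ratio for organic vs sponsored slots mirrors the paper's setup.
print(pl_edge_share / pl_catalog_share)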
Although algorithmic auditing has emerged as a key strategy to expose systematic biases embedded in software platforms, we struggle to understand the real-world impact of these audits and continue to find it difficult to translate such independent assessments into meaningful corporate accountability. To analyze the impact of publicly naming and disclosing performance results of biased AI systems, we investigate the commercial impact of Gender Shades, the first algorithmic audit of gender- and skin-type performance disparities in commercial facial analysis models. This paper (1) outlines the audit design and structured disclosure procedure used in the Gender Shades study, (2) presents new performance metrics from targeted companies such as IBM, Microsoft, and Megvii (Face++) on the Pilot Parliaments Benchmark (PPB) as of August 2018, (3) provides performance results on PPB by non-target companies such as Amazon and Kairos, and (4) explores differences in company responses as shared through corporate communications that contextualize differences in performance on PPB. Within 7 months of the original audit, we find that all three targets released new application program interface (API) versions. All targets reduced accuracy disparities between males and females and darker- and lighter-skinned subgroups, with the most significant update occurring for the darker-skinned female subgroup that underwent a 17.7–30.4% reduction in error between audit periods. Minimizing these disparities led to a 5.72–8.3% reduction in overall error on the Pilot Parliaments Benchmark (PPB) for target corporation APIs. The overall performance of non-targets Amazon and Kairos lags significantly behind that of the targets, with error rates of 8.66% and 6.60% overall, and error rates of 31.37% and 22.50% for the darker female subgroup, respectively. This is an expanded version of an earlier publication of these results, revised for a more general audience, and updated to include commentary on further developments.
Audits are critical mechanisms for identifying the risks and limitations of deployed artificial intelligence (AI) systems. However, the effective execution of AI audits remains incredibly difficult, and practitioners often need to make use of various tools to support their efforts. Drawing on interviews with 35 AI audit practitioners and a landscape analysis of 435 tools, we compare the current ecosystem of AI audit tooling to practitioner needs. While many tools are designed to help set standards and evaluate AI systems, they often fall short in supporting accountability. We outline challenges practitioners faced in their efforts to use AI audit tools and highlight areas for future tool development beyond evaluation—from harms discovery to advocacy. We conclude that the available resources do not currently support the full scope of AI audit practitioners’ needs and recommend that the field move beyond tools for just evaluation and towards more comprehensive infrastructure for AI accountability.
African women-run small and medium-sized enterprises (SMEs) face an annual financing gap of $42 billion. Even though they repay their loans as reliably as male-led enterprises, women may access only 7% of formal credit. This is the first research to examine gender bias in African fintech lending algorithms in depth, combining gender entrepreneurship theory with algorithmic fairness indicators. It analyzes 10 standard credit scoring models from conventional banks and fintechs in Nigeria, Kenya, and South Africa using 1,200 synthetic SME profiles that share identical financial fundamentals but differ in gender signals across ownership indicators, sector coding, and networking patterns. By closely measuring approval rates, interest spreads, and collateral demands, the audit demonstrates that female entrepreneurs face a systematic 37% underfunding penalty, showing that AI algorithms can turn digital lending from a promised equalizer into an engine of prejudice. The results reveal that women are penalized through hidden proxies, such as sector-based risk misclassification (for example, labeling beauty services high-risk even though they are known to be profitable), network analysis that favors male-dominated affiliations, and linguistic bias against communal leadership language. These findings demonstrate that lawmakers need to act now on algorithmic accountability, and that machine learning, which is supposed to be fair, instead amplifies human prejudices, creating self-perpetuating cycles of financial exclusion in Africa's digital lending environment.
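The audit design (identical financial fundamentals, flipped gender signals) amounts to a paired counterfactual test. A minimal sketch follows, using a deliberately flawed stand-in scorer since the audited models are proprietary; all names and numbers are illustrative.

# A toy scorer standing in for a proprietary credit model. Its deliberate flaw
# mirrors a proxy the audit identified: penalizing a sector coding that
# correlates with owner gender rather than with repayment risk.
def score(profile):
    base = 0.5 + 0.3 * profile["repayment_history"]
    if profile["sector"] == "beauty_services":
        base -= 0.15  # "high-risk" sector label despite comparable fundamentals
    return base

# Paired synthetic profiles: identical fundamentals, gender-coded signals flipped.
male_coded = {"repayment_history": 0.9, "sector": "auto_services"}
female_coded = {"repayment_history": 0.9, "sector": "beauty_services"}

penalty = score(male_coded) - score(female_coded)
print(f"score penalty on the female-coded profile: {penalty:.2f}")  # 0.15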
Both the literature on advice-taking from humans and the literature on advice-taking from algorithms suggest that providing advice rationales (information explaining how an adviser arrived at a recommendation) increases advice-taking. We examine how accountability, in the form of managers having to justify their judgments and decisions to superiors, moderates this effect, and whether the moderating effect depends on the adviser being a human or an algorithm. We use an experiment in which we manipulate the type of adviser (human vs algorithm), the presence of an advice rationale (present vs absent), and accountability (low vs high), and measure the extent of advice-taking. For human advisers, we find that receiving an advice rationale increases advice-taking more under high accountability than under low accountability. However, a significant three-way interaction suggests that this effect is weaker when the adviser is an algorithm: the positive effect of an advice rationale on advice-taking from an algorithm is not moderated by accountability but is consistently strong. Our results emphasize the importance of providing advice rationales, especially when managers are operating under high accountability. Firms may consider the cost-benefit trade-off of providing advice rationales in the case of human advisers. In settings of low accountability, the benefits of providing an immediate advice rationale for the advisee may not outweigh the costs for the adviser. In the case of algorithmic advisers, providing an advice rationale is essential, regardless of the accountability under which a potential advice-taker operates. We advance both the literature on advice-taking from humans and the literature on advice-taking from algorithms by showing how the effects of advice rationales depend on the level of accountability, which is ubiquitous in the managerial context. Given that generating advice rationales can be costly, and given the potential negative effects of accountability, these insights are also of major importance to practitioners who need to weigh the costs and benefits of organizational structures.
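Advice-taking in designs like this is commonly operationalized as weight of advice, WOA = (final estimate - initial estimate) / (advice - initial estimate), and the reported three-way interaction corresponds to an adviser x rationale x accountability term in the analysis model. A hedged sketch with simulated data; the study's actual measures and model may differ.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "adviser": rng.choice(["human", "algorithm"], n),
    "rationale": rng.choice(["absent", "present"], n),
    "accountability": rng.choice(["low", "high"], n),
})
# Weight of advice in [0, 1]; simulated here, not the study's data.
df["woa"] = rng.uniform(0, 1, n)

# The three-way interaction term tests the kind of effect the paper reports.
model = smf.ols("woa ~ adviser * rationale * accountability", data=df).fit()
print(model.summary().tables[1])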
AI has become part of the corporate decision-making process, contributing to strategic planning, risk evaluation, human resource management, customer analytics, and financial prediction. Although AI systems offer strong efficiency, accuracy, and scalability benefits, their growing autonomy raises important ethical issues of fairness, transparency, accountability, and trust. This paper analyzes the ethical application of artificial intelligence in corporate decision-making, including the governance frameworks, ethical principles, and organizational practices that support responsible adoption. Using a descriptive and analytical research design based on secondary data, it combines insights from academic literature, international policy guidelines, and corporate governance reports. The results show that the ethical application of AI can substantially improve decision quality, stakeholder trust, and long-term organizational sustainability when it is grounded in ethical principles such as transparency, explainability, accountability, and human control. Nevertheless, difficulties remain, including algorithmic bias, data privacy risks, lack of interpretability, and regulatory uncertainty, all of which hinder successful ethical integration. The research emphasizes the strategic role of leadership, ethical governance arrangements, and cross-functional collaboration in embedding ethical AI in business decision-making. The study contributes to the current discussion on responsible AI by combining theoretical knowledge about ethics with practical business concerns, and it outlines what organizations need to do to balance innovation with ethical accountability.
No abstract available
No abstract available
This article examines the multifaceted ethical challenges emerging from the integration of artificial intelligence within Salesforce platforms and proposes a comprehensive framework for addressing these concerns. As organizations increasingly deploy AI-powered solutions for customer relationship management, questions regarding data privacy, algorithmic bias, transparency, and accountability have become paramount. Through analysis of current implementation practices and regulatory landscapes, the article identifies critical vulnerabilities in existing approaches and demonstrates how these issues can undermine both customer trust and business outcomes. The article introduces structured governance mechanisms and technical safeguards that Salesforce architects and administrators can implement to ensure AI systems operate fairly and transparently while maintaining robust privacy protections. By synthesizing insights from both technical and ethical perspectives, this article contributes to the evolving discourse on responsible AI deployment in enterprise environments and offers actionable guidance for practitioners seeking to balance innovation with ethical considerations in Salesforce implementations.
This study explores the ethical implications and challenges associated with the implementation of artificial intelligence (AI) in business operations. The research aims to identify key ethical concerns, such as data privacy, algorithmic bias, and workforce displacement, while analyzing their impact on organizational practices and stakeholder trust. Utilizing a library research approach, the study synthesizes insights from academic literature, industry reports, and case studies to examine the intersection of AI ethics and business performance. The findings highlight the need for transparent governance frameworks, inclusive algorithm design, and ethical AI policies to address these challenges effectively. The research underscores the importance of balancing technological innovation with ethical considerations, offering practical implications for businesses seeking to integrate AI responsibly into their operations.
The advent of widespread AI in corporate planning has sparked a revolutionary shift, ushering in previously unimaginable possibilities for efficacy and data-driven decision-making. This evolution, however, is not without moral consequences. Our increasingly AI-powered environment has elevated the need to consider the ethical implications of applying AI to company strategy. Incorporating sophisticated fairness criteria, bias reduction strategies, transparency evaluations, and legal framework compliance, this study examines the varied terrain of AI ethics. The research shows that achieving fairness and openness in AI systems requires constant balancing, with model accuracy typically taking a backseat. Concerns about potentially unfair AI-driven conclusions are met with effective bias-reduction solutions, such as reweighing and resampling. Feature-level transparency evaluation using SHAP values improves AI model interpretability. The systems studied also achieved high compliance ratings against legislation such as the General Data Protection Regulation and the Equal Employment Opportunity Commission's requirements. These results provide useful insights for corporate executives, legislators, and the general public on using AI ethically. Enterprises can realize the full potential of AI while maintaining ethical standards if the correct balance is struck between fairness and openness, investment is made in bias prevention, and regulatory compliance is encouraged. In this study, we focus on how artificial intelligence might be reimagined from its current state as a potentially biased and opaque planning tool to one that drives ethical excellence in corporate strategy. This study acts as a catalyst for change by advocating for ethical and responsible AI deployment, creating a climate in which AI-influenced judgments are more fair, open, and lawful. These findings will serve as a springboard for further studies and activities that push for more ethical AI development and use in company strategy.
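Reweighing, one of the bias-reduction strategies the study discusses, assigns each (group, label) cell the weight w(g, y) = P(g) P(y) / P(g, y), so that the protected attribute and the outcome become statistically independent in the weighted training data (the Kamiran and Calders formulation). A minimal sketch follows; the column names and data are illustrative.

import pandas as pd
from sklearn.linear_model import LogisticRegression

# Illustrative training frame: a protected attribute, a label, one feature.
df = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "b", "b", "b"],
    "label": [1, 0, 0, 1, 1, 1, 0, 1],
    "x":     [0.2, 0.1, 0.3, 0.8, 0.7, 0.9, 0.4, 0.6],
})

p_group = df["group"].value_counts(normalize=True)
p_label = df["label"].value_counts(normalize=True)
p_joint = df.groupby(["group", "label"]).size() / len(df)

# w(g, y) = P(g) * P(y) / P(g, y): up-weights under-represented cells.
weights = df.apply(
    lambda r: p_group[r["group"]] * p_label[r["label"]] / p_joint[(r["group"], r["label"])],
    axis=1,
)

# Most scikit-learn estimators accept the weights directly at fit time.
clf = LogisticRegression().fit(df[["x"]], df["label"], sample_weight=weights)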
Artificial intelligence is transforming marketing through enhanced personalization, automation, and predictive analytics, providing new opportunities for business agility. However, organizations face a critical challenge in rapidly deploying AI while upholding ethical and sustainable practices. This research examines how marketing professionals balance the need for speed and innovation with responsible AI implementation. By surveying 60 marketing practitioners, this study identifies common strategies and best practices that promote both agility and ethical, sustainable outcomes in AI-driven marketing. Initial analysis suggests that clear ethical guidelines, cross-functional oversight, and continuous training are key to achieving agility without compromising integrity. The findings are presented as a practical framework of best practices and recommendations, demonstrating that ethical AI practices can increase business agility in a competitive marketing environment.
AI has found a wide range of application areas in the financial services industry. As the number and the criticality of the applications continue to increase, fair and ethical AI has emerged as an industry-wide objective. In recent years, numerous ethical principles, guidelines and techniques have been proposed. However, the model development organizations face serious challenges in building ethical AI solutions. This paper focuses on the overarching issues model development teams face, which range from the design and implementation complexities, to the shortage of tools, and the lack of organizational constructs. It argues that focusing on the practical considerations is an important step in bridging the gap between the high-level ethics principles and the deployed AI applications, as well as starting industry-wide conversations toward solution approaches.
The integration of artificial intelligence (AI) within the decision-making processes of small- and medium-sized enterprises (SMEs) presents both significant opportunities and substantial ethical challenges. The aim of this paper is to provide a theoretical model depicting the interdependence of organisational decision-making levels and decision-making styles, with an emphasis on exploring the role of AI in organisations’ decision making, based on selected process dimension of the MER model of integral governance and management, particularly in relation to routine, analytical, and intuitive decision-making capabilities. The research methodology employs a comprehensive qualitative analysis of the scientific literature published between 2010 and 2024, focusing on AI implementation in SMEs, ethical decision making in integral management, and regulatory frameworks governing AI use in business contexts. The findings reveal that AI technologies influence decision making across business policy, strategic, tactical, and operative management levels, with distinct implications for intuitive, analytical, and routine decision-making approaches. The analysis demonstrates that while AI can enhance data processing capabilities and reduce human biases, it presents significant challenges for normative–ethical decision making, requiring human judgment and stakeholder consideration. We conclude that effective AI integration in SMEs requires a balanced approach where AI primarily serves as a tool for data collection and analysis rather than as an autonomous decision maker. These insights contribute to the discourse on responsible AI implementation in SMEs and provide practical guidance for leaders navigating the complex interplay between (non)technological capabilities, ethical considerations, and regulatory requirements in the evolving business landscape.
As artificial intelligence (AI) becomes integral to organizational transformation, ethical adoption has emerged as a strategic concern. This paper reviews ethical theories, governance models, and implementation strategies that enable responsible AI integration in business contexts. It explores how ethical theories such as utilitarianism, deontology, and virtue ethics inform practical models for AI deployment. Furthermore, the paper investigates governance structures and stakeholder roles in shaping accountability and transparency, and examines frameworks that guide strategic risk assessment and decision-making. Emphasizing real-world applicability, the study offers an integrated approach that aligns ethics with performance outcomes, contributing to organizational success. This synthesis aims to support firms in embedding responsible AI principles into innovation strategies that balance compliance, trust, and value creation.
Integrating Artificial Intelligence (AI) into digital recruitment platforms has introduced significant enhancements in efficiency and decision-making, alongside complex ethical challenges regarding fairness, transparency, and accountability in candidate evaluation. This study investigates how leading AI-driven recruitment platforms articulate and operationalize ethical principles and whether these commitments are effectively translated into practice. Employing a qualitative exploratory design, the research analyzes official white papers, privacy policies, and AI ethics statements from LinkedIn, HireVue, Pymetrics, and ModernHire. Data was examined using AI-assisted text mining and thematic content analysis to identify ethical discourse patterns and assess the depth of implementation. The findings indicate that moral terms such as “fairness” and “bias” are cited frequently, with LinkedIn referencing them 27 times and HireVue 19 times. A comparative transparency assessment yielded scores of 8.5 out of 10 for LinkedIn, 7.2 for HireVue, 6.8 for Pymetrics, and 4.3 for ModernHire, while formal mechanisms for candidate appeals were absent on most platforms. This study contributes to the field by revealing a persistent gap between stated ethical ideals and operational practices in AI recruitment and by recommending the adoption of explainable AI, transparent auditing frameworks, and international regulatory standards. Such measures are essential to foster more accountable, equitable, and humane AI-based hiring processes.
The rapid integration of generative artificial intelligence (AI) into business decision-making has introduced unprecedented ethical challenges for which existing governance frameworks are insufficient. This study investigates these challenges by conducting a literature review and framework analysis focused on the ethical implications of generative AI in corporate contexts. Key findings reveal significant algorithmic risks including bias, transparency deficits, privacy violations, and autonomy erosion that threaten stakeholder trust and organizational integrity. To address these issues, the research develops a novel multi-dimensional analysis framework that quantitatively evaluates ethical alignment in the implementation of generative AI across five domains, providing an empirical measurement approach not previously available. Based on this foundation, the study proposes an Ethical AI Governance Framework (EAGF) structured around five interlinked dimensions, emphasizing accountability structures, risk assessment protocols, and stakeholder engagement mechanisms. Finally, the research identifies implementation considerations, such as cultural transformation, resource allocation, and phased deployment, that are critical to achieving sustainable and ethically responsible AI governance in contemporary business organizations.
The incorporation of artificial intelligence (AI) in health care offers revolutionary enhancements in patient diagnostics, clinical processes, and overall access to services. Nevertheless, this technological transition brings forth various new, intricate risks that pose challenges to current safety and ethical norms. This research explores the ability of enterprise risk management as an all-encompassing framework to tackle these arising risks, providing both a forward-looking and responsive strategy designed for the health care industry. At the core of this method are instruments that together seek to proactively uncover and address AI-related weaknesses like algorithmic bias, system failures, and data privacy issues. On the reactive side, it incorporates incident reporting systems and root cause analysis, tools that enable health care providers to quickly address unexpected events and consistently improve AI implementation procedures. However, some application difficulties still exist. The unclear, "black box" characteristics of numerous AI models hinder transparency and responsibility, prompting inquiries about the clarity of AI-generated choices and their adherence to ethical benchmarks in patient treatment. The research highlights that with the progress of AI technologies, the enterprise risk management framework also needs to evolve, addressing these new complexities while promoting a culture focused on safety in health care settings.
Human Resource Management (HRM) is being transformed by Artificial Intelligence (AI), which automates fundamental areas like talent acquisition and workforce planning, together with employee engagement and performance management. AI technologies provide organizations with efficient operations and predictive insights that help refine hiring processes and employee satisfaction while optimizing workforce distribution. The use of AI in HRM brings about substantial ethical issues such as algorithmic bias, together with transparency deficits and data privacy risks, and a reduction in human oversight. AI systems that learn from past datasets may propagate discrimination throughout hiring procedures and performance assessments by strengthening current workplace prejudices. The implementation of AI surveillance tools for employee monitoring raises fundamental questions about workplace privacy and ethical practices while challenging notions of consent. Organizations should implement fairness-aware AI models along with explainability frameworks and robust data governance policies while incorporating hybrid AI-human decision-making methods for proper AI integration. HRM applications of AI demand ongoing bias evaluations alongside adherence to data protection regulations and clear AI decision processes to uphold accountability and trustworthiness. Through an extensive review, this paper investigates how AI affects HRM operations while identifying ethical risks and proposing governance strategies to achieve an equilibrium between automation and ethical responsibility. Future investigations must prioritize creating regulatory structures, enhancing AI bias reduction methods, and analyzing how AI influences long-term workforce diversity, employee job conditions, and well-being. HRM departments that prioritize ethical AI governance will fully harness AI capabilities while maintaining decision-making processes that are transparent and fair to build trust within organizations.
Artificial Intelligence (AI) has emerged as a transformative technology with the potential to revolutionize various sectors, from healthcare to finance, education, and beyond. However, successfully implementing AI systems remains a complex challenge, requiring a comprehensive and methodologically sound framework. This paper contributes to this challenge by introducing the Trustworthy, Optimized, Adaptable, and Socio-Technologically harmonious (TOAST) framework. It draws on insights from various disciplines to align technical strategy with ethical values, societal responsibilities, and innovation aspirations. The TOAST framework is a novel approach designed to guide the implementation of AI systems, focusing on reliability, accountability, technical advancement, adaptability, and socio-technical harmony. By grounding the TOAST framework in healthcare case studies, this paper provides a robust evaluation of its practicality and theoretical soundness in addressing operational, ethical, and regulatory challenges in high-stakes environments, demonstrating how adaptable AI systems can enhance institutional efficiency, mitigate risks like bias and data privacy, and offer a replicable model for other sectors requiring ethically aligned and efficient AI integration.
This research seeks to understand the role of enterprise AI chatbots in the transformation of customer support, diagnosing the state of the customer support industry as well as the implementation difficulties and ethical questions raised by these bot services. This includes understanding how AI chatbots are transforming customer services in enterprises and identifying the distinct issues posed by particular industries and their operational ethical standards. The research analyzes deployment plans, IT architecture requirements, and operational consequences by examining 50 enterprise case studies and conducting interviews with leaders in healthcare, finance, and retail. The findings show that companies using AI chatbots report a 45% reduction in response times and a 30% reduction in customer support costs compared to previous figures, while customer satisfaction remained well above 85%. At the same time, healthcare demands stricter compliance controls and finance requires more complex security measures, resulting in major differences across industries. The study also reveals underlying ethical issues such as data privacy, algorithmic bias, and the need for transparent human monitoring. Successful AI chatbot implementation requires a mix of technical expertise, industry-specific adaptation, and governance frameworks. This research provides a set of such strategies for companies deciding to build AI chatbots, emphasizing the need for sector-specific strategies and ethical considerations to maximize outcomes.
The rapid integration of artificial intelligence (AI) in Nepal's banking sector presents a critical paradox. While adopted to enhance operational efficiency, its implementation often outpaces the development of essential ethical and regulatory frameworks. This empirical study of 400 banking professionals across 28 commercial banks identifies a four-construct framework—AI infrastructure, model governance, service integration, and ethics capacity—that influences performance. Regression analysis reveals that ethics training (β = 0.216, p < 0.001) and AI-enabled services (β = 2.012, p < 0.001) significantly boost operational performance. Conversely, opaque model governance (β = -0.860) and subpar infrastructure (β = -0.788) detrimentally affect efficiency. The findings suggest that hybrid governance systems and regulatory sandboxes can bridge this implementation gap by striking a balance between innovation and responsibility. This study contributes to the understanding of AI adoption in resource-constrained environments, offering critical insights for financial institutions and policymakers.
The integration of Artificial Intelligence (AI) into Human Resource (HR) functions is increasingly transforming how organizations manage talent and design people-centric strategies. Advances in AI technologies have enabled organizations to improve efficiency, enhance decision-making, and strengthen workforce management across the talent lifecycle (Davenport & Ronanki, 2018). This study explores the application of AI in key HR functions such as recruitment and selection, onboarding, learning and development, performance management, employee engagement, and retention. By automating repetitive administrative tasks and leveraging predictive analytics, AI allows HR professionals to focus on strategic roles while offering more personalized and data-driven employee experiences (Bersin, 2019). However, the adoption of AI in HR presents several organizational and ethical challenges. Integrating AI tools with existing HR systems, managing employee resistance, and ensuring digital readiness remain significant implementation barriers (Marler & Boudreau, 2017). In addition, concerns related to data privacy, algorithmic transparency, and the risk of embedded biases in AI-driven decision-making have gained increasing scholarly attention (O’Neil, 2016; Raghavan et al., 2020). These concerns raise critical questions regarding fairness, accountability, and the long-term impact of AI on workforce diversity and inclusion. This study synthesizes insights from academic literature, industry reports, and organizational case evidence to provide a comprehensive understanding of AI-driven HR transformation. By examining both opportunities and challenges, the research offers valuable guidance for HR practitioners, business leaders, and policymakers. The study emphasizes the importance of balancing AI-enabled efficiency with human judgment, advocating for responsible and ethical AI adoption that supports transparent, inclusive, and sustainable talent management practices.
The integration of Artificial Intelligence (AI) into Monitoring and Evaluation (M&E) frameworks represents a significant transformation in how organizations assess program effectiveness and impact. This mixed-methods study examined the opportunities and ethical challenges associated with AI-driven predictive analytics in M&E through systematic literature review, case study analysis, and stakeholder consultation. The systematic review analyzed 89 peer-reviewed articles from 2015-2024, while three detailed case studies from Kenya, Colombia, and Southeast Asia provided implementation insights. Semi-structured interviews with 24 M&E practitioners, technology specialists, and ethics experts informed the analysis. Results demonstrated substantial improvements in program targeting (60% increase in effectiveness), resource allocation (30% cost reduction), and predictive accuracy (85-92% across contexts). However, significant ethical challenges emerged, including algorithmic bias affecting 67% of implementations, data privacy concerns in 78% of cases, and accountability gaps in 85% of current implementations. The study concludes with evidence-based recommendations for responsible AI integration in M&E, emphasizing phased implementation, robust governance frameworks, and continuous stakeholder engagement to maximize benefits while addressing ethical concerns.
Artificial intelligence surveillance can be used to diagnose individual cases, track the spread of Covid-19, and help provide care. The use of AI for surveillance purposes (such as detecting new Covid-19 cases and gathering data from healthy and ill individuals) in a pandemic raises multiple concerns ranging from privacy to discrimination to access to care. Luckily, there exist several frameworks that can help guide stakeholders, especially physicians but also AI developers and public health officials, as they navigate these treacherous shoals. While these frameworks were not explicitly designed for AI surveillance during a pandemic, they can be adapted to help address concerns regarding privacy, human rights, and due process and equality. In a time where the rapid implementation of all tools available is critical to ending a pandemic, physicians, public health officials, and technology companies should understand the criteria for the ethical implementation of AI surveillance.
The integration of artificial intelligence and machine learning technologies has revolutionized credit underwriting processes, marking a significant transformation in financial services. This technical analysis explores the architectural components, implementation frameworks, and performance metrics of AI-driven credit assessment systems. The article examines how advanced machine learning models, including gradient boosting machines, deep neural networks, and ensemble methods, have enhanced credit risk evaluation while promoting financial inclusion. The article investigates the multi-tiered architecture of modern credit assessment systems, encompassing data ingestion, feature engineering, and network effect analysis. It further evaluates statistical performance indicators, business metrics, and ethical considerations in AI implementation. The article demonstrates substantial improvements in credit decision accuracy, operational efficiency, and fairness across demographic groups, while highlighting the importance of explainable AI and robust monitoring systems in maintaining transparency and regulatory compliance.
The rapid deployment of Artificial Intelligence (AI) in Anti-Money Laundering (AML) practices within the financial industry presents significant ethical and governance challenges that must be navigated effectively. As financial institutions increasingly adopt AI technologies to enhance their AML efforts, concerns regarding data privacy, algorithmic bias, and transparency emerge. This review explores the ethical implications of AI in AML and offers governance strategies to mitigate risks while ensuring compliance with regulatory frameworks. One of the primary ethical challenges in deploying AI for AML is the potential for algorithmic bias. AI systems trained on historical data may inadvertently perpetuate existing biases, leading to discriminatory practices in transaction monitoring and customer profiling. This raises serious concerns about fairness and equity in the financial sector. Addressing algorithmic bias requires the implementation of rigorous testing and validation processes to ensure AI systems function impartially across diverse populations. Data privacy is another critical issue. The extensive data collection required for effective AML monitoring raises questions about the protection of sensitive customer information. Financial institutions must establish robust data governance frameworks that prioritize privacy and comply with regulations such as the General Data Protection Regulation (GDPR). Ensuring transparency in how data is used and providing clear communication to customers about data practices is essential for building trust. Effective governance frameworks are crucial in navigating these ethical challenges. Financial institutions should adopt a multi-disciplinary approach that includes ethical guidelines, compliance measures, and risk management strategies. Establishing oversight committees can help ensure that AI deployment aligns with ethical standards and regulatory requirements. Furthermore, ongoing training for employees on the ethical use of AI in AML can foster a culture of responsibility and accountability. This review highlights the need for a balanced approach to AI deployment in AML, emphasizing the importance of ethical considerations and governance structures. As the financial industry continues to evolve, addressing these challenges will be essential for maintaining trust, ensuring compliance, and leveraging AI’s potential to enhance AML practices effectively.
This theoretical examination explores the challenges and approaches to establishing ethical frameworks for the integration of artificial intelligence (AI) in healthcare entrepreneurship. As AI technologies continue to advance, their applications in healthcare hold immense potential for improving patient outcomes and driving innovation. However, ethical considerations are paramount to ensure the responsible and equitable deployment of AI-driven solutions. This paper delves into key ethical dimensions including privacy and data security, bias and fairness, accountability and transparency, and patient autonomy and consent. It identifies challenges such as technological limitations, regulatory complexities, and organizational barriers that impede the implementation of ethical frameworks. Additionally, it proposes approaches including collaborative governance models, ethical design practices, and continuous monitoring and evaluation to address these challenges. Through case studies and examples, the paper illustrates successful implementations of ethical frameworks in AI healthcare startups, highlighting lessons learned and their impact on patient outcomes and trust. Ultimately, this examination underscores the critical importance of ethical considerations in shaping the future of AI in healthcare entrepreneurship and provides insights for researchers, practitioners, and policymakers navigating this rapidly evolving landscape.
The proliferation of Artificial Intelligence (AI) in business and employment contexts necessitates a critical examination of the ethical and legal implications surrounding privacy, bias, and accountability. As AI systems become integral to decision-making processes, concerns about data privacy violations and algorithmic biases have heightened. This paper delves into these challenges, presenting a comprehensive framework to address the ethical and legal intricacies associated with AI deployment. Drawing on a thorough literature survey, we identify the gaps in current practices and propose a multifaceted approach to mitigate privacy infringements, combat bias, and establish accountability mechanisms. Our methodology combines quantitative and qualitative analyses, examining existing AI systems to gauge their impact on privacy and bias. The proposed implementation model integrates advanced encryption for privacy preservation, bias-detection algorithms for algorithmic fairness, and transparent decision-making processes to enhance accountability. The results showcase significant advancements in each domain, providing a foundation for responsible AI deployment in business and employment. This study contributes to the ongoing discourse on ethical AI by offering practical solutions to the evolving challenges, ultimately promoting a harmonious integration of AI technologies that align with societal values and legal standards.
Artificial Intelligence (AI) has an increasing impact on all areas of people’s livelihoods. A detailed look at existing interdisciplinary and transdisciplinary metrics frameworks could bring new insights and enable practitioners to navigate the challenge of understanding and assessing the impact of Autonomous and Intelligent Systems (A/IS). There has been emerging consensus on fundamental ethical and rights-based AI principles proposed by scholars, governments, civil rights organizations, and technology companies. In order to move from principles to real-world implementation, we adopt a lens motivated by regulatory impact assessments and the well-being movement in public policy. Similar to public policy interventions, outcomes of AI systems implementation may have far-reaching complex impacts. In public policy, indicators are only part of a broader toolbox, as metrics inherently lead to gaming and dissolution of incentives and objectives. Similarly, in the case of A/IS, there’s a need for a larger toolbox that allows for the iterative assessment of identified impacts, inclusion of new impacts in the analysis, and identification of emerging trade-offs. In this paper, we propose the practical application of an enhanced well-being impact assessment framework for A/IS that could be employed to address ethical and rights-based normative principles in AI. This process could enable a human-centered algorithmically-supported approach to the understanding of the impacts of AI systems. Finally, we propose a new testing infrastructure which would allow for governments, civil rights organizations, and others, to engage in cooperating with A/IS developers towards implementation of enhanced well-being impact assessments.
AI policymakers are responsible for delivering effective governance mechanisms that can provide safe, aligned and trustworthy AI development. However, the information environment offered to policymakers is characterised by an unnecessarily low signal-to-noise ratio, favouring regulatory capture and creating deep uncertainty and divides on which risks should be prioritised from a governance perspective. We posit that the current publication speeds in AI, combined with the lack of strong scientific standards via weak reproducibility protocols, effectively erode the power of policymakers to enact meaningful policy and governance protocols. Our paper outlines how AI research could adopt stricter reproducibility guidelines to assist governance endeavours and improve consensus on the AI risk landscape. We evaluate the forthcoming reproducibility crisis within AI research through the lens of crises in other scientific domains, providing a commentary on how adopting reproducibility protocols such as preregistration, increased statistical power, and negative-result publication can enable effective AI governance. While we maintain that AI governance must be reactive due to AI's significant societal implications, we argue that policymakers and governments must consider reproducibility protocols as a core tool in the governance arsenal and demand higher standards for AI research. Code to replicate data and figures: https://github.com/IFMW01/reproducibility-the-new-frontier-in-ai-governance
AI models are rapidly becoming embedded in all aspects of nuclear energy research and work but the safety, security, and safeguards consequences of this embedding are not well understood. In this paper, we call for the creation of an anticipatory system of governance for AI in the nuclear sector as well as the creation of a global AI observatory as a means for operationalizing anticipatory governance. The paper explores the contours of the nuclear AI observatory and an anticipatory system of governance by drawing on work in science and technology studies, public policy, and foresight studies.
This chapter examines the global governance of artificial intelligence (AI) through the lens of the Global AI Divide, focusing on disparities in AI development, innovation, and regulation. It highlights systemic inequities in education, digital infrastructure, and access to decision-making processes, perpetuating a dependency and exclusion cycle for Global Majority countries. The analysis also explores the dominance of Western nations and corporations in shaping AI governance frameworks, which often sideline the unique priorities and contexts of the Global Majority. Additionally, this chapter identifies emerging countertrends, such as national and regional AI strategies, as potential avenues for fostering equity and inclusivity in global AI governance. The chapter concludes with actionable recommendations to democratize AI governance for Majority World countries, emphasizing the importance of systemic reforms, resource redistribution, and meaningful participation. It calls for collaborative action to ensure AI governance becomes a catalyst for shared prosperity, addressing global disparities rather than deepening them.
To realize the potential benefits and mitigate potential risks of AI, it is necessary to develop a framework of governance that conforms to ethics and fundamental human values. Although several organizations have issued guidelines and ethical frameworks for trustworthy AI, without a mediating governance structure, these ethical principles will not translate into practice. In this paper, we propose a multilevel governance approach that involves three groups of interdependent stakeholders: governments, corporations, and citizens. We examine their interrelationships through dimensions of trust, such as competence, integrity, and benevolence. The levels of governance combined with the dimensions of trust in AI provide practical insights that can be used to further enhance user experiences and inform public policy related to AI.
While the role of states, corporations, and international organizations in AI governance has been extensively theorized, the role of workers has received comparatively little attention. This chapter looks at the role that workers play in identifying and mitigating harms from AI technologies. Harms are the causally assessed impacts of technologies. They arise despite technical reliability and are not a result of technical negligence but rather of normative uncertainty around questions of safety and fairness in complex social systems. There is high consensus in the AI ethics community on the benefits of reducing harms but less consensus on mechanisms for determining or addressing harms. This lack of consensus has resulted in a number of collective actions by workers protesting how harms are identified and addressed in their workplace. We theorize the role of workers within AI governance and construct a model of harm reporting processes in AI workplaces. The harm reporting process involves three steps: identification, the governance decision, and the response. Workers draw upon three types of claims to argue for jurisdiction over questions of AI governance: subjection, control over the product of labor, and proximate knowledge of systems. Examining the past decade of AI-related worker activism allows us to understand how different types of workers are positioned within a workplace that produces AI systems, how their position informs their claims, and the place of collective action in staking their claims. This chapter argues that workers occupy a unique role in identifying and mitigating harms caused by AI systems.
AI is transforming the existing technology landscape at a rapid pace, enabling data-informed and autonomous decision making. Unlike any other technology, because of its decision-making ability, AI has made ethics and governance a key concern. There are many emerging AI risks for humanity, such as autonomous weapons, automation-spurred job loss, socio-economic inequality, bias caused by data and algorithms, privacy violations and deepfakes. Social diversity, equity and inclusion are considered key success factors of AI to mitigate risks, create values and drive social justice. Sustainability has become a broad and complex topic entangled with AI. Many organizations (government, corporate, not-for-profits, charities and NGOs) have diversified strategies driving AI for business optimization and social-and-environmental justice. Partnerships and collaborations are more important than ever for the equity and inclusion of diversified and distributed people, data and capabilities. Therefore, in our journey towards an AI-enabled sustainable future, we need to address AI ethics and governance as a priority. These AI ethics and governance efforts should be underpinned by human ethics.
Introduction. AI Ethics is framed distinctly across actors and stakeholder groups. We report results from a case study of OpenAI analysing ethical AI discourse. Method. Research addressed: How has OpenAI's public discourse leveraged 'ethics', 'safety', 'alignment' and adjacent related concepts over time, and what does discourse signal about framing in practice? A structured corpus, differentiating between communication for a general audience and communication with an academic audience, was assembled from public documentation. Analysis. Qualitative content analysis of ethical themes combined inductively derived and deductively applied codes. Quantitative analysis leveraged computational content analysis methods via NLP to model topics and quantify changes in rhetoric over time. Visualizations report aggregate results. For reproducible results, we have released our code at https://github.com/famous-blue-raincoat/AI_Ethics_Discourse. Results. Results indicate that safety and risk discourse dominate OpenAI's public communication and documentation, without applying academic and advocacy ethics frameworks or vocabularies. Conclusions. Implications for governance are presented, along with discussion of ethics-washing practices in industry.
The rapid expansion of generative artificial intelligence (AI) is transforming work, creativity, and economic security in ways that extend beyond automation and productivity. This paper examines four interconnected dimensions of contemporary AI deployment: (1) transformations in employment and task composition; (2) unequal diffusion of AI across sectors and socio-demographic groups; (3) the role of universal basic income (UBI) as a stabilising response to AI-induced volatility; and (4) the effects of model alignment and content governance on human creativity, autonomy, and decision-making. Using a hybrid approach that integrates labour market task exposure modelling, sectoral diffusion analysis, policy review, and qualitative discourse critique, the study develops an Inclusive AI Governance Framework. It introduces Level 1.5 autonomy as a human-centred design principle that preserves evaluative authority while enabling partial automation, and highlights evidence of creative regression and emergent sycophancy in newer model generations. The paper argues that UBI should be embedded within a broader socio-technical governance ecosystem encompassing skills development, proportional regulation, and creativity preservation.
Australia is a leading AI nation with strong allies and partnerships. Australia has prioritised the development of robotics, AI, and autonomous systems to develop sovereign capability for the military. Australia commits to Article 36 reviews of all new means and methods of warfare to ensure weapons and weapons systems are operated within acceptable systems of control. Additionally, Australia has undergone significant reviews of the risks of AI to human rights and within intelligence organisations, and has committed to producing ethics guidelines and frameworks in Security and Defence. Australia is committed to the OECD's values-based principles for the responsible stewardship of trustworthy AI, and has adopted a set of National AI ethics principles. While Australia has not adopted an AI governance framework specifically for the Australian Defence Organisation (ADO), the Defence Science and Technology Group (DSTG) has published 'A Method for Ethical AI in Defence' (MEAID), a technical report which includes a framework and pragmatic tools for managing ethical and legal risks for military applications of AI. Australia can play a leadership role by integrating legal and ethical considerations into its ADO AI capability acquisition process. This requires a policy framework that defines its legal and ethical requirements, is informed by Defence industry stakeholders, and provides a practical methodology to integrate legal and ethical risk mitigation strategies into the acquisition process.
Agentic AI systems present both significant opportunities and novel risks due to their capacity for autonomous action, encompassing tasks such as code execution, internet interaction, and file modification. This poses considerable challenges for effective organizational governance, particularly in comprehensively identifying, assessing, and mitigating diverse and evolving risks. To tackle this, we introduce the Agentic Risk & Capability (ARC) Framework, a technical governance framework designed to help organizations identify, assess, and mitigate risks arising from agentic AI systems. The framework's core contributions are: (1) it develops a novel capability-centric perspective to analyze a wide range of agentic AI systems; (2) it distills three primary sources of risk intrinsic to agentic AI systems - components, design, and capabilities; (3) it establishes a clear nexus between each risk source, specific materialized risks, and corresponding technical controls; and (4) it provides a structured and practical approach to help organizations implement the framework. This framework provides a robust and adaptable methodology for organizations to navigate the complexities of agentic AI, enabling rapid and effective innovation while ensuring the safe, secure, and responsible deployment of agentic AI systems. Our framework is open-sourced at https://govtech-responsibleai.github.io/agentic-risk-capability-framework/.
This paper tackles practical challenges in governing child-centered artificial intelligence: policy texts state principles and requirements but often lack reproducible evidence anchors, explicit causal pathways, executable governance toolchains, and computable audit metrics. We propose Graph-GAP, a methodology that decomposes requirements from authoritative policy texts into a four-layer graph of evidence, mechanism, governance, and indicator, and that computes two metrics, GAP score and mitigation readiness, to identify governance gaps and prioritise actions. Using the UNICEF Innocenti Guidance on AI and Children 3.0 as primary material, we define reproducible extraction units, coding manuals, graph patterns, scoring scales, and consistency checks, and we demonstrate exemplar gap profiles and governance priority matrices for ten requirements. Results suggest that compared with privacy and data protection, requirements related to child well-being and development, explainability and accountability, and cross-agency implementation and resource allocation are more prone to indicator gaps and mechanism gaps. We recommend translating requirements into auditable closed-loop governance that integrates child rights impact assessments, continuous monitoring metrics, and grievance redress procedures. At the coding level, we introduce a multi-algorithm review-aggregation-revision workflow that runs rule-based encoders, statistical or machine learning evaluators, and large-model evaluators with diverse prompt configurations as parallel coders. Each extraction unit outputs evidence, mechanism, governance, and indicator labels plus readiness scores with evidence anchors. Reliability, stability, and uncertainty are assessed using Krippendorff's alpha, weighted kappa, the intraclass correlation, and bootstrap confidence intervals.
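As a hedged sketch of the reliability step only (not the authors' released tooling), agreement between two parallel coders on ordinal readiness scores can be checked with a quadratic-weighted kappa plus a bootstrap confidence interval; all scores below are invented.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
# Hypothetical readiness scores (0-4) from two parallel coders,
# e.g. a rule-based encoder and an LLM evaluator, on 12 extraction units.
coder_a = np.array([0, 1, 2, 2, 3, 4, 1, 2, 3, 0, 4, 2])
coder_b = np.array([0, 1, 2, 3, 3, 4, 1, 1, 3, 0, 4, 2])

kappa = cohen_kappa_score(coder_a, coder_b, weights="quadratic")

# Bootstrap a 95% confidence interval by resampling extraction units.
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(coder_a), len(coder_a))
    boot.append(cohen_kappa_score(coder_a[idx], coder_b[idx],
                                  weights="quadratic"))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"weighted kappa = {kappa:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The same resampling loop extends directly to Krippendorff's alpha or the intraclass correlation if a suitable implementation is substituted for the kappa call.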
Purpose: The governance of artificial intelligence (AI) systems requires a structured approach that connects high-level regulatory principles with practical implementation. Existing frameworks lack clarity on how regulations translate into conformity mechanisms, leading to gaps in compliance and enforcement. This paper addresses this critical gap in AI governance. Methodology/Approach: A five-layer AI governance framework is proposed, spanning from broad regulatory mandates to specific standards, assessment methodologies, and certification processes. By narrowing its scope through progressively focused layers, the framework provides a structured pathway to meet technical, regulatory, and ethical requirements. Its applicability is validated through two case studies on AI fairness and AI incident reporting. Findings: The case studies demonstrate the framework's ability to identify gaps in legal mandates, standardization, and implementation. It adapts to both global and region-specific AI governance needs, mapping regulatory mandates to practical applications to improve compliance and risk management. Practical Implications: By offering a clear and actionable roadmap, this work contributes to global AI governance by equipping policymakers, regulators, and industry stakeholders with a model to enhance compliance and risk management. Social Implications: The framework supports the development of policies that build public trust and promote the ethical use of AI for the benefit of society. Originality/Value: This study proposes a five-layer AI governance framework that bridges high-level regulatory mandates and implementation guidelines. Validated through case studies on AI fairness and incident reporting, it identifies gaps such as missing standardized assessment procedures and reporting mechanisms, providing a structured foundation for targeted governance measures.
Industry actors in the United States have gained extensive influence in conversations about the regulation of general-purpose artificial intelligence (AI) systems. Although industry participation is an important part of the policy process, it can also cause regulatory capture, whereby industry co-opts regulatory regimes to prioritize private over public welfare. Capture of AI policy by AI developers and deployers could hinder such regulatory goals as ensuring the safety, fairness, beneficence, transparency, or innovation of general-purpose AI systems. In this paper, we first introduce different models of regulatory capture from the social science literature. We then present results from interviews with 17 AI policy experts on what policy outcomes could compose regulatory capture in US AI policy, which AI industry actors are influencing the policy process, and whether and how AI industry actors attempt to achieve outcomes of regulatory capture. Experts were primarily concerned with capture leading to a lack of AI regulation, weak regulation, or regulation that over-emphasizes certain policy goals over others. Experts most commonly identified agenda-setting (15 of 17 interviews), advocacy (13), academic capture (10), information management (9), cultural capture through status (7), and media capture (7) as channels for industry influence. To mitigate these particular forms of industry influence, we recommend systemic changes in developing technical expertise in government and civil society, independent funding streams for the AI ecosystem, increased transparency and ethics requirements, greater civil society access to policy, and various procedural safeguards.
This paper provides an overview and critique of the risk-based model of artificial intelligence (AI) governance that has become a popular approach to AI regulation across multiple jurisdictions. The 'AI Policy Landscape in Europe, North America and Australia' section summarises the existing AI policy efforts across these jurisdictions, with a focus on the EU AI Act and the Australian Department of Industry, Science and Resources' (DISR) safe and responsible AI consultation. The 'Analysis' section of this paper proposes several criticisms of the risk-based approach to AI governance, arguing that the construction and calculation of risks that it uses reproduces existing inequalities. Drawing on the work of Julia Black, it argues that risk and harm should be distinguished clearly, that the notion of risk is problematic because its inherent normativity reproduces dominant and harmful narratives about whose interests matter, and that risk categorizations should be subject to deep scrutiny. This paper concludes with the suggestion that existing risk governance scholarship can provide valuable insights toward the improvement of risk-based AI governance, and that the use of multiple regulatory instruments and responsive risk regulation should be considered in the continuing development of the model.
How can humans remain in control of artificial intelligence (AI)-based systems designed to perform tasks autonomously? Such systems are increasingly ubiquitous, creating benefits - but also undesirable situations where moral responsibility for their actions cannot be properly attributed to any particular person or group. The concept of meaningful human control has been proposed to address responsibility gaps and mitigate them by establishing conditions that enable a proper attribution of responsibility for humans; however, clear requirements for researchers, designers, and engineers do not yet exist, making the development of AI-based systems that remain under meaningful human control challenging. In this paper, we address the gap between philosophical theory and engineering practice by identifying, through an iterative process of abductive thinking, four actionable properties for AI-based systems under meaningful human control, which we discuss using two application scenarios: automated vehicles and AI-based hiring. First, a system in which humans and AI algorithms interact should have an explicitly defined domain of morally loaded situations within which the system ought to operate. Second, humans and AI agents within the system should have appropriate and mutually compatible representations. Third, responsibility attributed to a human should be commensurate with that human's ability and authority to control the system. Fourth, there should be explicit links between the actions of the AI agents and the actions of humans who are aware of their moral responsibility. We argue that these four properties will support practically-minded professionals in taking concrete steps toward designing and engineering AI systems that facilitate meaningful human control.
In this paper, we adopt a survivor-centered approach to locate and dissect the role of sociotechnical AI governance in preventing AI-Generated Non-Consensual Intimate Images (AIG-NCII) of adults, colloquially known as "deep fake pornography." We identify a "malicious technical ecosystem," or "MTE," comprising open-source face-swapping models and nearly 200 "nudifying" software programs that allow non-technical users to create AIG-NCII within minutes. Then, using the National Institute of Standards and Technology (NIST) AI 100-4 report as a reflection of current synthetic content governance methods, we show how the current landscape of practices fails to effectively regulate the MTE for adult AIG-NCII, and we identify the flawed assumptions that explain these gaps.
Measurement of social phenomena is everywhere, unavoidably, in sociotechnical systems. This is not (only) an academic point: Fairness-related harms emerge when there is a mismatch in the measurement process between the thing we purport to be measuring and the thing we actually measure. However, the measurement process -- where social, cultural, and political values are implicitly encoded in sociotechnical systems -- is almost always obscured. Furthermore, this obscured process is where important governance decisions are encoded: governance about which systems are fair, which individuals belong in which categories, and so on. We can then use the language of measurement, and the tools of construct validity and reliability, to uncover hidden governance decisions. In particular, we highlight two types of construct validity, content validity and consequential validity, that are useful to elicit and characterize the feedback loops between the measurement, social construction, and enforcement of social categories. We then explore the constructs of fairness, robustness, and responsibility in the context of governance in and for responsible AI. Together, these perspectives help us unpack how measurement acts as a hidden governance process in sociotechnical systems. Understanding measurement as governance supports a richer understanding of the governance processes already happening in AI -- responsible or otherwise -- revealing paths to more effective interventions.
Like other social media, TikTok is embracing its use as a search engine, developing search products to steer users to produce searchable content and engage in content discovery. Its recently developed product, search recommendations, presents preformulated search queries to users on videos. However, TikTok provides limited transparency about how search recommendations are generated and moderated, despite requirements under regulatory frameworks like the European Union's Digital Services Act. By suggesting that the platform simply aggregates comments and common searches linked to videos, it sidesteps responsibility for issues that arise from contextually problematic recommendations, reigniting long-standing concerns about platform liability and moderation. This position paper addresses the novelty of search recommendations on TikTok by highlighting the challenges that this feature poses for platform governance and offering a computational research agenda, drawing on preliminary qualitative analysis. It sets out the need for transparency in platform documentation, data access, and research to study search recommendations.
Requirements Engineering (RE) is the discipline for identifying, analyzing, as well as ensuring the implementation and delivery of user, technical, and societal requirements. Recently reported issues concerning the acceptance of Artificial Intelligence (AI) solutions after deployment, e.g. in the medical, automotive, or scientific domains, stress the importance of RE for designing and delivering Responsible AI systems. In this paper, we argue that RE should not only be carefully conducted but also tailored for Responsible AI. We outline related challenges for research and practice.
The history of science and technology shows that seemingly innocuous developments in scientific theories and research have enabled real-world applications with significant negative consequences for humanity. In order to ensure that the science and technology of AI is developed in a humane manner, we must develop research publication norms that are informed by our growing understanding of AI's potential threats and use cases. Unfortunately, it's difficult to create a set of publication norms for responsible AI because the field of AI is currently fragmented in terms of how this technology is researched, developed, funded, etc. To examine this challenge and find solutions, the Montreal AI Ethics Institute (MAIEI) co-hosted two public consultations with the Partnership on AI in May 2020. These meetups examined potential publication norms for responsible AI, with the goal of creating a clear set of recommendations and ways forward for publishers. In its submission, MAIEI provides six initial recommendations, these include: 1) create tools to navigate publication decisions, 2) offer a page number extension, 3) develop a network of peers, 4) require broad impact statements, 5) require the publication of expected results, and 6) revamp the peer-review process. After considering potential concerns regarding these recommendations, including constraining innovation and creating a "black market" for AI research, MAIEI outlines three ways forward for publishers, these include: 1) state clearly and consistently the need for established norms, 2) coordinate and build trust as a community, and 3) change the approach.
This report prepared by the Montreal AI Ethics Institute provides recommendations in response to the National Security Commission on Artificial Intelligence (NSCAI) Key Considerations for Responsible Development and Fielding of Artificial Intelligence document. The report centres on the idea that Responsible AI should be made the Norm rather than an Exception. It does so by utilizing the guiding principles of: (1) alleviating friction in existing workflows, (2) empowering stakeholders to get buy-in, and (3) conducting an effective translation of abstract standards into actionable engineering practices. After providing some overarching comments on the document from the NSCAI, the report dives into the primary contribution of an actionable framework to help operationalize the ideas presented in the document from the NSCAI. The framework consists of: (1) a learning, knowledge, and information exchange (LKIE), (2) the Three Ways of Responsible AI, (3) an empirically-driven risk-prioritization matrix, and (4) achieving the right level of complexity. All components reinforce each other to move from principles to practice in service of making Responsible AI the norm rather than the exception.
This chapter explores moral responsibility for civilian harms by human-artificial intelligence (AI) teams. Although militaries may have some bad apples responsible for war crimes and some mad apples unable to be responsible for their actions during a conflict, increasingly militaries may 'cook' their good apples by putting them in untenable decision-making environments through the processes of replacing human decision-making with AI determinations in war making. Responsibility for civilian harm in human-AI military teams may be contested, risking operators becoming detached, being extreme moral witnesses, becoming moral crumple zones or suffering moral injury from being part of larger human-AI systems authorised by the state. Acknowledging military ethics, human factors and AI work to date as well as critical case studies, this chapter offers new mechanisms to map out conditions for moral responsibility in human-AI teams. These include: 1) new decision responsibility prompts for critical decision method in a cognitive task analysis, and 2) applying an AI workplace health and safety framework for identifying cognitive and psychological risks relevant to attributions of moral responsibility in targeting decisions. Mechanisms such as these enable militaries to design human-centred AI systems for responsible deployment.
Artificial Intelligence (AI) systems exert a growing influence on our society. As they become more ubiquitous, their potential negative impacts also become evident through various real-world incidents. Following such early incidents, academic and public discussion on AI ethics has highlighted the need for implementing ethics in AI system development. However, little currently exists in the way of frameworks for understanding the practical implementation of AI ethics. In this paper, we discuss a research framework for implementing AI ethics in industrial settings. The framework presents a starting point for empirical studies into AI ethics but is still being developed further based on its practical utilization.
The development of Artificial Intelligence (AI), including AI in Science (AIS), should be done following the principles of responsible AI. Progress in responsible AI is often quantified through evaluation metrics, yet there has been less work on assessing the robustness and reliability of the metrics themselves. We reflect on prior work that examines the robustness of fairness metrics for recommender systems as a type of AI application and summarise their key takeaways into a set of non-exhaustive guidelines for developing reliable metrics of responsible AI. Our guidelines apply to a broad spectrum of AI applications, including AIS.
It is widely acknowledged that we need to establish where responsibility lies for the outputs and impacts of AI-enabled systems. This is important to achieve justice and compensation for victims of AI harms, and to inform policy and engineering practice. But without a clear, thorough understanding of what "responsibility" means, deliberations about where responsibility lies will be, at best, unfocused and incomplete and, at worst, misguided. Furthermore, AI-enabled systems exist within a wider ecosystem of actors, decisions, and governance structures, giving rise to complex networks of responsibility relations. To address these issues, this paper presents a conceptual framework of responsibility, accompanied with a graphical notation and general methodology for visualising these responsibility networks and for tracing different responsibility attributions for AI. Taking the three-part formulation "Actor A is responsible for Occurrence O," the framework unravels the concept of responsibility to clarify that there are different possibilities of who is responsible for AI, senses in which they are responsible, and aspects of events they are responsible for. The notation allows these permutations to be represented graphically. The methodology enables users to apply the framework to specific scenarios. The aim is to offer a foundation to support stakeholders from diverse disciplinary backgrounds to discuss and address complex responsibility questions in hypothesised and real-world cases involving AI. The work is illustrated by application to a fictitious scenario of a fatal collision between a crewless, AI-enabled maritime vessel in autonomous mode and a traditional, crewed vessel at sea.
AI has made significant strides recently, leading to various applications in both civilian and military sectors. The military sees AI as a solution for developing more effective and faster technologies. While AI offers benefits like improved operational efficiency and precision targeting, it also raises serious ethical and legal concerns, particularly regarding human rights violations. Autonomous weapons that make decisions without human input can threaten the right to life and violate international humanitarian law. To address these issues, we propose a three-stage framework (Design, In Deployment, and During/After Use) for evaluating human rights concerns in the design, deployment, and use of military AI. Each phase includes multiple components that address various concerns specific to that phase, ranging from bias and regulatory issues to violations of International Humanitarian Law. Through this framework, we aim to balance the advantages of AI in military operations with the need to protect human rights.
Organizations of all sizes, across all industries and domains are leveraging artificial intelligence (AI) technologies to solve some of their biggest challenges around operations, customer experience, and much more. However, due to the probabilistic nature of AI, the risks associated with it are far greater than traditional technologies. Research has shown that these risks can range anywhere from regulatory, compliance, reputational, and user trust, to financial and even societal risks. Depending on the nature and size of the organization, AI technologies can pose a significant risk, if not used in a responsible way. This position paper seeks to present a brief introduction to AI governance, which is a framework designed to oversee the responsible use of AI with the goal of preventing and mitigating risks. Having such a framework will not only manage risks but also gain maximum value out of AI projects and develop consistency for organization-wide adoption of AI.
Participatory Artificial Intelligence (PAI) has recently gained interest among researchers as a means to inform the design of technology through collectives' lived experience. PAI holds a greater promise than merely providing useful input to developers: it can contribute to the process of democratizing the design of technology, setting the focus on what should be designed. However, in the process of PAI there exist institutional power dynamics that hinder the realization of the expansive dreams and aspirations of the relevant stakeholders. In this work we propose co-design principles for AI that address institutional power dynamics, focusing on Participatory AI with youth.
Organizations investing in artificial intelligence face a fundamental challenge: traditional return on investment calculations fail to capture the dual nature of AI implementations, which simultaneously reduce certain operational risks while introducing novel exposures related to algorithmic malfunction, adversarial attacks, and regulatory liability. This research presents a comprehensive financial framework for quantifying AI project returns that explicitly integrates changes in organizational risk profiles. The methodology addresses a critical gap in current practice where investment decisions rely on optimistic benefit projections without accounting for the probabilistic costs of AI-specific threats including model drift, bias-related litigation, and compliance failures under emerging regulations such as the European Union Artificial Intelligence Act and ISO/IEC 42001. Drawing on established risk quantification methods, including annual loss expectancy calculations and Monte Carlo simulation techniques, this framework enables practitioners to compute net benefits that incorporate both productivity gains and the delta between pre-implementation and post-implementation risk exposures. The analysis demonstrates that accurate AI investment evaluation requires explicit modeling of control effectiveness, reserve requirements for algorithmic failures, and the ongoing operational costs of maintaining model performance. Practical implications include specific guidance for establishing governance structures, conducting phased validations, and integrating risk-adjusted metrics into capital allocation decisions, ultimately enabling evidence-based AI portfolio management that satisfies both fiduciary responsibilities and regulatory mandates.
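The quantities named in the abstract lend themselves to a compact illustration. The following Monte Carlo sketch, with invented parameters and a deliberately simplified loss model, nets the change in simulated annual loss (Poisson event counts times lognormal severities) against projected productivity gains; it illustrates the general idea of a risk-adjusted net benefit, not the paper's actual framework.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo trials

def simulate_annual_loss(freq_lambda, loss_mean, loss_sigma):
    """Simulated annual loss: Poisson event count times a lognormal severity.

    Simplification: one severity draw is scaled by the event count,
    rather than summing independent severities per event.
    """
    events = rng.poisson(freq_lambda, N)
    severities = rng.lognormal(loss_mean, loss_sigma, N)
    return events * severities

# Invented parameters: some operational risks retire after deployment,
# while novel AI-specific risks (drift, litigation, compliance) appear.
loss_before = simulate_annual_loss(freq_lambda=2.0, loss_mean=11.0, loss_sigma=0.8)
loss_after = simulate_annual_loss(freq_lambda=0.5, loss_mean=12.0, loss_sigma=1.0)

productivity_gain = 400_000  # assumed annual benefit
operating_cost = 150_000     # assumed monitoring / maintenance cost

net_benefit = productivity_gain - operating_cost - (loss_after - loss_before)
print(f"mean risk-adjusted net benefit: {net_benefit.mean():,.0f}")
print(f"5th-percentile downside:        {np.percentile(net_benefit, 5):,.0f}")
```

Reporting a downside percentile alongside the mean is what distinguishes this style of evaluation from an optimistic point-estimate ROI.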
White-box AI (WAI), also known as explainable AI (XAI), is a novel tool for exposing the reasoning behind decisions and predictions made by AI algorithms, making them more understandable and transparent. It offers a new approach to addressing the key challenges of interpretability and mathematical validation in traditional black-box models. In this paper, WAI-aided wireless communication systems are proposed and investigated thoroughly to exploit these promising capabilities. First, we introduce the fundamental principles of WAI. Then, a detailed comparison between WAI and the traditional black-box model is conducted in terms of optimization objectives and architecture design, with a focus on deep neural networks (DNNs) and transformer networks. Furthermore, in contrast to traditional black-box methods, WAI leverages theory-driven causal modeling and verifiable optimization paths, thereby demonstrating potential advantages in areas such as signal processing and resource allocation. Finally, we outline future research directions for the integration of WAI in wireless communication systems.
As Artificial Intelligence (AI) technology gets more intertwined with every system, people are using AI to make decisions in their everyday activities. In simple contexts, such as Netflix recommendations, or in more complex contexts, such as judicial scenarios, AI is part of people's decisions. When people make decisions, they often need to explain those decisions to others in some manner. This is particularly critical in contexts where human expertise is central to decision-making. In order to explain their decisions made with AI support, people need to understand how AI is part of those decisions. When considering the aspect of fairness, the role that AI has in a decision-making process becomes even more sensitive, since it affects the fairness and the responsibility of the people making the ultimate decision. We have been exploring an evidence-based explanation design approach to 'tell the story of a decision'. In this position paper, we discuss our approach for AI systems using fairness-sensitive cases from the literature.
Rising concern for the societal implications of artificial intelligence systems has inspired a wave of academic and journalistic literature in which deployed systems are audited for harm by investigators from outside the organizations deploying the algorithms. However, it remains challenging for practitioners to identify the harmful repercussions of their own systems prior to deployment, and, once deployed, emergent issues can become difficult or impossible to trace back to their source. In this paper, we introduce a framework for algorithmic auditing that supports artificial intelligence system development end-to-end, to be applied throughout the internal organization development lifecycle. Each stage of the audit yields a set of documents that together form an overall audit report, drawing on an organization's values or principles to assess the fit of decisions made throughout the process. The proposed auditing framework is intended to contribute to closing the accountability gap in the development and deployment of large-scale artificial intelligence systems by embedding a robust process to ensure audit integrity.
Academic and policy proposals on algorithmic accountability often seek to understand algorithmic systems in their socio-technical context, recognising that they are produced by 'many hands'. Increasingly, however, algorithmic systems are also produced, deployed, and used within a supply chain comprising multiple actors tied together by flows of data between them. In such cases, it is the working together of an algorithmic supply chain of different actors, each contributing to the production, deployment, use, and functionality of systems, that drives them and produces particular outcomes. We argue that algorithmic accountability discussions must consider supply chains and the difficult implications they raise for the governance and accountability of algorithmic systems. In doing so, we explore algorithmic supply chains, locating them in their broader technical and political economic context and identifying some key features that should be understood in future work on algorithmic governance and accountability (particularly regarding general purpose AI services). To highlight ways forward and areas warranting attention, we further discuss some implications raised by supply chains: challenges for allocating accountability stemming from distributed responsibility for systems between actors, limited visibility due to the accountability horizon, service models of use and liability, and cross-border supply chains and regulatory arbitrage.
In 1996, Accountability in a Computerized Society [95] issued a clarion call concerning the erosion of accountability in society due to the ubiquitous delegation of consequential functions to computerized systems. Nissenbaum [95] described four barriers to accountability that computerization presented, which we revisit in relation to the ascendance of data-driven algorithmic systems--i.e., machine learning or artificial intelligence--to uncover new challenges for accountability that these systems present. Nissenbaum's original paper grounded discussion of the barriers in moral philosophy; we bring this analysis together with recent scholarship on relational accountability frameworks and discuss how the barriers present difficulties for instantiating a unified moral, relational framework in practice for data-driven algorithmic systems. We conclude by discussing ways of weakening the barriers in order to do so.
Accountability is an often called for property of technical systems. It is a requirement for algorithmic decision systems, autonomous cyber-physical systems, and for software systems in general. As a concept, accountability goes back to the early history of Liberalism and is suggested as a tool to limit the use of power. This long history has also given us many, often slightly differing, definitions of accountability. The problem that software developers now face is to understand what accountability means for their systems and how to reflect it in a system's design. To enable the rigorous study of accountability in a system, we need models that are suitable for capturing such a varied concept. In this paper, we present a method to express and compare different definitions of accountability using Structural Causal Models. We show how these models can be used to evaluate a system's design and present a small use case based on an autonomous car.
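A toy rendering of the idea, under the assumption that the structural causal model is reduced to deterministic Python functions (the paper's formalism is richer): an accountability definition becomes a counterfactual query over interventions, here for a simplified autonomous-car handover scenario invented for illustration.

```python
# Toy structural causal model for an autonomous-car handover scenario.
# Variables: driver_alert (exogenous), warning_issued, driver_takes_over,
# crash. Under one toy definition, an agent is "accountable" if
# intervening on its variable would have changed the outcome.

def model(driver_alert, do=None):
    do = do or {}
    warning_issued = do.get("warning_issued", True)  # system design choice
    driver_takes_over = do.get(
        "driver_takes_over", warning_issued and driver_alert)
    crash = not driver_takes_over
    return {"warning_issued": warning_issued,
            "driver_takes_over": driver_takes_over, "crash": crash}

def would_have_prevented(var, value, context):
    """Counterfactual test: does do(var=value) flip the crash outcome?"""
    actual = model(**context)
    counterfactual = model(**context, do={var: value})
    return actual["crash"] and not counterfactual["crash"]

context = {"driver_alert": False}  # the driver was inattentive
print(model(**context))  # actual world: crash occurs
print("driver accountable:",
      would_have_prevented("driver_takes_over", True, context))
```

Competing definitions of accountability then become different queries over the same model, which is what makes them comparable.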
One of the most concrete measures to take towards meaningful AI accountability is to consequentially assess and report systems' performance and impact. However, the practical nature of the "AI audit" ecosystem is muddled and imprecise, making it difficult to work through the various concepts and map out the stakeholders involved in the practice. First, we taxonomize current AI audit practices as conducted by regulators, law firms, civil society, journalism, academia, and consulting agencies. Next, we assess the impact of audits done by stakeholders within each domain. We find that only a subset of AI audit studies translate to desired accountability outcomes. We thus assess and isolate practices necessary for effective AI audit results, articulating how AI audit design, methodology, and institutional context bear on an audit's effectiveness as a meaningful mechanism for accountability.
Large language models (LLMs) are increasingly embedded in consequential decisions across healthcare, finance, employment, and public services. Yet accountability remains fragile because process transparency is rarely recorded in a durable and reviewable form. We propose LLM audit trails as a sociotechnical mechanism for continuous accountability. An audit trail is a chronological, tamper-evident, context-rich ledger of lifecycle events and decisions that links technical provenance (models, data, training and evaluation runs, deployments, monitoring) with governance records (approvals, waivers, and attestations), so organizations can reconstruct what changed, when, and who authorized it. This paper contributes: (1) a lifecycle framework that specifies event types, required metadata, and governance rationales; (2) a reference architecture with lightweight emitters, append-only audit stores, and an auditor interface supporting cross-organizational traceability; and (3) a reusable, open-source Python implementation that instantiates this audit layer in LLM workflows with minimal integration effort. We conclude by discussing limitations and directions for adoption.
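The tamper-evident property the abstract describes is commonly obtained by hash chaining. Below is a minimal, self-contained sketch of that mechanism in plain Python; it is illustrative only, not the paper's open-source implementation, and all event names are invented.

```python
import hashlib, json, time

class AuditTrail:
    """Append-only, hash-chained ledger of lifecycle events."""

    def __init__(self):
        self.entries = []

    def append(self, event_type, metadata, actor):
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"ts": time.time(), "event": event_type,
                "metadata": metadata, "actor": actor, "prev": prev}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self):
        """Recompute the chain; an edited entry breaks every later link."""
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev or e["hash"] != hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest():
                return False
            prev = e["hash"]
        return True

trail = AuditTrail()
trail.append("deployment", {"model": "clf-v3"}, actor="ml-platform")
trail.append("waiver", {"risk": "drift"}, actor="governance-board")
print(trail.verify())                             # True
trail.entries[0]["metadata"]["model"] = "clf-v4"  # tamper with the record
print(trail.verify())                             # False
```

Because each entry commits to the hash of its predecessor, retroactive edits are detectable without trusting the party that holds the log.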
Reinforcement learning is increasingly used to transform large language models into agentic systems that act over long horizons, invoke tools, and manage memory under partial observability. While recent work has demonstrated performance gains through tool learning, verifiable rewards, and continual training, deployed self-improving agents raise unresolved security and governance challenges: optimization pressure can incentivize reward hacking, behavioral drift is difficult to audit or reproduce, and improvements are often entangled in opaque parameter updates rather than reusable, verifiable artifacts. This paper proposes Audited Skill-Graph Self-Improvement (ASG-SI), a framework that treats self-improvement as iterative compilation of an agent into a growing, auditable skill graph. Each candidate improvement is extracted from successful trajectories, normalized into a skill with an explicit interface, and promoted only after passing verifier-backed replay and contract checks. Rewards are decomposed into reconstructible components derived from replayable evidence, enabling independent audit of promotion decisions and learning signals. ASG-SI further integrates experience synthesis for scalable stress testing and continual memory control to preserve long-horizon performance under bounded context. We present a complete system architecture, threat model, and security analysis, and provide a fully runnable reference implementation that demonstrates verifier-backed reward construction, skill compilation, audit logging, and measurable improvement under continual task streams. ASG-SI reframes agentic self-improvement as accumulation of verifiable, reusable capabilities, offering a practical path toward reproducible evaluation and operational governance of self-improving AI agents.
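A hedged sketch of the promotion-gate idea (every name here is hypothetical, not taken from the paper's reference implementation): a candidate skill is admitted to the graph only if its recorded trajectories replay reproducibly and pass contract and verifier checks.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Skill:
    name: str
    run: Callable[[dict], dict]             # the compiled behavior
    contract: Callable[[dict, dict], bool]  # interface check: (inp, out) -> ok
    evidence: list = field(default_factory=list)  # replayable trajectories

def promote(skill: Skill, verifier: Callable[[dict, dict], bool],
            graph: dict) -> bool:
    """Admit a skill only if every recorded trajectory replays and verifies."""
    for inp, claimed_out in skill.evidence:
        out = skill.run(inp)
        if out != claimed_out:  # replay must be reproducible
            return False
        if not (skill.contract(inp, out) and verifier(inp, out)):
            return False
    graph[skill.name] = skill
    return True

# Toy example: a skill that doubles a number, with one recorded trajectory.
double = Skill(
    name="double",
    run=lambda inp: {"y": 2 * inp["x"]},
    contract=lambda inp, out: "y" in out,
    evidence=[({"x": 3}, {"y": 6})],
)
graph = {}
print(promote(double, verifier=lambda i, o: o["y"] == 2 * i["x"], graph=graph))
```

The point of the gate is that improvements accumulate as inspectable artifacts in `graph` rather than as opaque parameter updates.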
Algorithmic decision making is now widespread, ranging from health care allocation to more common actions such as recommendation or information ranking. The aim of auditing these algorithms has grown alongside. In this paper, we focus on external audits that are conducted by interacting with the user side of the target algorithm, which is hence considered a black box. Yet the legal framework in which these audits take place is mostly ambiguous to the researchers developing them: on the one hand, the legal value of the audit outcome is uncertain; on the other hand, the auditors' rights and obligations are unclear. The contribution of this paper is to articulate two canonical audit forms to law, to shed light on these aspects: 1) the first audit form (which we coin the Bobby audit form) checks a predicate against the algorithm, while the second (Sherlock) is looser and opens up to multiple investigations. We find that Bobby audits are more amenable to prosecution, yet are delicate as they operate on real user data; this can lead to rejection by a court (the notion of admissibility). Sherlock audits craft data for their operation, most notably to build surrogates of the audited algorithm; they are mostly used for whistleblowing, as even if accepted as proof, the evidential value will be low in practice. 2) Both forms require the prior respect of a proper right to audit, granted by law or by the platform being audited; otherwise the auditor will be prone to prosecution regardless of the audit outcome. This article thus highlights the relation of current audits to law, in order to structure the growing field of algorithm auditing.
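As a minimal illustration of the Bobby form (assumed shape only, not the authors' code): the auditor fixes one falsifiable predicate and evaluates it with query access alone, here a paired-profile test that decisions must not change with a protected attribute. All names and data are hypothetical.

```python
def bobby_audit(black_box, paired_profiles):
    """Check a fairness predicate against a black-box decision system.

    paired_profiles: list of (profile_a, profile_b) differing only in a
    protected attribute. Predicate: decisions must match on every pair.
    """
    violations = [(a, b) for a, b in paired_profiles
                  if black_box(a) != black_box(b)]
    return len(violations) == 0, violations

# Hypothetical decision system under audit (stands in for a remote API).
def loan_model(profile):
    return profile["income"] > 50_000 and profile["gender"] != "F"  # biased!

pairs = [({"income": 60_000, "gender": "M"},
          {"income": 60_000, "gender": "F"})]
passed, found = bobby_audit(loan_model, pairs)
print("predicate holds:", passed, "| violating pairs:", len(found))
```

A Sherlock audit would instead use such queries to fit a surrogate model of `loan_model`, trading legal robustness for investigative breadth.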
We describe and implement a policy language. In our system, agents can distribute data along with usage policies in a decentralized architecture. Our language supports the specification of conditions and obligations, and also the possibility to refine policies. In our framework, the compliance with usage policies is not actively enforced. However, agents are accountable for their actions, and may be audited by an authority requiring justifications.
Algorithmic audits have been embraced as tools to investigate the functioning and consequences of sociotechnical systems. Though the term is used somewhat loosely in the algorithmic context and encompasses a variety of methods, it maintains a close connection to audit studies in the social sciences--which have, for decades, used experimental methods to measure the prevalence of discrimination across domains like housing and employment. In the social sciences, audit studies originated in a strong tradition of social justice and participatory action, often involving collaboration between researchers and communities; but scholars have argued that, over time, social science audits have become somewhat distanced from these original goals and priorities. We draw from this history in order to highlight difficult tensions that have shaped the development of social science audits, and to assess their implications in the context of algorithmic auditing. In doing so, we put forth considerations to assist in the development of robust and engaged assessments of sociotechnical systems that draw from auditing's roots in racial equity and social justice.
This paper introduces reviewability as a framework for improving the accountability of automated and algorithmic decision-making (ADM) involving machine learning. We draw on an understanding of ADM as a socio-technical process involving both human and technical elements, beginning before a decision is made and extending beyond the decision itself. While explanations and other model-centric mechanisms may assist with some accountability concerns, they often provide insufficient information about these broader ADM processes for regulatory oversight and assessments of legal compliance. Reviewability involves breaking down the ADM process into technical and organisational elements to provide a systematic framework for determining the contextually appropriate record-keeping mechanisms to facilitate meaningful review - both of individual decisions and of the process as a whole. We argue that a reviewability framework, drawing on administrative law's approach to reviewing human decision-making, offers a practical way forward towards a more holistic and legally-relevant form of accountability for ADM.
For almost a decade now, scholarship in and beyond the ACM FAccT community has been focusing on novel and innovative ways and methodologies to audit the functioning of algorithmic systems. Over the years, this research idea and technical project has matured enough to become a regulatory mandate. Today, the Digital Services Act (DSA) and the Online Safety Act (OSA) have established the framework within which technology corporations and (traditional) auditors will develop the 'practice' of algorithmic auditing, thereby presaging how this 'ecosystem' will develop. In this paper, we systematically review the auditing provisions in the DSA and the OSA in light of observations from the emerging industry of algorithmic auditing. Who is likely to occupy this space? What are some political and ethical tensions that are likely to arise? How are the mandates of 'independent auditing' or 'the evaluation of the societal context of an algorithmic function' likely to play out in practice? By shaping the picture of the emerging political economy of algorithmic auditing, we draw attention to strategies and cultures of traditional auditors that risk eroding important regulatory pillars of the DSA and the OSA. Importantly, we warn that ambitious research ideas and technical projects of/for algorithmic auditing may end up crushed by the standardising grip of traditional auditors and/or diluted within a complex web of (sub-)contractual arrangements, diverse portfolios, and tight timelines.
This paper reframes algorithmic systems as intimately connected to and part of social and ecological systems, and proposes a first-of-its-kind methodology for environmental justice-oriented algorithmic audits. How do we consider environmental and climate justice dimensions of the way algorithmic systems are designed, developed, and deployed? These impacts are inherently emergent and can only be understood and addressed at the level of relations between an algorithmic system and the social (including institutional) and ecological components of the broader ecosystem it operates in. As a result, we claim that in absence of an integral ontology for algorithmic systems, we cannot do justice to the emergent nature of broader environmental impacts of algorithmic systems and their underlying computational infrastructure. We propose to define algorithmic systems as ontologically indistinct from Social-Ecological-Technological Systems (SETS), framing emergent implications as couplings between social, ecological, and technical components of the broader fabric in which algorithms are integrated and operate. We draw upon prior work on SETS analysis as well as emerging themes in the literature and practices of Environmental Justice (EJ) to conceptualize and assess algorithmic impact. We then offer three policy recommendations to help establish a SETS-based EJ approach to algorithmic audits: (1) broaden the inputs and open-up the outputs of an audit, (2) enable meaningful access to redress, and (3) guarantee a place-based and relational approach to the process of evaluating impact. We operationalize these as a qualitative framework of questions for a spectrum of stakeholders. Doing so, this article aims to inspire stronger and more frequent interactions across policymakers, researchers, practitioners, civil society, and grassroots communities.
Increasingly, individuals who engage in online activities are expected to interact with large language model (LLM)-based chatbots. Prior work has shown that LLMs can display dialect bias, which occurs when they produce harmful responses when prompted with text written in minoritized dialects. However, whether and how this bias propagates to systems built on top of LLMs, such as chatbots, is still unclear. We conduct a review of existing approaches for auditing LLMs for dialect bias and show that they cannot be straightforwardly adapted to audit LLM-based chatbots due to issues of substantive and ecological validity. To address this, we present a framework for auditing LLM-based chatbots for dialect bias by measuring the extent to which they produce quality-of-service harms, which occur when systems do not work equally well for different people. Our framework has three key characteristics that make it useful in practice. First, by leveraging dynamically generated instead of pre-existing text, our framework enables testing over any dialect, facilitates multi-turn conversations, and represents how users are likely to interact with chatbots in the real world. Second, by measuring quality-of-service harms, our framework aligns audit results with the real-world outcomes of chatbot use. Third, our framework requires only query access to an LLM-based chatbot, meaning that it can be leveraged equally effectively by internal auditors, external auditors, and even individual users in order to promote accountability. To demonstrate the efficacy of our framework, we conduct a case study audit of Amazon Rufus, a widely-used LLM-based chatbot in the customer service domain. Our results reveal that Rufus produces lower-quality responses to prompts written in minoritized English dialects, and that these quality-of-service harms are exacerbated by the presence of typos in prompts.
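A hedged sketch of the audit loop the abstract implies: paired prompts in a standard and a minoritized dialect, query-only access, and a paired nonparametric test on quality scores. The chatbot client and quality scorer below are hypothetical stand-ins, not the authors' framework or the Rufus audit.

```python
import random
from scipy.stats import wilcoxon

def audit_dialect_gap(chatbot, prompt_pairs, score):
    """Paired audit: the same request in a standard vs. minoritized dialect."""
    standard = [score(chatbot(std)) for std, _ in prompt_pairs]
    dialect = [score(chatbot(dia)) for _, dia in prompt_pairs]
    mean_gap = sum(s - d for s, d in zip(standard, dialect)) / len(standard)
    _, p_value = wilcoxon(standard, dialect)  # paired, nonparametric
    return mean_gap, p_value

# Hypothetical stand-ins for the chatbot under audit and a quality rubric.
random.seed(0)

def toy_chatbot(prompt):
    # A deliberately biased toy: terse replies to marked dialect input.
    return "short" if "[dialect]" in prompt else "long detailed answer"

def toy_score(response):
    # Crude length-based quality proxy with small rubric noise.
    return min(len(response) / 20, 1.0) - random.random() * 0.05

pairs = [(f"question {i}", f"question {i} [dialect]") for i in range(10)]
gap, p = audit_dialect_gap(toy_chatbot, pairs, toy_score)
print(f"mean quality gap = {gap:.2f}, p = {p:.3f}")
```

Because only query access is assumed, the same harness works for internal auditors, external auditors, or individual users, which is the accountability property the paper emphasizes.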
Auditing social-media algorithms has become a focus of public-interest research and policymaking to ensure their fairness across demographic groups such as race, age, and gender in consequential domains such as the presentation of employment opportunities. However, such demographic attributes are often unavailable to auditors and platforms. When demographic data is unavailable, auditors commonly infer it from other available information. In this work, we study the effects of inference error on auditing for bias in one prominent application: black-box audits of ad delivery using paired ads. We show that inference error, if not accounted for, causes auditing to falsely miss skew that exists. We then propose a way to mitigate the inference error when evaluating skew in ad delivery algorithms. Our method works by adjusting for the expected error due to demographic inference, and it makes skew detection more sensitive when attributes must be inferred. Because inference is increasingly used for auditing, our results provide an important addition to the auditing toolbox to promote correct audits of ad delivery algorithms for bias. While the impact of attribute inference on accuracy has been studied in other domains, our work is the first to consider it for black-box evaluation of ad delivery bias, when only aggregate data is available to the auditor.
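One standard misclassification adjustment consistent with the abstract's description, though not necessarily the authors' exact estimator, is the Rogan-Gladen correction: rescale the observed proportion by the inference model's sensitivity and specificity. The numbers below are invented.

```python
def corrected_rate(observed_rate, sensitivity, specificity):
    """Adjust an observed proportion for binary attribute-inference error.

    Rogan-Gladen correction: p = (p_obs + spec - 1) / (sens + spec - 1).
    Requires sens + spec > 1, i.e. inference better than chance.
    """
    denom = sensitivity + specificity - 1
    if denom <= 0:
        raise ValueError("attribute inference must beat chance")
    p = (observed_rate + specificity - 1) / denom
    return min(max(p, 0.0), 1.0)  # clip to a valid proportion

# Invented numbers: an ad reaches the inferred group at an observed 45% rate,
# using an attribute classifier with 0.85 sensitivity and 0.90 specificity.
naive = 0.45
adjusted = corrected_rate(naive, sensitivity=0.85, specificity=0.90)
print(f"naive {naive:.2f} -> adjusted {adjusted:.2f}")
```

Noisy inference pulls observed group rates toward each other, so an uncorrected audit understates skew; undoing that attenuation is what makes detection more sensitive.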
Accountability is widely understood as a goal for well governed computer systems, and is a sought-after value in many governance contexts. But how can it be achieved? Recent work on standards for governable artificial intelligence systems offers a related principle: traceability. Traceability requires establishing not only how a system worked but how it was created and for what purpose, in a way that explains why a system has particular dynamics or behaviors. It connects records of how the system was constructed and what the system did mechanically to the broader goals of governance, in a way that highlights human understanding of that mechanical operation and the decision processes underlying it. We examine the various ways in which the principle of traceability has been articulated in AI principles and other policy documents from around the world, distill from these a set of requirements on software systems driven by the principle, and systematize the technologies available to meet those requirements. From our map of requirements to supporting tools, techniques, and procedures, we identify gaps and needs separating what traceability requires from the toolbox available for practitioners. This map reframes existing discussions around accountability and transparency, using the principle of traceability to show how, when, and why transparency can be deployed to serve accountability goals and thereby improve the normative fidelity of systems and their development processes.
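To make concrete the idea of connecting construction records to runtime behavior, here is a minimal sketch of a traceability hook: each decision is logged together with pointers to how the system was built (model, version, training-data reference) and what it did (input digest, output). The field names, identifiers, and logging target are our illustrative assumptions, not tooling from the paper.

```python
import hashlib
import json
import time

# Hypothetical build provenance, fixed at deployment time.
BUILD_INFO = {"model": "credit-scorer", "version": "1.4.2",
              "training_data": "urn:dataset:apps-2025-01"}

def traced(decide):
    """Wrap a decision function so every call emits a traceability record."""
    def wrapper(features: dict):
        out = decide(features)
        record = {
            "ts": time.time(),
            "build": BUILD_INFO,
            "input_sha256": hashlib.sha256(
                json.dumps(features, sort_keys=True).encode()).hexdigest(),
            "output": out,
        }
        print(json.dumps(record))  # stand-in for an append-only audit log
        return out
    return wrapper

@traced
def decide(features: dict) -> str:
    return "approve" if features.get("score", 0) > 600 else "review"

decide({"score": 640})
```

Even this toy version shows the principle's two halves: the record explains both the mechanical operation (input/output) and the development lineage behind it.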
Algorithm audits are powerful tools for studying black-box systems. While very effective in examining technical components, the method stops short of a sociotechnical frame, which would also consider users as an integral and dynamic part of the system. Addressing this gap, we propose the concept of sociotechnical auditing: auditing methods that evaluate algorithmic systems at the sociotechnical level, focusing on the interplay between algorithms and users as each impacts the other. Just as algorithm audits probe an algorithm with varied inputs and observe outputs, a sociotechnical audit (STA) additionally probes users, exposing them to different algorithmic behavior and measuring resulting attitudes and behaviors. To instantiate this method, we develop Intervenr, a platform for conducting browser-based, longitudinal sociotechnical audits with consenting, compensated participants. Intervenr investigates the algorithmic content users encounter online and coordinates systematic client-side interventions to understand how users change in response. As a case study, we deploy Intervenr in a two-week sociotechnical audit of online advertising (N=244) to investigate the central premise that personalized ad targeting is more effective on users. In the first week, we collect all browser ads delivered to users, and in the second, we deploy an ablation-style intervention that disrupts normal targeting by randomly pairing participants and swapping all their ads. We collect user-oriented metrics (self-reported ad interest and feeling of representation) and advertiser-oriented metrics (ad views, clicks, and recognition) throughout, along with a total of over 500,000 ads. Our STA finds that targeted ads indeed perform better with users, but also that users begin to acclimate to different ads in only a week, casting doubt on the primacy of personalized ad targeting given the impact of repeated exposure.
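The swap intervention at the heart of this ablation can be sketched in a few lines; this is our illustration of the random-pairing protocol, not Intervenr's implementation, and it assumes an even number of participants.

```python
import random

def pair_and_swap(participants: list[str]) -> dict[str, str]:
    """Randomly pair participants; map each one to the partner
    whose ad stream they will receive during the intervention week."""
    shuffled = participants[:]
    random.shuffle(shuffled)
    mapping = {}
    for a, b in zip(shuffled[::2], shuffled[1::2]):
        mapping[a], mapping[b] = b, a
    return mapping

print(pair_and_swap(["p1", "p2", "p3", "p4"]))
```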
Approximately 50% of the tweets in X users' timelines are personalized recommendations from accounts they do not follow. This raises a critical question: What political content are users exposed to beyond their established networks, and what implications does this have for democratic discourse online? In this paper, we present a six-week audit of X's algorithmic content recommendations during the 2024 U.S. Presidential Election by deploying 120 sock-puppet monitoring accounts to capture tweets from their personalized "For You" timelines. Our objective is to quantify out-of-network content exposure for right- and left-leaning user profiles and assess any potential inequalities and biases in political exposure. Our findings indicate that X's algorithm skews exposure toward a few high-popularity accounts across all users, with right-leaning users experiencing the highest level of exposure inequality. Both left- and right-leaning users encounter amplified exposure to accounts aligned with their own political views and reduced exposure to opposing viewpoints. Additionally, we observe that new accounts experience a right-leaning bias in exposure within their default timelines. Our work contributes to understanding how content recommendation systems may induce and reinforce biases while exacerbating vulnerabilities among politically polarized user groups. We underscore the importance of transparency-aware algorithms in addressing critical issues such as safeguarding election integrity and fostering a more informed digital public sphere.
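Exposure inequality of this kind is often summarized with a Gini coefficient over per-account exposure counts; the sketch below is an illustrative metric choice and does not reproduce the paper's exact measure.

```python
import numpy as np

def gini(exposures) -> float:
    """Gini coefficient of exposure counts: 0 = equal, -> 1 = concentrated."""
    x = np.sort(np.asarray(exposures, dtype=float))
    n = x.size
    total = x.sum()
    # Standard formula: G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * total) - (n + 1) / n

# A timeline dominated by one high-popularity account is highly unequal.
print(gini([1, 1, 1, 97]))  # = 0.72
```

Computing such a statistic separately for left- and right-leaning sock puppets is one simple way to compare inequality of exposure across political profiles.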
Digitalization and smart systems are part of our everyday lives today. Development has been rapid, and the implications that follow deployment have been difficult to foresee, or even to assess during development, especially where ethics or trustworthiness is concerned. Artificial Intelligence (AI) and Autonomous Systems (AS) are the direction software systems are taking today: they are already visible in banking, retail, and on the internet, and they are advancing into transportation and travel. The maritime industry is taking the same direction with the digitalization of fairways and port terminals. AI ethics has advanced considerably alongside the progress of machine learning over the last decade and is now being brought into AI development and the workflows of software engineers. This is not an easy task, and tools are needed to make ethical assessment easier. This paper reports on research in an industrial setting in which the Ethically Aligned Design practice of Ethical User Stories is used to translate ethical requirements into ethical user stories that form practical solutions for project use. The project lies in the maritime industry and concentrates on the digitalization of port terminals; this particular paper focuses on the passenger flow use case. The results, drawn from a large empirical data set, are positive towards the practice of Ethical User Stories.
Society's increasing dependence on Artificial Intelligence (AI) and AI-enabled systems requires a more practical approach from software engineering (SE) executives in middle and higher-level management, who can improve their involvement in implementing AI ethics by making ethical requirements part of their management practices. However, research indicates that most work on implementing ethical requirements in SE management focuses primarily on technical development, with scarce findings for middle and higher-level management. We investigate this by interviewing ten Finnish SE executives in middle and higher-level management to examine how they consider and implement ethical requirements. We use the ethical requirements from the European Union (EU) Ethics Guidelines for Trustworthy AI as our reference and an Agile portfolio management framework to analyze implementation. Our findings reveal that the privacy and data governance ethical requirements are generally treated as legal requirements, with no other ethical requirements receiving explicit consideration. The findings also show that technical robustness and safety can practicably be implemented as risk requirements, and societal and environmental well-being as sustainability requirements. We propose a practical approach to implementing ethical requirements through an ethical risk requirements stack within the Agile portfolio management framework.
In recent years, AI has continued to demonstrate its positive impact on society, albeit sometimes with ethically questionable consequences. Building and maintaining public trust in AI has been identified as key to successful and sustainable innovation. This chapter discusses the challenges of operationalizing ethical AI principles and presents an integrated view that covers high-level ethical AI principles, the general notion of trust/trustworthiness, and product/process support in the context of responsible AI, helping to improve both the trust in and the trustworthiness of AI for a wider set of stakeholders.
Artificial Intelligence has rapidly become a cornerstone technology, significantly influencing Europe's societal and economic landscapes. However, the proliferation of AI also raises critical ethical, legal, and regulatory challenges. The CERTAIN (Certification for Ethical and Regulatory Transparency in Artificial Intelligence) project addresses these issues by developing a comprehensive framework that integrates regulatory compliance, ethical standards, and transparency into AI systems. In this position paper, we outline the methodological steps for building the core components of this framework. Specifically, we present: (i) semantic Machine Learning Operations (MLOps) for structured AI lifecycle management, (ii) ontology-driven data lineage tracking to ensure traceability and accountability, and (iii) regulatory operations (RegOps) workflows to operationalize compliance requirements. By implementing and validating its solutions across diverse pilots, CERTAIN aims to advance regulatory compliance and to promote responsible AI innovation aligned with European standards.
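As a hedged illustration of what an ontology-driven lineage record (component ii) might capture, the sketch below follows the W3C PROV-style entity/activity/agent pattern; the field names and URIs are hypothetical, not the CERTAIN project's schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class LineageRecord:
    """One edge in a lineage graph: which activity, run by which agent,
    derived this artifact from which parents."""
    artifact_id: str                # entity URI, e.g., a model version
    derived_from: tuple[str, ...]   # parent entity URIs
    activity: str                   # e.g., "training", "preprocessing"
    agent: str                      # responsible person or service
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

rec = LineageRecord(
    artifact_id="urn:model:kyc-scorer:v3",
    derived_from=("urn:dataset:kyc-train:2025-06",),
    activity="training",
    agent="mlops-pipeline",
)
print(rec)
```

Chaining such records across the AI lifecycle is what allows traceability and accountability queries ("which datasets fed this deployed model?") of the kind regulatory compliance workflows need.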
AI Ethics is now a global topic of discussion in academic and policy circles. At least 84 public-private initiatives have produced statements describing high-level principles, values, and other tenets to guide the ethical development, deployment, and governance of AI. According to recent meta-analyses, AI Ethics has seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite the initial credibility granted to a principled approach to AI Ethics by the connection to principles in medical ethics, there are reasons to be concerned about its future impact on AI development and governance. Significant differences exist between medicine and AI development that suggest a principled approach in the latter may not enjoy success comparable to the former. Compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. These differences suggest we should not yet celebrate consensus around high-level principles that hide deep political and normative disagreement.
In this position paper, I argue that the best way to help and protect humans using AI technology is to make them aware of the intrinsic limitations and problems of AI algorithms. To accomplish this, I suggest three ethical guidelines to be used in the presentation of results, mandating AI systems to expose uncertainty, to instill distrust, and, contrary to traditional views, to avoid explanations. The paper offers a preliminary discussion of the guidelines and provides some arguments for their adoption, aiming to start a debate in the community about AI ethics in practice.
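A minimal sketch of the first guideline, exposing uncertainty, might look as follows; the threshold, field names, and wording are illustrative assumptions on our part, not prescriptions from the paper.

```python
from dataclasses import dataclass

@dataclass
class Result:
    label: str
    confidence: float  # assumed calibrated, in [0, 1]
    caution: str       # uncertainty surfaced to the user, never hidden

def present(label: str, confidence: float) -> Result:
    """Attach an explicit, distrust-instilling caution to every output."""
    if confidence < 0.7:
        note = "Low confidence: treat this output as a guess, not an answer."
    else:
        note = "The model may still be wrong; verify before acting."
    return Result(label, confidence, note)

print(present("approve", 0.62))
```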
Trustworthy Artificial Intelligence (TAI) integrates ethics that align with human values, looking at their influence on AI behaviour and decision-making. Primarily dependent on self-assessment, TAI evaluation aims to ensure ethical standards and safety in AI development and usage. This paper reviews the current TAI evaluation methods in the literature and offers a classification, contributing to understanding self-assessment methods in this field.
In this chapter we argue that discourses on AI must transcend the language of 'ethics' and engage with power and political economy in order to constitute 'Good Data'. In particular, we must move beyond the depoliticised language of 'ethics' currently deployed (Wagner 2018) in determining whether AI is 'good', given the limitations of ethics as a frame through which AI issues can be viewed. In order to circumvent these limits, we use instead the language and conceptualisation of 'Good Data' as a more expansive term to elucidate the values, rights and interests at stake in AI's development and deployment, as well as that of other digital technologies. Good Data considerations move beyond recurring themes of data protection/privacy and the FAT (fairness, accountability and transparency) movement to include explicit political economy critiques of power. Instead of yet more ethics principles (which tend to say the same or similar things anyway), we offer four 'pillars' on which Good Data AI can be built: community, rights, usability and politics. Overall, we view AI's 'goodness' as an explicitly political (economy) question of power, one always related to the degree to which AI is created and used to increase the wellbeing of society, and especially to increase the power of the most marginalized and disenfranchised. We offer recommendations and remedies towards implementing 'better' approaches to AI. Our strategies enable a different (but complementary) kind of evaluation of AI as part of the broader socio-technical systems in which AI is built and deployed.
Artificial Intelligence (AI) is transforming our daily life through applications in healthcare, space exploration, banking, and finance. This rapid progress in AI has brought increasing attention to the potential impacts of AI technologies on society, including ethically questionable consequences. In recent years, several sets of ethical principles have been released by governments and by national and international organisations. These principles outline high-level precepts to guide the ethical development, deployment, and governance of AI. However, the abstract nature, diversity, and context-dependency of these principles make them difficult to implement and operationalize, resulting in gaps between principles and their execution. Most recent work has analysed and summarized existing AI principles and guidelines, but has not provided findings on principle-implementation gaps or on how to mitigate them. Such findings are particularly important for ensuring that AI implementations are aligned with ethical principles and values. In this paper, we provide a contextual and global evaluation of current ethical AI principles across all continents, with the aim of identifying principle characteristics tailored to specific countries or applicable across countries. Next, we analyze the current level of AI readiness and the current implementations of ethical AI principles in different countries, to identify gaps in the implementation of AI principles and their causes. Finally, we propose recommendations to mitigate the principle-implementation gaps.
The merged grouping comprehensively covers the core areas of platform AI governance. The research starts from the ground-level tools of "algorithm auditing and technical evaluation," builds a systems view of "corporate governance frameworks and lifecycle management," and offers in-depth practical analyses of "high-risk vertical platforms" in finance, e-commerce, and human resources. It also proposes new paradigms of legal regulation for frontier technologies such as "generative AI," and ultimately extends to the deeper societal impacts of "macro political economy" and "organizational ethical leadership," forming a multi-layered body of governance knowledge that runs from technical tools, through organizational practice, to legal regulation.