Comparative Narratives of AI Governance
Geopolitics, Digital Sovereignty, and Global Inclusivity Narratives
This cluster examines AI's status as a national strategic asset, analyzing US-China competition, the commodification of digital sovereignty, and decolonial reframings. It also brings in Global South, Indigenous-rights, and gender-justice perspectives, critiquing techno-colonialism and advocating a more inclusive global governance framework.
- The Commodification of AI Sovereignty: Lessons from the Fight for Sovereign Oil(Rui-Jie Yew, Kate Elizabeth Creasey, Taylor Lynn Curtis, Suresh Venkatasubramanian, 2026, ArXiv Preprint)
- The Treachery of Images in the Digital Sovereignty Debate(Jukka Ruohonen, 2020, ArXiv Preprint)
- Māori algorithmic sovereignty: idea, principles, and use(Paul T. Brown, Daniel Wilson, Kiri West, Kirita-Rose Escott, Kiya Basabas, Ben Ritchie, Danielle Lucas, Ivy Taia, Natalie Kusabs, Te Taka Keegan, 2023, ArXiv Preprint)
- Owning the Intelligence: Global AI Patents Landscape and Europe's Quest for Technological Sovereignty(Lapo Santarlasci, Armando Rungi, Loredana Fattorini, Nestor Maslej, 2025, ArXiv Preprint)
- Promising Topics for U.S.-China Dialogues on AI Risks and Governance(Saad Siddiqui, Lujain Ibrahim, Kristy Loke, Stephen Clare, Marianne Lu, Aris Richardson, Conor McGlynn, Jeffrey Ding, 2025, ArXiv Preprint)
- Generative AI as a Geopolitical Factor in Industry 5.0: Sovereignty, Access, and Control(Azmine Toushik Wasi, Enjamamul Haque Eram, Sabrina Afroz Mitu, Md Manjurul Ahsan, 2025, ArXiv Preprint)
- The Global Majority in International AI Governance(Chinasa T. Okolo, Mubarak Raji, 2026, ArXiv Preprint)
- Digital Sovereignty Strategies for Every Nation(Ali Shoker, 2023, ArXiv Preprint)
- Sovereign Agents: Towards Infrastructural Sovereignty and Diffused Accountability in Decentralized AI(Botao Amber Hu, Helena Rong, 2026, ArXiv Preprint)
- AI, Global Governance, and Digital Sovereignty(Swati Srivastava, Justin Bullock, 2024, ArXiv Preprint)
- The ghost of AI governance past, present and future: AI governance in the European Union(Charlotte Stix, 2021, ArXiv Preprint)
- Sovereignty in the digital era: the quest for continuous access to dependable technological capabilities(Roberto Baldoni, Giuseppe Di Luna, 2025, ArXiv Preprint)
- Artificial Intelligence Ethics: An Inclusive Global Discourse?(Cathy Roche, Dave Lewis, P. J. Wall, 2021, ArXiv Preprint)
- In Consideration of Indigenous Data Sovereignty: Data Mining as a Colonial Practice(Jennafer Shae Roberts, Laura N Montoya, 2023, ArXiv Preprint)
- My Voice, Your Voice, Our Voice: Attitudes Towards Collective Governance of a Choral AI Dataset(Jennifer Ding, Eva Jäger, Victoria Ivanova, Mercedes Bunz, 2024, ArXiv Preprint)
- Non-Western Perspectives on Web Inclusivity: A Study of Accessibility Practices in the Global South(Masudul Hasan Masud Bhuiyan, Matteo Varvello, Cristian-Alexandru Staicu, Yasir Zaki, 2025, ArXiv Preprint)
- The Gender Code: Gendering the Global Governance of Artificial Intelligence(Jelena Cupac, 2025, ArXiv Preprint)
- Bias Amplification in Artificial Intelligence Systems(Kirsten Lloyd, 2018, ArXiv Preprint)
- Expansive Participatory AI: Supporting Dreaming within Inequitable Institutions(Michael Alan Chang, Shiran Dudy, 2022, ArXiv Preprint)
- AI Governance and Ethics Framework for Sustainable AI and Sustainability(Mahendra Samarawickrama, 2022, ArXiv Preprint)
- The Malicious Technical Ecosystem: Exposing Limitations in Technical Governance of AI-Generated Non-Consensual Intimate Images of Adults(Michelle L. Ding, Harini Suresh, 2025, ArXiv Preprint)
Legal Regulation, Regulatory Models, and Cross-Regional Policy Comparison
This cluster focuses on concrete policy instruments and regulatory evolution in AI governance, including the EU AI Act, risk-based regulatory models, FDA-style approval regimes, and cybersecurity frameworks. It also contrasts how different polities (e.g., the EU, US, and China) diverge between legal-bureaucratic and market-liberal logics, and the policy contestation between them.
- Linking Artificial Intelligence Principles(Yi Zeng, Enmeng Lu, Cunqing Huangfu, 2018, ArXiv Preprint)
- Mapping Industry Practices to the EU AI Act's GPAI Code of Practice Safety and Security Measures(Lily Stelling, Mick Yang, Rokas Gipiškis, Leon Staufer, Ze Shen Chin, Siméon Campos, Ariel Gil, Michael Chen, 2025, ArXiv Preprint)
- An Overview of the Risk-based Model of AI Governance(Veve Fry, 2025, ArXiv Preprint)
- Uncovering AI Governance Themes in EU Policies using BERTopic and Thematic Analysis(Delaram Golpayegani, Marta Lasek-Markey, Arjumand Younus, Aphra Kerr, Dave Lewis, 2025, ArXiv Preprint)
- A cybersecurity AI agent selection and decision support framework(Masike Malatji, 2025, ArXiv Preprint)
- Computable Gap Assessment of Artificial Intelligence Governance in Children's Centres: Evidence-Mechanism-Governance-Indicator Modelling of UNICEF's Guidance on AI and Children 3.0 Based on the Graph-GAP Framework(Wei Meng, 2025, ArXiv Preprint)
- Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation(Jakob Mokander, Maria Axente, Federico Casolari, Luciano Floridi, 2021, ArXiv Preprint)
- Balancing Power and Ethics: A Framework for Addressing Human Rights Concerns in Military AI(Mst Rafia Islam, Azmine Toushik Wasi, 2024, ArXiv Preprint)
- An FDA for AI? Pitfalls and Plausibility of Approval Regulation for Frontier Artificial Intelligence(Daniel Carpenter, Carson Ezell, 2024, ArXiv Preprint)
- From Abstract Threats to Institutional Realities: A Comparative Semantic Network Analysis of AI Securitisation in the US, EU, and China(Ruiyi Guo, Bodong Zhang, 2026, ArXiv Preprint)
- Between Innovation and Oversight: A Cross-Regional Study of AI Risk Management Frameworks in the EU, U.S., UK, and China(Amir Al-Maamari, 2025, ArXiv Preprint)
- Comparative Global AI Regulation: Policy Perspectives from the EU, China, and the US(Jon Chun, Christian Schroeder de Witt, Katherine Elkins, 2024, ArXiv Preprint)
- Pluralism in AI Governance: Toward Sociotechnical Alignment and Normative Coherence(Mike Wa Nkongolo, 2026, ArXiv Preprint)
- An Assessment of the AI Regulation Proposed by the European Commission(Patrick Glauner, 2021, ArXiv Preprint)
- How Do AI Companies "Fine-Tune" Policy? Examining Regulatory Capture in AI Governance(Kevin Wei, Carson Ezell, Nick Gabrieli, Chinmay Deshpande, 2024, ArXiv Preprint)
- AI and the EU Digital Markets Act: Addressing the Risks of Bigness in Generative AI(Ayse Gizem Yasar, Andrew Chong, Evan Dong, Thomas Krendl Gilbert, Sarah Hladikova, Roland Maio, Carlos Mougan, Xudong Shen, Shubham Singh, Ana-Andreea Stoica, Savannah Thais, Miri Zilka, 2023, ArXiv Preprint)
- Regulating Multifunctionality(Cary Coglianese, Colton R. Crum, 2025, ArXiv Preprint)
- The Risk-Adjusted Intelligence Dividend: A Quantitative Framework for Measuring AI Return on Investment Integrating ISO 42001 and Regulatory Exposure(Hernan Huwyler, 2025, ArXiv Preprint)
- Distributed and Decentralised Training: Technical Governance Challenges in a Shifting AI Landscape(Jakub Kryś, Yashvardhan Sharma, Janet Egan, 2025, ArXiv Preprint)
- A multilevel framework for AI governance(Hyesun Choung, Prabu David, John S. Seberger, 2023, ArXiv Preprint)
Sociotechnical Imaginaries, Discourse Structure, and Narrative-Analysis Methods
This cluster studies governance from narratological and computational-linguistic angles, examining how sociotechnical imaginaries such as the nuclear analogy and existential risk shape policy. It also includes technical methods for analyzing causal narratives, discourse segmentation, and character attribution in text, revealing the linguistic structures underlying governance.
- The Nuclear Analogy in AI Governance Research(Sophia Hatz, 2025, ArXiv Preprint)
- The Stories We Govern By: AI, Risk, and the Power of Imaginaries(Ninell Oldenburg, Gleb Papyshev, 2025, ArXiv Preprint)
- Sociotechnical Imaginaries of ChatGPT in Higher Education: The Evolving Media Discourse(Yinan Sun, Ali Unlu, Aditya Johri, 2025, ArXiv Preprint)
- AI Governance in the Context of the EU AI Act: A Bibliometric and Literature Review Approach(Byeong-Je Kim, Seunghoo Jeong, Bong-Kyung Cho, Ji-Bum Chung, 2025, ArXiv Preprint)
- CHATTER: A Character Attribution Dataset for Narrative Understanding(Sabyasachee Baruah, Shrikanth Narayanan, 2024, ArXiv Preprint)
- Improving Topic Segmentation by Injecting Discourse Dependencies(Linzi Xing, Patrick Huber, Giuseppe Carenini, 2022, ArXiv Preprint)
- Discourse Structure in Machine Translation Evaluation(Shafiq Joty, Francisco Guzmán, Lluís Màrquez, Preslav Nakov, 2017, ArXiv Preprint)
- Anchoring a Lexicalized Tree-Adjoining Grammar for Discourse(Bonnie Lynn Webber, Aravind K. Joshi, 1998, ArXiv Preprint)
- The distribution of discourse relations within and across turns in spontaneous conversation(S. Magalí López Cortez, Cassandra L. Jacobs, 2023, ArXiv Preprint)
- A Novel Corpus of Discourse Structure in Humans and Computers(Babak Hemmatian, Sheridan Feucht, Rachel Avram, Alexander Wey, Muskaan Garg, Kate Spitalnic, Carsten Eickhoff, Ellie Pavlick, Bjorn Sandstede, Steven Sloman, 2021, ArXiv Preprint)
- Causal Micro-Narratives(Mourad Heddaya, Qingcheng Zeng, Chenhao Tan, Rob Voigt, Alexander Zentefis, 2024, ArXiv Preprint)
- GumDrop at the DISRPT2019 Shared Task: A Model Stacking Approach to Discourse Unit Segmentation and Connective Detection(Yue Yu, Yilun Zhu, Yang Liu, Yan Liu, Siyao Peng, Mackenzie Gong, Amir Zeldes, 2019, ArXiv Preprint)
- Exploring aspects of similarity between spoken personal narratives by disentangling them into narrative clause types(Belen Saldias, Deb Roy, 2020, ArXiv Preprint)
Value Alignment, Ethical Critique, and the Reconstruction of Human Agency
This cluster analyzes the nature of AI alignment (e.g., the specification trap, undecidability) and human control in depth. It traces the shift from abstract ethical principles to practical alignment, and asks how the rights and agency of workers, military integrators, and users can be reconstituted within sociotechnical systems.
- The Specification Trap: Why Content-Based AI Value Alignment Cannot Produce Robust Alignment(Austin Spizzirri, 2025, ArXiv Preprint)
- On the Undecidability of Artificial Intelligence Alignment: Machines that Halt(Gabriel Adriano de Melo, Marcos Ricardo Omena De Albuquerque Maximo, Nei Yoshihiro Soma, Paulo Andre Lima de Castro, 2024, ArXiv Preprint)
- Aligned with Whom? Direct and social goals for AI systems(Anton Korinek, Avital Balwit, 2022, ArXiv Preprint)
- Competing Visions of Ethical AI: A Case Study of OpenAI(Melissa Wilfley, Mengting Ai, Madelyn Rose Sanfilippo, 2026, ArXiv Preprint)
- Tracing the Techno-Supremacy Doctrine: A Critical Discourse Analysis of the AI Executive Elite(Héctor Pérez-Urbina, 2025, ArXiv Preprint)
- AI Ethics in Industry: A Research Framework(Ville Vakkuri, Kai-Kristian Kemell, Pekka Abrahamsson, 2019, ArXiv Preprint)
- Towards a Theory of AI Personhood(Francis Rhys Ward, 2025, ArXiv Preprint)
- Beyond principlism: Practical strategies for ethical AI use in research practices(Zhicheng Lin, 2024, ArXiv Preprint)
- Requisite Variety in Ethical Utility Functions for AI Value Alignment(Nadisha-Marie Aliman, Leon Kester, 2019, ArXiv Preprint)
- Foundational Moral Values for AI Alignment(Betty Li Hou, Brian Patrick Green, 2023, ArXiv Preprint)
- Deliberative Technology for Alignment(Andrew Konya, Deger Turan, Aviv Ovadya, Lina Qui, Daanish Masood, Flynn Devine, Lisa Schirch, Isabella Roberts, Deliberative Alignment Forum, 2023, ArXiv Preprint)
- AI Challenges for Society and Ethics(Jess Whittlestone, Sam Clarke, 2022, ArXiv Preprint)
- Meaningful human control: actionable properties for AI system development(Luciano Cavalcante Siebert, Maria Luce Lupetti, Evgeni Aizenberg, Niek Beckers, Arkady Zgonnikov, Herman Veluwenkamp, David Abbink, Elisa Giaccardi, Geert-Jan Houben, Catholijn M. Jonker, Jeroen van den Hoven, Deborah Forster, Reginald L. Lagendijk, 2021, ArXiv Preprint)
- With Great Capabilities Come Great Responsibilities: Introducing the Agentic Risk & Capability Framework for Governing Agentic AI Systems(Shaun Khoo, Jessica Foo, Roy Ka-Wei Lee, 2025, ArXiv Preprint)
- Normative Conflicts and Shallow AI Alignment(Raphaël Millière, 2025, ArXiv Preprint)
- Designing for Human-Agent Alignment: Understanding what humans want from their agents(Nitesh Goyal, Minsuk Chang, Michael Terry, 2024, ArXiv Preprint)
- Being Considerate as a Pathway Towards Pluralistic Alignment for Agentic AI(Parand A. Alamdari, Toryn Q. Klassen, Rodrigo Toro Icarte, Sheila A. McIlraith, 2024, ArXiv Preprint)
- Measurement as governance in and for responsible AI(Abigail Z. Jacobs, 2021, ArXiv Preprint)
- AI Phenomenology for Understanding Human-AI Experiences Across Eras(Bhada Yun, Evgenia Taranova, Dana Feng, Renn Su, April Yi Wang, 2026, ArXiv Preprint)
- Beyond Automation: Rethinking Work, Creativity, and Governance in the Age of Generative AI(Haocheng Lin, 2025, ArXiv Preprint)
- Integrators at War: Mediating in AI-assisted Resort-to-Force Decisions(Dennis Müller, Maurice Chiodo, Mitja Sienknecht, 2025, ArXiv Preprint)
- In Oxford Handbook on AI Governance: The Role of Workers in AI Ethics and Governance(Natalia Luka, JS Tan, 2021, ArXiv Preprint)
- Navigating the Sociotechnical Imaginaries of Brazilian Tech Workers(Kenzo Soares Seto, 2026, ArXiv Preprint)
- You Can't Get There From Here: Redefining Information Science to address our sociotechnical futures(Scott Humr, Mustafa Canan, 2025, ArXiv Preprint)
- Axes for Sociotechnical Inquiry in AI Research(Sarah Dean, Thomas Krendl Gilbert, Nathan Lambert, Tom Zick, 2021, ArXiv Preprint)
- Designing AI Systems that Augment Human Performed vs. Demonstrated Critical Thinking(Katelyn Xiaoying Mei, Nic Weber, 2025, ArXiv Preprint)
Sector-Specific Governance Practices and Social Impacts
This cluster examines application narratives of AI in specific domains such as healthcare, education, scientific discovery, economic theory, landscape architecture, and transport scheduling (rail and airline operations). It studies how these applications reshape professional workflows and raise sociotechnical challenges such as misinformation and shifting preprint publication trends.
- Decoding the Sociotechnical Dimensions of Digital Misinformation: A Comprehensive Literature Review(Alisson Andrey Puska, Luiz Adolpho Baroni, Roberto Pereira, 2024, ArXiv Preprint)
- Towards The Ultimate Brain: Exploring Scientific Discovery with ChatGPT AI(Gerardo Adesso, 2023, ArXiv Preprint)
- Good AI for the Present of Humanity Democratizing AI Governance(Nicholas Kluge Corrêa, Nythamar de Oliveira, 2020, ArXiv Preprint)
- Artificial Intelligence and Economic Theories(Tshilidzi Marwala, Evan Hurwitz, 2017, ArXiv Preprint)
- Blue Sky Ideas in Artificial Intelligence Education from the EAAI 2017 New and Future AI Educator Program(Eric Eaton, Sven Koenig, Claudia Schulz, Francesco Maurelli, John Lee, Joshua Eckroth, Mark Crowley, Richard G. Freedman, Rogelio E. Cardona-Rivera, Tiago Machado, Tom Williams, 2017, ArXiv Preprint)
- Problematizing AI Omnipresence in Landscape Architecture(Phillip Fernberg, Zihao Zhang, 2024, ArXiv Preprint)
- Supporting Data-Frame Dynamics in AI-assisted Decision Making(Chengbo Zheng, Tim Miller, Alina Bialkowski, H Peter Soyer, Monika Janda, 2025, ArXiv Preprint)
- The Shift Towards Preprints in AI Policy Research: A Comparative Study of Preprint Trends in the U.S., Europe, and South Korea(Simon Suh, 2025, ArXiv Preprint)
- Study on the Helpfulness of Explainable Artificial Intelligence(Tobias Labarta, Elizaveta Kulicheva, Ronja Froelian, Christian Geißler, Xenia Melman, Julian von Klitzing, 2024, ArXiv Preprint)
- Need of AI in Modern Education: in the Eyes of Explainable AI (xAI)(Supriya Manna, Niladri Sett, 2024, ArXiv Preprint)
- Augmented Computational Design: Methodical Application of Artificial Intelligence in Generative Design(Pirouz Nourian, Shervin Azadi, Roy Uijtendaal, Nan Bai, 2023, ArXiv Preprint)
- Design Generative AI for Practitioners: Exploring Interaction Approaches Aligned with Creative Practice(Xiaohan Peng, Wendy E. Mackay, Janin Koch, 2026, ArXiv Preprint)
- AI-Driven Mobility Management for High-Speed Railway Communications: Compressed Measurements and Proactive Handover(Wen Li, Wei Chen, Shiyue Wang, Yuanyuan Zhang, Michail Matthaiou, Bo Ai, 2024, ArXiv Preprint)
- Who Will Win Practical Artificial Intelligence? AI Engineerings in China(Huai-Yu Wu, Feiyue Wang, Chunhong Pan, 2017, ArXiv Preprint)
- Artificial Intelligence Framework for Simulating Clinical Decision-Making: A Markov Decision Process Approach(Casey C. Bennett, Kris Hauser, 2013, ArXiv Preprint)
- Enabling Integration and Interaction for Decentralized Artificial Intelligence in Airline Disruption Management(Kolawole Ogunsina, Daniel DeLaurentis, 2021, ArXiv Preprint)
Technical Evaluation, Algorithmic Optimization, and Foundational Methodology
This cluster centers on the technical underpinnings of governance, including explainability (XAI), data-privacy protection, research reproducibility, system-certification methods, and foundational theory (e.g., ontology, neurosymbolic integration). These studies supply operational technical standards and assessment tools for governance.
- Why Artificial Intelligence Needs a Task Theory --- And What It Might Look Like(Kristinn R. Thórisson, Jordi Bieger, Thröstur Thorarensen, Jóna S. Sigurðardóttir, Bas R. Steunebrink, 2016, ArXiv Preprint)
- Foundations of GenIR(Qingyao Ai, Jingtao Zhan, Yiqun Liu, 2025, ArXiv Preprint)
- Artificial Collective Intelligence Engineering: a Survey of Concepts and Perspectives(Roberto Casadei, 2023, ArXiv Preprint)
- When Do Discourse Markers Affect Computational Sentence Understanding?(Ruiqi Li, Liesbeth Allein, Damien Sileo, Marie-Francine Moens, 2023, ArXiv Preprint)
- Expert-Guided LLM Reasoning for Battery Discovery: From AI-Driven Hypothesis to Synthesis and Characterization(Shengchao Liu, Hannan Xu, Yan Ai, Huanxin Li, Yoshua Bengio, Harry Guo, 2025, ArXiv Preprint)
- Conceptual Modeling and Artificial Intelligence: Mutual Benefits from Complementary Worlds(Dominik Bork, 2021, ArXiv Preprint)
- Dungeon Crawl Stone Soup as an Evaluation Domain for Artificial Intelligence(Dustin Dannenhauer, Michael W. Floyd, Jonathan Decker, David W. Aha, 2019, ArXiv Preprint)
- Certifiable Artificial Intelligence Through Data Fusion(Erik Blasch, Junchi Bin, Zheng Liu, 2021, ArXiv Preprint)
- BERT4beam: Large AI Model Enabled Generalized Beamforming Optimization(Yuhang Li, Yang Lu, Wei Chen, Bo Ai, Zhiguo Ding, Dusit Niyato, 2025, ArXiv Preprint)
- A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When?(Subrato Bharati, M. Rubaiyat Hossain Mondal, Prajoy Podder, 2023, ArXiv Preprint)
- The Artificial Scientist: Logicist, Emergentist, and Universalist Approaches to Artificial General Intelligence(Michael Timothy Bennett, Yoshihiro Maruyama, 2021, ArXiv Preprint)
- Privacy and Copyright Protection in Generative AI: A Lifecycle Perspective(Dawen Zhang, Boming Xia, Yue Liu, Xiwei Xu, Thong Hoang, Zhenchang Xing, Mark Staples, Qinghua Lu, Liming Zhu, 2023, ArXiv Preprint)
- Games for Artificial Intelligence Research: A Review and Perspectives(Chengpeng Hu, Yunlong Zhao, Ziqi Wang, Haocheng Du, Jialin Liu, 2023, ArXiv Preprint)
- Creative Problem Solving in Artificially Intelligent Agents: A Survey and Framework(Evana Gizzi, Lakshmi Nair, Sonia Chernova, Jivko Sinapov, 2022, ArXiv Preprint)
- Generalized Traveling Salesman Problem Reduction Algorithms(Gregory Gutin, Daniel Karapetyan, 2008, ArXiv Preprint)
- Correlation between tides and seismicity in Northwestern South America: the case of Colombia(Gloria A. Moncayo, Jorge I. Zuluaga, Gaspar Monsalve, 2018, ArXiv Preprint)
- A philosophical and ontological perspective on Artificial General Intelligence and the Metaverse(Martin Schmalzried, 2024, ArXiv Preprint)
- From Statistical Relational to Neurosymbolic Artificial Intelligence: a Survey(Giuseppe Marra, Sebastijan Dumančić, Robin Manhaeve, Luc De Raedt, 2021, ArXiv Preprint)
- Watershed of Artificial Intelligence: Human Intelligence, Machine Intelligence, and Biological Intelligence(Li Weigang, Liriam Enamoto, Denise Leyi Li, Geraldo Pereira Rocha Filho, 2021, ArXiv Preprint)
- Modular Belief Updates and Confusion about Measures of Certainty in Artificial Intelligence Research(Eric J. Horvitz, David Heckerman, 2014, ArXiv Preprint)
- Modeling Belief in Dynamic Systems, Part II: Revision and Update(N Friedman, J. Y. Halpern, 1999, ArXiv Preprint)
- AAAI-2019 Workshop on Games and Simulations for Artificial Intelligence(Marwan Mattar, Roozbeh Mottaghi, Julian Togelius, Danny Lange, 2019, ArXiv Preprint)
- Reproducibility: The New Frontier in AI Governance(Israel Mason-Williams, Gabryel Mason-Williams, 2025, ArXiv Preprint)
Taken together, the merged clusters construct a comprehensive AI-governance framework spanning macro-level narratives to micro-level techniques. The research reveals that AI governance is shifting from abstract ethical principles toward a complex ecosystem driven jointly by geopolitical sovereignty, cross-regional legal contestation, sociotechnical imaginaries, and computable audit indicators. Governance narratives concern not only technical alignment; they are deeply embedded in the restructuring of global power, struggles over discursive authority, and the redefinition of human agency.
A total of 119 related papers.
This chapter examines the global governance of artificial intelligence (AI) through the lens of the Global AI Divide, focusing on disparities in AI development, innovation, and regulation. It highlights systemic inequities in education, digital infrastructure, and access to decision-making processes, which perpetuate a cycle of dependency and exclusion for Global Majority countries. The analysis also explores the dominance of Western nations and corporations in shaping AI governance frameworks, which often sideline the unique priorities and contexts of the Global Majority. Additionally, this chapter identifies emerging countertrends, such as national and regional AI strategies, as potential avenues for fostering equity and inclusivity in global AI governance. The chapter concludes with actionable recommendations to democratize AI governance for Majority World countries, emphasizing the importance of systemic reforms, resource redistribution, and meaningful participation. It calls for collaborative action to ensure AI governance becomes a catalyst for shared prosperity, addressing global disparities rather than deepening them.
The analogy between Artificial Intelligence (AI) and nuclear weapons is prominent in academic and policy discourse on AI governance. This chapter reviews 43 scholarly works which explicitly draw on the nuclear domain to derive lessons for AI governance. We identify four problem areas where researchers apply nuclear precedents: (1) early development and governance of transformative technologies; (2) international security risks and strategy; (3) international institutions and agreements; and (4) domestic safety regulation. While nuclear-inspired AI proposals are often criticised due to differences across domains, this review clarifies how historical analogies can inform policy development even when technological domains differ substantially. Valuable functions include providing conceptual frameworks for analyzing strategic dynamics, offering cautionary lessons about unsuccessful governance approaches, and expanding policy imagination by legitimizing radical proposals. Given that policymakers already invoke the nuclear analogy, continued critical engagement with these historical precedents remains essential for shaping effective global AI governance.
Introduction. AI Ethics is framed distinctly across actors and stakeholder groups. We report results from a case study of OpenAI analysing ethical AI discourse. Method. The research asked: how has OpenAI's public discourse leveraged 'ethics', 'safety', 'alignment', and adjacent concepts over time, and what does that discourse signal about framing in practice? A structured corpus, differentiating between communication for a general audience and communication with an academic audience, was assembled from public documentation. Analysis. Qualitative content analysis of ethical themes combined inductively derived and deductively applied codes. Quantitative analysis leveraged computational content-analysis methods via NLP to model topics and quantify changes in rhetoric over time. Visualizations report aggregate results. For reproducibility, we have released our code at https://github.com/famous-blue-raincoat/AI_Ethics_Discourse. Results. Safety and risk discourse dominates OpenAI's public communication and documentation, without drawing on academic or advocacy ethics frameworks or vocabularies. Conclusions. Implications for governance are presented, along with a discussion of ethics-washing practices in industry.
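The quantitative strand described above (modeling topics and quantifying changes in rhetoric over time) can be illustrated with a minimal sketch. This is not the study's released pipeline: the corpus, the frame vocabularies, and the `frame_shares` helper below are hypothetical stand-ins, built with Python's standard library only.

```python
# Illustrative sketch (not the released code): quantifying how often
# "safety" vs. "ethics" vocabulary appears in a hypothetical corpus of
# dated public statements, to track rhetoric over time.
from collections import Counter

# Hypothetical stand-in documents; the study uses OpenAI's public corpus.
corpus = {
    2019: "our charter stresses ethics fairness and broad benefit",
    2022: "alignment research and safety evaluations guide deployment",
    2024: "frontier safety risk mitigations and preparedness for misuse",
}

# Assumed frame vocabularies, chosen purely for illustration.
SAFETY = {"safety", "risk", "alignment", "misuse", "preparedness"}
ETHICS = {"ethics", "fairness", "rights", "justice", "benefit"}

def frame_shares(text: str) -> tuple[float, float]:
    """Return the share of tokens matching each frame's vocabulary."""
    tokens = Counter(text.lower().split())
    total = sum(tokens.values())
    safety = sum(n for tok, n in tokens.items() if tok in SAFETY)
    ethics = sum(n for tok, n in tokens.items() if tok in ETHICS)
    return safety / total, ethics / total

for year, text in sorted(corpus.items()):
    s, e = frame_shares(text)
    print(f"{year}: safety={s:.2f} ethics={e:.2f}")
```

A real analysis would substitute tokenization, topic modeling, and a curated codebook for these toy word lists, but the shape of the measurement (frame share per dated document) is the same.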
This paper provides an overview and critique of the risk-based model of artificial intelligence (AI) governance that has become a popular approach to AI regulation across multiple jurisdictions. The 'AI Policy Landscape in Europe, North America and Australia' section summarises existing AI policy efforts across these jurisdictions, with a focus on the EU AI Act and the safe and responsible AI consultation of the Australian Department of Industry, Science and Resources (DISR). The 'Analysis' section proposes several criticisms of the risk-based approach to AI governance, arguing that the construction and calculation of risk it relies on reproduce existing inequalities. Drawing on the work of Julia Black, it argues that risk and harm should be clearly distinguished, that the notion of risk is problematic because its inherent normativity reproduces dominant and harmful narratives about whose interests matter, and that risk categorizations should be subject to deep scrutiny. The paper concludes by suggesting that existing risk-governance scholarship can provide valuable insights for improving risk-based AI governance, and that the use of multiple regulatory instruments and responsive risk regulation should be considered in the model's continuing development.
This paper examines how competing sociotechnical imaginaries of artificial intelligence (AI) risk shape governance decisions and regulatory constraints. Drawing on concepts from science and technology studies, we analyse three dominant narrative groups: existential risk proponents, who emphasise catastrophic AGI scenarios; accelerationists, who portray AI as a transformative force to be unleashed; and critical AI scholars, who foreground present-day harms rooted in systemic inequality. Through an analysis of representative manifesto-style texts, we explore how these imaginaries differ across four dimensions: normative visions of the future, diagnoses of the present social order, views on science and technology, and perceived human agency in managing AI risks. Our findings reveal how these narratives embed distinct assumptions about risk and have the potential to progress into policy-making processes by narrowing the space for alternative governance approaches. We argue against speculative dogmatism and for moving beyond deterministic imaginaries toward regulatory strategies that are grounded in pragmatism.
What do Cyberpunk and AI Ethics have to do with each other? Cyberpunk is a sub-genre of science fiction that explores the post-human relationship between human experience and technology. One similarity between AI Ethics and Cyberpunk literature is that both seek to explore the future social and ethical problems that our technological advances may bring upon society. In recent years, a growing number of ethical issues involving AI have been raised and debated, and several ethical principles and guidelines have been proposed as governance policies for the tech industry. But is that the role of AI Ethics: to serve as a soft and ambiguous version of the law? In this article, we advocate a more Cyberpunk way of doing AI Ethics, with a more democratic mode of governance. We seek to expose some of the deficits in the underlying power structures of the AI industry, and suggest that AI governance be subject to public opinion, so that good AI can become good AI for all.
Artificial Intelligence principles define the social and ethical considerations that should guide the development of future AI. They come from research institutes, government organizations, and industry. Each set of AI principles reflects different considerations, covers different perspectives, and places different emphases; none is complete, and none subsumes the others. Here we introduce LAIP, an effort and platform for linking and analyzing different Artificial Intelligence Principles. We aim to make explicit the common topics and links among AI principles proposed by different organizations, and to investigate what is unique to each. Building on these efforts, we argue that for the long-term future of AI, rather than directly adopting any single set of AI principles, it is necessary to incorporate the various AI principles into a comprehensive framework and to focus on how they can interact with and complement one another.
The proposed European Artificial Intelligence Act (AIA) is the first attempt to elaborate a general legal framework for AI carried out by any major global economy. As such, the AIA is likely to become a point of reference in the larger discourse on how AI systems can (and should) be regulated. In this article, we describe and discuss the two primary enforcement mechanisms proposed in the AIA: the conformity assessments that providers of high-risk AI systems are expected to conduct, and the post-market monitoring plans that providers must establish to document the performance of high-risk AI systems throughout their lifetimes. We argue that the AIA can be interpreted as a proposal to establish a Europe-wide ecosystem for conducting AI auditing, albeit not in those terms. Our analysis offers two main contributions. First, by describing the enforcement mechanisms included in the AIA in terminology borrowed from existing literature on AI auditing, we help providers of AI systems understand how they can prove adherence to the requirements set out in the AIA in practice. Second, by examining the AIA from an auditing perspective, we seek to provide transferable lessons from previous research about how to further refine the regulatory approach outlined in the AIA. We conclude by highlighting seven aspects of the AIA where amendments (or simply clarifications) would be helpful. These include, above all, the need to translate vague concepts into verifiable criteria and to strengthen the institutional safeguards concerning conformity assessments based on internal checks.
In April 2021, the European Commission published a proposed regulation on AI. It intends to create a uniform legal framework for AI within the European Union (EU). In this chapter, we analyze and assess the proposal. We show that the proposed regulation is actually not needed due to existing regulations. We also argue that the proposal clearly poses the risk of overregulation. As a consequence, this would make the use or development of AI applications in safety-critical application areas, such as in healthcare, almost impossible in the EU. This would also likely further strengthen Chinese and US corporations in their technology leadership. Our assessment is based on the oral evidence we gave in May 2021 to the joint session of the European Union affairs committees of the German federal parliament and the French National Assembly.
This paper examines how international AI governance frameworks address gender issues and gender-based harms. The analysis covers binding regulations, such as the EU AI Act; soft law instruments, like the UNESCO Recommendations on AI Ethics; and global initiatives, such as the Global Partnership on AI (GPAI). These instruments reveal emerging trends, including the integration of gender concerns into broader human rights frameworks, a shift toward explicit gender-related provisions, and a growing emphasis on inclusivity and diversity. Yet critical gaps persist, including inconsistent treatment of gender across governance documents, limited engagement with intersectionality, and a lack of robust enforcement mechanisms. This paper argues that effective AI governance must be intersectional, enforceable, and inclusive. This is key to moving beyond tokenism toward meaningful equity and preventing reinforcement of existing inequalities. The study contributes to ethical AI debates by highlighting the importance of gender-sensitive governance in building a just technological future.
This essay examines how Artificial Intelligence (AI) systems are becoming more integral to international affairs by affecting how global governors exert power and pursue digital sovereignty. We first introduce a taxonomy of multifaceted AI payoffs for governments and corporations related to instrumental, structural, and discursive power in the domains of violence, markets, and rights. We next leverage different institutional and practice perspectives on sovereignty to assess how digital sovereignty is variously implicated in AI-empowered global governance. Under the institutional approach, states seek sovereign control over AI infrastructures; under the practice approach, they establish sovereign competence through AI infrastructures. Overall, we present the digital sovereignty stakes of AI as related to entanglements of public and private power. Rather than foreseeing technology companies as replacing states, we argue that AI systems will embed in global governance to create dueling dynamics of public/private cooperation and contestation. We conclude by sketching future directions for IR research on AI and global governance.
Artificial intelligence governance exhibits a striking paradox: while major jurisdictions converge rhetorically around concepts such as safety, risk, and accountability, their regulatory frameworks remain fundamentally divergent and mutually unintelligible. This paper argues that this fragmentation cannot be explained solely by geopolitical rivalry, institutional complexity, or instrument selection. Instead, it stems from how AI is constituted as an object of governance through distinct institutional logics. Integrating securitisation theory with the concept of the dispositif, we demonstrate that jurisdictions govern ontologically different objects under the same vocabulary. Using semantic network analysis of official policy texts from the European Union, the United States, and China (2023-2025), we trace how concepts like safety are embedded within divergent semantic architectures. Our findings reveal that the EU juridifies AI as a certifiable product through legal-bureaucratic logic; the US operationalises AI as an optimisable system through market-liberal logic; and China governs AI as socio-technical infrastructure through holistic state logic. We introduce the concept of structural incommensurability to describe this condition of ontological divergence masked by terminological convergence. This reframing challenges ethics-by-principles approaches to global AI governance, suggesting that coordination failures arise not from disagreement over values but from the absence of a shared reference object.
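The semantic-network method described above can be sketched minimally: build co-occurrence edges per jurisdiction and inspect which concepts neighbour a shared term such as "safety". The `texts` fragments, the `neighbours` helper, and the toy vocabularies below are hypothetical illustrations, not the paper's data or pipeline.

```python
# Illustrative sketch (not the paper's pipeline): a tiny co-occurrence
# network per jurisdiction from hypothetical policy sentences, comparing
# which concepts neighbour the shared term "safety".
from collections import defaultdict
from itertools import combinations

# Hypothetical stand-ins for official policy-text fragments.
texts = {
    "EU":    ["safety conformity certification product",
              "product liability certification safety"],
    "US":    ["safety benchmark optimisation system",
              "system performance benchmark safety"],
    "China": ["safety stability infrastructure coordination",
              "infrastructure planning safety stability"],
}

def neighbours(sentences: list[str], focus: str) -> set[str]:
    """Collect terms that co-occur with `focus` in any sentence,
    using sentence-level co-occurrence counts as edge weights."""
    edges = defaultdict(int)
    for sent in sentences:
        for a, b in combinations(sorted(set(sent.split())), 2):
            edges[(a, b)] += 1
    return {b if a == focus else a
            for (a, b), _ in edges.items() if focus in (a, b)}

# The same word "safety" sits in disjoint semantic neighbourhoods,
# mirroring the paper's "structural incommensurability" claim.
for juris, sents in texts.items():
    print(juris, sorted(neighbours(sents, "safety")))
```

On this toy data the three neighbourhoods share no term besides "safety" itself, which is the pattern the paper reports at corpus scale with full semantic-network analysis.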
As a powerful and rapidly advancing dual-use technology, AI offers both immense benefits and worrisome risks. In response, governing bodies around the world are developing a range of regulatory AI laws and policies. This paper compares three distinct approaches taken by the EU, China and the US. Within the US, we explore AI regulation at both the federal and state level, with a focus on California's pending Senate Bill 1047. Each regulatory system reflects distinct cultural, political and economic perspectives. Each also highlights differing regional perspectives on regulatory risk-benefit tradeoffs, with divergent judgments on the balance between safety versus innovation and cooperation versus competition. Finally, differences between regulatory frameworks reflect contrasting stances with regard to trust in centralized authority versus trust in a more decentralized free market of self-interested stakeholders. Taken together, these varied approaches to AI innovation and regulation influence each other, the broader international community, and the future of AI regulation.
Cooperation between the United States and China, the world's leading artificial intelligence (AI) powers, is crucial for effective global AI governance and responsible AI development. Although geopolitical tensions have emphasized areas of conflict, in this work, we identify potential common ground for productive dialogue by conducting a systematic analysis of more than 40 primary AI policy and corporate governance documents from both nations. Specifically, using an adapted version of the AI Governance and Regulatory Archive (AGORA) - a comprehensive repository of global AI governance documents - we analyze these materials in their original languages to identify areas of convergence in (1) sociotechnical risk perception and (2) governance approaches. We find strong and moderate overlap in several areas, such as concerns about algorithmic transparency and system reliability, agreement on the importance of inclusive multi-stakeholder engagement, and AI's role in enhancing safety. These findings suggest that despite strategic competition, there exist concrete opportunities for bilateral U.S.-China cooperation in the development of responsible AI. Thus, we present recommendations for furthering diplomatic dialogues that can facilitate such cooperation. Our analysis contributes to understanding how different international governance frameworks might be harmonized to promote global responsible AI development.
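Once documents have been coded into risk categories, the degree of convergence described above can be quantified with a simple set-overlap measure such as the Jaccard index. The category sets below are invented for illustration and do not reproduce the paper's coding.

```python
def jaccard(a, b):
    """Overlap between two sets of coded governance concerns: |A∩B| / |A∪B|."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical risk categories extracted from each corpus:
us_risks = {"transparency", "reliability", "privacy", "bias", "safety"}
cn_risks = {"transparency", "reliability", "safety", "content_control"}

shared = sorted(us_risks & cn_risks)
print(shared)                               # → ['reliability', 'safety', 'transparency']
print(round(jaccard(us_risks, cn_risks), 2))  # → 0.5
```

A score of 0.5 on these toy sets would count as "strong" overlap; in practice one would weight categories by how prominently each document treats them.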
The received wisdom is that artificial intelligence (AI) is a competition between the US and China. In this chapter, the author will examine how the European Union (EU) fits into that mix and what it can offer as a third way to govern AI. The chapter presents this by exploring the past, present and future of AI governance in the EU. Section 1 serves to explore and evidence the EU's coherent and comprehensive approach to AI governance. In short, the EU ensures and encourages ethical, trustworthy and reliable technological development. This will cover a range of key documents and policy tools that lead to the most crucial effort of the EU to date: to regulate AI. Section 2 maps the EU's drive towards digital sovereignty through the lens of regulation and infrastructure. This covers topics such as the trustworthiness of AI systems, cloud, compute and foreign direct investment. In Section 3, the chapter concludes by offering several considerations to achieve good AI governance in the EU.
As artificial intelligence (AI) technologies increasingly enter important sectors like healthcare, transportation, and finance, the development of effective governance frameworks is crucial for dealing with ethical, security, and societal risks. This paper conducts a comparative analysis of AI risk management strategies across the European Union (EU), United States (U.S.), United Kingdom (UK), and China. A multi-method qualitative approach, including comparative policy analysis, thematic analysis, and case studies, investigates how these regions classify AI risks, implement compliance measures, structure oversight, prioritize transparency, and respond to emerging innovations. Examples from high-risk contexts like healthcare diagnostics, autonomous vehicles, fintech, and facial recognition demonstrate the advantages and limitations of different regulatory models. The findings show that the EU implements a structured, risk-based framework that prioritizes transparency and conformity assessments, while the U.S. uses decentralized, sector-specific regulations that promote innovation but may lead to fragmented enforcement. The flexible, sector-specific strategy of the UK facilitates agile responses but may lead to inconsistent coverage across domains. China's centralized directives allow rapid large-scale implementation while constraining public transparency and external oversight. These insights show the necessity for AI regulation that is globally informed yet context-sensitive, aiming to balance effective risk management with technological progress. The paper concludes with policy recommendations and suggestions for future research aimed at enhancing effective, adaptive, and inclusive AI governance globally.
This paper critically analyzes the discourse of the 'AI executive elite,' a group of highly influential individuals shaping the way AI is funded, developed, and deployed worldwide. The primary objective is to examine the presence and dynamics of the 'Techno-Supremacy Doctrine' (TSD), a term introduced in this study to describe a belief system characterized by an excessive trust in technology's alleged inherent superiority in solving complex societal problems. This study integrates quantitative heuristics with in-depth qualitative investigations. Its methodology is operationalized in a two-phase critical discourse analysis of 14 texts published by elite members between 2017 and 2025. The findings demonstrate that the elite is not a monolithic bloc but exhibits a broad spectrum of stances. The discourse is highly dynamic, showing a marked polarization and general increase in pro-TSD discourse following the launch of ChatGPT. The analysis identifies key discursive patterns, including a dominant pro-TSD narrative that combines utopian promises with claims of inevitable progress, and the common tactic of acknowledging risks only as a strategic preamble to proposing further technological solutions. This paper presents TSD as a comprehensive analytical framework and provides a 'diagnostic toolkit' for identifying its manifestations, from insidious to benign. It argues that fostering critical awareness of these discursive patterns is essential for AI practitioners, policymakers, and the public to actively navigate the future of AI.
It is widely accepted that technology is ubiquitous across the planet and has the potential to solve many of the problems existing in the Global South. Moreover, the rapid advancement of artificial intelligence (AI) brings with it the potential to address many of the challenges outlined in the Sustainable Development Goals (SDGs) in ways which were never before possible. However, there are many questions about how such advanced technologies should be managed and governed, and whether or not the emerging ethical frameworks and standards for AI are dominated by the Global North. This research surveys the growing body of documentation on AI ethics to examine whether or not there is equality of participation in the ongoing global discourse. Specifically, it seeks to discover if both countries in the Global South and women are underrepresented in this discourse. Findings indicate a dearth of references to both of these themes in the AI ethics documents, suggesting that the associated ethical implications and risks are being neglected. Without adequate input from both countries in the Global South and from women, such ethical frameworks and standards may be discriminatory with the potential to reinforce marginalisation.
This chapter examines the sociotechnical imaginaries of Brazilian tech workers, a group often overlooked in digital labor research despite their role in designing the digital systems that shape everyday life. Grounded in the idea of sociotechnical imaginaries as collectively constructed visions that guide technology development and governance, the chapter argues that looking from the Global South helps challenge data universalism and foregrounds locally situated values, constraints, and futures. Drawing on semi-structured interviews with 26 Brazilian professionals conducted between July and December 2023, it maps how workers make sense of responsibility, bias, and power in AI and platform development. The findings highlight recurring tensions between academic and industry discourse on algorithmic bias, the limits of corporate accountability regarding user harm and surveillance, and the contested meanings of digital sovereignty, including grassroots initiatives that seek alternative technological futures aligned with marginalized communities' needs.
This paper examines the challenge of embedding public values into national artificial intelligence (AI) governance frameworks, a task complicated by the sociotechnical nature of contemporary systems. As AI permeates domains such as healthcare, justice, and public administration, legitimacy depends not only on technical correctness but on alignment with societal norms, democratic principles, and human dignity. Traditional paradigms focused on model safety or market efficiency neglect broader institutional contexts. To address this, the study synthesises frameworks including Full-Stack Alignment, Thick Models of Value, Value Sensitive Design, and Public Constitutional AI, alongside comparative analysis of jurisdictions such as the EU, US, China, UK, Brazil, and South Africa (SA). It introduces a layered framework linking values, mechanisms, and strategies, and maps tensions such as fairness versus efficiency, transparency versus security, and privacy versus equity. Findings reveal a pluralism of regulatory philosophies, with SA's sovereignty-oriented approach offering a distinctive counterpoint. The study contributes a holistic, value-sensitive model of AI governance, reframing regulation as a proactive mechanism for embedding public values into sociotechnical systems.
In an era where economies and societies are deeply integrated into cyberspace, achieving a robust level of digital sovereignty has become an essential goal for nations aiming to preserve their security and strategic political autonomy, particularly during turbulent geopolitical times marked by complex global supply chains of critical technologies that tie systemic rivals together. Digital sovereignty is a multifaceted, interdisciplinary, and dynamic pursuit that fundamentally relies on a nation's ability to have continuous access to dependable technological capabilities (CTCs) for storing, transferring, and processing domestically produced data. This paper identifies how access continuity or technological dependability can be threatened by a range of malicious actions, from cyberattacks and supply-chain tampering to political or economic measures. By examining the approaches adopted by the United States, China, and the European Union, we highlight different strategies for securing access to CTCs, depending on each actor's political, economic and institutional nature.
Digital Sovereignty must be on the agenda of every modern nation. Digital technology is becoming part of every detail of our lives, from vital essentials like food and water management to transcendence in the Metaverse and Space. Protecting these digital assets will, therefore, be inevitable for a modern country to live, excel and lead. Digital Sovereignty is a strategic necessity to protect these digital assets from the monopoly of friendly rational states and the threats of unfriendly malicious states and behaviors. In this work, we revisit the definition and scope of digital sovereignty by extending it to cover the entire value chain of using, owning, and producing digital assets. We emphasize the importance of protecting operational resources, both raw materials and human expertise, in addition to the research and innovation necessary to achieve sustainable sovereignty. We also show that digital sovereignty by autonomy is often impossible, and by mutual cooperation is not always sustainable. To this end, we propose implementing digital sovereignty using the Nash Equilibrium, often studied in Game Theory, to govern relations with rational states. We then propose a digital sovereignty agenda for different countries' digital profiles, based on their status quo, priorities, and capabilities. We survey state-of-the-art digital technology that is useful for making current digital assets sovereign. Additionally, we propose a roadmap that aims to develop a sovereign digital nation, as close to autonomy as possible. Finally, we draw attention to the need for more research to better understand and implement digital sovereignty from different perspectives: technological, economic, and geopolitical.
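The Nash-equilibrium framing above can be illustrated with a minimal two-state game. The payoff numbers below are hypothetical: each state chooses to Cooperate (share technology) or go Autonomous, and a pure-strategy equilibrium is a cell where neither side gains by deviating unilaterally.

```python
# Strategies: 0 = cooperate, 1 = autonomous.
# Hypothetical payoffs: mutual cooperation is best, but mutual autonomy is
# also stable, since unilateral cooperation with an autonomous rival is worst.
payoff_row = [[4, 1],
              [3, 2]]
payoff_col = [[4, 3],
              [1, 2]]

def pure_nash_equilibria(pr, pc):
    """Cells where neither player can improve by switching strategy alone."""
    eqs = []
    for r in range(2):
        for c in range(2):
            row_best = all(pr[r][c] >= pr[r2][c] for r2 in range(2))
            col_best = all(pc[r][c] >= pc[r][c2] for c2 in range(2))
            if row_best and col_best:
                eqs.append((r, c))
    return eqs

print(pure_nash_equilibria(payoff_row, payoff_col))  # → [(0, 0), (1, 1)]
```

The two equilibria mirror the paper's claim: both cooperation and (costlier) autonomy can be self-enforcing, so which one a nation lands in depends on expectations about its counterpart.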
Artificial intelligence has become a key arena of global technological competition and a central concern for Europe's quest for technological sovereignty. This paper analyzes global AI patenting from 2010 to 2023 to assess Europe's position in an increasingly bipolar innovation landscape dominated by the United States and China. Using linked patent, firm, ownership, and citation data, we examine the geography, specialization, and international diffusion of AI innovation. We find a highly concentrated patent landscape: China leads in patent volumes, while the United States dominates in citation impact and technological influence. Europe accounts for a limited share of AI patents but exhibits signals of relatively high patent quality. Technological proximity reveals global convergence toward U.S. innovation trajectories, with Europe remaining fragmented rather than forming an autonomous pole. Gravity-model estimates show that cross-border AI knowledge flows are driven primarily by technological capability and specialization, while geographic and institutional factors play a secondary role. EU membership does not significantly enhance intra-European knowledge diffusion, suggesting that technological capacity, rather than political integration, underpins participation in global AI innovation networks.
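A gravity model of knowledge flows, in its simplest multiplicative form, predicts cross-border citation flows from origin and destination capability with a distance penalty. The functional form below is the textbook version; the coefficients and country figures are invented for illustration, whereas the paper estimates such parameters econometrically from linked patent and citation data.

```python
def gravity_flow(cap_i, cap_j, dist, b_cap=1.0, b_dist=0.5, k=1.0):
    """Predicted knowledge flow: k * (cap_i * cap_j)**b_cap / dist**b_dist."""
    return k * (cap_i * cap_j) ** b_cap / dist ** b_dist

# Hypothetical patent-stock "capabilities" and pairwise distances (toy units):
pairs = {
    ("US", "CN"): (100, 120, 10.0),
    ("US", "EU"): (100, 40, 6.0),
    ("EU", "CN"): (40, 120, 8.0),
}
for (i, j), (ci, cj, d) in pairs.items():
    print(f"{i}->{j}: {gravity_flow(ci, cj, d):.1f}")
```

The paper's finding that capability dominates while geography is secondary corresponds, in this parameterisation, to a large `b_cap` relative to `b_dist`.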
"Sovereignty" is increasingly a part of national AI policies and strategies. At the same time that "sovereignty" is invoked as a priority for global AI policy, it is also being commodified along the AI stack. Companies now sell "sovereign" AI factories, clouds, and language models to governments, enterprises, and communities -- turning a contested value into a commercial commodity. This shift risks allowing private technology providers to define sovereignty on their own terms. By analyzing the history of sovereignty and parallels in global oil production, this paper aims to open avenues to interrogate the implications of this value's commercialization. The contributions of this paper lie in a disentangling of the facets of sovereignty being appealed to through the AI stack and a case for how analogizing oil and AI can be generative in thinking through what is achieved and what can be achieved through the commodification of AI sovereignty.
This short theoretical and argumentative essay contributes to the ongoing deliberation about the so-called digital sovereignty, as pursued particularly in the European Union (EU). Drawing from classical political science literature, the essay approaches the debate through paradoxes that arise from applying classical notions of sovereignty to the digital domain. With these paradoxes and a focus on the Peace of Westphalia in 1648, the essay develops a viewpoint distinct from the conventional territorial notion of sovereignty. Accordingly, the lesson from Westphalia has more to do with the capacity of a state to govern. It is also this capacity that is argued to enable the sovereignty of individuals within the digital realm. With this viewpoint, the essay further advances another, broader, and more pressing debate on politics and democracy in the digital era.
AI agents deployed on decentralized infrastructures are beginning to exhibit properties that extend beyond autonomy toward what we describe as agentic sovereignty: the capacity of an operational agent to persist, act, and control resources with non-overrideability inherited from the infrastructures in which they are embedded. We propose infrastructural sovereignty as an analytic lens for understanding how cryptographic self-custody, decentralized execution environments, and protocol-mediated continuity scaffold agentic sovereignty. While recent work on digital and network sovereignty has moved beyond state-centric and juridical accounts, these frameworks largely examine how sovereignty is exercised through technical systems by human collectives and remain less equipped to account for forms of sovereignty that emerge as operational properties of decentralized infrastructures themselves, particularly when instantiated in non-human sovereign agents. We argue that sovereignty in such systems exists on a spectrum determined by infrastructural hardness: the degree to which underlying technical systems resist intervention or collapse. While infrastructural sovereignty may increase resilience, it also produces a profound accountability gap: responsibility diffuses across designers, infrastructure providers, protocol governance, and economic participants, undermining traditional oversight mechanisms such as human-in-the-loop control or platform moderation. Drawing on examples like Trusted Execution Environments (TEEs), decentralized physical infrastructure networks (DePIN), and agent key continuity protocols, we analyze the governance challenges posed by non-terminable AI agents and outline infrastructure-aware accountability strategies for emerging decentralized AI systems.
Due to the emergence of data-driven technologies in Aotearoa New Zealand that use Māori data, there is a need for values-based frameworks to guide thinking around balancing the tension between the opportunities these create and the inherent risks that these technologies can impose. Algorithms can be framed as a particular use of data, therefore data frameworks that currently exist can be extended to include algorithms. Māori data sovereignty principles are well-known and are used by researchers and government agencies to guide the culturally appropriate use of Māori data. Extending these principles to fit the context of algorithms, and re-working the underlying sub-principles to address issues related to responsible algorithms from a Māori perspective, leads to the Māori algorithmic sovereignty principles. We define this idea, present the updated principles and subprinciples, and highlight how these can be used to decolonise algorithms currently in use, and argue that these ideas could potentially be used to develop Indigenised algorithms.
Data mining reproduces colonialism, and Indigenous voices are being left out of the development of technology that relies on data, such as artificial intelligence. This research stresses the need for the inclusion of Indigenous Data Sovereignty and centers on the importance of Indigenous rights over their own data. Inclusion is necessary in order to integrate Indigenous knowledge into the design, development, and implementation of data-reliant technology. To support this hypothesis and address the problem, the CARE Principles for Indigenous Data Governance (Collective Benefit, Authority to Control, Responsibility, and Ethics) are applied. We cover how the colonial practices of data mining do not align with Indigenous convictions. The included case studies highlight connections to Indigenous rights in relation to the protection of data and environmental ecosystems, thus establishing how data governance can serve both the people and the Earth. By applying the CARE Principles to the issues that arise from data mining and neocolonialism, our goal is to provide a framework that can be used in technological development. The theory is that this could reflect outwards to promote data sovereignty generally and create new relationships between people and data that are ethical as opposed to driven by speed and profit.
Industry 5.0 marks a new phase in industrial evolution, emphasizing human-centricity, sustainability, and resilience through the integration of advanced technologies. Within this evolving landscape, Generative AI (GenAI) and autonomous systems are not only transforming industrial processes but also emerging as pivotal geopolitical instruments. We examine strategic implications of GenAI in Industry 5.0, arguing that these technologies have become national assets central to sovereignty, access, and global influence. As countries compete for AI supremacy, growing disparities in talent, computational infrastructure, and data access are reshaping global power hierarchies and accelerating the fragmentation of the digital economy. The human-centric ethos of Industry 5.0, anchored in collaboration between humans and intelligent systems, increasingly conflicts with the autonomy and opacity of GenAI, raising urgent governance challenges related to meaningful human control, dual-use risks, and accountability. We analyze how these dynamics influence defense strategies, industrial competitiveness, and supply chain resilience, including the geopolitical weaponization of export controls and the rise of data sovereignty. Our contribution synthesizes technological, economic, and ethical perspectives to propose a comprehensive framework for navigating the intersection of GenAI and geopolitics. We call for governance models that balance national autonomy with international coordination while safeguarding human-centric values in an increasingly AI-driven world.
AI policymakers are responsible for delivering effective governance mechanisms that can provide safe, aligned and trustworthy AI development. However, the information environment offered to policymakers is characterised by an unnecessarily low signal-to-noise ratio, favouring regulatory capture and creating deep uncertainty and divides over which risks should be prioritised from a governance perspective. We posit that the current publication speeds in AI, combined with the lack of strong scientific standards and weak reproducibility protocols, effectively erode the power of policymakers to enact meaningful policy and governance protocols. Our paper outlines how AI research could adopt stricter reproducibility guidelines to assist governance endeavours and improve consensus on the AI risk landscape. We evaluate the forthcoming reproducibility crisis within AI research through the lens of crises in other scientific domains, providing a commentary on how reproducibility protocols such as preregistration, increased statistical power and negative-result publication can enable effective AI governance. While we maintain that AI governance must be reactive due to AI's significant societal implications, we argue that policymakers and governments must consider reproducibility protocols as a core tool in the governance arsenal and demand higher standards for AI research. Code to replicate data and figures: https://github.com/IFMW01/reproducibility-the-new-frontier-in-ai-governance
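The call for increased statistical power can be made concrete with a quick simulation: estimate the probability that a two-sample comparison of benchmark scores detects a real effect at a given sample size. The effect size, sample sizes, and z-test below are arbitrary choices for illustration, not the paper's methodology.

```python
import random
from statistics import NormalDist, mean, stdev

def power_estimate(n, effect=0.5, trials=2000, alpha=0.05, seed=0):
    """Fraction of simulated experiments in which a two-sided z-test on two
    samples of size n detects a true mean difference of `effect` (in SD units)."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    hits = 0
    for _ in range(trials):
        a = [rng.gauss(0.0, 1.0) for _ in range(n)]
        b = [rng.gauss(effect, 1.0) for _ in range(n)]
        se = (stdev(a) ** 2 / n + stdev(b) ** 2 / n) ** 0.5
        hits += abs(mean(b) - mean(a)) / se > z_crit
    return hits / trials

# An underpowered run-to-run comparison vs. an adequately sized one:
print(power_estimate(10))   # typically well under 0.5: most real effects are missed
print(power_estimate(100))  # typically around 0.9
```

The asymmetry is the governance-relevant point: underpowered evaluations flood the literature with noise, which is exactly the low signal-to-noise environment the abstract describes.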
We present a novel approach to classify causal micro-narratives from text. These narratives are sentence-level explanations of the cause(s) and/or effect(s) of a target subject. The approach requires only a subject-specific ontology of causes and effects, and we demonstrate it with an application to inflation narratives. Using a human-annotated dataset spanning historical and contemporary US news articles for training, we evaluate several large language models (LLMs) on this multi-label classification task. The best-performing model, a fine-tuned Llama 3.1 8B, achieves F1 scores of 0.87 on narrative detection and 0.71 on narrative classification. Comprehensive error analysis reveals challenges arising from linguistic ambiguity and highlights how model errors often mirror human annotator disagreements. This research establishes a framework for extracting causal micro-narratives from real-world data, with wide-ranging applications to social science research.
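The reported F1 scores follow the standard precision/recall definition; a minimal multi-label version is sketched below. Micro-averaging is one plausible choice (the abstract does not specify the averaging scheme), and the cause/effect label sets are invented examples in the spirit of the paper's inflation ontology.

```python
def micro_f1(gold, pred):
    """Micro-averaged F1 over multi-label predictions.

    gold, pred: lists of label sets, one per sentence."""
    tp = sum(len(g & p) for g, p in zip(gold, pred))
    fp = sum(len(p - g) for g, p in zip(gold, pred))
    fn = sum(len(g - p) for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# Hypothetical inflation-cause labels for three sentences:
gold = [{"demand", "wages"}, {"supply"}, {"policy"}]
pred = [{"demand"}, {"supply", "policy"}, {"policy"}]
print(round(micro_f1(gold, pred), 3))  # → 0.75
```

On this toy data the model misses one true label and adds one spurious one, giving precision = recall = 0.75 and hence F1 = 0.75.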
To realize the potential benefits and mitigate potential risks of AI, it is necessary to develop a framework of governance that conforms to ethics and fundamental human values. Although several organizations have issued guidelines and ethical frameworks for trustworthy AI, without a mediating governance structure, these ethical principles will not translate into practice. In this paper, we propose a multilevel governance approach that involves three groups of interdependent stakeholders: governments, corporations, and citizens. We examine their interrelationships through dimensions of trust, such as competence, integrity, and benevolence. The levels of governance combined with the dimensions of trust in AI provide practical insights that can be used to further enhance user experiences and inform public policy related to AI.
While the role of states, corporations, and international organizations in AI governance has been extensively theorized, the role of workers has received comparatively little attention. This chapter looks at the role that workers play in identifying and mitigating harms from AI technologies. Harms are the causally assessed impacts of technologies. They arise despite technical reliability and are not a result of technical negligence but rather of normative uncertainty around questions of safety and fairness in complex social systems. There is high consensus in the AI ethics community on the benefits of reducing harms but less consensus on mechanisms for determining or addressing harms. This lack of consensus has resulted in a number of collective actions by workers protesting how harms are identified and addressed in their workplace. We theorize the role of workers within AI governance and construct a model of harm reporting processes in AI workplaces. The harm reporting process involves three steps: identification, the governance decision, and the response. Workers draw upon three types of claims to argue for jurisdiction over questions of AI governance: subjection, control over the product of labor, and proximate knowledge of systems. Examining the past decade of AI related worker activism allows us to understand how different types of workers are positioned within a workplace that produces AI systems, how their position informs their claims, and the place of collective action in staking their claims. This chapter argues that workers occupy a unique role in identifying and mitigating harms caused by AI systems.
Advances in low-communication training algorithms are enabling a shift from centralised model training to compute setups that are either distributed across multiple clusters or decentralised via community-driven contributions. This paper distinguishes these two scenarios - distributed and decentralised training - which are little understood and often conflated in policy discourse. We discuss how they could impact technical AI governance through an increased risk of compute structuring, capability proliferation, and the erosion of detectability and shutdownability. While these trends foreshadow a possible new paradigm that could challenge key assumptions of compute governance, we emphasise that certain policy levers, like export controls, remain relevant. We also acknowledge potential benefits of decentralised AI, including privacy-preserving training runs that could unlock access to more data, and mitigating harmful power concentration. Our goal is to support more precise policymaking around compute, capability proliferation, and decentralised AI development.
AI is transforming the existing technology landscape at a rapid pace, enabling data-informed and autonomous decision making. Unlike with any other technology, because of AI's decision-making ability, ethics and governance have become a key concern. There are many emerging AI risks for humanity, such as autonomous weapons, automation-spurred job loss, socio-economic inequality, bias caused by data and algorithms, privacy violations and deepfakes. Social diversity, equity and inclusion are considered key success factors of AI to mitigate risks, create value and drive social justice. Sustainability has become a broad and complex topic entangled with AI. Many organizations (government, corporate, not-for-profit, charities and NGOs) have diversified strategies driving AI for business optimization and social-and-environmental justice. Partnerships and collaborations have become more important than ever for the equity and inclusion of diversified and distributed people, data and capabilities. Therefore, on our journey towards an AI-enabled sustainable future, we need to address AI ethics and governance as a priority. These AI ethics and governance efforts should be underpinned by human ethics.
The chapter discusses the foundational impact of modern generative AI models on information access (IA) systems. In contrast to traditional AI, the large-scale training and superior data modeling of generative AI models enable them to produce high-quality, human-like responses, which brings brand new opportunities for the development of IA paradigms. In this chapter, we identify and introduce two of them in detail: information generation and information synthesis. Information generation allows AI to create tailored content addressing user needs directly, enhancing user experience with immediate, relevant outputs. Information synthesis leverages the ability of generative AI to integrate and reorganize existing information, providing grounded responses and mitigating issues like model hallucination, which is particularly valuable in scenarios requiring precision and external knowledge. This chapter delves into the foundational aspects of generative models, including architecture, scaling, and training, and discusses their applications in multi-modal scenarios. Additionally, it examines the retrieval-augmented generation paradigm and other methods for corpus modeling and understanding, demonstrating how generative AI can enhance information access systems. It also summarizes potential challenges and fruitful directions for future studies.
The rapid expansion of generative artificial intelligence (AI) is transforming work, creativity, and economic security in ways that extend beyond automation and productivity. This paper examines four interconnected dimensions of contemporary AI deployment: (1) transformations in employment and task composition; (2) unequal diffusion of AI across sectors and socio-demographic groups; (3) the role of universal basic income (UBI) as a stabilising response to AI-induced volatility; and (4) the effects of model alignment and content governance on human creativity, autonomy, and decision-making. Using a hybrid approach that integrates labour market task exposure modelling, sectoral diffusion analysis, policy review, and qualitative discourse critique, the study develops an Inclusive AI Governance Framework. It introduces Level 1.5 autonomy as a human-centred design principle that preserves evaluative authority while enabling partial automation, and highlights evidence of creative regression and emergent sycophancy in newer model generations. The paper argues that UBI should be embedded within a broader socio-technical governance ecosystem encompassing skills development, proportional regulation, and creativity preservation.
This paper tackles practical challenges in governing child-centered artificial intelligence: policy texts state principles and requirements but often lack reproducible evidence anchors, explicit causal pathways, executable governance toolchains, and computable audit metrics. We propose Graph-GAP, a methodology that decomposes requirements from authoritative policy texts into a four-layer graph of evidence, mechanism, governance, and indicator, and that computes two metrics, GAP score and mitigation readiness, to identify governance gaps and prioritise actions. Using the UNICEF Innocenti Guidance on AI and Children 3.0 as primary material, we define reproducible extraction units, coding manuals, graph patterns, scoring scales, and consistency checks, and we demonstrate exemplar gap profiles and governance priority matrices for ten requirements. Results suggest that compared with privacy and data protection, requirements related to child well-being and development, explainability and accountability, and cross-agency implementation and resource allocation are more prone to indicator gaps and mechanism gaps. We recommend translating requirements into auditable closed-loop governance that integrates child rights impact assessments, continuous monitoring metrics, and grievance redress procedures. At the coding level, we introduce a multi-algorithm review, aggregation, and revision workflow that runs rule-based encoders, statistical or machine learning evaluators, and large model evaluators with diverse prompt configurations as parallel coders. Each extraction unit outputs evidence, mechanism, governance, and indicator labels plus readiness scores with evidence anchors. Reliability, stability, and uncertainty are assessed using Krippendorff's alpha, weighted kappa, intraclass correlation, and bootstrap confidence intervals.
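Of the reliability statistics listed, the bootstrap confidence interval is the simplest to sketch: resample the coder scores with replacement and take percentiles of the resampled means. The readiness scores below are hypothetical, not values from the paper.

```python
import random

def bootstrap_ci(values, n_boot=5000, level=0.95, seed=0):
    """Percentile bootstrap confidence interval for the mean of `values`."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choice(values) for _ in values) / len(values)
        for _ in range(n_boot)
    )
    lo = means[int((1 - level) / 2 * n_boot)]
    hi = means[int((1 + level) / 2 * n_boot) - 1]
    return lo, hi

# Hypothetical per-coder readiness scores for one requirement (0-4 scale):
scores = [2, 3, 3, 2, 4, 3, 2, 3, 3, 4]
lo, hi = bootstrap_ci(scores)
print(f"mean={sum(scores) / len(scores):.2f}, 95% CI=({lo:.2f}, {hi:.2f})")
```

A wide interval here would flag a requirement whose readiness score is unstable across coders, which is one way the paper's uncertainty assessment could feed the governance priority matrix.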
Agentic AI systems present both significant opportunities and novel risks due to their capacity for autonomous action, encompassing tasks such as code execution, internet interaction, and file modification. This poses considerable challenges for effective organizational governance, particularly in comprehensively identifying, assessing, and mitigating diverse and evolving risks. To tackle this, we introduce the Agentic Risk & Capability (ARC) Framework, a technical governance framework designed to help organizations identify, assess, and mitigate risks arising from agentic AI systems. The framework's core contributions are: (1) it develops a novel capability-centric perspective to analyze a wide range of agentic AI systems; (2) it distills three primary sources of risk intrinsic to agentic AI systems: components, design, and capabilities; (3) it establishes a clear nexus between each risk source, specific materialized risks, and corresponding technical controls; and (4) it provides a structured and practical approach to help organizations implement the framework. This framework provides a robust and adaptable methodology for organizations to navigate the complexities of agentic AI, enabling rapid and effective innovation while ensuring the safe, secure, and responsible deployment of agentic AI systems. Our framework is open-sourced at https://govtech-responsibleai.github.io/agentic-risk-capability-framework/.
Industry actors in the United States have gained extensive influence in conversations about the regulation of general-purpose artificial intelligence (AI) systems. Although industry participation is an important part of the policy process, it can also cause regulatory capture, whereby industry co-opts regulatory regimes to prioritize private over public welfare. Capture of AI policy by AI developers and deployers could hinder such regulatory goals as ensuring the safety, fairness, beneficence, transparency, or innovation of general-purpose AI systems. In this paper, we first introduce different models of regulatory capture from the social science literature. We then present results from interviews with 17 AI policy experts on what policy outcomes could compose regulatory capture in US AI policy, which AI industry actors are influencing the policy process, and whether and how AI industry actors attempt to achieve outcomes of regulatory capture. Experts were primarily concerned with capture leading to a lack of AI regulation, weak regulation, or regulation that over-emphasizes certain policy goals over others. Experts most commonly identified agenda-setting (15 of 17 interviews), advocacy (13), academic capture (10), information management (9), cultural capture through status (7), and media capture (7) as channels for industry influence. To mitigate these particular forms of industry influence, we recommend systemic changes in developing technical expertise in government and civil society, independent funding streams for the AI ecosystem, increased transparency and ethics requirements, greater civil society access to policy, and various procedural safeguards.
Computational narrative understanding studies the identification, description, and interaction of the elements of a narrative: characters, attributes, events, and relations. Narrative research has given considerable attention to defining and classifying character types. However, these character-type taxonomies do not generalize well because they are small, too simple, or specific to a domain. We require robust and reliable benchmarks to test whether narrative models truly understand the nuances of a character's development in a story. Our work addresses this by curating the CHATTER dataset, which labels whether a character portrays some attribute for 88,124 character-attribute pairs, encompassing 2,998 characters, 12,967 attributes, and 660 movies. We validate a subset of CHATTER, called CHATTEREVAL, using human annotations to serve as a benchmark to evaluate the character attribution task in movie scripts. CHATTEREVAL also assesses narrative understanding and the long-context modeling capacity of language models.
This paper presents a novel approach to scientific discovery using an artificial intelligence (AI) environment known as ChatGPT, developed by OpenAI. This is the first paper entirely generated with outputs from ChatGPT. We demonstrate how ChatGPT can be instructed through a gamification environment to define and benchmark hypothetical physical theories. Through this environment, ChatGPT successfully simulates the creation of a new improved model, called GPT$^4$, which combines the concepts of GPT in AI (generative pretrained transformer) and GPT in physics (generalized probabilistic theory). We show that GPT$^4$ can use its built-in mathematical and statistical capabilities to simulate and analyze physical laws and phenomena. As a demonstration of its language capabilities, GPT$^4$ also generates a limerick about itself. Overall, our results demonstrate the promising potential for human-AI collaboration in scientific discovery, as well as the importance of designing systems that effectively integrate AI's capabilities with human intelligence.
How can humans remain in control of artificial intelligence (AI)-based systems designed to perform tasks autonomously? Such systems are increasingly ubiquitous, creating benefits, but also undesirable situations where moral responsibility for their actions cannot be properly attributed to any particular person or group. The concept of meaningful human control has been proposed to address responsibility gaps and mitigate them by establishing conditions that enable a proper attribution of responsibility to humans; however, clear requirements for researchers, designers, and engineers do not yet exist, making the development of AI-based systems that remain under meaningful human control challenging. In this paper, we address the gap between philosophical theory and engineering practice by identifying, through an iterative process of abductive thinking, four actionable properties for AI-based systems under meaningful human control, which we discuss using two application scenarios: automated vehicles and AI-based hiring. First, a system in which humans and AI algorithms interact should have an explicitly defined domain of morally loaded situations within which the system ought to operate. Second, humans and AI agents within the system should have appropriate and mutually compatible representations. Third, responsibility attributed to a human should be commensurate with that human's ability and authority to control the system. Fourth, there should be explicit links between the actions of the AI agents and the actions of humans who are aware of their moral responsibility. We argue that these four properties will support practically minded professionals in taking concrete steps toward designing and engineering AI systems that facilitate meaningful human control.
The rapid adoption of generative artificial intelligence (AI) in scientific research, particularly large language models (LLMs), has outpaced the development of ethical guidelines, leading to a "Triple-Too" problem: too many high-level ethical initiatives, too abstract principles lacking contextual and practical relevance, and too much focus on restrictions and risks over benefits and utilities. Existing approaches--principlism (reliance on abstract ethical principles), formalism (rigid application of rules), and technological solutionism (overemphasis on technological fixes)--offer little practical guidance for addressing ethical challenges of AI in scientific research practices. To bridge the gap between abstract principles and day-to-day research practices, a user-centered, realism-inspired approach is proposed here. It outlines five specific goals for ethical AI use: 1) understanding model training and output, including bias mitigation strategies; 2) respecting privacy, confidentiality, and copyright; 3) avoiding plagiarism and policy violations; 4) applying AI beneficially compared to alternatives; and 5) using AI transparently and reproducibly. Each goal is accompanied by actionable strategies and realistic cases of misuse and corrective measures. I argue that ethical AI application requires evaluating its utility against existing alternatives rather than isolated performance metrics. Additionally, I propose documentation guidelines to enhance transparency and reproducibility in AI-assisted research. Moving forward, we need targeted professional development, training programs, and balanced enforcement mechanisms to promote responsible AI use while fostering innovation. By refining these ethical guidelines and adapting them to emerging AI capabilities, we can accelerate scientific progress without compromising research integrity.
In this paper, we adopt a survivor-centered approach to locate and dissect the role of sociotechnical AI governance in preventing AI-Generated Non-Consensual Intimate Images (AIG-NCII) of adults, colloquially known as "deep fake pornography." We identify a "malicious technical ecosystem" (MTE) comprising open-source face-swapping models and nearly 200 "nudifying" software programs that allow non-technical users to create AIG-NCII within minutes. Then, using the National Institute of Standards and Technology (NIST) AI 100-4 report as a reflection of current synthetic content governance methods, we show how the current landscape of practices fails to effectively regulate the MTE for adult AIG-NCII, and we identify the flawed assumptions that explain these gaps.
Measurement of social phenomena is everywhere, unavoidably, in sociotechnical systems. This is not (only) an academic point: Fairness-related harms emerge when there is a mismatch in the measurement process between the thing we purport to be measuring and the thing we actually measure. However, the measurement process -- where social, cultural, and political values are implicitly encoded in sociotechnical systems -- is almost always obscured. Furthermore, this obscured process is where important governance decisions are encoded: governance about which systems are fair, which individuals belong in which categories, and so on. We can then use the language of measurement, and the tools of construct validity and reliability, to uncover hidden governance decisions. In particular, we highlight two types of construct validity, content validity and consequential validity, that are useful to elicit and characterize the feedback loops between the measurement, social construction, and enforcement of social categories. We then explore the constructs of fairness, robustness, and responsibility in the context of governance in and for responsible AI. Together, these perspectives help us unpack how measurement acts as a hidden governance process in sociotechnical systems. Understanding measurement as governance supports a richer understanding of the governance processes already happening in AI -- responsible or otherwise -- revealing paths to more effective interventions.
Sharing personal narratives is a fundamental aspect of human social behavior, as it helps us share our life experiences. We can tell stories and rely on our background to understand their context, similarities, and differences. A substantial effort has been made towards developing storytelling machines or inferring characters' features; however, models that compare narratives remain rare. This task is remarkably challenging for machines since they, as we sometimes do, lack an understanding of what similarity means. To address this challenge, we first introduce a corpus of real-world spoken personal narratives comprising 10,296 narrative clauses from 594 video transcripts. Second, we ask non-narrative experts to annotate those clauses under Labov's sociolinguistic model of personal narratives (i.e., action, orientation, and evaluation clause types) and train a classifier that reaches an 84.7% F-score for the highest-agreed clauses. Finally, we match stories and explore whether people implicitly rely on Labov's framework to compare narratives. We show that actions, followed by the narrator's evaluation of those actions, are the aspects non-experts consider the most. Our approach is intended to help inform machine learning methods aimed at studying or representing personal narratives.
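Since classifier quality here is reported as an F-score over Labov clause types, a minimal sketch of that metric may be useful. The label set comes from the abstract; the function and the toy gold/predicted sequences are hypothetical illustrations:

```python
LABELS = ["action", "orientation", "evaluation"]  # Labov clause types

def per_class_f1(gold, pred):
    """Per-class F1 from true/false positives and false negatives."""
    scores = {}
    for label in LABELS:
        tp = sum(g == label and p == label for g, p in zip(gold, pred))
        fp = sum(g != label and p == label for g, p in zip(gold, pred))
        fn = sum(g == label and p != label for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores[label] = (2 * prec * rec / (prec + rec)) if prec + rec else 0.0
    return scores

# Toy annotations over five clauses (illustrative, not corpus data)
gold = ["action", "action", "orientation", "evaluation", "evaluation"]
pred = ["action", "orientation", "orientation", "evaluation", "action"]
scores = per_class_f1(gold, pred)
```

Averaging the per-class values (macro-F1) or pooling the counts (micro-F1) gives the single summary number typically reported.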
Modern education is not truly modern without AI. However, AI's complex nature makes understanding and fixing its problems challenging. Research worldwide shows that a parent's income greatly influences a child's education. This led us to use Explainable AI tools to explore how AI, especially complex models, makes important decisions. Our research uncovered many complexities linked to parental income and offered reasonable explanations for these decisions. However, we also found biases in AI that go against what we want from AI in education: clear transparency and equal access for everyone. These biases can affect families and children's schooling, highlighting the need for better AI solutions that offer fair opportunities to all. This chapter sheds light on the complex ways AI operates, especially concerning biases. These are foundational steps towards better educational policies, which include using AI in ways that are more reliable, accountable, and beneficial for everyone involved.
We attempt to define what is necessary to construct an Artificial Scientist, explore and evaluate several approaches to artificial general intelligence (AGI) that may facilitate this, conclude that a unified or hybrid approach is necessary, and explore two theories that satisfy this requirement to some degree.
Creative Problem Solving (CPS) is a sub-area within Artificial Intelligence (AI) that focuses on methods for solving off-nominal, or anomalous problems in autonomous systems. Despite many advancements in planning and learning, resolving novel problems or adapting existing knowledge to a new context, especially in cases where the environment may change in unpredictable ways post deployment, remains a limiting factor in the safe and useful integration of intelligent systems. The emergence of increasingly autonomous systems dictates the necessity for AI agents to deal with environmental uncertainty through creativity. To stimulate further research in CPS, we present a definition and a framework of CPS, which we adopt to categorize existing AI methods in this field. Our framework consists of four main components of a CPS problem, namely, 1) problem formulation, 2) knowledge representation, 3) method of knowledge manipulation, and 4) method of evaluation. We conclude our survey with open research questions, and suggested directions for the future.
Artificial intelligence (AI) models are increasingly finding applications in the field of medicine. Concerns have been raised about the explainability of the decisions that are made by these AI models. In this article, we give a systematic analysis of explainable artificial intelligence (XAI), with a primary focus on models that are currently being used in the field of healthcare. The literature search is conducted following the preferred reporting items for systematic reviews and meta-analyses (PRISMA) standards for relevant work published from 1 January 2012 to 02 February 2022. The review analyzes the prevailing trends in XAI and lays out the major directions in which research is headed. We investigate the why, how, and when of the uses of these XAI models and their implications. We present a comprehensive examination of XAI methodologies as well as an explanation of how a trustworthy AI can be derived from describing AI models for healthcare fields. The discussion of this work will contribute to the formalization of the XAI field.
In the modern healthcare system, rapidly expanding costs and complexity, a growing myriad of treatment options, and exploding information streams that often do not effectively reach the front lines hinder the ability to make optimal treatment decisions over time. The goal of this paper is to develop a general-purpose (non-disease-specific) computational/artificial intelligence (AI) framework to address these challenges. The framework serves two potential functions: 1) a simulation environment for exploring various healthcare policies, payment methodologies, etc., and 2) the basis for clinical artificial intelligence, an AI that can "think like a doctor." The approach combines Markov decision processes and dynamic decision networks to learn from clinical data and develop complex plans via simulation of alternative sequential decision paths, while capturing the sometimes conflicting, sometimes synergistic interactions of various components in the healthcare system. It can operate in partially observable environments (in the case of missing observations or data) by maintaining belief states about patient health status, and it functions as an online agent that plans and re-plans. The framework was evaluated using real patient data from an electronic health record. It easily outperforms the current treatment-as-usual (TAU) case-rate/fee-for-service models of healthcare (cost per unit change: $189 vs. $497) while obtaining a 30-35% increase in patient outcomes. Tweaking certain model parameters further enhances this advantage, obtaining roughly 50% more improvement for roughly half the cost. Given careful design and problem formulation, an AI simulation framework can approximate optimal decisions even in complex and uncertain environments. Future work is described that outlines potential lines of research and the integration of machine learning algorithms for personalized medicine.
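The sequential-decision core of such a framework can be sketched with a toy Markov decision process solved by value iteration. The two-state clinical model below, with its transition probabilities and rewards, is an illustrative invention, not the paper's learned model:

```python
# Hypothetical two-state treatment MDP: states "sick"/"well",
# actions "treat"/"wait". All numbers are illustrative.
states = ["sick", "well"]
actions = ["treat", "wait"]
# P[s][a] = list of (next_state, probability); R[s][a] = immediate reward
P = {
    "sick": {"treat": [("well", 0.7), ("sick", 0.3)],
             "wait":  [("well", 0.1), ("sick", 0.9)]},
    "well": {"treat": [("well", 0.9), ("sick", 0.1)],
             "wait":  [("well", 0.8), ("sick", 0.2)]},
}
R = {"sick": {"treat": -5.0, "wait": -1.0},
     "well": {"treat": -4.0, "wait": 10.0}}

GAMMA = 0.9  # discount factor

def q_value(s, a, V):
    """Expected discounted return of taking action a in state s."""
    return R[s][a] + GAMMA * sum(p * V[ns] for ns, p in P[s][a])

def value_iteration(tol=1e-6):
    """Iterate the Bellman optimality update until values converge."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(q_value(s, a, V) for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

V = value_iteration()
# Greedy policy with respect to the converged values
policy = {s: max(actions, key=lambda a: q_value(s, a, V)) for s in states}
```

With these numbers the greedy policy treats when sick and waits when well; a dynamic decision network extends this picture with belief states over partially observed patient status.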
The study of belief change has been an active area in philosophy and AI. In recent years two special cases of belief change, belief revision and belief update, have been studied in detail. In a companion paper (Friedman & Halpern, 1997), we introduce a new framework to model belief change. This framework combines temporal and epistemic modalities with a notion of plausibility, allowing us to examine the change of beliefs over time. In this paper, we show how belief revision and belief update can be captured in our framework. This allows us to compare the assumptions made by each method, and to better understand the principles underlying them. In particular, it shows that Katsuno and Mendelzon's notion of belief update (Katsuno & Mendelzon, 1991a) depends on several strong assumptions that may limit its applicability in artificial intelligence. Finally, our analysis allows us to identify a notion of minimal change that underlies a broad range of belief change operations, including revision and update.
Explainable Artificial Intelligence (XAI) is essential for building advanced machine learning-powered applications, especially in critical domains such as medical diagnostics or autonomous driving. Legal, business, and ethical requirements motivate using effective XAI, but the increasing number of different methods makes it challenging to pick the right ones. Further, as explanations are highly context-dependent, measuring the effectiveness of XAI methods without users can only reveal a limited amount of information, excluding human factors such as the ability to understand it. We propose to evaluate XAI methods via the user's ability to successfully perform a proxy task, designed such that a good performance is an indicator for the explanation to provide helpful information. In other words, we address the helpfulness of XAI for human decision-making. Further, a user study on state-of-the-art methods was conducted, showing differences in their ability to generate trust and skepticism and the ability to judge the rightfulness of an AI decision correctly. Based on the results, we highly recommend using and extending this approach for more objective-based human-centered user studies to measure XAI performance in an end-to-end fashion.
This article reviews the "once learning" mechanism that was proposed 23 years ago and the subsequent successes of "one-shot learning" in image classification and "You Only Look Once" (YOLO) in object detection. Analyzing the current development of Artificial Intelligence (AI), the proposal is that AI should be clearly divided into the following categories: Artificial Human Intelligence (AHI), Artificial Machine Intelligence (AMI), and Artificial Biological Intelligence (ABI), which will also be the main directions of theory and application development for AI. As a watershed for the branches of AI, some classification standards and methods are discussed: 1) human-oriented, machine-oriented, and biological-oriented AI R&D; 2) information input processed by dimensionality-up or dimensionality-reduction; 3) the use of one/few or large samples for knowledge learning.
Games have been ideal test-beds for artificial intelligence research because they exhibit characteristics that widely exist in real-world scenarios. Learning and optimisation, decision making in dynamic and uncertain environments, game theory, planning and scheduling, and design and education are common research areas shared between games and real-world problems. Numerous open-source games or game-based environments have been implemented for studying artificial intelligence. In addition to single- or multi-player, collaborative or adversarial games, there has also been growing interest in implementing platforms for creative design in recent years. Such platforms provide ideal benchmarks for exploring and comparing artificial intelligence ideas and techniques. This paper reviews games and game-based platforms for artificial intelligence research, provides guidance on matching particular types of artificial intelligence with suitable games for testing and matching particular needs in games with suitable artificial intelligence techniques, discusses the research trends induced by the evolution of these games and platforms, and gives an outlook.
The advent of artificial intelligence has changed many disciplines, such as engineering, social science, and economics. Artificial intelligence is a computational technique inspired by natural intelligence, such as the swarming of birds, the working of the brain, and the pathfinding of ants. These techniques have an impact on economic theories. This book studies the impact of artificial intelligence on economic theories, a subject that has not been extensively studied. The theories considered are: demand and supply, asymmetrical information, pricing, rational choice, rational expectation, game theory, efficient market hypotheses, mechanism design, prospect theory, bounded rationality, portfolio theory, rational counterfactuals, and causality. The benefit of this book is that it evaluates existing theories of economics and updates them based on developments in the field of artificial intelligence.
As Artificial Intelligence (AI) technologies proliferate, concern has centered around the long-term dangers of job loss or threats of machines causing harm to humans. All of this concern, however, detracts from the more pertinent and already existing threats posed by AI today: its ability to amplify bias found in training datasets, and swiftly impact marginalized populations at scale. Government and public sector institutions have a responsibility to citizens to establish a dialogue with technology developers and release thoughtful policy around data standards to ensure diverse representation in datasets to prevent bias amplification and ensure that AI systems are built with inclusion in mind.
This volume represents the accepted submissions from the AAAI-2019 Workshop on Games and Simulations for Artificial Intelligence held on January 29, 2019 in Honolulu, Hawaii, USA. https://www.gamesim.ai
This survey explores the integration of learning and reasoning in two different fields of artificial intelligence: neurosymbolic and statistical relational artificial intelligence. Neurosymbolic artificial intelligence (NeSy) studies the integration of symbolic reasoning and neural networks, while statistical relational artificial intelligence (StarAI) focuses on integrating logic with probabilistic graphical models. This survey identifies seven shared dimensions between these two subfields of AI. These dimensions can be used to characterize different NeSy and StarAI systems. They are concerned with (1) the approach to logical inference, whether model or proof-based; (2) the syntax of the used logical theories; (3) the logical semantics of the systems and their extensions to facilitate learning; (4) the scope of learning, encompassing either parameter or structure learning; (5) the presence of symbolic and subsymbolic representations; (6) the degree to which systems capture the original logic, probabilistic, and neural paradigms; and (7) the classes of learning tasks the systems are applied to. By positioning various NeSy and StarAI systems along these dimensions and pointing out similarities and differences between them, this survey contributes fundamental concepts for understanding the integration of learning and reasoning.
Modular Belief Updates and Confusion about Measures of Certainty in Artificial Intelligence Research
Over the last decade, there has been growing interest in the use of measures of change in belief for reasoning with uncertainty in artificial intelligence research. An important characteristic of several methodologies that reason with changes in belief, or belief updates, is a property that we term modularity. We call updates that satisfy this property modular updates. Although probabilistic measures of belief update that satisfy the modularity property were first discovered in the nineteenth century, knowledge and discussion of these quantities remains obscure in artificial intelligence research. We define modular updates and discuss their inappropriate use in two influential expert systems.
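One classical example of a modular probabilistic update, plausibly of the kind alluded to above, is the odds form of Bayes' rule: under conditional independence, each piece of evidence contributes a multiplicative likelihood ratio whose effect is independent of the order of evidence and of the current belief. A minimal sketch, with illustrative numbers:

```python
def odds(p):
    """Convert a probability to odds."""
    return p / (1.0 - p)

def prob(o):
    """Convert odds back to a probability."""
    return o / (1.0 + o)

def modular_update(prior_prob, likelihood_ratios):
    """Multiply the prior odds by each likelihood ratio
    P(e|H) / P(e|not H). Each factor acts independently of the
    others and of the current belief -- the modularity property."""
    o = odds(prior_prob)
    for lr in likelihood_ratios:
        o *= lr
    return prob(o)

# Prior 0.5 (odds 1); two independent pieces of evidence with
# likelihood ratios 2 and 3 give posterior odds 6, probability 6/7.
p = modular_update(0.5, [2.0, 3.0])
```

Because the factors commute, `modular_update(0.5, [3.0, 2.0])` yields the same posterior, which is exactly what makes the update modular.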
This chapter presents methodological reflections on the necessity and utility of artificial intelligence in generative design. Specifically, the chapter discusses how generative design processes can be augmented by AI to deliver in terms of a few outcomes of interest or performance indicators while dealing with hundreds or thousands of small decisions. The core of the performance-based generative design paradigm is about making statistical or simulation-driven associations between these choices and consequences for mapping and navigating such a complex decision space. This chapter will discuss promising directions in Artificial Intelligence for augmenting decision-making processes in architectural design for mapping and navigating complex design spaces.
Conceptual modeling (CM) applies abstraction to reduce the complexity of a system under study (e.g., an excerpt of reality). As a result of the conceptual modeling process, a human-interpretable, formalized representation (i.e., a conceptual model) is derived, which enables understanding and communication among humans and processing by machines. Artificial Intelligence (AI) algorithms are also applied to complex realities (regularly represented by vast amounts of data) to identify patterns or to classify entities in the data. Aside from the commonalities of both approaches, a significant difference can be observed by looking at the results. While conceptual models are comprehensible, reproducible, and explicit knowledge representations, AI techniques are capable of efficiently deriving an output from a given input while acting as a black box. AI solutions often lack comprehensibility and reproducibility; even the developers of AI systems cannot explain why a certain output is derived. In the Conceptual Modeling meets Artificial Intelligence (CMAI) workshop, we are interested in tackling the intersection of the two thus-far mostly isolated disciplines of CM and AI. The workshop embraces the assumption that manifold mutual benefits can be realized by i) investigating what Conceptual Modeling (CM) can contribute to AI, and ii) the other way around, what Artificial Intelligence (AI) can contribute to CM.
This paper leverages various philosophical and ontological frameworks to explore the concept of embodied artificial general intelligence (AGI), its relationship to human consciousness, and the key role of the metaverse in facilitating this relationship. Several theoretical frameworks underpin this exploration, such as embodied cognition, Michael Levin's computational boundary of a "Self," and Donald D. Hoffman's Interface Theory of Perception, which lead to considering human perceived outer reality as a symbolic representation of alternate inner states of being, and where AGI could embody a different form of consciousness with a larger computational boundary. The paper further discusses the necessary architecture for the emergence of an embodied AGI, how to calibrate an AGI's symbolic interface, and the key role played by the Metaverse, decentralized systems and open-source blockchain technology. The paper concludes by emphasizing the importance of achieving a certain degree of harmony in human relations and recognizing the interconnectedness of humanity at a global level, as key prerequisites for the emergence of a stable embodied AGI.
Collectiveness is an important property of many systems--both natural and artificial. By exploiting a large number of individuals, it is often possible to produce effects that go far beyond the capabilities of the smartest individuals, or even to produce intelligent collective behaviour out of not-so-intelligent individuals. Indeed, collective intelligence, namely the capability of a group to act collectively in a seemingly intelligent way, is increasingly often a design goal of engineered computational systems--motivated by recent techno-scientific trends like the Internet of Things, swarm robotics, and crowd computing, just to name a few. For several years, the collective intelligence observed in natural and artificial systems has served as a source of inspiration for engineering ideas, models, and mechanisms. Today, artificial and computational collective intelligence are recognised research topics, spanning various techniques, kinds of target systems, and application domains. However, there is still a lot of fragmentation in the research panorama of the topic within computer science, and the verticality of most communities and contributions makes it difficult to extract the core underlying ideas and frames of reference. The challenge is to identify, place in a common structure, and ultimately connect the different areas and methods addressing intelligent collectives. To address this gap, this paper considers a set of broad scoping questions providing a map of collective intelligence research, mostly by the point of view of computer scientists and engineers. Accordingly, it covers preliminary notions, fundamental concepts, and the main research perspectives, identifying opportunities and challenges for researchers on artificial and computational collective intelligence engineering.
Airline disruption management traditionally seeks to address three problem dimensions: aircraft scheduling, crew scheduling, and passenger scheduling, in that order. However, current efforts have, at most, only addressed the first two problem dimensions concurrently and do not account for the propagative effects that uncertain scheduling outcomes in one dimension can have on another dimension. In addition, existing approaches for airline disruption management include human specialists who decide on necessary corrective actions for airline schedule disruptions on the day of operation. However, human specialists are limited in their ability to process copious amounts of information imperative for making robust decisions that simultaneously address all problem dimensions during disruption management. Therefore, there is a need to augment the decision-making capabilities of a human specialist with quantitative and qualitative tools that can rationalize complex interactions amongst all dimensions in airline disruption management, and provide objective insights to the specialists in the airline operations control center. To that effect, we provide a discussion and demonstration of an agnostic and systematic paradigm for enabling expeditious simultaneously-integrated recovery of all problem dimensions during airline disruption management, through an intelligent multi-agent system that employs principles from artificial intelligence and distributed ledger technology. Results indicate that our paradigm for simultaneously-integrated recovery executes in polynomial time and is effective when all the flights in the airline route network are disrupted.
The concept of "task" is at the core of artificial intelligence (AI): Tasks are used for training and evaluating AI systems, which are built in order to perform and automate tasks we deem useful. In other fields of engineering, theoretical foundations allow thorough evaluation of designs by methodical manipulation of well understood parameters with a known role and importance; this allows an aeronautics engineer, for instance, to systematically assess the effects of wind speed on an airplane's performance and stability. No framework exists in AI that allows this kind of methodical manipulation: Performance results on the few tasks in current use (cf. board games, question-answering) cannot be easily compared, however similar or different. The issue is even more acute with respect to artificial *general* intelligence systems, which must handle unanticipated tasks whose specifics cannot be known beforehand. A *task theory* would enable addressing tasks at the *class* level, bypassing their specifics, providing the appropriate formalization and classification of tasks, environments, and their parameters, resulting in more rigorous ways of measuring, comparing, and evaluating intelligent behavior. Even modest improvements in this direction would surpass the current ad-hoc nature of machine learning and AI evaluation. Here we discuss the main elements of the argument for a task theory and present an outline of what it might look like for physical tasks.
This paper reviews and raises concerns about adopting, fielding, and maintaining artificial intelligence (AI) systems. While the AI community has made rapid progress, there are challenges in certifying AI systems. Using procedures from design and operational test and evaluation, there are opportunities for determining performance bounds to manage expectations of intended use. A notional use case with image data fusion is presented to support AI object recognition certifiability, considering precision versus distance.
The 7th Symposium on Educational Advances in Artificial Intelligence (EAAI'17, co-chaired by Sven Koenig and Eric Eaton) launched the EAAI New and Future AI Educator Program to support the training of early-career university faculty, secondary school faculty, and future educators (PhD candidates or postdocs who intend a career in academia). As part of the program, awardees were asked to address one of the following "blue sky" questions: * How could/should Artificial Intelligence (AI) courses incorporate ethics into the curriculum? * How could we teach AI topics at an early undergraduate or a secondary school level? * AI has the potential for broad impact to numerous disciplines. How could we make AI education more interdisciplinary, specifically to benefit non-engineering fields? This paper is a collection of their responses, intended to help motivate discussion around these issues in AI education.
Dungeon Crawl Stone Soup is a popular, single-player, free and open-source rogue-like video game with a sufficiently complex decision space that makes it an ideal testbed for research in cognitive systems and, more generally, artificial intelligence. This paper describes the properties of Dungeon Crawl Stone Soup that are conducive to evaluating new approaches of AI systems. We also highlight an ongoing effort to build an API for AI researchers in the spirit of recent game APIs such as MALMO, ELF, and the Starcraft II API. Dungeon Crawl Stone Soup's complexity offers significant opportunities for evaluating AI and cognitive systems, including human user studies. In this paper we provide (1) a description of the state space of Dungeon Crawl Stone Soup, (2) a description of the components for our API, and (3) the potential benefits of evaluating AI agents in the Dungeon Crawl Stone Soup video game.
This paper presents a novel, structured decision support framework that systematically aligns diverse artificial intelligence (AI) agent architectures (reactive, cognitive, hybrid, and learning) with the comprehensive National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF) 2.0. By integrating agent theory with industry guidelines, this framework provides a transparent and stepwise methodology for selecting and deploying AI solutions to address contemporary cyber threats. Employing a granular decomposition of NIST CSF 2.0 functions into specific tasks, the study links essential AI agent properties such as autonomy, adaptive learning, and real-time responsiveness to each subcategory's security requirements. In addition, it outlines graduated levels of autonomy (assisted, augmented, and fully autonomous) to accommodate organisations at varying stages of cybersecurity maturity. This holistic approach transcends isolated AI applications, providing a unified detection, incident response, and governance strategy. Through conceptual validation, the framework demonstrates how tailored AI agent deployments can align with real-world constraints and risk profiles, enhancing situational awareness, accelerating response times, and fortifying long-term resilience via adaptive risk management. Ultimately, this research bridges the gap between theoretical AI constructs and operational cybersecurity demands, establishing a foundation for robust, empirically validated multi-agent systems that adhere to industry standards.
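The stepwise selection logic such a framework describes can be sketched as a small decision table. The mapping below is purely illustrative rather than the paper's actual decomposition; `CSF_AGENT_MAP` and `recommend` are hypothetical names, though the six top-level functions are the real NIST CSF 2.0 functions:

```python
# Illustrative decision table: NIST CSF 2.0 functions -> candidate AI agent
# architectures and autonomy levels. The specific pairings are invented for
# this sketch; the paper derives them from CSF subcategory requirements.
CSF_AGENT_MAP = {
    "Govern":   {"agents": ["cognitive"],            "autonomy": "assisted"},
    "Identify": {"agents": ["cognitive"],            "autonomy": "assisted"},
    "Protect":  {"agents": ["reactive", "hybrid"],   "autonomy": "augmented"},
    "Detect":   {"agents": ["reactive", "learning"], "autonomy": "fully autonomous"},
    "Respond":  {"agents": ["hybrid"],               "autonomy": "augmented"},
    "Recover":  {"agents": ["cognitive", "hybrid"],  "autonomy": "assisted"},
}

def recommend(csf_function: str, maturity: str) -> dict:
    """Pick agent types for a CSF function, capping autonomy at the
    organisation's cybersecurity maturity level."""
    levels = ["assisted", "augmented", "fully autonomous"]
    entry = CSF_AGENT_MAP[csf_function]
    # A less mature organisation never exceeds its own autonomy ceiling.
    capped = levels[min(levels.index(entry["autonomy"]), levels.index(maturity))]
    return {"agents": entry["agents"], "autonomy": capped}

print(recommend("Detect", "augmented"))
# {'agents': ['reactive', 'learning'], 'autonomy': 'augmented'}
```

The cap models the paper's graduated-autonomy idea: the recommended deployment is the less autonomous of what the task allows and what the organisation can safely operate.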
Foundation models and generative artificial intelligence (AI) exacerbate a core regulatory challenge associated with AI: its heterogeneity. By their very nature, foundation models and generative AI can perform multiple functions for their users, thus presenting a vast array of different risks. This multifunctionality means that prescriptive, one-size-fits-all regulation will not be a viable option. Even performance standards and ex post liability - regulatory approaches that usually afford flexibility - are unlikely to be strong candidates for responding to multifunctional AI's risks, given challenges in monitoring and enforcement. Regulators will do well instead to promote proactive risk management on the part of developers and users by using management-based regulation, an approach that has proven effective in other contexts of heterogeneity. Regulators will also need to maintain ongoing vigilance and agility. More than in other contexts, regulators of multifunctional AI will need sufficient resources, top human talent and leadership, and organizational cultures committed to regulatory excellence.
An FDA for AI? Pitfalls and Plausibility of Approval Regulation for Frontier Artificial Intelligence
Observers and practitioners of artificial intelligence (AI) have proposed an FDA-style licensing regime for the most advanced AI models, or 'frontier' models. In this paper, we explore the applicability of approval regulation -- that is, regulation of a product that combines experimental minima with government licensure conditioned partially or fully upon that experimentation -- to the regulation of frontier AI. There are a number of reasons to believe that approval regulation, simplistically applied, would be inapposite for frontier AI risks. Domains of weak fit include the difficulty of defining the regulated product, the presence of Knightian uncertainty or deep ambiguity about harms from AI, the potentially transmissible nature of risks, and distributed activities among actors involved in the AI lifecycle. We conclude by highlighting the role of policy learning and experimentation in regulatory development, describing how learning from other forms of AI regulation and improvements in evaluation and testing methods can help to overcome some of the challenges we identify.
Artificial Intelligence (AI) systems exert a growing influence on our society. As they become more ubiquitous, their potential negative impacts also become evident through various real-world incidents. Following such early incidents, academic and public discussion on AI ethics has highlighted the need for implementing ethics in AI system development. However, little currently exists in the way of frameworks for understanding the practical implementation of AI ethics. In this paper, we discuss a research framework for implementing AI ethics in industrial settings. The framework presents a starting point for empirical studies into AI ethics but is still being developed further based on its practical utilization.
AI has made significant strides recently, leading to various applications in both civilian and military sectors. The military sees AI as a solution for developing more effective and faster technologies. While AI offers benefits like improved operational efficiency and precision targeting, it also raises serious ethical and legal concerns, particularly regarding human rights violations. Autonomous weapons that make decisions without human input can threaten the right to life and violate international humanitarian law. To address these issues, we propose a three-stage framework (Design, In Deployment, and During/After Use) for evaluating human rights concerns in the design, deployment, and use of military AI. Each phase includes multiple components that address various concerns specific to that phase, ranging from bias and regulatory issues to violations of International Humanitarian Law. With this framework, we aim to balance the advantages of AI in military operations with the need to protect human rights.
Organizations investing in artificial intelligence face a fundamental challenge: traditional return on investment calculations fail to capture the dual nature of AI implementations, which simultaneously reduce certain operational risks while introducing novel exposures related to algorithmic malfunction, adversarial attacks, and regulatory liability. This research presents a comprehensive financial framework for quantifying AI project returns that explicitly integrates changes in organizational risk profiles. The methodology addresses a critical gap in current practice where investment decisions rely on optimistic benefit projections without accounting for the probabilistic costs of AI-specific threats including model drift, bias-related litigation, and compliance failures under emerging regulations such as the European Union Artificial Intelligence Act and ISO/IEC 42001. Drawing on established risk quantification methods, including annual loss expectancy calculations and Monte Carlo simulation techniques, this framework enables practitioners to compute net benefits that incorporate both productivity gains and the delta between pre-implementation and post-implementation risk exposures. The analysis demonstrates that accurate AI investment evaluation requires explicit modeling of control effectiveness, reserve requirements for algorithmic failures, and the ongoing operational costs of maintaining model performance. Practical implications include specific guidance for establishing governance structures, conducting phased validations, and integrating risk-adjusted metrics into capital allocation decisions, ultimately enabling evidence-based AI portfolio management that satisfies both fiduciary responsibilities and regulatory mandates.
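The core calculation described above, net benefit as productivity gains minus costs minus the delta between post- and pre-implementation annual loss expectancy (ALE = annual rate of occurrence x single loss expectancy), estimated via Monte Carlo simulation, can be sketched in standard-library Python. All dollar figures and distributions below are invented for illustration:

```python
import random
import statistics

def simulate_net_benefit(n=100_000, seed=7):
    """Monte Carlo sketch of risk-adjusted AI ROI (illustrative numbers).

    Net benefit = productivity gains - implementation cost
                  - (post-implementation ALE - pre-implementation ALE).
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        gains = rng.gauss(500_000, 80_000)   # projected productivity gains
        cost = 300_000                       # fixed implementation cost
        # Pre-AI risk: e.g. manual-process errors (ARO 4/yr, SLE ~ $25k)
        pre_ale = 4 * rng.gauss(25_000, 5_000)
        # Post-AI risk: fewer routine errors, but new algorithmic exposures
        # (model drift, bias litigation) with low ARO but high SLE
        post_ale = 1 * rng.gauss(25_000, 5_000) + 0.2 * rng.gauss(150_000, 40_000)
        samples.append(gains - cost - (post_ale - pre_ale))
    # Report the mean and the 5th percentile as a downside measure
    return statistics.mean(samples), statistics.quantiles(samples, n=20)[0]

mean_nb, p5 = simulate_net_benefit()
```

Reporting a downside percentile alongside the mean is what distinguishes this from the optimistic point projections the abstract criticizes: the same project can have a positive expected net benefit and an unacceptable tail.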
Participatory Artificial Intelligence (PAI) has recently gained interest among researchers as a means of informing the design of technology through a collective's lived experience. PAI holds a greater promise than merely providing useful input to developers: it can contribute to democratizing the design of technology, setting the focus on what should be designed. However, in the process of PAI there exist institutional power dynamics that hinder the realization of the expansive dreams and aspirations of the relevant stakeholders. In this work we propose co-design principles for AI that address institutional power dynamics, focusing on Participatory AI with youth.
Artificial intelligence (AI) is anticipated to emerge as a pivotal enabler for the forthcoming sixth-generation (6G) wireless communication systems. However, current research efforts regarding large AI models for wireless communications primarily focus on fine-tuning pre-trained large language models (LLMs) for specific tasks. This paper investigates the large-scale AI model designed for beamforming optimization to adapt and generalize to diverse tasks defined by system utilities and scales. We propose a novel framework based on bidirectional encoder representations from transformers (BERT), termed BERT4beam. We aim to formulate the beamforming optimization problem as a token-level sequence learning task, perform tokenization of the channel state information, construct the BERT model, and conduct task-specific pre-training and fine-tuning strategies. Based on the framework, we propose two BERT-based approaches for single-task and multi-task beamforming optimization, respectively. Both approaches are generalizable for varying user scales. Moreover, the former can adapt to varying system utilities and antenna configurations by re-configuring the input and output module of the BERT model, while the latter, termed UBERT, can directly generalize to diverse tasks, due to a finer-grained tokenization strategy. Extensive simulation results demonstrate that the two proposed approaches can achieve near-optimal performance and outperform existing AI models across various beamforming optimization tasks, showcasing strong adaptability and generalizability.
Large language models (LLMs) leverage chain-of-thought (CoT) techniques to tackle complex problems, representing a transformative breakthrough in artificial intelligence (AI). However, their reasoning capabilities have primarily been demonstrated in solving math and coding problems, leaving their potential for domain-specific applications, such as battery discovery, largely unexplored. Inspired by the idea that reasoning mirrors a form of guided search, we introduce ChatBattery, a novel agentic framework that integrates domain knowledge to steer LLMs toward more effective reasoning in materials design. Using ChatBattery, we successfully identify, synthesize, and characterize three novel lithium-ion battery cathode materials, which achieve practical capacity improvements of 28.8%, 25.2%, and 18.5%, respectively, over the widely used cathode material, LiNi0.8Mn0.1Co0.1O2 (NMC811). Beyond this discovery, ChatBattery paves a new path by showing a successful LLM-driven and reasoning-based platform for battery materials invention. This complete AI-driven cycle, from design to synthesis to characterization, demonstrates the transformative potential of AI-driven reasoning in revolutionizing materials discovery.
High stakes decision-making often requires a continuous interplay between evolving evidence and shifting hypotheses, a dynamic that is not well supported by current AI decision support systems. In this paper, we introduce a mixed-initiative framework for AI assisted decision making that is grounded in the data-frame theory of sensemaking and the evaluative AI paradigm. Our approach enables both humans and AI to collaboratively construct, validate, and adapt hypotheses. We demonstrate our framework with an AI-assisted skin cancer diagnosis prototype that leverages a concept bottleneck model to facilitate interpretable interactions and dynamic updates to diagnostic hypotheses.
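A concept bottleneck forces the prediction to flow through human-readable concepts, which is what makes the diagnostic hypotheses inspectable and editable in a framework like this. A minimal sketch, with invented concepts, weights, and threshold rather than the prototype's actual model:

```python
# Concept-bottleneck sketch: inputs map to named concepts, and the label
# depends only on the concept scores, so a clinician can override a concept
# and watch the hypothesis update. All names and numbers are hypothetical.
CONCEPTS = ["asymmetry", "border_irregularity", "color_variegation"]
WEIGHTS = {"asymmetry": 0.5, "border_irregularity": 0.3, "color_variegation": 0.4}
THRESHOLD = 0.6

def diagnose(concept_scores):
    """Label depends only on the (interpretable) concept layer."""
    risk = sum(WEIGHTS[c] * concept_scores[c] for c in CONCEPTS)
    return ("suspicious" if risk > THRESHOLD else "benign"), round(risk, 2)

# The model's own concept predictions yield an initial hypothesis...
predicted = {"asymmetry": 0.9, "border_irregularity": 0.2, "color_variegation": 0.1}
print(diagnose(predicted))

# ...and a human intervention on one concept revises the hypothesis.
revised = dict(predicted, border_irregularity=0.9)
print(diagnose(revised))
```

The point of the bottleneck is exactly this interaction loop: evidence (concept scores) and hypotheses (labels) can be updated independently by either party, matching the data-frame view of sensemaking.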
The advent of Generative AI has marked a significant milestone in artificial intelligence, demonstrating remarkable capabilities in generating realistic images, texts, and data patterns. However, these advancements come with heightened concerns over data privacy and copyright infringement, primarily due to the reliance on vast datasets for model training. Traditional approaches like differential privacy, machine unlearning, and data poisoning only offer fragmented solutions to these complex issues. Our paper delves into the multifaceted challenges of privacy and copyright protection within the data lifecycle. We advocate for integrated approaches that combine technical innovation with ethical foresight, holistically addressing these concerns by investigating and devising solutions that are informed by the lifecycle perspective. This work aims to catalyze a broader discussion and inspire concerted efforts towards data privacy and copyright integrity in Generative AI.
High-speed railway (HSR) communications are pivotal for ensuring rail safety, operations, maintenance, and delivering passenger information services. The high speed of trains creates rapidly time-varying wireless channels, increases the signaling overhead, and reduces the system throughput, making it difficult to meet the growing and stringent needs of HSR applications. In this article, we explore artificial intelligence (AI)-based beam-level and cell-level mobility management suitable for HSR communications. Particularly, we propose a compressed spatial multi-beam measurement scheme via compressive sensing for beam-level mobility management in HSR communications. In comparison to traditional down-sampling spatial beam measurements, this method leads to improved spatial-temporal beam prediction accuracy with the same measurement overhead. Moreover, we propose a novel AI-based proactive handover scheme to predict handover events and reduce radio link failure (RLF) rates in HSR communications. Compared with the traditional event A3-based handover mechanism, the proposed approach significantly reduces RLF rates while saving 50% of the beam measurement overhead.
The recent rapid advancement of LLM-based AI systems has accelerated our search and production of information. While the advantages brought by these systems seemingly improve the performance or efficiency of human activities, they do not necessarily enhance human capabilities. Recent research has started to examine the impact of generative AI on individuals' cognitive abilities, especially critical thinking. Based on definitions of critical thinking across psychology and education, this position paper proposes the distinction between demonstrated and performed critical thinking in the era of generative AI and discusses the implication of this distinction in research and development of AI systems that aim to augment human critical thinking.
This position paper argues for, and offers, a critical lens through which to examine the current AI frenzy in the landscape architecture profession. In it, the authors propose five archetypes or mental modes that landscape architects might inhabit when thinking about AI. Rather than limiting judgments of AI use to a single axis of acceleration, these archetypes and corresponding narratives exist along a relational spectrum and are permeable, allowing LAs to take on and switch between them according to context. We model these relationships between the archetypes and their contributions to AI advancement using a causal loop diagram (CLD), and with those interactions argue that more nuanced ways of approaching AI might also open new modes of practice in the new digital economy.
The upsurge of policies and guidelines that aim to ensure Artificial Intelligence (AI) systems are safe and trustworthy has led to a fragmented landscape of AI governance. The European Union (EU) is a key actor in the development of such policies and guidelines. Its High-Level Expert Group (HLEG) issued an influential set of guidelines for trustworthy AI, followed in 2024 by the adoption of the EU AI Act. While the EU policies and guidelines are expected to be aligned, they may differ in their scope, areas of emphasis, degrees of normativity, and priorities in relation to AI. To gain a broad understanding of AI governance from the EU perspective, we leverage qualitative thematic analysis approaches to uncover prevalent themes in key EU documents, including the AI Act and the HLEG Ethics Guidelines. We further employ quantitative topic modelling approaches, specifically through the use of the BERTopic model, to enhance the results and increase the document sample to include EU AI policy documents published post-2018. We present a novel perspective on EU policies, tracking the evolution of its approach to addressing AI governance.
The rapid advancement of artificial intelligence (AI) has brought about significant societal changes, necessitating robust AI governance frameworks. This study analyzed the research trends in AI governance within the framework of the EU AI Act. This study conducted a bibliometric analysis to examine the publications indexed in the Web of Science database. Our findings reveal that research on AI governance, particularly concerning AI systems regulated by the EU AI Act, remains relatively limited compared to the broader AI research landscape. Nonetheless, a growing interdisciplinary interest in AI governance is evident, with notable contributions from multi-disciplinary journals and open-access publications. Dominant research themes include ethical considerations, privacy concerns, and the growing impact of generative AI, such as ChatGPT. Notably, education, healthcare, and worker management are prominent application domains. Keyword network analysis highlights education, ethics, and ChatGPT as central keywords, underscoring the importance of these areas in current AI governance research. Subsequently, a comprehensive literature review was undertaken based on the bibliometric analysis findings to identify research trends, challenges, and insights within the categories of the EU AI Act. The findings provide valuable insights for researchers and policymakers, informing future research directions and contributing to developing comprehensive AI governance frameworks beyond the EU AI Act.
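The keyword network analysis mentioned above boils down to counting keyword co-occurrences across publications and ranking keywords by their total edge weight. A minimal sketch on toy records (the study itself draws keywords from Web of Science fields; the records below are invented):

```python
from collections import Counter
from itertools import combinations

# Hypothetical keyword lists, one per publication.
records = [
    ["AI governance", "ethics", "education"],
    ["AI governance", "privacy", "ChatGPT"],
    ["ethics", "ChatGPT", "education"],
    ["AI governance", "ethics", "ChatGPT"],
]

# Edge weight = number of publications in which two keywords co-occur.
edges = Counter()
for kws in records:
    for a, b in combinations(sorted(set(kws)), 2):
        edges[(a, b)] += 1

# Simple centrality proxy: total co-occurrence weight per keyword.
centrality = Counter()
for (a, b), w in edges.items():
    centrality[a] += w
    centrality[b] += w

print(centrality.most_common(3))
```

On these toy records "AI governance", "ethics", and "ChatGPT" tie for the highest centrality, mirroring (by construction) the central keywords the study reports.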
As AI technology advances rapidly, concerns over the risks of bigness in digital markets are also growing. The EU's Digital Markets Act (DMA) aims to address these risks. Still, the current framework may not adequately cover generative AI systems that could become gateways for AI-based services. This paper argues for integrating certain AI software as core platform services and classifying certain developers as gatekeepers under the DMA. We also propose an assessment of gatekeeper obligations to ensure they cover generative AI services. As the EU considers generative AI-specific rules and possible DMA amendments, this paper provides insights towards diversity and openness in generative AI services.
Artificial Intelligence (AI) has attracted unprecedented attention and is becoming an increasingly popular focus in China. This change can be judged by the impressive record of academic publications, the scale of state-level investment, and the nation-wide participation and devotion. In this paper, we place emphasis on discussing the progress of artificial intelligence engineering in China. We first introduce the focus on AI in Chinese academia, including the supercomputing brain system, the Cambrian Period supercomputer of neural networks, and biometric systems. Then, the development of AI in industrial circles and the latest layout of AI products at companies such as Baidu, Tencent, and Alibaba are introduced. Last, we present the opinions and arguments of China's main intelligentsia about the future development of AI, including how to examine the relationship between humanity on one side and science and technology on the other.
This report provides a detailed comparison between the Safety and Security measures proposed in the EU AI Act's General-Purpose AI (GPAI) Code of Practice (Third Draft) and the current commitments and practices voluntarily adopted by leading AI companies. As the EU moves toward enforcing binding obligations for GPAI model providers, the Code of Practice will be key for bridging legal requirements with concrete technical commitments. Our analysis focuses on the draft's Safety and Security section (Commitments II.1-II.16), documenting excerpts from current public-facing documents that are relevant to each individual measure. We systematically reviewed different document types, such as companies' frontier safety frameworks and model cards, from over a dozen companies, including OpenAI, Anthropic, Google DeepMind, Microsoft, Meta, Amazon, and others. This report is not meant to be an indication of legal compliance, nor does it take any prescriptive viewpoint about the Code of Practice or companies' policies. Instead, it aims to inform the ongoing dialogue between regulators and General-Purpose AI model providers by surfacing evidence of industry precedent for various measures. Nonetheless, we were able to find relevant quotes from at least 5 companies' documents for the majority of the measures in Commitments II.1-II.16.
The adoption of open science has quickly changed how artificial intelligence (AI) policy research is distributed globally. This study examines the regional trends in the citation of preprints, specifically focusing on the impact of two major disruptive events: the COVID-19 pandemic and the release of ChatGPT, on research dissemination patterns in the United States, Europe, and South Korea from 2015 to 2024. Using bibliometric data from the Web of Science, this study tracks how global disruptive events influenced the adoption of preprints in AI policy research and how such shifts vary by region. By marking the timing of these disruptive events, the analysis reveals that while all regions experienced growth in preprint citations, the magnitude and trajectory of change varied significantly. The United States exhibited sharp, event-driven increases; Europe demonstrated institutional growth; and South Korea maintained consistent, linear growth in preprint adoption. These findings suggest that global disruptions may have accelerated preprint adoption, but the extent and trajectory are shaped by local research cultures, policy environments, and levels of open science maturity. This paper emphasizes the need for future AI governance strategies to consider regional variability in research dissemination and highlights opportunities for further longitudinal and comparative research to deepen our understanding of open-access adoption in AI policy development.
The Global South faces unique challenges in achieving digital inclusion due to a heavy reliance on mobile devices for internet access and the prevalence of slow or unreliable networks. While numerous studies have investigated web accessibility within specific sectors such as education, healthcare, and government services, these efforts have been largely constrained to individual countries or narrow contexts, leaving a critical gap in cross-regional, large-scale analysis. This paper addresses this gap by conducting the first large-scale comparative study of mobile web accessibility across the Global South. In this work, we evaluate 100,000 websites from 10 countries in the Global South to provide a comprehensive understanding of accessibility practices in these regions. Our findings reveal that websites from countries with strict accessibility regulations and enforcement tend to adhere better to the Web Content Accessibility Guidelines (WCAG). However, accessibility violations impact different disability groups in varying ways. Blind and low-vision individuals in the Global South are disproportionately affected, as only 40% of the evaluated websites meet critical accessibility guidelines. This significant shortfall is largely due to developers frequently neglecting to implement valid alt text for images and ARIA descriptions, which are essential specification mechanisms in the HTML standard for the effective operation of screen readers.
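The alt-text failure mode described above is mechanically checkable. A minimal sketch of such an audit using only Python's standard-library HTML parser (the study itself used full WCAG evaluation tooling, not this script):

```python
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Flag <img> tags with missing or empty alt text, a common failure
    against WCAG success criterion 1.1.1 (non-text content)."""

    def __init__(self):
        super().__init__()
        self.total = 0
        self.missing = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.total += 1
            alt = dict(attrs).get("alt")
            if not alt or not alt.strip():
                self.missing += 1

# Toy page: one valid alt, one empty alt, one missing alt attribute.
page = """
<img src="chart.png" alt="Bar chart of WCAG violations by country">
<img src="logo.png" alt="">
<img src="hero.jpg">
"""
auditor = AltTextAuditor()
auditor.feed(page)
print(auditor.missing, "of", auditor.total, "images lack alt text")
```

Note that an empty `alt=""` is counted here as a violation for simplicity; in real audits it is valid for purely decorative images, which is one reason automated checks understate compliance nuances.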
In this article, we explore the potential of using sentence-level discourse structure for machine translation evaluation. We first design discourse-aware similarity measures, which use all-subtree kernels to compare discourse parse trees in accordance with the Rhetorical Structure Theory (RST). Then, we show that a simple linear combination with these measures can help improve various existing machine translation evaluation metrics regarding correlation with human judgments both at the segment- and at the system-level. This suggests that discourse information is complementary to the information used by many of the existing evaluation metrics, and thus it could be taken into account when developing richer evaluation metrics, such as the WMT-14 winning combined metric DiscoTKparty. We also provide a detailed analysis of the relevance of various discourse elements and relations from the RST parse trees for machine translation evaluation. In particular we show that: (i) all aspects of the RST tree are relevant, (ii) nuclearity is more useful than relation type, and (iii) the similarity of the translation RST tree to the reference tree is positively correlated with translation quality.
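As a rough illustration of the tree-kernel idea, the sketch below counts complete subtrees shared by two RST-style trees. This is a simplification: the all-subtree kernels used in the paper also match partial tree fragments, and the trees and labels here are invented:

```python
from collections import Counter

# A tree is a tuple ("LABEL", child, child, ...); leaves are ("LABEL",).
def subtrees(tree, bag=None):
    """Collect the serialized complete subtree rooted at every node."""
    if bag is None:
        bag = Counter()
    bag[repr(tree)] += 1
    for child in tree[1:]:
        subtrees(child, bag)
    return bag

def subtree_kernel(t1, t2):
    """Count shared complete subtrees (multiset intersection). A real
    all-subtree kernel would also credit matching partial fragments."""
    b1, b2 = subtrees(t1), subtrees(t2)
    return sum((b1 & b2).values())

ref = ("ELABORATION", ("NUCLEUS",), ("SATELLITE",))   # reference parse
hyp = ("ELABORATION", ("NUCLEUS",), ("SATELLITE",))   # matching translation
alt = ("CONTRAST", ("NUCLEUS",), ("NUCLEUS",))        # divergent translation

print(subtree_kernel(ref, hyp), subtree_kernel(ref, alt))
```

The kernel score is higher for the translation whose discourse tree matches the reference, which is precisely the signal the paper reports as positively correlated with translation quality.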
The capabilities and use cases of automatic natural language processing (NLP) have grown significantly over the last few years. While much work has been devoted to understanding how humans deal with discourse connectives, this phenomenon is understudied in computational systems. It is therefore important to put NLP models under the microscope and examine whether they can adequately comprehend, process, and reason within the complexity of natural language. In this chapter, we introduce the main mechanisms behind automatic sentence processing systems step by step and then focus on evaluating discourse connective processing. We assess nine popular systems on their ability to understand English discourse connectives and analyze how context and language understanding tasks affect their connective comprehension. The results show that NLP systems do not process all discourse connectives equally well and that the computational processing complexity of different connective kinds is not always consistent with the presumed complexity order found in human processing. In addition, while humans tend to be influenced during reading but not necessarily in their final comprehension performance, discourse connectives have a significant impact on the final accuracy of NLP systems. The richer a system's knowledge of connectives, the more negatively inappropriate connectives affect it. This suggests that the correct explicitation of discourse connectives is important for computational natural language processing.
We present the first systematic exploration of earth tide-seismicity correlations in northwestern South America, with a special emphasis on Colombia. For this purpose, we use a dataset of ~167,000 earthquakes, gathered by the Colombian Seismological Network between 1993 and 2017. Most of the events are intermediate-depth earthquakes from the Bucaramanga seismic nest and the Cauca seismic cluster. To this end, we implemented a novel approach for the calculation of tidal phases that considers the relative positions of the Earth-Moon-Sun system at the time of the events. After applying the standard Schuster test to the whole dataset and to several earthquake samples (classified by time, location, magnitude and depth), we found strong correlation anomalies with the diurnal and monthly components of the tide (global log(p) values around -7.0 for the diurnal constituent and -12.1 for the monthly constituent), especially for the intermediate-depth events. These anomalies suggest that around 16% of the deep earthquakes in Colombia may be triggered by tides, especially when the monthly phase is between 350$^\circ$-10$^\circ$. We attribute our positive results, which favor the tidal-triggering hypothesis in contrast to previous negative ones, to: 1) the size of our dataset, and 2) the method we used to calculate tidal phases. Anyone willing to reproduce our results or to apply our methodology to custom datasets can use the public information system tQuakes that we developed for this work.
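The Schuster test itself is compact: for N events with tidal phases θ_i, form the length D of the vector sum of the unit phasors and compute p = exp(-D²/N); a small p (a strongly negative log(p), as in the anomalies reported above) argues against the null hypothesis of uniformly random phases. A standard-library sketch on synthetic phases:

```python
import math
import random

def schuster_log10_p(phases_deg):
    """Schuster test: p = exp(-D^2 / N), where D is the length of the
    vector sum of unit phasors. Returns log10(p)."""
    n = len(phases_deg)
    sx = sum(math.cos(math.radians(p)) for p in phases_deg)
    sy = sum(math.sin(math.radians(p)) for p in phases_deg)
    d2 = sx * sx + sy * sy
    return -d2 / n / math.log(10)

rng = random.Random(42)
# Null case: phases uniformly distributed (no tidal modulation).
uniform = [rng.uniform(0, 360) for _ in range(5000)]
# Modulated case: phases clustered near 0 degrees, mimicking tidal triggering.
clustered = [rng.gauss(0, 60) % 360 for _ in range(5000)]

print(schuster_log10_p(uniform), schuster_log10_p(clustered))
```

Under the null, D²/N is approximately exponentially distributed with mean 1, so uniform phases give log10(p) of order -1, while the clustered sample drives log10(p) far below the -7.0 and -12.1 anomalies reported in the abstract.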
We present a novel corpus of 445 human- and computer-generated documents, comprising about 27,000 clauses, annotated for semantic clause types and coherence relations that allow for nuanced comparison of artificial and natural discourse modes. The corpus covers both formal and informal discourse, and contains documents generated using fine-tuned GPT-2 (Zellers et al., 2019) and GPT-3 (Brown et al., 2020). We showcase the usefulness of this corpus for detailed discourse analysis of text generation by providing preliminary evidence that less numerous, shorter and more often incoherent clause relations are associated with lower perceived quality of computer-generated narratives and arguments.
In this paper we present GumDrop, Georgetown University's entry at the DISRPT 2019 Shared Task on automatic discourse unit segmentation and connective detection. Our approach relies on model stacking, creating a heterogeneous ensemble of classifiers, which feed into a metalearner for each final task. The system encompasses three trainable component stacks: one for sentence splitting, one for discourse unit segmentation and one for connective detection. The flexibility of each ensemble allows the system to generalize well to datasets of different sizes and with varying levels of homogeneity.
Time pressure and topic negotiation may impose constraints on how people leverage discourse relations (DRs) in spontaneous conversational contexts. In this work, we adapt a system of DRs for written language to spontaneous dialogue using crowdsourced annotations from novice annotators. We then test whether discourse relations are used differently across several types of multi-utterance contexts. We compare the patterns of DR annotation within and across speakers and within and across turns. Ultimately, we find that different discourse contexts produce distinct distributions of discourse relations, with single-turn annotations creating the most uncertainty for annotators. Additionally, we find that the discourse relation annotations are of sufficient quality to predict from embeddings of discourse units.
We here explore a ``fully'' lexicalized Tree-Adjoining Grammar for discourse that takes the basic elements of a (monologic) discourse to be not simply clauses, but larger structures that are anchored on variously realized discourse cues. This link with intra-sentential grammar suggests an account for different patterns of discourse cues, while the different structures and operations suggest three separate sources for elements of discourse meaning: (1) a compositional semantics tied to the basic trees and operations; (2) a presuppositional semantics carried by cue phrases that freely adjoin to trees; and (3) general inference, that draws additional, defeasible conclusions that flesh out what is conveyed compositionally.
Recent neural supervised topic segmentation models achieve markedly superior effectiveness over unsupervised methods, given the availability of large-scale training corpora sampled from Wikipedia. These models may, however, suffer from limited robustness and transferability because they exploit simple linguistic cues for prediction while overlooking more important inter-sentential topical consistency. To address this issue, we present a discourse-aware neural topic segmentation model that injects above-sentence discourse dependency structures to encourage the model to base its topic boundary predictions more on the topical consistency between sentences. Our empirical study on English evaluation datasets shows that injecting above-sentence discourse structures into a neural topic segmenter with our proposed strategy can substantially improve its performance on both intra-domain and out-of-domain data, with little increase in model complexity.
I argue that content-based AI value alignment--any approach that treats alignment as optimizing toward a formal value-object (reward function, utility function, constitutional principles, or learned preference representation)--cannot, by itself, produce robust alignment under capability scaling, distributional shift, and increasing autonomy. This limitation arises from three philosophical results: Hume's is-ought gap (behavioral data cannot entail normative conclusions), Berlin's value pluralism (human values are irreducibly plural and incommensurable), and the extended frame problem (any value encoding will misfit future contexts that advanced AI creates). I show that RLHF, Constitutional AI, inverse reinforcement learning, and cooperative assistance games each instantiate this specification trap, and that their failure modes are structural, not engineering limitations. Proposed escape routes--continual updating, meta-preferences, moral realism--relocate the trap rather than exit it. Drawing on Fischer and Ravizza's compatibilist theory, I argue that behavioral compliance does not constitute alignment: there is a principled distinction between simulated value-following and genuine reasons-responsiveness, and specification-based methods cannot produce the latter. The specification trap establishes a ceiling on content-based approaches, not their uselessness--but this ceiling becomes safety-critical at the capability frontier. The alignment problem must be reframed from value specification to value emergence.
For humanity to maintain and expand its agency into the future, the most powerful systems we create must be those which act to align the future with the will of humanity. The most powerful systems today are massive institutions like governments, firms, and NGOs. Deliberative technology is already being used across these institutions to help align governance and diplomacy with human will, and modern AI is poised to make this technology significantly better. At the same time, the race to superhuman AGI is already underway, and the AI systems it gives rise to may become the most powerful systems of the future. Failure to align the impact of such powerful AI with the will of humanity may lead to catastrophic consequences, while success may unleash abundance. Right now, there is a window of opportunity to use deliberative technology to align the impact of powerful AI with the will of humanity. Moreover, it may be possible to engineer a symbiotic coupling between powerful AI and deliberative alignment systems such that the quality of alignment improves as AI capabilities increase.
As artificial intelligence (AI) becomes more powerful and widespread, the AI alignment problem - how to ensure that AI systems pursue the goals that we want them to pursue - has garnered growing attention. This article distinguishes two types of alignment problems depending on whose goals we consider, and analyzes the different solutions necessitated by each. The direct alignment problem considers whether an AI system accomplishes the goals of the entity operating it. In contrast, the social alignment problem considers the effects of an AI system on larger groups or on society more broadly. In particular, it also considers whether the system imposes externalities on others. Whereas solutions to the direct alignment problem center around more robust implementation, social alignment problems typically arise because of conflicts between individual and group-level goals, elevating the importance of AI governance to mediate such conflicts. Addressing the social alignment problem requires both enforcing existing norms on their developers and operators and designing new norms that apply directly to AI systems.
Design is a non-linear, reflective process in which practitioners engage with visual, semantic, and other expressive materials to explore, iterate, and refine ideas. As Generative AI (GenAI) becomes integrated into professional design practice, traditional interaction approaches focusing on prompts or whole-image manipulation can misalign AI output with designers' intent, forcing visual thinkers into verbal reasoning or post-hoc adjustments. We present three interaction approaches from DesignPrompt, FusAIn, and DesignTrace that distribute control across intent, input, and process, enabling designers to guide AI alignment at different stages of interaction. We further argue that alignment is a dynamic negotiation, with AI adopting proactive or reactive roles according to designers' instrumental and inspirational needs and the creative stage.
The progress of AI systems such as large language models (LLMs) raises increasingly pressing concerns about their safe deployment. This paper examines the value alignment problem for LLMs, arguing that current alignment strategies are fundamentally inadequate to prevent misuse. Despite ongoing efforts to instill norms such as helpfulness, honesty, and harmlessness in LLMs through fine-tuning based on human preferences, they remain vulnerable to adversarial attacks that exploit conflicts between these norms. I argue that this vulnerability reflects a fundamental limitation of existing alignment methods: they reinforce shallow behavioral dispositions rather than endowing LLMs with a genuine capacity for normative deliberation. Drawing on research in moral psychology, I show how humans' ability to engage in deliberative reasoning enhances their resilience against similar adversarial tactics. LLMs, by contrast, lack a robust capacity to detect and rationally resolve normative conflicts, leaving them susceptible to manipulation; even recent advances in reasoning-focused LLMs have not addressed this vulnerability. This ``shallow alignment'' problem carries significant implications for AI safety and regulation, suggesting that current approaches are insufficient for mitigating potential harms posed by increasingly capable AI systems.
Solving the AI alignment problem requires having clear, defensible values towards which AI systems can align. Currently, targets for alignment remain underspecified and do not seem to be built from a philosophically robust structure. We begin the discussion of this problem by presenting five core, foundational values, drawn from moral philosophy and built on the requisites for human existence: survival, sustainable intergenerational existence, society, education, and truth. We show that these values not only provide a clearer direction for technical alignment work, but also serve as a framework to highlight threats and opportunities from AI systems to both obtain and sustain these values.
There is no 'ordinary' when it comes to AI. The human-AI experience is extraordinarily complex and specific to each person, yet dominant measures such as usability scales and engagement metrics flatten away nuance. We argue for AI phenomenology: a research stance that asks "How did it feel?" beyond the standard questions of "How well did it perform?" when interacting with AI systems. AI phenomenology acts as a paradigm for bidirectional human-AI alignment as it foregrounds users' first-person perceptions and interpretations of AI systems over time. We motivate AI phenomenology as a framework that captures how alignment is experienced, negotiated, and updated between users and AI systems. Tracing a lineage from Husserl through postphenomenology to Actor-Network Theory, and grounding our argument in three studies (two longitudinal studies with "Day", an AI companion, and a multi-method study of agentic AI in software engineering), we contribute a set of replicable methodological toolkits for conducting AI phenomenology research: instruments for capturing lived experience across personal and professional contexts, three design concepts (translucent design, agency-aware value alignment, temporal co-evolution tracking), and a concrete research agenda. We offer this toolkit not as a new paradigm but as a practical scaffold that researchers can adapt as AI systems, and the humans who live alongside them, continue to co-evolve.
Being a complex subject of major importance in AI Safety research, value alignment has been studied from various perspectives in recent years. However, no final consensus on the design of ethical utility functions facilitating AI value alignment has been achieved yet. Given the urgency of identifying systematic solutions, we postulate that it might be useful to start with the simple fact that, for the utility function of an AI not to violate human ethical intuitions, it trivially has to be a model of these intuitions and reflect their variety; the most accurate models of human entities (biological organisms equipped with a brain constructing concepts like moral judgements) are scientific models. Thus, in order to better assess the variety of human morality, we perform a transdisciplinary analysis, applying a security mindset to the issue and summarizing variety-relevant background knowledge from neuroscience and psychology. We complement this information by linking it to augmented utilitarianism as a suitable ethical framework. Based on that, we propose first practical guidelines for the design of approximate ethical goal functions that might better capture the variety of human moral judgements. Finally, we conclude and address possible future challenges.
The inner alignment problem, which asks whether an arbitrary artificial intelligence (AI) model satisfies a non-trivial alignment function of its outputs given its inputs, is undecidable. We prove this rigorously via Rice's theorem, which is equivalent to a reduction from Turing's Halting Problem; a proof sketch is presented in this work. Nevertheless, there is an enumerable set of provenly aligned AIs that are constructed from a finite set of provenly aligned operations. Therefore, we argue that alignment should be a guaranteed property of the AI architecture rather than a characteristic imposed post-hoc on an arbitrary AI model. Furthermore, while the outer alignment problem is the definition of a judge function that captures human values and preferences, we propose that such a function must also impose a halting constraint that guarantees that the AI model always reaches a terminal state in finite execution steps. Our work presents examples and models that illustrate this constraint and the intricate challenges involved, advancing a compelling case for adopting an intrinsically hard-aligned approach to AI systems architectures that ensures halting.
Pluralistic alignment is concerned with ensuring that an AI system's objectives and behaviors are in harmony with the diversity of human values and perspectives. In this paper we study the notion of pluralistic alignment in the context of agentic AI, and in particular in the context of an agent that is trying to learn a policy in a manner that is mindful of the values and perspectives of others in the environment. To this end, we show how being considerate of the future wellbeing and agency of other (human) agents can promote a form of pluralistic alignment.
I am a person and so are you. Philosophically we sometimes grant personhood to non-human animals, and entities such as sovereign states or corporations can legally be considered persons. But when, if ever, should we ascribe personhood to AI systems? In this paper, we outline necessary conditions for AI personhood, focusing on agency, theory-of-mind, and self-awareness. We discuss evidence from the machine learning literature regarding the extent to which contemporary AI systems, such as language models, satisfy these conditions, finding the evidence surprisingly inconclusive. If AI systems can be considered persons, then typical framings of AI alignment may be incomplete. Whereas agency has been discussed at length in the literature, other aspects of personhood have been relatively neglected. AI agents are often assumed to pursue fixed goals, but AI persons may be self-aware enough to reflect on their aims, values, and positions in the world and thereby induce their goals to change. We highlight open research directions to advance the understanding of AI personhood and its relevance to alignment. Finally, we reflect on the ethical considerations surrounding the treatment of AI systems. If AI systems are persons, then seeking control and alignment may be ethically untenable.
Our ability to build autonomous agents that leverage Generative AI continues to increase by the day. As builders and users of such agents it is unclear what parameters we need to align on before the agents start performing tasks on our behalf. To discover these parameters, we ran a qualitative empirical research study about designing agents that can negotiate during a fictional yet relatable task of selling a camera online. We found that for an agent to perform the task successfully, humans/users and agents need to align over 6 dimensions: 1) Knowledge Schema Alignment 2) Autonomy and Agency Alignment 3) Operational Alignment and Training 4) Reputational Heuristics Alignment 5) Ethics Alignment and 6) Human Engagement Alignment. These empirical findings expand previous work related to process and specification alignment and the need for values and safety in Human-AI interactions. Subsequently we discuss three design directions for designers who are imagining a world filled with Human-Agent collaborations.
Data grows in value when joined and combined; likewise the power of voice grows in ensemble. With 15 UK choirs, we explore opportunities for bottom-up data governance of a jointly created Choral AI Dataset. Guided by a survey of chorister attitudes towards generative AI models trained using their data, we explore opportunities to create empowering governance structures that go beyond opt in and opt out. We test the development of novel mechanisms such as a Trusted Data Intermediary (TDI) to enable governance of the dataset amongst the choirs and AI developers. We hope our findings can contribute to growing efforts to advance collective data governance practices and shape a more creative, empowering future for arts communities in the generative AI ecosystem.
Artificial intelligence is already being applied in and impacting many important sectors in society, including healthcare, finance, and policing. These applications will increase as AI capabilities continue to progress, which has the potential to be highly beneficial for society, or to cause serious harm. The role of AI governance is ultimately to take practical steps to mitigate this risk of harm while enabling the benefits of innovation in AI. This requires answering challenging empirical questions about current and potential risks and benefits of AI: assessing impacts that are often widely distributed and indirect, and making predictions about a highly uncertain future. It also requires thinking through the normative question of what beneficial use of AI in society looks like, which is equally challenging. Though different groups may agree on high-level principles that uses of AI should respect (e.g., privacy, fairness, and autonomy), challenges arise when putting these principles into practice. For example, it is straightforward to say that AI systems must protect individual privacy, but there is presumably some amount or type of privacy that most people would be willing to give up to develop life-saving medical treatments. Despite these challenges, research can and has made progress on these questions. The aim of this chapter will be to give readers an understanding of this progress, and of the challenges that remain.
The integration of AI systems into the military domain is changing the way war-related decisions are made. It binds together three disparate groups of actors - developers, integrators, users - and creates a relationship between these groups and the machine, embedded in the (pre-)existing organisational and system structures. In this article, we focus on the important, but often neglected, group of integrators within such a sociotechnical system. In complex human-machine configurations, integrators carry responsibility for linking the disparate groups of developers and users in the political and military system. To act as the mediating group requires a deep understanding of the other groups' activities, perspectives and norms. We thus ask which challenges and shortcomings emerge from integrating AI systems into resort-to-force (RTF) decision-making processes, and how to address them. To answer this, we proceed in three steps. First, we conceptualise the relationship between different groups of actors and AI systems as a sociotechnical system. Second, we identify challenges within such systems for human-machine teaming in RTF decisions. We focus on challenges that arise a) from the technology itself, b) from the integrators' role in the sociotechnical system, c) from the human-machine interaction. Third, we provide policy recommendations to address these shortcomings when integrating AI systems into RTF decision-making structures.
Current definitions of Information Science are inadequate to comprehensively describe the nature of its field of study and for addressing the problems that are arising from intelligent technologies. The ubiquitous rise of artificial intelligence applications and their impact on society demands the field of Information Science acknowledge the sociotechnical nature of these technologies. Previous definitions of Information Science over the last six decades have inadequately addressed the environmental, human, and social aspects of these technologies. This perspective piece advocates for an expanded definition of Information Science that fully includes the sociotechnical impacts information has on the conduct of research in this field. Proposing an expanded definition of Information Science that includes the sociotechnical aspects of this field should stimulate both conversation and widen the interdisciplinary lens necessary to address how intelligent technologies may be incorporated into society and our lives more fairly.
This paper presents a systematic literature review in Computer Science that provides an overview of initiatives related to digital misinformation. This exploratory study covers research from 1993 to 2020, focusing on the investigation of the phenomenon of misinformation. The review consists of 788 studies from the SCOPUS, IEEE, and ACM digital libraries, synthesizing the primary research directions and sociotechnical challenges. These challenges are classified into Physical, Empirical, Syntactic, Semantic, Pragmatic, and Social dimensions, drawing from Organizational Semiotics. The mapping identifies issues related to the concept of misinformation, highlights deficiencies in mitigation strategies, discusses challenges in approaching stakeholders, and unveils various sociotechnical aspects relevant to understanding and mitigating the harmful effects of digital misinformation. As contributions, this study presents a novel categorization of mitigation strategies and a sociotechnical taxonomy for classifying types of false information, and elaborates on the inter-relation of sociotechnical aspects and their impacts.
The development of artificial intelligence (AI) technologies has far exceeded the investigation of their relationship with society. Sociotechnical inquiry is needed to mitigate the harms of new technologies whose potential impacts remain poorly understood. To date, subfields of AI research develop primarily individual views on their relationship with sociotechnics, while tools for external investigation, comparison, and cross-pollination are lacking. In this paper, we propose four directions for inquiry into new and evolving areas of technological development: value--what progress and direction does a field promote, optimization--how the defined system within a problem formulation relates to broader dynamics, consensus--how agreement is achieved and who is included in building it, and failure--what methods are pursued when the problem specification is found wanting. The paper provides a lexicon for sociotechnical inquiry and illustrates it through the example of consumer drone technology.
This study investigates how U.S. news media framed the use of ChatGPT in higher education from November 2022 to October 2024. Employing Framing Theory and combining temporal and sentiment analysis of 198 news articles, we trace the evolving narratives surrounding generative AI. We found that the media discourse largely centered on institutional responses; policy changes and teaching practices showed the most consistent presence and positive sentiment over time. Conversely, coverage of topics such as human-centered learning, the job market, and skill development appeared more sporadically, with initially uncertain portrayals gradually shifting toward cautious optimism. Importantly, media sentiment toward ChatGPT's role in college admissions remained predominantly negative. Our findings suggest that media narratives prioritize institutional responses to generative AI over long-term, broader ethical, social, and labor-related implications, shaping an emerging sociotechnical imaginary that frames generative AI in education primarily through the lens of adaptation and innovation.
The generalized traveling salesman problem (GTSP) is an extension of the well-known traveling salesman problem. In GTSP, we are given a partition of cities into groups and we are required to find a minimum length tour that includes exactly one city from each group. The aim of this paper is to present a problem reduction algorithm that deletes redundant vertices and edges, preserving the optimal solution. The algorithm's running time is O(N^3) in the worst case, but it is significantly faster in practice. The algorithm has reduced the problem size by 15-20% on average in our experiments and this has decreased the solution time by 10-60% for each of the considered solvers.
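To make the GTSP definition above concrete (choose exactly one city from each group and minimize the cyclic tour length), here is a minimal brute-force solver. The distance matrix and grouping are invented for illustration; this exponential-time sketch is not the paper's reduction algorithm, which instead prunes redundant vertices and edges before handing the instance to a solver.

```python
from itertools import permutations, product

def gtsp_brute_force(dist, groups):
    """Exhaustive GTSP: pick exactly one city per group, then find the
    cheapest cyclic tour through the chosen cities.
    dist: square distance matrix; groups: partition of city indices.
    Exponential time -- for illustration only."""
    best_len, best_tour = float("inf"), None
    for picks in product(*groups):            # one city from each group
        for order in permutations(picks):     # every visiting order
            length = sum(dist[order[i]][order[(i + 1) % len(order)]]
                         for i in range(len(order)))
            if length < best_len:
                best_len, best_tour = length, list(order)
    return best_len, best_tour

# Hypothetical 4-city instance with three groups: {0,1}, {2}, {3}.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
groups = [[0, 1], [2], [3]]
```

On this toy instance the best tour picks city 1 from the first group (tour 1-2-3-1 of length 6+8+4=18), beating any tour through city 0. A reduction algorithm such as the paper's could discard city 0 up front as redundant, shrinking the instance without changing the optimum.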
The merged grouping constructs a comprehensive AI governance framework spanning macro-level narratives to micro-level techniques. The research reveals that AI governance is shifting from abstract ethical principles toward a complex ecosystem jointly driven by geopolitical sovereignty, cross-regional legal contestation, sociotechnical imaginaries, and computable audit metrics. Governance narratives concern not only technical alignment; they are deeply embedded in the global restructuring of power, the contest over discursive authority, and the redefinition of human agency.