大国竞争视阈下美国情报体系的智能协同机制研究
大国竞争视阈下的战略格局与地缘政治博弈
聚焦AI技术在大国竞争背景下的宏观战略影响、科技霸权争夺、地缘政治关系重塑,以及国家层面的技术治理与安全文化研究。
- The relationship between artificial intelligence and national security: Geopolitical dimensions(Branislav Milosavljević, Slađan Milosavljević, 2026, Serbian Journal of Engineering Management)
- Global Governance of Artificial Intelligence and Great Power Rivalry: The Conflict Between Market Logic and Security Logic(Aiyu Yang, 2025, Lecture Notes in Education Psychology and Public Media)
- Public Intelligence in Public Diplomacy(Szabolcs Lóránt, 2025, Nemzetbiztonsági Szemle)
- Military-Civil Collaboration in the USA in the Sphere of Advanced Technologies(A. Stepanov, 2025, World Economy and International Relations)
- The offset imaginary: Great power competition, security imaginaries, and the making of artificial intelligence in American defense planning(Tom F. A. Watts, 2026, Contemporary Security Policy)
- Artificial intelligence in multidomain operations: a SWOT analysis(Sorin Topor, Victor Vevera, Alexandru Georgescu, E. Ciuperca, 2025, BULLETIN OF "CAROL I" NATIONAL DEFENCE UNIVERSITY)
- Synthesizing Strategic Frameworks for Great Power Competition(Major Gavin Holtz, 2025, Journal of Advanced Military Studies)
- STRATEGIC CONVERGENCE OF EAST AND WEST: U.S. AND CHINA’S MILITARY DOCTRINES IN THE AGE OF ARTIFICIAL INTELLIGENCE(Herlin Anak Aman, Aini Fatihah Roslam, Tharishini Krisnan, 2025, International Journal of Law, Government and Communication)
- The Prospect of Electronic Warfare in the 21st Century: An Analysis of Electronic Warfare Equipment Innovation and Its Strategic Impact Based on the Fusion of Quantum Communication and Artificial Intelligence(A. Yang, Wensong Chen, Huiping Luo, Henghui Si, Jinhui Cha, Guowen Li, Shuo Wang, Ruonan Liu, Yuze Fu, Zhengxiang Yang, 2024, Applied Science and Innovative Research)
- AGENTIC AI FOR NATIONAL SECURITY AND DEFENSE: AN ANALYTICAL STUDY ON AUTONOMOUS AGENTS IN SURVEILLANCE, STRATEGIC DECISION-MAKING, AND MILITARY OPERATIONS WITH A FOCUS ON ETHICAL AND LEGAL BOUNDARIES(Aashay Gupta, 2026, Journal of Digital Security and Forensics)
- US-China AI Competition in the Arab Gulf: Diverging Uses of Technological Statecraft on the Regional Level(Rory Miller, Steven Wright, 2025, Alternatives: Global, Local, Political)
- Back to the basics: international relations, intelligence, and strategic competition(C. Q. Thurston, 2023, The Journal of Defense Modeling and Simulation: Applications, Methodology, Technology)
- The Impact of AI on Global Power Shifts: A Case Study of U.S.-China Rivalry(Nourhan Tosson Ibrahim, 2025, China Quarterly of International Strategic Studies)
- Artificial Intelligence and The National Security Matrix(Rahat Naseem Ahmed Khan, Asmat Ullah Khan, 2025, Research Journal for Social Affairs)
- Open-source intelligence and great-power competition under mediatization(Jiaxi Zhou, 2024, Security Journal)
情报体系转型与数字化技术应用
探讨前沿技术(大数据、云计算、生成式AI、知识图谱等)在情报感知、全周期管理及网络安全防御中的实战演进与架构重塑。
- First contact: integrating generative AI into digital diplomatic intelligence(Corneliu Bjola, I. Manor, 2024, Place Branding and Public Diplomacy)
- A paradigm shift in crisis management: The nexus of AGI‐driven intelligence fusion networks and blockchain trustworthiness(Yang Yue, Joseph Z. Shyu, 2024, Journal of Contingencies and Crisis Management)
- Generative AI and cybersecurity: Exploring opportunities and threats at their intersection(Kunter Orpak, 2025, Maandblad voor Accountancy en Bedrijfseconomie)
- Transforming the Intelligence Cycle through Adapting to Complex Environments and the Operational Dynamics of Special Warfare(Emir Z. Muhić, 2025, Applied Cybersecurity & Internet Governance)
- The Application of Cloud Computing in Military Intelligence Fusion(Xi Cheng, Xuejun Liao, 2011, 2011 International Conference of Information Technology, Computer Engineering and Management Sciences)
- Smart new world: adapting human intelligence for the digital age(David V. Gioe, T. Manganello, 2025, Intelligence and National Security)
- GENERATIVE AI–DRIVEN SITUATIONAL AWARENESS FOR NATIONAL SECURITY: MULTI-MODAL DATA FUSION, ANOMALY DETECTION, AND AUTONOMOUS THREAT RESPONSE(Hondor Saragih, Hoga Saragih, 2025, International Journal of Applied Mathematics)
- 人工智能技术在智能武器装备的研究与应用(Unknown Authors, Unknown Journal)
- Analysis of Multi-Domain Operations Concept and the Role of Emerging Advanced and Disruptive Technologies for Its Operationalisation(Ionuț-Iulian Călugăru, 2024, Romanian Military Thinking)
- The Multi-Domain Approach to Military Operations and its Challenges to Intelligence and Intelligence, Surveillance, Reconnaissance(Ondrej Kacmarik, Radovan Vasicek, 2024, Challenges to National Defence in Contemporary Geopolitical Situation)
- 基于虚拟现实技术的人机交互在军事演练中的应用发展趋势 - 汉斯出版社(Unknown Authors, Unknown Journal)
- Opportunities and Directions for the Evolution of Command and Control Systems in the Context of Multi-domain Operations(A. Tóth, 2023, Vojenské reflexie)
- Revolutionizing Warfare: The Role of Artificial Intelligence in the Future of Defense(Tahir Abbas, Wajid Ali, I. Khan, S. Saleem, 2024, The Regional Tribune)
- Air Defense Transformation: Strategy in the Context of Multi-Domain Operations(Ali Mustopo, Oktaheroe Ramsi, Rudy AG Gultom, 2025, Formosa Journal of Applied Sciences)
- AI on the Frontlines: The Role of Artificial Intelligence in the Russia-Ukraine War(Dr Sundus Basharat Ahmad, Shanzay Saeed, 2025, Pakistan Journal of Integrated Social Sciences (PJISS))
- Research on Military Knowledge Graph Fusion Method Based on Multi-Technology(Zhe Liu, Jinhua Wang, 2025, 2025 5th International Conference on Consumer Electronics and Computer Engineering (ICCECE))
人机协同决策机制与智能化作战伦理
重点研究人工智能在战略预见、指挥控制及高风险作战中的应用,侧重于人的监管、可解释性(XAI)、复杂环境决策、核威慑及相关伦理风险治理。
- Human–Machine Collaboration for Strategy Foresight: The Case of Generative AI(M. E. Picavet, P. Maroni, Amardeep Sandhu, Kevin C. Desouza, 2025, Public Administration Review)
- Beyond Automation: The Ethical, Doctrinal, and Operational Challenges of Human-AI Collaboration in Military Decision-Making(Tadeusz Zieliński, 2026, De Securitate et Defensione. O Bezpieczeństwie i Obronności)
- Integration of Artificial Intelligence in Nuclear Command and Control Systems (NC2): Assessing Cold- War Paradigm(Zaigham Abbas, Fouzia Amin, Muhammad Mashhood Khan, 2025, Journal of Social Sciences Review)
- Causal Reasoning and Large Language Models for Military Decision-Making: Rethinking the Command Structures in the Era of Generative AI(Dimitrios Doumanas, Andreas Soularidis, Konstantinos Kotis, 2026, AI)
- Explaining Strategic Decisions in Multi-Agent Reinforcement Learning for Aerial Combat Tactics(Ardian Selmonaj, Alessandro Antonucci, Adrian Schneider, Matthias Sommer, Michael Rüegsegger, 2026, NATO Journal of Science and Technology)
- Integrating Generative AI into Tactical Military Decision-Making(Domagoj Tulicic, Robert Fabac, 2025, Strategos)
- Using AI in Wargaming Simulation as a Multi-Domain Decision Support Tool(G. Chance, Callum Pender, C. Halliday, 2026, NATO Journal of Science and Technology)
- Strategic Foresight in NATO and Strategic Commands - An Analysis of Methodologies and Institutional Architecture(Andrzej Jacuch, 2025, Przegląd Nauk o Obronności)
- Quantum-Cognitive Strategic Leadership: A Novel Theoretical Framework Applied to Contemporary Intelligence Architecture: A Case Study Analysis of Meta-Strategic Thinking in 21st Century Security Governance(Harikumar Pallathadka, Parag Deb Roy, 2025, Journal for Research in Applied Sciences and Biotechnology)
- The Space Race Re-imagined: Outer Space Technology and its Impact on Cross-Domain Deterrence(Natalia Shahrukh, 2025, Journal of Security & Strategic Analyses)
- Artificial Intelligence and Nuclear Weapons: A Commonsense Approach to Understanding Costs and Benefits(Herbert Lin, 2025, Texas National Security Review)
军事智能化环境下的网络安全防御与合规治理
专项分析生成式AI与大语言模型在国防情报环境中的特有安全威胁(如对抗性攻击、数据中毒),以及相应的合规性挑战与防御对策。
- Redefining Cybersecurity: Strategic Integration of Artificial Intelligence for Proactive Threat Defense and Ethical Resilience(2025, International Journal of Science and Engineering Applications)
- 面向人工智能的军队网络安全试验鉴定研究 - 汉斯出版社(Unknown Authors, Unknown Journal)
- Generative AI for FFRDCs(Arun S. Maiya, 2025, 2025 IEEE International Conference on Data Mining Workshops (ICDMW))
- Cybersecurity Challenges and Mitigations for LLMs in DoD Applications(Corinne Yorkman, Mark Reith, 2025, European Conference on Cyber Warfare and Security)
- Ethical and Adversarial Risks of Generative AI in Military Cyber Tools(V. Varadarajan, 2025, Journal of Information Systems Engineering and Management)
- Assessment of Cost-Benefit Dynamics of Cybersecurity Compliance Investments: A Multi-Sectoral Analysis Across Financial, Energy and Intelligence Industries(Oluwatobi Bamigbade, 2025, Journal of Energy Research and Reviews)
- THE QUESTION OF THE SCOPE OF DECISION-MAKING OF COMBAT SYSTEMS BASED ON GENERATIVE ARTIFICIAL INTELLIGENCE TECHNOLOGY AS AN IMPORTANT ELEMENT OF RISK MANAGEMENT OF THE POTENTIAL RISKS ARISING FROM IT(Mirosław Matosek, D. Prokopowicz, 2025, International Journal of Legal Studies ( IJOLS ))
本报告整合后的体系架构分为四个层级:首先是宏观的大国竞争战略框架,明确了技术竞争的地缘政治维度;其次是中观的体系转型,涵盖了情报与指挥技术的数字化底层架构;再次是微观的人机协同与决策机制,探讨了智能化应用的核心逻辑;最后是支撑性的安全治理体系,重点解决智能化带来的网络安全防御与伦理合规挑战。该划分逻辑清晰,实现了从国家战略到战术实施、从技术应用到风险防御的全面覆盖。
总计50篇相关文献
美国政府最早启动研究与发展网络安全试验鉴定工作,不仅推动了网络安全手段与安全试验鉴定相关技术的发展,加强了联邦政府信息系统的安全管理,同时也促进了美军武器系统网络 ...
这篇文章主要侧重于人工智能技术在智能武器装备中的研究与应用.描述了人工智能的定义,人工智能技术的发展以及美国对人工智能的重视.探讨了人工智能在智能武器装备中的 ...
在智能化的信息交互方面,训练系统经过对前期模拟训练积累的大量数据进行学习,可以形成不同作战场景下的作战方案,并在之后的作战中,根据作战场景对积累的方案进行比对,提供 ...
Artificial Intelligence (AI) is increasingly permeating critical decision-making domains, including nuclear command and control (NC2) systems. This study examines the strategic and ethical dimensions of AI integration into NC2 structures, emphasizing its potential to enhance decision-making speed, accuracy, and resilience while mitigating human cognitive limitations. The research introduces the concept of "Intelligentization Syndrome," a theoretical framework explaining resistance to AI adoption in high-risk environments. By contextualizing historical technological resistance and contemporary AI-related anxieties, the study identifies key psychological and structural barriers to AI symbiosis with NC2 systems. Furthermore, it evaluates different AI integration models—human-in-the-loop, human-on-the-loop, and human-out-of-the-loop—highlighting the advantages of a human-on-the-loop configuration as a balanced approach that leverages AI’s computational strengths while preserving human oversight. The study concludes that a phased and regulated AI integration strategy, complemented by robust ethical frameworks and safety measures, is essential to harness AI’s potential without compromising strategic stability.
No abstract available
This Article thoroughly examines the revolutionary impact of Artificial Intelligence (AI) on improving threat detection and response tactics within the swiftly changing realm of cybersecurity. Conventional security measures, struggling against the complexity of contemporary cyber threats, fail to provide adequate protection. In response, AI, driven by machine learning algorithms and predictive analytics, becomes a dynamic and adaptive entity strengthening digital defenses. The investigation commences with a comprehensive analysis of the methods by which AI enhances danger detection. Behavioral analytics utilizes AI to assess user behaviors and network activity, creating a proactive baseline, while anomaly detection and predictive analysis harness machine learning to recognize deviations from the norm and forecast potential dangers. This comprehensive strategy enables organizations to remain proactive against emerging cyber threats. Moreover, the study explores the crucial function of AI in incident response. AI-driven automated incident analysis expedites reaction times by rapidly analyzing and prioritizing security warnings. The amalgamation of AI with threat intelligence streams guarantees a perpetually updated knowledge repository, enabling organizations to respond adeptly to emerging dangers. The dynamic flexibility of AI allows systems to evolve and learn from each incidence, hence enhancing their defensive capacities over time. The discourse recognizes the significant advantages of AI in cybersecurity while simultaneously addressing the obstacles associated with its application. False positives, a potential drawback, require a measured approach to prevent the perception of typical action as harmful. Ethical factors, including privacy concerns and responsible AI practices, highlight the necessity for a judicious and principled incorporation of AI in cybersecurity. The paper underscores the essential collaborative synergy between human expertise and AI technologies. The essay emphasizes the importance of continuous investment in AI training programs for cybersecurity professionals, acknowledging that AI is best successful when enhanced by human insights. Additionally, it advocates for routine security audits to assess and refine cybersecurity protocols, collaborative research efforts to tackle ethical issues, and user education activities to strengthen collective defenses. As the digital landscape evolves, the incorporation of AI in cybersecurity seems not as a cure-all but as a formidable ally. This partnership guarantees a robust protection against increasingly complex cyber-attacks, reinforcing the basis for a secure digital future.
Abstract:Artificial intelligence (AI), particularly machine learning (ML), has transformed computing, offering potential benefits in the nuclear enterprise, which encompasses weapons, delivery systems, platforms, and command and control infrastructure. While AI can enhance efficiency in areas like predictive maintenance and operational planning, its integration into the nuclear enterprise poses significant risks, some of which are inherent in the nature of ML. Five principles should guide AI’s responsible application in a nuclear weapons context: maintaining meaningful human control in nuclear decision-making processes; evaluating AI risks within a nation’s broader nuclear posture; recognizing the challenges of verifying international agreements on AI restrictions; managing risks through self-imposed limitations; and leveraging AI to enhance human oversight. While AI offers opportunities to improve nuclear surety and operational efficiency in areas like planning and predictive maintenance, its deployment must prioritize minimizing catastrophic risks and preserving human judgment in critical decision-making processes.
Great power competition has escalated globally, making it increasingly important for the Department of Defense (DoD) to adopt artificial intelligence (AI) technologies that are advanced and secure. Large language models (LLMs), which generate text, code, images, and other digital content based on data sets used in training have gained attention for their potential in DoD applications such as data analysis, intelligence processing, and communication. However, due to the complex architecture and extensive data dependency of LLMs, integrating LLMs into defense operations presents unique cybersecurity challenges. These risks, if not properly managed, could pose severe threats to national security and mission integrity. This survey paper categorizes these challenges into vulnerability-centric risks, such as data leakage, and misinformation, and threat-centric risks, including prompt manipulation and data poisoning, providing a comprehensive framework for understanding the potential risks of LLMs in DoD settings. Each category is reviewed to identify the primary risks, current mitigation strategies, and potential gaps, ultimately identifying where further research is needed. By summarizing the state of the art in LLM cybersecurity, this paper offers a foundational understanding of LLM security within the DoD. By advocating for a dual approach that considers both the evolving nature of cyber threats and the operational needs of the DoD, it aims to provide actionable recommendations to guide ongoing research in the integration of LLMs to DoD operations.
No abstract available
ABSTRACT This article draws from the International Relations literature on security imaginaries to examine how the development of AI has been socially constructed by defense planners in the United States as a key technological domain of great power competition. It argues that to fully understand this relationship, we need to recognize how the offset imaginary promoted as part of the Third Offset Strategy has shaped a specific view of these technologies' desired geopolitical purpose. The offset imaginary has framed technological innovation, the development of new warfighting concepts, and organizational adaptation as key to sustaining the DoD's battlefield edge over competitors with larger militaries. By analyzing the institutionalization of this imaginary in the Pentagon's AI adoption efforts in the decade after November 2014, this article calls for greater awareness of how security imaginaries shape American defense planning in today's era of great power competition.
Abstract:This article examines the theoretical foundations necessary for conceptualizing and operationalizing modern great power competition through the innovative synthesis of three influential military frameworks: Colonel John R. Boyd's observe, orient, decide, act (OODA) loop, Colonel John A. Warden's systems analysis, and Chinese colonels Qiao Liang and Wang Xiangsui's unrestricted warfare theory. The research demonstrates how complementary frameworks provide strategic practitioners with comprehensive capabilities for identifying systemic vulnerabilities, orchestrating cross-domain effects, and maintaining decisive advantage through superior decision-making processes. The analysis reveals how modern competition transcends traditional military boundaries, necessitating organizational architectures capable of implementing synchronized operations across multiple competitive domains. This theoretical synthesis supports the proposal for an Interagency Action Committee on China (IAC-C) and identifies the foundational principles of a cross-domain organizational framework.
As United States foreign policy returns to a focus on great power competition, it is worth reviewing the fundamental theories associated with understanding the threat and its impact on state relations. The social science fields of international relations (IR) and security studies provide the foundational theory and associated concepts for strategic intelligence analysis in this area. The paper addresses four broad theories (realism, liberalism, economic structuralism, and constructivism) and illustrates their impact on policymakers and intelligence analysts as they craft strategy. The author argues for a more explicit inclusion of IR theories, frameworks, and methods in strategic intelligence analysis.
Artificial Intelligence is playing a transformational role in the domains of national security and international relations, which means that it is uniquely situated in military systems, cybersecurity, intelligence systems, and diplomatic strategy, thereby altering the way in which states define and respond to security threats. Also, fundamentally different than the preceding information technological revolutions, AI can engage all levels of security policy from the tactical level concerning battlefield operations to the strategic level relating to high-level decision making about geopolitical risks. AI is being embraced as both a weapon to possess in national security and, simultaneously, it is being regarded as a weakness in the new global security context at a time when great power strategic competition is strengthening.AI is transforming national security norms and paradigms in traditional ways, including advanced military capabilities, dynamic approaches to cyber defense, predictive measures to surveil threats, and a strategic foresight capacity that has not existed previously. The performance of the US, China, and Russia in terms of national security in terms of AI development represents a new type of technological hegemony or advantage over other states, thereby negotiating positional power or benefits of dominance, or all other states will come to experience technological obsolescence and erosion of national security priorities
This paper studies the escalating competition between the United States and China in the field of artificial intelligence (AI), through a neorealist perspective, asserting that this competition is not just about technology, but also reflects a broader structural expression of great power rivalry in an anarchic global environment. The research explores how Washington, as the hegemonic power, utilizes export controls, investment limitations, and democratic principles to curb China’s technological advancement, while China, the alleged revisionist state, harnesses centralized governance, extensive data resources, and state-driven innovation to challenge the prevailing international order. The study evaluates the ramifications of this rivalry across military, economic, technological, and trade sectors. Additionally, it contemplates potential future outcomes, ranging from China overtaking the United States to increasing global economic disparity and digital fragmentation. The paper also offers potential future scenarios concerning the power which is going to win the AI race. In the end, it contends that the contest is altering global power relationships and could either heighten international instability or, if handled wisely, promote inclusive worldwide innovation. Thus, the competition not only signifies systemic shifts in power but also has the capability to transform the framework of the forthcoming world order.
This article examines NATO's approach to strategic forecasting within an increasingly complex security environment characterized by great power competition, technological transformation, and multidimensional threats. The research aims to assess NATO's methodological frameworks, institutional architecture, and temporal dimensions of strategic foresight capabilities. The analysis employs an examination of NATO's distributed institutional architecture for strategic foresight, mapping the roles and responsibilities of key entities from the North Atlantic Council to specialized commands and research institutions. The study evaluates three methodological frameworks: the Strategic Foresight Analysis (SFA) covering 20-year horizons using PMESII domain scanning, the Framework for Future Alliance Operations (FFAO) translating geopolitical trends into military implications, and the Multiple Futures Project (MFP). The analysis reveals a multi-layered institutional architecture spanning from NATO Headquarters' specialized units to Allied Command Operations (ACO) and Allied Command Transformation (ACT), each contributing distinct analytical capabilities. The 2023 Strategic Foresight Analysis identified seven key drivers and developed four generic scenarios ranging from "Fragmenting World" to "Pervasive Competition," utilizing input from eight hundred participants and AI-assisted horizon scanning tools. NATO's contemporary approach to strategic foresight represents a system that emphasizes organizational adaptability over predictive precision. The Alliance has successfully developed a distributed institutional framework that leverages diverse analytical perspectives while mitigating organizational blind spots through temporal differentiation and capability-based planning methodologies. Rather than pursuing perfect prediction, NATO's strategic foresight focuses on building adaptive capacity to accommodate multiple potential futures across an increasingly complex security environment where military and civilian spheres are increasingly blurred.
In the context of systemic change from a unipolar order to one characterized by multipolarity, artificial intelligence (AI) is viewed by the United States (US) and China not just as a transformational technology with apparently unlimited economic potential. It is also viewed as a central area of competition between these great powers that has the potential to redefine geopolitics in the emerging global order. This article argues that AI in the military sphere should be conceptualized as a tool of “technological statecraft.” This is particularly relevant to the Gulf, where AI has the potential to drive systemic changes in regional security and economic order. For over four decades, the US has been the central pillar of Gulf security doctrine. China, for its part, is the region’s leading trading partner. While the US is looking to use military AI to bind its Gulf allies to its side, China is engaging in technological statecraft to win them away from Washington. This illustrates how the abstract, multifaceted nature of AI has facilitated further interdependencies across defense and economic realms while also broadening the strategic autonomy of Gulf states in the multipolar global order. This also demonstrates how the Gulf states are thinking of AI in terms of navigating great power competition in a multipolar world, preserving their agency and strategic autonomy, and advancing their national interests for economic development through smart strategic positioning.
The development of Artificial Intelligence (AI) represents one of the most significant phenomena of the 21st century, profoundly altering the concepts of power, sovereignty, and security in the contemporary international system. In the age of comprehensive global digitalisation and growing technological interconnectedness, states are increasingly recognising AI as an essential strategic resource for maintaining and preserving vital national interests and for effectively projecting geopolitical influence on the global stage. The paper analyses the relationship between AI and national security through a geopolitical lens, with particular emphasis on the role of the United States and China in shaping the new technological order. The authors examine how AI transforms the concept of security, how it is used across military, intelligence, and cyber structures, and the risks and ethical challenges it poses. The work aims to argue that competition for technological leadership in AI constitutes the central arena of contemporary international rivalry. Moreover, the competition between major powers is not merely a technological race; it also has implications for establishing new global standards, rules of engagement, and economic dominance, thereby directly shaping the future of the global order and the architecture of international security.
This paper examines the evolution of intelligence sharing as a tool of public diplomacy through analysis of two recent cases: the U.S. intelligence disclosures preceding Russia’s 2022 invasion of Ukraine and the 2023 Chinese surveillance balloon incident. Drawing on declassified documents and contemporary reporting, the article demonstrates how intelligence sharing has transformed from a purely operational tool into a sophisticated instrument of public diplomacy. The paper reveals distinct approaches to intelligence disclosure: systematic pre-emptive sharing in Ukraine versus rapid crisis response during the balloon incident. Both cases, however, demonstrate the U.S. Intelligence Community’s growing sophistication in balancing operational security with strategic communication needs. Through comparative analysis, the article identifies key patterns in how intelligence sharing can shape international narratives, build coalition support, and counter adversary messaging while maintaining credibility and protecting sources. This evolution in intelligence sharing practices, formalised in U.S. Intelligence Community Directive 405, represents a significant development in how intelligence services engage with both foreign governments and public audiences in an era of digital information warfare and great power competition.
The relationship between the United States and China has evolved through multiple phases since the Cold War era—shifting from ideological confrontation to diplomatic engagement and now entering a phase of strategic technological rivalry. Within this context, the rise of artificial intelligence (AI) has reshaped the global security landscape and challenged traditional frameworks of military planning and doctrine. This study analyzes the convergence and adaptation of military doctrines between the world’s two major powers—the United States and China—considering rapidly evolving AI technologies. Through a comparative analytical approach, the paper examines how both nations conceptualize and implement AI within their military strategies, particularly in intelligence operations, autonomous weapons systems, and cyber warfare capabilities. The study also explores how differing strategic cultures between East and West influence defense policymaking and the application of AI in military affairs. Findings indicate that, despite divergent values and philosophies, both countries are moving toward strategic convergence in shaping future military paradigms centered on technological innovation. These insights are crucial for understanding great power dynamics in an era of intensifying technological competition and their broader implications for global security.
The article provides an in-depth analysis of the evolving practices of collaboration between U.S. government agencies and private high-tech companies in the context of national security. It highlights the increasing institutionalization and strategic significance of public-private partnerships as a response to both accelerating technological change and intensifying great-power competition. The study focuses on key organizational actors such as the Defense Advanced Research Projects Agency (DARPA), the Defense Innovation Unit (DIU), and the Central Intelligence Agency’s venture capital fund In-Q-Tel. It explores the various mechanisms through which the U.S. Department of Defense and other national security institutions engage with commercial enterprises – from traditional procurement contracts and research grants to fast-track innovation programs and dual-use venture capital initiatives. Special attention is given to how commercial innovation is integrated into defense-related technological development in critical domains such as artificial intelligence, big data analytics, cloud computing, microelectronics, and space technologies. The article underscores the role of both major technology corporations and small startups in advancing military capabilities and shaping the broader defense innovation ecosystem. Beyond institutional design and policy instruments, the article addresses significant challenges arising from this collaboration. These include issues of regulatory complexity, ownership and commercialization of intellectual property, ethical concerns associated with military applications of emerging technologies, as well as tensions stemming from differing organizational cultures, values, and operational timelines between government bodies and private firms. Furthermore, the paper reflects on the geopolitical ramifications of deeper public-private integration, particularly with respect to U.S. – China strategic rivalry and its impact on technology governance and investment flows.
In an era characterized by vast data streams and complex socioeconomic dynamics, the fusion and precise analysis of multi‐sourced intelligence has emerged as a pivotal challenge. To address this, the study constructs a sophisticated intelligence fusion network (IFN) architecture leveraging the potential of Artificial General Intelligence (AGI) and the security tenets of blockchain technology. Drawing from diverse fields including informatics, computer science, data analytics, and network security, the research adopts an integrative methodology comprising both a comprehensive literature review and systems analysis. Key findings highlight the prowess of AGI‐driven IFNs in enhancing governmental early warning systems for crisis management. These networks underscore a paradigm shift from reactive postevent measures to proactive pre‐event forecasting, thus bolstering the efficacy of governmental responses. Moreover, the decentralized nature of blockchain technology ensures data integrity, fostering trust in interdepartmental data sharing—an essential for efficient crisis management in hierarchical administrative structures. This study accentuates the need for redefining crisis management strategies, emphasizing data‐driven decision‐making and seamless intelligence sharing to ensure optimal outcomes.
The rapid development of science and technology is driving the shape of electronic warfare to change dramatically, especially under the influence of quantum communication and artificial intelligence (AI) technology. This article comprehensively explores innovative approaches to the development of electronic warfare readiness in the 21st century, with a focus on the composite application of quantum communication technology and artificial intelligence technology in modern electronic warfare equipment. The non-reproducibility and anti-interference characteristics of quantum communication, as well as the rapid decision making and learning ability of artificial intelligence, are studied in this paper, and the impact of this technology fusion on the strategy and tactical execution of electronic warfare is further discussed. Using quantum communication, the communication security of electronic warfare system is greatly enhanced. The introduction of artificial intelligence can optimize tactical decisions and improve response speed. In addition, the paper also discusses the far-reaching impact of these technological integration on the global strategic security pattern, pointing out that it will promote the change of electronic tactics and bring new strategic competition focus. Finally, the paper puts forward some suggestions for the future research and development of electronic warfare equipment, emphasizing the need to find a balance between technical advantages and ethical regulations.
No abstract available
ABSTRACT This article considers the impact of digital advances on clandestine Human Intelligence (HUMINT) operations. Despite the arrival and advancement of disruptive technologies, classical HUMINT tradecraft – personal secret interaction between case officer and agent – remains indispensable for revealing adversaries’ intentions and providing decision advantage for statecraft. The authors argue that while emerging digital technologies present both new opportunities and challenges, they do not obviate the need for classical HUMINT. Instead, a fusion of traditional HUMINT tradecraft and emerging digital technologies is essential. HUMINT must – and will – adapt to remain relevant in an increasingly digitized world.
Artificial intelligence is reshaping the global economy, security, and governance, with the United States and China holding dominant positions. Their rivalry reflects the conflict between market logic, emphasizing efficiency and innovation, and security logic, prioritizing risk management and strategic control. Both powers exercise strong normative influence in multilateral platforms, yet domestic incorporation of norms remains limited. Market rationality accelerates diffusion of technologies such as Large Language Models but aggravates risks of inequality, whereas security rationality mitigates threats but constrains cooperation. Divergences appear in domains including quantum AI, data governance, and militarycivil fusion, where strategic confrontation delays regulatory adaptation. Balanced and inclusive frameworks are required for effective global governance, as highlighted by initiatives such as the Bletchley Declaration, with institutions like the United Nations serving a bridging role. Progress depends on cultivating reciprocal trust, avoiding zero-sum dynamics, and achieving mutually beneficial outcomes in AI governance. The persistence of these tensions illustrates the structural challenges inherent in aligning technological development with coherent global regulatory mechanisms.
Artificial intelligence (AI) is rapidly transforming modern warfare by reshaping military operations, strategic planning, and geopolitical dynamics. The ongoing Russia-Ukraine conflict provides an apt real-world case study for examining the revolutionary impact of AI oncontemporary warfare. This article explores the deployment of AI and AI-enabled technologies by both Russia and Ukraine to enhance battlefield intelligence, automate decision-making, and influence escalation dynamics.Using the Revolution in Military Affairs (RMA) as the theoretical framework, the research analyzes how AI has been operationalized throughout the conflict. Russia has primarily conducted AI-driven cyber operations and disinformation campaigns, while Ukraine, with support from Western partners, has used AI-powered drones, facial recognition systems, and data-fusion platforms. The article highlights the strategic implications of AI, such as its role in shaping alliance formations, intensifying the security dilemma, and enabling asymmetrical balancing.It use a qualitative case study method, analyzing secondary data from military reports, and academic sources to assess the operational and strategic impact of AI technologies on both sides. By bridging theoretical perspectives and real-world applications, this article contributes to the ongoing discourse on AI-enabled conflicts and global security challenges. Ethical concerns, including automation bias, accountability inautonomous warfare, and the lack of international regulations governing military AI, are also addressed.
In the context of information warfare, the fusion of knowledge graphs in the military field is of great significance for assisting decision-making, intelligence analysis, and battlefield situational awareness. This paper proposes a method for constructing and applying military knowledge graphs based on multi-technology fusion, addressing the fusion issues between the CMNEE dataset and external knowledge graphs (such as CN-DBpedia). The method encompasses four key steps: data preprocessing, entity linking, graph construction, and graph fusion. By integrating technologies such as Stanford CoreNLP, TF-IDF with cosine similarity, Graph Convolutional Networks (GCN), BiLSTM + CRF models, and knowledge embedding (TransE), the fusion effect is significantly enhanced. Experimental results show that this method outperforms other combination methods across multiple metrics, providing strong support for military knowledge mining and application, with important practical significance and broad application prospects.
Ensuring real-time situational awareness in national security operations is increasingly challenging due to the growing complexity of multi-domain threats and the diversity of intelligence sources. Existing analytical and centralized fusion approaches struggle with data scarcity, privacy constraints, and limited adaptability to evolving scenarios. To address these challenges, this study proposes a novel Generative AI–driven situational awareness framework that integrates multi-modal data fusion, generative anomaly simulation, and autonomous response optimization within a unified architecture. The framework leverages diffusion models and Generative Adversarial Networks (GANs) to generate high-fidelity synthetic anomalies, enabling robust detection of rare events. A federated multimodal fusion mechanism ensures secure cross-agency collaboration without exchanging raw data, while a reinforcement learning (RL) and multi-agent RL (MARL)-based decision intelligence module delivers machine-speed response strategies. Furthermore, explainable AI (XAI) and conformal uncertainty quantification enhance system transparency and trustworthiness. Extensive experiments are conducted on benchmark datasets, including xView and xBD for geospatial intelligence and UNSW-NB15 and CIC-IDS2018 for cybersecurity. The results demonstrate that the proposed framework significantly outperforms baseline methods, improving F1-scores by up to 10.6%, achieving 15.4% higher fusion accuracy, and reducing response latency by 42.9%. Additionally, XAI-based explanations reduce analyst verification time by 23%, ensuring compliance with the NIST AI Risk Management Framework. These findings confirm that the proposed approach provides a scalable, trustworthy, and adaptive solution for enhancing real-time situational awareness in mission-critical national security environments.
This paper discusses the multi-domain approach to military operations. Through comparative research and literature review, authors analyze how Western and peer adversary countries, namely the Russian Federation and the People's Republic of China, perceive and implement multi-domain operations. The article also identifies the challenges presented by the multi-domain character of the contemporary and future operating environment to intelligence and ISR. It highlights the crucial role of timely intelligence and surveillance in the diverse and contested operating environment, emphasizing the need for new technologies like artificial intelligence and big data processing.
This study is grounded in the necessity to integrate military capabilities across land, sea, air, space, and cyber domains to establish an adaptive and responsive air defense system. The objective of this research is to present various air defense strategies developed by different countries within the framework of multi-domain operations and to identify the most effective approaches in addressing the dynamics of modern threats. This study employs a qualitative method through comparative case studies of countries such as the United States, Russia, China, and Israel, which are considered to have progressively adopted multi-domain-based air defense strategies. The analysis highlights the vital role of integrating advanced technologies such as artificial intelligence (AI) and big data to support real-time threat detection and decision-making processes.
At present, Multi-Domain Operations (MDO) concept is of interest for the whole defence and security sphere, especially for the Euro-Atlantic area, where its implementation and operationalisation are desired at the level of the military instrument of power, as well as at the national and international strategic level, in an allied and partner context. The manner in which emerging and disruptive technologies can be transferred and integrated into the military field in order to provide stability and give the concept practical-applicable utility, and the way in which it can be operationalised, are elements of great interest to national military authorities at the moment, while giving the concept a probabilistic character for those analysing this area of the military field. Moreover, the decision-making process and the operational side are elements that, in the multi-domain integrative framework, require the involvement of advanced technologies, in particular artificial intelligence, which also, as a subdivision of technology, practically marks the existence of the contemporary human being, becoming a basic constituent in the conduct of military actions. In this article, we will try to give a realistic picture of this concept by presenting information analysed and extracted from bibliographical sources that reflect a qualitative and topical character. Therefore, the research method used throughout this paper will be the bibliographic research method (literature review), through which we will attempt to project a critical analysis of the impact that the concept will have on society.
Command & Control (C2) planning presents an increasingly complex challenge, such as the growing availability of relevant data and how to process this in useable timeframes. By understanding the likely Red Force responses to a potential Blue Force Course of Action (CoA), planners are empowered to make better strategic decisions using insights from Artificial Intelligence (AI) wargaming. Modelling and Simulation (M&S) tools, in combination with AI, can rapidly predict Red Force CoA by consuming and processing observational data of the operational picture. Red Force Response (RFR) is presented as a decision support tool that exploits AI within a wargaming simulator to find potential Red Force CoAs. Using state-of-the-art Deep Neural Network (DNN) algorithms, including Proximal Policy Optimisation (PPO) and Curiosity Learning, integrated into a Multi-Agent Reinforcement Learning (MARL) environment, the RFR agent finds both high-performing and novel CoAs based on the reward and action selection diversity respectively. A 91% Red Force win probability was achieved in a tactical air scenario when trained for 17,587 episodes against a superior Blue Force. The concept demonstrates an effective use of AI for C2 planning, how cloud computing can be used to effectively train agents, and how the concept could scale to larger problems.
The military conflicts of recent times have highlighted the need for modern military operations to focus on multi-domain operations, which requires a transformation of the command and control system, by creating mobile and flexible headquarters that can provide relevant information at any time, even during redeployments and redeployments. Cloud computing can be used to integrate data collection tools, ensuring fast and accurate analysis of large amounts of incoming data. A private 5G network can provide a secure, high-speed and reliable communication environment for cloud-based command and control systems. This network will enable real-time information transfer, facilitate rapid decision-making processes and enable the deployment of autonomous vehicles and drones. The authors have also explored capabilities that support the deployment of autonomous vehicles and drones, enabling commanders to conduct intelligence and surveillance more effectively. By leveraging technology and advanced communications systems, commanders can lead their teams in a dynamic environment while providing a high level of situational awareness. This integration of information and mobility allows commanders to react quickly and decisively, ultimately increasing their effectiveness on the battlefield.
This study examines the cost-benefit dynamics of cybersecurity compliance across the financial, energy, and military intelligence sectors, utilising open-access datasets from the BLS, Statista, Verizon DBIR, ENISA, and IBM. A one-way ANOVA, chi-square test, cost-benefit analysis, and logistic regression were employed to analyse sectoral differences in compliance costs, breach reductions, return on compliance investment (ROCI), and institutional success factors. Results show that financial institutions incur the highest average compliance costs ($30.94 million), while military intelligence yields the highest Return on Capital Invested (ROCI) (1.67). Compliance reduces breach incidents by 57.14% in finance, 50% in energy, and 48.57% in intelligence. Regression findings highlight workforce capacity (β = 3.06, p = .019) and training investment (β = 0.00044, p < .001) as strong predictors of compliance success, while budget constraints significantly hinder outcomes (β = -4.11, p < .001). It is recommended that regulatory bodies develop sector-specific frameworks, expand budget flexibility, institutionalise role-specific training, and incentivise automation and intelligence sharing.
In today’s increasingly complex and dynamic international environment, intelligence services are confronted with the phenomenon often referred to as special warfare, understood here as the combination of kinetic and non-kinetic threats such as information manipulation, cyberattacks, political subversion, and the involvement of organised criminal groups. While similar multi-domain challenges have existed in earlier periods, including during the Cold War and the global war on terror – the speed, technological dimension, and volume of contemporary operations create qualitatively new pressures on intelligence structures. This paper, using a conceptual and comparative methodological approach, critically reviews the existing scholarship on the intelligence cycle and examines case-inspired dynamics of special warfare to assess whether a network-based model can more effectively reflect these demands. The analysis consolidates critiques of the traditional linear cycle, develops a model emphasising simultaneity, adaptability, and real-time cooperation between analysts, operators, and decision-makers, and identifies institutional and cultural challenges of implementation. The study does not claim that such challenges are entirely unique to the present, but argues that digitalisation and cyber-based domains accentuate the limitations of linear approaches. It concludes that the networked approach, while not replacing the traditional cycle completely, represents an important adaptation in intelligence work with implications for both national security practice and the theoretical advancement of intelligence studies.
Multidomain operations are a strategic concept that integrates multiple domains of operation (land, sea, air, space and cyber) to achieve common objectives in a complex and dynamic environment. In the context of rapidly evolving technology, Artificial Intelligence (AI) has become an essential tool for optimizing and streamlining multi-domain operations, providing innovative solutions for sectors such as mobility and maneuver of forces and weapons, logistics, decision-making and other military technologies. In this paper, we will highlight the applications, benefits and challenges associated with the implementation of AI in multi-domain operations through a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis and propose some future development directions.
This study introduces a novel theoretical framework termed "Quantum-Cognitive Strategic Leadership" (QCSL) that synthesizes quantum cognition theory with complexity science to understand strategic decision-making in uncertain environments. Unlike traditional strategic leadership models that rely on linear decision-making paradigms, QCSL incorporates principles of cognitive superposition, strategic uncertainty management, and adaptive intelligence architectures. Through systematic application of this framework to contemporary intelligence governance, we demonstrate how meta-strategic thinking enables leaders to navigate complex, multi-domain challenges while maintaining coherent operational effectiveness. The framework's empirical validation through case study methodology reveals how quantum-cognitive principles manifest in practical strategic leadership, offering new theoretical insights for strategic studies, cognitive science, and public administration. This research contributes to emerging literature on complexity-based leadership theory while providing practical frameworks for developing strategic capabilities in uncertain environments.
Space is no longer a zero-sum game domain but the strategic arena of competition in which deterrence is determined not by threatening force but also by controlling orbital facilities, cyber space, and AI-powered decision systems. The evolution of space technologies from precision-strike satellites to autonomous ISR capabilities has rearranged the battlespace with information dominance and disruption being more valuable than sheer military might. The war of today is not mass mobilisation but network centric warfare (NCW), where the key to success lies in controlling the intelligence stream, jamming adversary’s communications, and pre empting attacks before the first shot has been fired. This study analyses how these advances have propelled Cross Domain Deterrence (CDD) - a theory of deterrence no longer based on brute power but on multi-domain pre emption, strategic deception, and systemic vulnerability. Based on a mixed-methods approach, integrating doctrinal analysis, strategic stability modelling, and case studies, this study examines how space and cyber technologies raise crisis instability, encourage first-strike tendency, and erode conventional deterrence mechanisms. Findings show that reliance on AI-based command infrastructure, cyber-physical sabotage, and dual-use commercial-military resources renders deterrence less stabilising and more escalatory. Without paradigm change in the governance of space and crisis, deterrence will not keep war away from us - it will determine its inevitability.
Artificial intelligence (AI) is pivotal in shaping the future technological landscape. Multi-Agent Reinforcement Learning (MARL) has emerged as a significant AI technology for simulating complex dynamics across various domains, enabling novel potentials for advanced strategic planning and coordination among autonomous agents. However, its practical deployment in sensitive military contexts is constrained by the lack of explainability: a critical factor for reliability, safety, strategic validation, and human-machine interaction. This paper reviews the latest advancements in explainability within MARL and presents novel use cases, emphasizing its indispensability for examining agents’ decision-making processes. Existing techniques are critically assessed and associated with the domain of military strategies, focusing on simulated air combat scenarios. The concept of a novel information-theoretic explainability descriptor to analyse the cooperation capabilities of agents is then introduced. The aim of this research is to highlight the necessity of precisely understanding AI decisions and aligning these artificially generated tactics with human understanding and strategic military doctrines, thereby enhancing the transparency and reliability of AI systems. By illuminating the crucial importance of explainability in advancing MARL for operational defence, this work supports not only strategic planning but also the training of military personnel with insightful and comprehensible analyses.
The accelerating integration of artificial intelligence (AI) into military systems presents unprecedented opportunities and complex dilemmas, particularly in the domain of decision-making under pressure. This article critically explores the ethical, doctrinal, and operational challenges that arise when human-AI collaboration is employed in high-stakes military contexts. At the heart of the study is the question of how military organizations can construct human-AI teams that simultaneously enhance operational effectiveness, uphold legal and ethical standards, and maintain meaningful human control over critical functions. To address this challenge, the study adopts a qualitative and interdisciplinary methodology that synthesizes doctrinal analysis, particularly of NATO and U.S. Department of Defense frameworks, with in-depth case studies of AI-enabled systems used in military operations. It investigates how AI systems affect traditional decision-making processes by accelerating data synthesis, improving situational awareness, and enabling faster reaction times. However, it also highlights emerging risks, such as automation bias, the erosion of human moral agency, loss of interpretability of algorithmic decisions, and the increasing difficulty in assigning responsibility for outcomes. The findings underscore that without appropriate safeguards, AI could undermine ethical accountability and legal clarity. To mitigate these risks, the article proposes a comprehensive framework for designing effective human-AI teams, which includes the implementation of transparent system architectures, explainable AI (XAI) models, trust calibration strategies, adaptive training modules, and multi-layered oversight mechanisms. Special attention is given to the necessity of doctrinal adaptation, the cultural and institutional readiness of military organizations, and the role of normative principles in regulating machine autonomy. Ultimately, the article argues that responsible innovation in military AI must be grounded in ethical rigor, legal certainty, and strategic prudence. Only by embedding these values into the design, deployment, and governance of human-AI teaming can military institutions ensure that AI enhances, rather than compromises, the legitimacy and effectiveness of defense operations.
This research paper basically analyzes the neophyte role of Artificial Intelligence in the defense domain and the ways through which this has been revolutionizing modern warfare with new operational and functional methods and tactics on the battlefield coupled with empowering decision-making processes outside the field. AI has been ingrained deeply into the veins of modern warfare at such a level that not only does it provide the facilities of an autonomous machine that works independently of human fallacious features but also enhances the potential of decision-making capabilities in order to analyze larger sets of data in a few minutes. Moreover, developments in AI and its integration in the military domain have led to the novice concept of ''Hyperwar,'' where automation of machines would eventually minimize human control over decisions. The effects of platforms under AI control would be multiplied by many folds, ultimately making it impossible for an enemy to execute a command or respond, known as the multiplier force effect. Not only will its application enhance the capacity and capability of weapons systems, but it also will alter the nature of warfare. This paper substantially investigates the unprecedented contingencies and how AI-based applications are putting three basic and integral aspects of the future of defense in danger, particularly Autonomous Weapons systems and modern warfare, Intelligence, reconnaissance, and national and international Security. However, few skeptics argue that the application of AI in the military domain requires a fundamental recalculation of what constitutes deterrence and military strength.
Decision-making in military organizations is particularly challenging at all levels, from tactical to strategic. Therefore, support for decision-making is crucial, which, in modern systems, includes traditional and more advanced command and control (C2) models. Due to the complexity of contemporary military operations, tactical decisionmaking should involve rapid, precise, and informed consideration of courses of action (COA). This research focuses on determining the application of generative artificial intelligence (AI) systems, such as ChatGPT, within tactical military decision-making as a potential component of the C2 model. As an advanced language model, ChatGPT can provide potential support to commanders in analyzing and selecting the optimal course of action by generating relevant situational solutions based on large data sets utilized by artificial intelligence. The research shows that using ChatGPT in this context enables the automatic generation of relevant scenarios based on realistic and hypothetical input data. Furthermore, by analyzing possible solutions through interaction with the generative AI system, it is possible to optimize decisions as well as the decision-making process at the tactical command level. This reduces human error and improves response time in dynamic and complex military situations. Although increasingly advanced AI models can significantly enhance military decision-making, this process must not be fully automated. Human oversight must be maintained in the final decision-making phase, thereby preserving accountability in critical military decisions.
Military decision-making is inherently complex and highly critical, requiring commanders to assess multiple variables in real-time, anticipate second-order effects, and adapt strategies based on continuously evolving battlefield conditions. Traditional approaches rely on domain expertise, experience, and intuition, often supported by decision-support systems designed by military experts. With the rapid advancement of Large Language Models (LLMs) such as ChatGPT, Claude, and DeepSeek, a new research question emerges: can LLMs perform causal reasoning at a level that could meaningfully replace human decision-makers, or should they remain human-led decision-support tools in high-stakes environments? This paper explores the causal reasoning capabilities of LLMs for operational and strategic military decisions. Unlike conventional AI models that rely primarily on correlation-based predictions, LLMs are now able to engage in multi-perspective reasoning, intervention analysis, and scenario-based assessments. We introduce a structured empirical evaluation framework to assess LLM performance through 10 de-identified real-world-inspired battle scenarios, ensuring models reason over provided inputs rather than memorized data. Critically, LLM outputs are systematically compared against a human expert baseline, composed of military officers across multiple ranks and years of operational experience. The evaluation focuses on precision, recall, causal reasoning depth, adaptability, and decision soundness. Our findings provide a rigorous comparative assessment of whether carefully prompted LLMs can assist, complement, or approach expert-level performance in military planning. While fully autonomous AI-led command remains premature, the results suggest that LLMs can offer valuable support in complex decision processes when integrated as part of hybrid human-AI decision-support frameworks. Since our evaluation directly tests this capability, this paradigm shift raises fundamental question: Is there a possibility to fully replace high-ranking officers/commanders in leading critical military operations, or should AI-driven tools remain as decision-support systems enhancing human-driven battlefield strategies?
Generative AI, particularly large language models (LLMs), is reshaping the cybersecurity landscape by enabling both innovative defense mechanisms and novel forms of attack. This article explores the dual role of generative AI in both offensive and defensive cybersecurity operations. While GenAI offers significant advancements in defensive capabilities, it is also being leveraged by nation-state actors to enhance the sophistication and success rates of cyberattacks. The article analyzes how LLMs are applied in offensive engagements such as red teaming, penetration testing, and threat intelligence, while also identifying emerging technical, operational, and strategic risks associated with their deployment. Special attention is given to the cybersecurity challenges of generative AI systems themselves, highlighting limitations in conventional frameworks and proposing governance-oriented mitigations such as model evaluation, human-in-the-loop oversight, GenAI-specific red teaming, and the structured dissemination of threat intelligence derived from GenAI-enabled security practices.
Federally funded research and development centers (FFRDCs) face text-heavy workloads, from policy documents to scientific and engineering papers, that are slow to analyze manually. We show how large language models can accelerate summarization, classification, extraction, and sense-making with only a few input-output examples. To enable use in sensitive government contexts, we apply OnPrem.LLM, an open-source framework for secure and flexible application of generative AI. Case studies on defense policy documents and scientific corpora, including the National Defense Authorization Act (NDAA) and National Science Foundation (NSF) Awards, demonstrate how this approach enhances oversight and strategic analysis while maintaining auditability and data sovereignty.
Introduction: This paper discusses the ethical and adversarial implications of implementing generative AI in military cyber secure methods. Generative AI has been displayed in numerous applications for threat simulation and defense from threats in civilian use. Still, there are important ethical considerations in military use because of the potential misuse of generative AI. Cyber threats against military systems continue to grow more sophisticated than previously, and we hope to add data to the body of research in this area to help bridge the identified gap in understanding the risks of generative AI in a military context. Objectives: The paper seeks to explore the ethical dilemmas, including accountability, autonomy, and misuse, surrounding military applications of generative AI. The paper examines adversarial risks associated with generative AI, including manipulation or other uses by hostile actors. The objective is to recommend measures for considering the ethical dilemmas, while at the same time improving the defenses. Methods: The methodology will assess ethical risks such as autonomy, weaponization, and bias related to AI systems. It will determine adversarial risks by recommending using adversarial training strategies, hybrid AI systems, and robust defense mechanisms against adversarially manipulated AI-generated threats. It will also propose ethical frameworks and accountability models for military cybersecurity. Results: This paper provides a comparative performance evaluation of military cybersecurity systems in a traditional and an AI-smart cyber context. The significant findings establish that generative AI potentially improves detection accuracy and, most notably, response times. It also introduces new risks such as adversarial manipulation. The experimentation results illustrated how adversarial training increases the robustness of models, reduces vulnerability, and provides greater defensive capabilities against adversarial threats. Conclusions: Generative AI in military cybersecurity has considerable benefits compared to traditional methods, particularly in enhancing detection performance, response time, and adaptability. As illustrated, the benefits of an AI-enhanced system improved the accuracy of malware detection by 15%, from 80% to 95%, and a 15% increase in phishing email detection, from 78% to 93%. The ability to react quickly to a new threat was also key, as response time was reduced by 60%, from 5 minutes to 2 minutes, which is essential in military situations where responding quickly will minimize impact. Additionally, the AI systems showed the ability to reduce false favorable rates from 10% to 4% (which is excellent) and lower false negative rates from 12% to 5% (which is also that employed the AI system greatly based on its ability to identify what a real threat looks like and its=apability to identify a real threat.
Generating strategic foresight for public organizations is a resource‐intensive and non‐trivial effort. Strategic foresight is especially important for governments, which are increasingly confronted by complex and unpredictable challenges and wicked problems. With advances in machine learning, information systems can be integrated more creatively into the strategic foresight process. We report on an innovative pilot project conducted by an Australian state government that leveraged generative artificial intelligence (AI), specifically large language models, for strategic foresight using a design science approach. The project demonstrated AI's potential to enhance scenario generation for strategic foresight, improve data processing efficiency, and support human decision‐making. However, the study also found that it is essential to balance AI automation with human expertise for validation and oversight. These findings highlight the importance of iterative design to develop robust AI tools for strategic foresight which, alongside stakeholder engagement and process transparency, build trust and ensure practical relevance.
This analytical study explores the integration of agentic artificial intelligence (AI) systems autonomous agents capable of independent decision-making into national security and defense domains, with particular emphasis on surveillance, strategic decision-making, and military operations. Employing a mixed-methods approach, including systematic literature review, secondary data analysis from defense reports, and hypothetical simulations grounded in real-world datasets, the study examines the transformative potential of these technologies while scrutinizing ethical dilemmas such as accountability, bias, and lethal autonomy, alongside legal frameworks like international humanitarian law. Key findings reveal a 45% increase in AI adoption for surveillance between 2020 and 2024 across major powers, yet highlight persistent gaps in regulatory oversight, with 68% of surveyed experts citing accountability as a primary concern. The analysis underscores the need for robust ethical guidelines and adaptive legal structures to mitigate risks. Conclusions advocate for interdisciplinary policy reforms to harness agentic AI's benefits while safeguarding human rights and global stability, contributing to theoretical advancements in AI governance within security contexts.
No abstract available
Artificial intelligence (AI) is playing an increasingly prominent role in the military sphere, bringing both numerous benefits and key challenges and risks. AI in the military enables increased operational efficiency through autonomous drones and vehicles that can perform complex missions with minimal human intervention and improves the precision of military operations. Artificial intelligence also supports realtime data analysis for faster and more accurate decision-making. Determinants of AI technology applications in the military sphere are the growing demands of national security and the need to keep up with techno-logical advances, allowing countries to gain a strategic advantage and better protect their interests. However, AI applications also come with serious challenges and risks, such as issues of ethics and accountability, as autonomous combat systems can make decisions to use force without direct human oversight. In addition, information systems may be vulner-able to cyber attacks, for which AI technology may be involved, posing additional risks to national security. The development of AI technology, including generative AI, often precedes necessary changes in legal norms. It is necessary to develop an international legal framework to regulate the use of AI in armed conflict to prevent escalation and uncon-trolled development of military technologies. Accordingly, the issue of the scope of deci-sion-making of combat systems based on generative artificial intelligence technology is now rapidly growing in importance as an important element in managing the risk of potential threats arising from it.
本报告整合后的体系架构分为四个层级:首先是宏观的大国竞争战略框架,明确了技术竞争的地缘政治维度;其次是中观的体系转型,涵盖了情报与指挥技术的数字化底层架构;再次是微观的人机协同与决策机制,探讨了智能化应用的核心逻辑;最后是支撑性的安全治理体系,重点解决智能化带来的网络安全防御与伦理合规挑战。该划分逻辑清晰,实现了从国家战略到战术实施、从技术应用到风险防御的全面覆盖。