AI for SOC
Autonomous Operations Driven by Large Language Models (LLMs) and Agentic AI
This group of papers represents the cutting edge of SOC research, exploring how LLMs, generative AI, and multi-agent collaboration frameworks (Agentic AI) can automate log interpretation, threat analysis, malware-analysis benchmarking, and autonomous incident response. The emphasis is on using AI copilots and RAG techniques to build autonomous agents that understand natural language and carry out complex security tasks.
- LLMs in the SOC: An Empirical Study of Human-AI Collaboration in Security Operations Centres(Ronal Singh, Shahroz Tariq, Fatemeh Jalalvand, Mohan Baruwal Chhetri, Surya Nepal, Cécile Paris, Martin Lochner, 2025, ArXiv)
- ReAct-Driven SOC Agent with Integrated Detection Engineering for AI-Enhanced Autonomous Alert Handling(Tarek Radah, H. Chaoui, Chaimae Saadi, 2025, Journal of Information Systems Engineering and Management)
- Prompt-Driven Cybersecurity Education: Exploring LLM-Based Simulation of Security Operations Center Activities(Basil Hamdan, 2025, J. Comput. Sci. Coll.)
- Generative AI Enabled Actionable Decision Support in Cyber Security Operations for Enterprise Security(Saurabh Basu, Utkrisht Singh, Sandeep Sharma, Dalela Pankaj Kumar, Rajkumar Upadhyay, 2024, 2024 ITU Kaleidoscope: Innovation and Digital Transformation for a Sustainable World (ITU K))
- Generative AI for Automated Security Operations in Cloud Computing(Advait Patel, Pravin Pandey, Hariharan Ragothaman, Ramasankar Molleti, Diwakar Reddy Peddinti, 2025, 2025 IEEE 4th International Conference on AI in Cybersecurity (ICAIC))
- A Unified Framework for Human AI Collaboration in Security Operations Centers with Trusted Autonomy(Ahmad Mohsin, Helge Janicke, Ahmed Ibrahim, Iqbal H. Sarker, Seyit Ahmet Camtepe, 2025, ArXiv)
- CyberSOCEval: Benchmarking LLMs Capabilities for Malware Analysis and Threat Intelligence Reasoning(Lauren Deason, Adam Bali, Ciprian Bejean, Diana Bolocan, James Crnkovich, Ioana Croitoru, K. Durai, Chase Midler, Calin Miron, David Molnar, Brad Moon, Bruno Ostarcevic, Alberto Peltea, M. Rosenberg, C. Sandu, Arthur Saputkin, Sagar Shah, Daniel Stan, Ernest Szocs, Shengye Wan, Spencer Whitman, Sven Krasser, Joshua Saxe, 2025, ArXiv)
- LOCALINTEL: Generating Organizational Threat Intelligence from Global and Local Cyber Knowledge(Shaswata Mitra, Subash Neupane, Trisha Chakraborty, Sudip Mittal, Aritran Piplai, Manas Gaur, Shahram Rahimi, 2024, ArXiv)
- Generative AI and Security Operations Center Productivity: Evidence from Live Operations(James Bono, Justin Grana, Alec Xu, 2024, ArXiv Preprint)
- Enhancing Security Operations Center: Wazuh Security Event Response with Retrieval-Augmented-Generation-Driven Copilot(Ismail, Rahmat Kurnia, Farid Widyatama, Ilham Mirwansyah Wibawa, Zilmas Arjuna Brata, Ukasyah, Ghitha Afina Nelistiani, Howon Kim, 2025, Sensors (Basel, Switzerland))
- Towards LLM-based Synthetic Dataset Generation of Cyber Incident Response Process Logs(H. Galadima, Cormac J. Doherty, Rob Brennan, 2024, 2024 Cyber Research Conference - Ireland (Cyber-RCI))
- Autonomous Cyber Defense: LLM-Powered Incident Response with LangChain and SOAR Integration(Sandhya Guduru, 2021, International Journal of Computer Science and Information Technology Research)
- Autonomous Agentic AI Architectures for Optimizing Security Operations Centers (SOC) KPIs: Methodology, Impact on Detection, Response, and Recovery(Miroslav Stefanov, Kristyan Stefanov, Laxima Niure Kandel, Sean Crouse, Boyan Jekov, 2025, Land Forces Academy Review)
- Towards Trustworthy Agentic IoEV: AI Agents for Explainable Cyberthreat Mitigation and State Analytics(Meryem Malak Dif, Mouhamed Amine Bouchiha, Abdelaziz Amara Korba, Y. Ghamri-Doudane, 2025, ArXiv)
- Agentic Observability: Automated Alert Triage for Adobe E-Commerce(Aprameya Bharadwaj, Kyle Tu, 2026, ArXiv Preprint)
- A Novel LLM Approach of Cybersecurity Threat Analysis and Response(Tian Hu, Shangyuan Zhuang, Zhaorui Guo, Jiyan Sun, Yinlong Liu, Wei Ma, Hongchao Wang, Lingfeng Zhao, Xiaojie Zhang, 2025, Proceedings of the 16th International Conference on Internetware)
- CORTEX: Collaborative LLM Agents for High-Stakes Alert Triage(Bowen Wei, Yuan Shen Tay, Howard Liu, Jinhao Pan, Kun Luo, Ziwei Zhu, C. Jordan, 2025, ArXiv)
- The Evolution of Agentic AI in Cybersecurity: From Single LLM Reasoners to Multi-Agent Systems and Autonomous Pipelines(Vaishali Vinay, 2025, 2026 IEEE 5th International Conference on AI in Cybersecurity (ICAIC))
- Labeling Network Intrusion Detection System (NIDS) Rules with MITRE ATT&CK Techniques: Machine Learning vs. Large Language Models(Nir Daniel, F. Kaiser, Shay Giladi, Sapir Sharabi, Raz Moyal, Shalev Shpolyansky, Andres Murillo, Aviad Elyashar, Rami Puzis, 2025, Big Data Cogn. Comput.)
- Decoding BACnet Packets: A Large Language Model Approach for Packet Interpretation(Rashi Sharma, Hiroyuki Okada, Tatsumi Oba, Karthikk Subramanian, Naoto Yanai, Sugiri Pranata, 2024, ArXiv)
- AI-Augmented SOC: A Survey of LLMs and Agents for Security Automation(S. Srinivas, Brandon Kirk, Julissa Zendejas, Michael Espino, Matthew Boskovich, Abdul Bari, Khalil Dajani, Nabeel Alzahrani, 2025, J. Cybersecur. Priv.)
- Sola-Visibility-ISPM: Benchmarking Agentic AI for Identity Security Posture Management Visibility(Gal Engelberg, Konstantin Koutsyi, Leon Goldberg, Reuven Elezra, Idan Pinto, Tal Moalem, Shmuel Cohen, Yoni Weintrob, 2026, ArXiv Preprint)
- Cognitive SOC: Evidence-Backed Narrative Generation for Security Operations with Multi-Agent LLM Architecture(S. Sheikhi, Panos Kostakos, Lauri Lovén, 2025, 2025 IEEE International Conference on Big Data (BigData))
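Several entries above (notably the ReAct-driven SOC agent) follow the ReAct pattern: the agent alternates a reasoning step with a tool call until it can commit to a verdict on an alert. A minimal sketch of that loop, with the LLM replaced by a hypothetical rule-based stub and a toy threat-intel tool (the IP, reputation table, and function names are all invented for illustration):

```python
# Skeletal ReAct loop for alert handling. The "LLM" here is a
# hypothetical rule-based stub (policy); real systems call a model.

def lookup_ip(ip):
    # Toy threat-intel tool backed by a static reputation table.
    return {"203.0.113.7": "known C2 node"}.get(ip, "no record")

TOOLS = {"lookup_ip": lookup_ip}

def policy(history):
    # Stub standing in for the LLM: pick the next action from history.
    if not any(step[0] == "observation" for step in history):
        return ("act", "lookup_ip", "203.0.113.7")
    last_obs = [s for s in history if s[0] == "observation"][-1][1]
    verdict = "escalate" if "C2" in last_obs else "close"
    return ("finish", verdict, None)

def react_loop(alert, max_steps=5):
    history = [("alert", alert)]
    for _ in range(max_steps):
        kind, arg1, arg2 = policy(history)
        if kind == "finish":
            return arg1          # final triage verdict
        history.append(("action", arg1))
        history.append(("observation", TOOLS[arg1](arg2)))
    return "needs_human"         # step budget exhausted: hand off
```

The step budget and the "needs_human" fallback reflect a common design choice in these systems: the agent must terminate with either a verdict or an explicit escalation to an analyst.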
Automated Alert Triage, Explainable AI, and Human-AI Collaborative Decision-Making
This group focuses on the "alert fatigue" problem facing SOC analysts. Through automated triage, prioritization, unsupervised anomaly detection, and explainable-AI (XAI) techniques such as SHAP and LIME, these works improve the transparency and trustworthiness of alerts, aiming to align human-machine trust signals and speed up decision-making.
- Decision-Aware Trust Signal Alignment for SOC Alert Triage(Israt Jahan Chowdhury, Md Abu Yousuf Tanvir, 2026, ArXiv Preprint)
- HeATed Alert Triage (HeAT): Transferrable Learning to Extract Multistage Attack Campaigns(Stephen Moskal, S. Yang, 2022, ArXiv)
- An Assessment of the Usability of Machine Learning Based Tools for the Security Operations Center(Sean Oesch, R. A. Bridges, Jared M. Smith, Justin M. Beaver, John R. Goodall, Kelly M. T. Huffer, Craig Miles, Daniel Scofield, 2020, 2020 International Conferences on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData) and IEEE Congress on Cybermatics (Cybermatics))
- Information-Dense Reasoning for Efficient and Auditable Security Alert Triage(Guangze Zhao, Yongzheng Zhang, Changbo Tian, Dan Xie, Hongri Liu, Bailing Wang, 2025, ArXiv)
- AutoCRAT: Automatic Cumulative Reconstruction of Alert Trees(Eric Ficke, Raymond M. Bateman, Shouhuai Xu, 2024, ArXiv)
- Automated Alert Classification and Triage (AACT): An Intelligent System for the Prioritisation of Cybersecurity Alerts(M. Turcotte, François Labreche, Serge-Olivier Paquette, 2025, ArXiv)
- Carbon Filter: Real-time Alert Triage Using Large Scale Clustering and Fast Search(Jonathan Oliver, Raghav Batta, Adam Bates, Muhammad Adil Inam, Shelly Mehta, Shu-Tao Xia, 2024, ArXiv)
- Automated Threat-Alert Screening for Battling Alert Fatigue with Temporal Isolation Forest(Muhamad Erza Aminanto, Lei Zhu, Tao Ban, Ryoichi Isawa, Takeshi Takahashi, D. Inoue, 2019, 2019 17th International Conference on Privacy, Security and Trust (PST))
- Combating Threat-Alert Fatigue with Online Anomaly Detection Using Isolation Forest(Muhamad Erza Aminanto, Lei Zhu, Tao Ban, Ryoichi Isawa, Takeshi Takahashi, D. Inoue, 2019)
- Explainable AI for Cloud Intrusion Detection: A User Study of SHAP and LIME in AWS GuardDuty(Humberto Goncalves, Zakaria Alomari, 2026, 2026 IEEE 5th International Conference on AI in Cybersecurity (ICAIC))
- A cyber security data triage operation retrieval system(Chen Zhong, Tao Lin, Peng Liu, J. Yen, Kai Chen, 2018, Comput. Secur.)
- Human-AI Collaboration in Cloud Security: Cognitive Hierarchy-Driven Deep Reinforcement Learning(Zahra Aref, Sheng Wei, N. Mandayam, 2025, ArXiv)
- Towards AI-Driven Human-Machine Co-Teaming for Adaptive and Agile Cyber Security Operation Centers(Massimiliano Albanese, Xinming Ou, Kevin Lybarger, Daniel Lende, Dmitry Goldgof, 2025, ArXiv)
- From Alert Fatigue to Augmented Defense: A Case for AI Copilots in Cybersecurity Operation Centers(Vishwesh Akre, Thaeer Kobbaey, Goran Lazarov, Khaled Abdulsalam, Waleed Al-Sit, Jamal Diab, 2025, 2025 10th International Conference on Information Technology Trends (ITT))
- Evaluating Explainable AI for Deep Learning-Based Network Intrusion Detection System Alert Classification(Rajesh Kalakoti, Risto Vaarandi, Hayretdin Bahşi, Sven Nõmm, 2025)
- AI-driven Decision Support in Management: Shaping the Future of Organizational Decision-Making(U. Shrivastava, 2026, International Journal of Social Sciences and Management)
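The Isolation-Forest papers above screen alerts without labels: train only on routine activity, then surface the points that are easiest to isolate. A minimal sketch with scikit-learn on synthetic alert features (the three features and their distributions are illustrative, not taken from any of the papers):

```python
# Unsupervised alert screening with an Isolation Forest.
# Features per alert (invented): [events/min, distinct dest ports,
# log bytes out]. Lower decision_function score = more anomalous.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
benign = rng.normal(loc=[5, 3, 8], scale=[1, 1, 1], size=(500, 3))
suspicious = rng.normal(loc=[50, 40, 14], scale=[5, 5, 1], size=(5, 3))
alerts = np.vstack([benign, suspicious])

# Fit on routine activity only, then score the full alert stream.
model = IsolationForest(contamination=0.01, random_state=0).fit(benign)
scores = model.decision_function(alerts)
flagged = np.argsort(scores)[:5]   # top-5 most anomalous for human review
```

Ranking alerts by anomaly score rather than thresholding them is what turns the model into a triage aid: analysts work the queue from the top instead of reviewing everything.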
Intelligent Security Orchestration (SOAR), SIEM Integration, and Response Automation
This group addresses upgrades to the SOC's core infrastructure, studying how to integrate AI/ML into SIEM and SOAR platforms. Topics include automated playbook generation, hyper-automation framework design, cross-service observability, and optimizing rapid-response workflows for specific threats such as ransomware.
- Autonomous threat hunting with AI in multi cloud and hybrid security architectures(Rosemary Chisom Dimakunne, Queeneth Etelaowoni Ogbeche, Abdussobur Giwa, 2022, International Journal of Management & Entrepreneurship Research)
- AI-optimized SOC playbook for Ransomware Investigation(P. R. Rajgopal, 2025, International journal of data science and machine learning)
- Toward Robust Security Orchestration and Automated Response in Security Operations Centers with a Hyper-Automation Approach Using Agentic Artificial Intelligence(Ismail, Rahmat Kurnia, Zilmas Arjuna Brata, Ghitha Afina Nelistiani, Shinwook Heo, Hyeong-Sik Kim, Howon Kim, 2025, Inf.)
- CYBER SECURITY OF RAILWAY AUTOMATION AND TELEMECHANICS SYSTEMS. ARTIFICIAL INTELLIGENCE FOR DETECTING AND RESPONDING TO THREATS(V. Sotnyk, Mehriban Almammadova, Anatolii Melikhov, 2025, Collection of Scientific Works of the Ukrainian State University of Railway Transport)
- Intelligent Threat Detection: The Future of Cybersecurity with AI and SOAR(A. S, S. Parthiban, 2025, Advanced International Journal for Research)
- Effectiveness of AI/ML in SOAR (Security Automation and Orchestration) Platforms(S. Subudhi, 2024, International Journal of Science and Research (IJSR))
- Design and implementation of control support intelligence for the enhancement of efficiency in the Security Operations Center (SOC)(Jin-Won Kim, Yong-Joon Lee, Sang-Do Lee, 2023, Journal of the Korea Academia-Industrial cooperation Society)
- Enhancing Cyber Resilience: Convergence of SIEM, SOAR, and AI in 2024(Shanmugavelan Ramakrishnan, Dinesh Reddy Chittibala, 2024, International Journal of Computing and Engineering)
- Designing Scalable Software Automation Frameworks for Cybersecurity Threat Detection and Response(Bhargav Dilipkumar Jaiswal, 2025, International Journal of Scientific Research and Management (IJSRM))
- IC-SECURE: Intelligent System for Assisting Security Experts in Generating Playbooks for Automated Incident Response(Ryuta Kremer, Prasanna N. Wudali, Satoru Momiyama, Toshinori Araki, Jun Furukawa, Y. Elovici, A. Shabtai, 2023, ArXiv)
- Security orchestration and automation models for accelerating incident detection and response(Adetomiwa A. Dosunmu, Peter Olusoji Ogundele, 2025, Computer Science & IT Research Journal)
- Botnet Detection and Incident Response in Security Operation Center (SOC): A Proposed Framework(Roslaily Muhammad, S. Ismail, N. Hassan, 2024, International Journal of Advanced Computer Science and Applications)
- Artificial Intelligence based Security Orchestration, Automation and Response System(Rahul Vast, Shruti Sawant, Aishwarya Thorbole, Vishal Sahebrao Badgujar, 2021, 2021 6th International Conference for Convergence in Technology (I2CT))
- AI-ASSISTED SECURITY ORCHESTRATION IN HEALTHCARE INCIDENT RESPONSE(Gaurang Deshpande, Deepak Singh, 2021, Phoenix: International Multidisciplinary Research Journal ( Peer reviewed High Impact Journal ))
- Integrative Analytics for Autonomous Threat Response: AI-Secured Business Processes in Finance Ecosystems(Peter Olusegun Aina, 2025, International Journal of Research Publication and Reviews)
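A recurring idea in this group is the playbook as data: a declarative list of conditional response steps that an orchestration engine executes. A toy sketch (the action names, playbook, and context fields are hypothetical; real SOAR platforms bind actions to product APIs):

```python
# Minimal playbook-as-data executor. Everything here is illustrative:
# real actions would call EDR/ticketing/notification APIs.

def isolate_host(ctx):
    ctx["isolated"] = ctx["host"]
    return ctx

def notify_oncall(ctx):
    ctx.setdefault("notifications", []).append(f"ransomware on {ctx['host']}")
    return ctx

ACTIONS = {"isolate_host": isolate_host, "notify_oncall": notify_oncall}

# Each step pairs an action with a guard condition over the incident context.
RANSOMWARE_PLAYBOOK = [
    {"action": "isolate_host", "if": lambda c: c["severity"] >= 8},
    {"action": "notify_oncall", "if": lambda c: True},
]

def run_playbook(playbook, ctx):
    for step in playbook:
        if step["if"](ctx):
            ctx = ACTIONS[step["action"]](ctx)
    return ctx

result = run_playbook(RANSOMWARE_PLAYBOOK, {"host": "fs-01", "severity": 9})
```

Keeping the playbook as data rather than code is what makes the LLM-generated playbooks studied above (e.g. IC-SECURE) feasible: the model emits steps, and a fixed engine executes them.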
Deep-Learning-Based Threat Detection Algorithms and Proactive Defense
This group centers on building the underlying detection capability, using Transformer, GNN, and LSTM models to identify malicious network traffic, endpoint behavior, and malicious IPs with high precision. It also covers emerging techniques such as federated learning, digital twins, and AutoML for proactive defense and privacy-preserving detection.
- ALERT-Transformer: Bridging Asynchronous and Synchronous Machine Learning for Real-Time Event-based Spatio-Temporal Data(Carmen Martin-Turrero, Maxence Bouvier, Manuel Breitenstein, Pietro Zanuttigh, Vincent Parret, 2024, ArXiv Preprint)
- IP SafeGuard–An AI-Driven Malicious IP Detection Framework(Abdullah Al Siam, M. Alazab, A. Awajan, Md. Rakibul Hasan, Areej Obeidat, Nuruzzaman Faruqui, 2025, IEEE Access)
- Carbon Filter: Scalable, Efficient, and Secure Alert Triage for Endpoint Detection & Response(Muhammad Adil Inam, Jonathan Oliver, Raghav Batta, Adam Bates, 2025, 2025 28th International Symposium on Research in Attacks, Intrusions and Defenses (RAID))
- A Host-based Intrusion Detection: Using Signature-based and AI-driven Anomaly Detection for Enhanced Cybersecurity*(Fazal-ur Rehman, Farhan Mushtaq, Hafsah Zaman, 2024, 2024 4th International Conference on Digital Futures and Transformative Technologies (ICoDT2))
- AI-Driven Cybersecurity Threat Detection: Building Resilient Defense Systems Using Predictive Analytics(Biswajit Chandra Das, M. S. Sartaz, Syed Ali Reza, Arat Hossain, Md Nasiruddin, Kanchon Kumar Bishnu, Kazi Sharmin Sultana, Sadia Sharmeen Shatyi, MD. Azam Khan, Joynal Abed, 2025, ArXiv)
- Next-Generation Ransomware Defense Using LSTM-Powered Behavioral Analytics(G. M. Sathyaseelan, Y. R. Babu, Shankar Das Boddu, N. Reshma, S. Santhoshkumar, D. Vikram, 2026, 2026 9th International Conference on Computational Intelligence in Data Science (ICCIDS))
- AegisUI: Behavioral Anomaly Detection for Structured User Interface Protocols in AI Agent Systems(Mohd Safwan Uddin, Saba Hajira, 2026, ArXiv Preprint)
- SILU: Strategy Involving Large-scale Unlabeled Logs for Improving Malware Detector(Taishi Nishiyama, Atsutoshi Kumagai, Kazunori Kamiya, Kenji Takahashi, 2020, 2020 IEEE Symposium on Computers and Communications (ISCC))
- PORTFILER: Port-Level Network Profiling for Self-Propagating Malware Detection(Talha Ongun, Oliver Spohngellert, Benjamin A. Miller, Simona Boboila, Alina Oprea, Tina Eliassi-Rad, Jason Hiser, Alastair Nottingham, J. Davidson, M. Veeraraghavan, 2021, 2021 IEEE Conference on Communications and Network Security (CNS))
- Evaluating the Effectiveness of AI-Driven Threat Intelligence Systems: A Technical Analysis(Rajesh Rajamohanan Nair, 2025, Journal of Computer Science and Technology Studies)
- AI-Powered Incident Response Automation in Critical Infrastructure Protection(Ehimah Obuse, Edima David Etim, Iboro Akpan Essien, Emmanuel Cadet, Joshua Oluwagbenga Ajayi, Eseoghene Daniel Erigha, Lawal Abdulmutalib Babatunde, 2023, International Journal of Advanced Multidisciplinary Research and Studies)
- AI Driven ChatOps for DevSecOps: Automating Security Incident Response(Balajee Asish Brahmandam, 2025, International Journal of Multidisciplinary Research in Science, Engineering and Technology)
- Cyber Defense Digital Twins: A Federated Learning and Zero-Trust AI Architecture for Autonomous Threat Prediction and Response(Sivaramakrishnan Narayanan, International Journal of Scientific Research in Computer Science, Engineering and Information Technology)
- Proactive Insider Threat Detection Framework: An Explainable AI and Behavioral Analytics-Driven Approach(Oladapo Adeduro, Olabisi Josh-Falade, A. Mesioye, 2026, Journal of Future Artificial Intelligence and Technologies)
- Towards Autonomous Cybersecurity: An Intelligent AutoML Framework for Autonomous Intrusion Detection(Li Yang, Abdallah Shami, 2024, ArXiv Preprint)
- Optimal Machine Learning Algorithms for Cyber Threat Detection(H. Farooq, Naif M. Otaibi, 2018, 2018 UKSim-AMSS 20th International Conference on Computer Modelling and Simulation (UKSim))
- Adaptive Behavioral Analytics for Intrusion Prevention in Ai-Driven Digital Currency and Financial Cyber Defense Systems(Eric Jhessim, 2025, International Journal of Innovative Science and Research Technology)
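PORTFILER above profiles traffic at the port level to catch self-propagating malware. As a drastically simplified stand-in for its models, one can baseline per-port connection counts and flag large deviations with a z-score (the ports, counts, and threshold below are invented):

```python
# Toy port-level profiling in the spirit of PORTFILER: learn a per-port
# baseline of connection counts, flag ports that deviate strongly.
# A crude z-score stands in for the paper's statistical models.
import numpy as np

rng = np.random.default_rng(1)
ports = [22, 80, 443, 445]
# 24 hourly connection counts per port as the baseline window.
baseline = {p: rng.poisson(lam=100, size=24) for p in ports}

def flag_ports(current, baseline, z_thresh=4.0):
    flagged = []
    for port, count in current.items():
        hist = baseline[port]
        z = (count - hist.mean()) / (hist.std() + 1e-9)
        if z > z_thresh:
            flagged.append(port)
    return flagged

# SMB (445) spikes, consistent with self-propagating malware scanning.
alerts = flag_ports({22: 95, 80: 110, 443: 102, 445: 900}, baseline)
```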
Domain-Specific Applications, Cloud-Native Security, and Next-Generation Cognitive Architectures
These papers examine SOC practice in AI-adjacent vertical industries (finance, healthcare, energy, rail transit) and complex architectures (multi-cloud, zero trust, industrial control systems). The group also includes research on the evolution of cognitive SOC architectures, security knowledge graph construction, and compliance management (e.g., SOC 2).
- AI-Enhanced Cloud Security: A Dynamic Framework for Adaptive Threat Intelligence(M. K, Yadhukrishna M R, Jyothi B, P. M, 2025, 2025 International Conference on Computing Technologies (ICOCT))
- AI-Driven Proactive Cloud Application Data Access Security(Priyanka Neelakrishnan, 2024, International Journal of Innovative Science and Research Technology (IJISRT))
- Towards a Responsive Security Operations Center for UAVs(Fadhila Tlili, S. Ayed, Lamia Chaari, 2024, 2024 International Wireless Communications and Mobile Computing (IWCMC))
- An Integrated Approach to AI-Enhanced Security Information and Event Management(Mugeshwaran K, K. Maharajan, Dumala Nithish, Dinesh S, N. Uday, 2025, 2025 International Conference on Computational Robotics, Testing and Engineering Evaluation (ICCRTEE))
- Behavioral Analytics and AI in Zero Trust Security: A Framework for Adaptive Identity and Access Management(Mukul Mangla, 2025, International Journal Science and Technology)
- A Multi-Layered Adaptive Cybersecurity Framework for the Banking Sector Integrating Next-Gen Firewalls with AI-Driven IDPS(Sokroeurn Ang, Mony Ho, Sopheatra Huy, Midhunchakkaravarthy Janarthanan, 2026, STAP Journal of Security Risk Management)
- Synergizing AI and Cybersecurity: A New Methodology to Real-Time Intrusion Detection and Prevention System(Dileep Singh Kushwah, 2025, INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT)
- AI-Driven Cybercrime Forensics for Predictive Threat Detection and Investigative Intelligence(Atif Khan, 2026, International Journal of Scientific Interdisciplinary Research)
- RelExt: Relation Extraction using Deep Learning approaches for Cybersecurity Knowledge Graph Improvement(Aditya Pingle, Aritran Piplai, Sudip Mittal, A. Joshi, James Holt, Richard Zak, 2019, 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM))
- Digital Forensics and Incident Response (DFIR) Automation: Leveraging AI to Accelerate Breach Investigation, Evidence Collection, and Cyberattack Mitigation(John Kuforiji, 2025, Journal of Data Analysis and Critical Management)
- The Role of Technical Product Managers in Architecting AI-Powered Infrastructure: A Compliance-Driven Framework(Chinenye Blessing Onyekaonwu, Olaide Oluwatobi Ogundolapo, Amina Catherine Peter- Anyebe, 2025, International Journal of Innovative Science and Research Technology)
- Development of Proactive Cyber Security System using Azure Honeynet Platform(Usha Desai, Shankar G, Indrajeet Palled, Vadiraj S Namavali, S. Patil, Susha M, 2024, 2024 International Conference on Augmented Reality, Intelligent Systems, and Industrial Automation (ARIIA))
- Autonomous Zero Trust Enforcement: Revolutionizing Security Through AI-Powered Identity Behavior Analytics(Bharatveeranjaneya Reddy Devagiri, 2025, Journal of Computer Science and Technology Studies)
- Cybersecurity in AI-Driven Data Centers: Reinventing Threat Detection(Subhash Bondhala, 2025, International Journal of Advanced Research in Science, Communication and Technology)
- Cloud Security Leveraging AI: A Fusion-Based AISOC for Malware and Log Behaviour Detection(N. Okonkwo, L. L. Dhirani, 2025, ArXiv)
- THE ROLE OF AI-DRIVEN CYBER RISK ANALYTICS ON CLOUD SECURITY POSTURE MANAGEMENT IN ENTERPRISE SYSTEMS(Anisur Rahman, 2025, International Journal of Business and Economics Insights)
- AI-Driven Incident Response for Digital Forensics and Incident Response: A Comprehensive Framework(Santosh Datta Bompally, 2025, Journal of Computer Science and Technology Studies)
- Interdisciplinary Optimization of Security Operations Centers with Digital Assistant(Bence Tureczki, Katalin Szenes, 2021, 2021 IEEE 15th International Symposium on Applied Computational Intelligence and Informatics (SACI))
- Cybersecurity Resilience Demonstration for Wind Energy Sites in Co-Simulation Environment(Michael Mccarty, Jay Johnson, Bryan Richardson, C. Rieger, Rafer Cooley, J. Gentle, Bradley Rothwell, T. Phillips, Beverly Novak, Megan Culler, Brian Wright, 2023, IEEE Access)
- Intelligent Risk Assessment in OSNs Using Behavioral Analytics and Trust Metrics(Bejawada Saritha, Rajender S, 2025, Journal of Computer Allied Intelligence)
- AI-Driven Cybersecurity in Storage Infrastructure(Oluwatosin Oladayo Aramide, 2024, World Journal of Advanced Engineering Technology and Sciences)
- A cyber threat intelligence model using MISP and machine learning in a SOC environment(A. Aljahdali, 2025, International Journal of ADVANCED AND APPLIED SCIENCES)
- AI-Driven Insider Threat Detection Using Wazuh and Behavioral Analytics: A Modular Approach(N. R. Rao, 2025, International Journal for Research in Applied Science and Engineering Technology)
- AI-Driven Quantum Cryptography: Machine Learning for Robust QKD, Secure Operations, and Quantum-Safe Integration(Ankit Gupta, Shilpi Mittal, 2026, 2026 IEEE 5th International Conference on AI in Cybersecurity (ICAIC))
- The Next Generation Cognitive Security Operations Center: Network Flow Forensics Using Cybersecurity Intelligence(Konstantinos Demertzis, Panayiotis Kikiras, Nikos Tziritas, S. Sánchez, L. Iliadis, 2018, Big Data Cogn. Comput.)
- Ontologies for Network Security and Future Challenges(Danny Velasco, Glen Rodriguez, 2017, ArXiv Preprint)
- The Next Generation Cognitive Security Operations Center: Adaptive Analytic Lambda Architecture for Efficient Defense against Adversarial Attacks(Konstantinos Demertzis, Nikos Tziritas, Panayiotis Kikiras, S. Sánchez, L. Iliadis, 2019, Big Data Cogn. Comput.)
- Labeling NIDS Rules with MITRE ATT&CK Techniques: Machine Learning vs. Large Language Models(Nir Daniel, F. Kaiser, Shay Giladi, Sapir Sharabi, Raz Moyal, Shalev Shpolyansky, Andres Murillo, Aviad Elyashar, R. Puzis, 2024, ArXiv)
"SoC" Research Outside Security Operations (Term Disambiguation)
Although these papers contain the keyword "SoC", it refers to battery State of Charge or hardware System-on-Chip, placing them in energy management and hardware design. They are excluded from the security operations center scope of this survey.
- A 2.31uJ/Inference Ultra-Low Power Always-on Event-Driven AI-IoT SoC With Switchable nvSRAM Compute-in-Memory Macro(Haoyang Sang, Wenao Xie, Gwangtae Park, H.-J. Yoo, 2024, IEEE Transactions on Circuits and Systems II: Express Briefs)
- A Hybrid Deep Learning and XAI-Driven Framework for Accurate Estimation of Battery SOC and SOH(N. Keerthi, B. Jyothi, Kalyan D., K. V. G. Rao, T. Rakesh, M. K. Kumar, 2025, International Journal of Basic and Applied Sciences)
- Edge AI-Driven Battery Management for Electric Vehicles: Models, Hardware, and Future Directions(Sanjana Jayavant Zangaruche, Rutuja Deepak Pakhare, Saptami Madan Gadekar, Sakshi Dinesh Sangelia, Shrusti Basavaraj Allagi, Prakash K Sonwalkar, 2025, 2025 Global Conference on Information Technology and Communication Networks (GITCON))
- AI-Driven Energy Management System for Renewable Powered Electric Vehicle Charging Stations(N. Ravikumar, Korikana Ramadevi, Dumpa Poojitha, Banadaru Pavan Sai Santhosh, 2025, INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT)
- AI-Driven Cache Coherence Verification with Graph Neural Networks in SoC-Based Shared Memory Systems(2022, Journal of Information Systems Engineering and Management)
- AI-Driven Anomaly Detection in Oscilloscope Images for Post-Silicon Validation(Kowshic A. Akash, Tobias Wulf, Torsten Valentin, Alexander Geist, Ulf Kulau, Sohan Lal, 2025, 2025 38th International Conference on VLSI Design and 2024 23rd International Conference on Embedded Systems (VLSID))
- AI-Driven Evolution in Integrated Circuit Design: A Qualitative Exploration of Transformative Potential and Human Collaboration(Danny Rittman, J. Butler, Abdullah Alshboul, 2024, The Pinnacle: A Journal by Scholar-Practitioners)
This report ultimately organizes AI-for-SOC research into six dimensions. The core evolutionary path runs clearly from foundational deep-learning detection algorithms, through SIEM/SOAR automation and integration, to today's frontier of LLM- and agent-driven autonomous operations. The research focus has shifted from raw detection accuracy toward mitigating alert fatigue, improving the explainability of human-AI collaboration, and adapting to complex architectures such as cloud-native and zero-trust environments. The report also strictly separates security operations center (SOC) research from identically abbreviated work on System-on-Chip hardware and battery State of Charge (SoC), keeping the survey rigorous and focused.
108 relevant papers in total.
The sophistication of cyberthreats demands more efficient and intelligent tools to support Security Operations Centers (SOCs) in managing and mitigating incidents. To address this, we developed the Security Event Response Copilot (SERC), a system designed to assist analysts in responding to and mitigating security breaches more effectively. SERC integrates two core components: (1) security event data extraction using Retrieval-Augmented Generation (RAG) methods, and (2) LLM-based incident response guidance. This paper specifically utilizes Wazuh, an open-source Security Information and Event Management (SIEM) platform, as the foundation for capturing, analyzing, and correlating security events from endpoints. SERC leverages Wazuh’s capabilities to collect real-time event data and applies a RAG approach to retrieve context-specific insights from three vectorized data collections: incident response knowledge, the MITRE ATT&CK framework, and the NIST Cybersecurity Framework (CSF) 2.0. This integration bridges strategic risk management and tactical intelligence, enabling precise identification of adversarial tactics and techniques while adhering to best practices in cybersecurity. The results demonstrate the potential of combining structured threat intelligence frameworks with AI-driven models, empowered by Wazuh’s robust SIEM capabilities, to address the dynamic challenges faced by SOCs in today’s complex cybersecurity environment.
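The retrieval step such RAG pipelines rely on can be sketched as follows. A bag-of-words embedding and cosine ranking stand in for SERC's actual vector store and embedding model, and the knowledge snippets are invented:

```python
# Sketch of RAG retrieval: embed the alert text, rank stored knowledge
# snippets by cosine similarity, return the top hits as LLM context.
# Bag-of-words counts stand in for a real embedding model.
import math
from collections import Counter

SNIPPETS = [
    "T1486 data encrypted for impact: ransomware encrypts files on hosts",
    "NIST CSF 2.0 respond: contain the incident and notify stakeholders",
    "T1110 brute force: repeated failed logons against an account",
]

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def retrieve(alert, k=1):
    q = embed(alert)
    ranked = sorted(SNIPPETS, key=lambda s: cosine(q, embed(s)), reverse=True)
    return ranked[:k]

hits = retrieve("wazuh alert: files encrypted by ransomware process")
```

The retrieved snippets would then be placed into the LLM prompt alongside the raw alert, which is how systems like SERC ground their response guidance in ATT&CK and CSF content rather than in the model's parametric memory alone.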
Gartner, a large research and advisory company, anticipates that by 2024 80% of security operation centers (SOCs) will use machine learning (ML) based solutions to enhance their operations (https://www.ciodive.com/news/how-data-science-tools-can-lighten-the-load-for-cybersecurity-teams/572209/). In light of such widespread adoption, it is vital for the research community to identify and address usability concerns. This work presents the results of the first in situ usability assessment of ML-based tools. With the support of the US Navy, we leveraged the national cyber range, a large, air-gapped cyber testbed equipped with state-of-the-art network and user emulation capabilities, to study six US Naval SOC analysts' usage of two tools. Our analysis identified several serious usability issues, including multiple violations of established usability heuristics for user interface design. We also discovered that analysts lacked a clear mental model of how these tools generate scores, resulting in mistrust and/or misuse of the tools themselves. Surprisingly, we found no correlation between analysts' level of education or years of experience and their performance with either tool, suggesting that other factors such as prior background knowledge or personality play a significant role in ML-based tool usage. Our findings demonstrate that ML-based security tool vendors must put a renewed focus on working with analysts, both experienced and inexperienced, to ensure that their systems are usable and useful in real-world security operations settings.
In the dynamic landscape of evolving cyber threats, Security Operations Centers (SOCs) play an important role in protecting digital assets. Among these threats, botnets are particularly challenging due to their ability to take over many devices and launch coordinated attacks. Through comparative analysis, the research gaps in existing frameworks have been identified. Based on these insights, a botnet detection and incident response framework aligned with SOC practices has been proposed. This proposed framework emphasizes proactive measures by integrating threat intelligence, detection, and monitoring tools to detect botnet attacks and facilitate rapid response. Future research will focus on conducting evaluation and validation studies to assess the effectiveness and performance of the framework in controlled environments. This effort will contribute to developing the framework and ensuring it aligns with practical cybersecurity needs.
The proactive Security Operations Center (SOC) is a state-of-the-art approach to critical-infrastructure cyber defense, securing against cyberattacks and cyberwarfare. Cybersecurity is an evolving domain in which advanced Artificial Intelligence (AI) techniques can block malware attacks, such as ransomware that exploits zero-day vulnerabilities. This paper emphasizes the development and deployment of a comprehensive cybersecurity solution using honeynet technology and SOC procedures in the Azure cloud environment. The primary objective is to enhance cybersecurity by detecting, analyzing, and mitigating cyber threats in a real-time setup. The study includes designing a network architecture, deploying honeypots, integrating security monitoring tools, and setting up a SOC to respond to security incidents. The methodology involves a detailed plan for setting up the honeynet, configuring the necessary tools, and establishing procedures for monitoring and response. The implementation phase covers the technical aspects, including the configuration of honeypots and the integration of monitoring tools. The results demonstrate the system's capability to capture various types of attacks and provide effective responses using geo-map alerts. In this study, the highest number of attacks originated from Australia (Brisbane) after the honeynet platform was constructed. Finally, by analyzing attacker patterns, rules were set using the Kusto Query Language (KQL) to secure the cyber infrastructure against attacks. Limitations include deployment and configuration challenges: the complex setup and scalability issues of handling web servers, IoT devices, and databases can be resource-intensive and expensive, while dependency on cloud infrastructure, latency issues that may hinder real-time threat analysis, and reliance on Azure-specific features constrain the approach.
Data triage is a fundamental stage of cyber defense analysis for achieving cyber situational awareness in a Security Operations Center (SOC). It places high demands on cyber security analysts' information-processing capabilities and cyber defense expertise. However, the present situation is that most novice analysts who are responsible for performing data triage tasks suffer a great deal from the complexity and intensity of their tasks. To fill the gap, we propose to provide novice analysts with on-the-job suggestions by presenting the relevant data triage operations conducted by senior analysts in previous tasks. In a previous study, a tracing method was developed to track an analyst's data triage operations. This paper mainly presents a data triage operation retrieval system that (1) models the context of a data triage analytic process, (2) uses a centroid similarity matching method to compare contexts, and (3) presents the matched traces to the novice analysts as suggestions. We have implemented and evaluated the performance of the system through both automated testing and human evaluation. The results show that the proposed retrieval system can effectively identify the relevant traces based on an analyst's current analytic process.
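The centroid similarity matching described above can be sketched directly: each triage trace is a set of operation vectors, its context is their centroid, and traces are matched by cosine similarity of centroids. The 3-d operation encodings below are toy values, not the paper's actual representation:

```python
# Centroid similarity matching over triage traces. A trace is a list of
# operation vectors (toy 3-d encodings here); its context is the centroid.
import numpy as np

def centroid(trace):
    return np.mean(np.asarray(trace, dtype=float), axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def best_match(current_trace, senior_traces):
    # Return the index of the senior trace whose context is closest.
    c = centroid(current_trace)
    sims = [cosine(c, centroid(t)) for t in senior_traces]
    return int(np.argmax(sims))

novice = [[1, 0, 0], [1, 1, 0]]              # e.g. filter + search ops
seniors = [
    [[0, 0, 1], [0, 1, 1]],                  # mostly pivoting ops
    [[1, 0, 0], [1, 1, 0], [1, 0, 1]],       # filter-heavy, like novice
]
idx = best_match(novice, seniors)
```

The matched senior trace would then be shown to the novice as an on-the-job suggestion of which triage operations to try next.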
Security Operations Center (SoC) analysts gather threat reports from openly accessible global threat repositories and tailor the information to their organization's needs, such as developing threat intelligence and security policies. They also depend on organizational internal repositories, which act as private local knowledge database. These local knowledge databases store credible cyber intelligence, critical operational and infrastructure details. SoCs undertake a manual labor-intensive task of utilizing these global threat repositories and local knowledge databases to create both organization-specific threat intelligence and mitigation policies. Recently, Large Language Models (LLMs) have shown the capability to process diverse knowledge sources efficiently. We leverage this ability to automate this organization-specific threat intelligence generation. We present LocalIntel, a novel automated threat intelligence contextualization framework that retrieves zero-day vulnerability reports from the global threat repositories and uses its local knowledge database to determine implications and mitigation strategies to alert and assist the SoC analyst. LocalIntel comprises two key phases: knowledge retrieval and contextualization. Quantitative and qualitative assessment has shown effectiveness in generating up to 93% accurate organizational threat intelligence with 64% inter-rater agreement.
Security analysts who work in a Security Operations Center (SOC) play a major role in ensuring the security of their organization. The amount of background knowledge they have about evolving and new attacks makes a significant difference in their ability to detect attacks. Open source threat intelligence, such as text descriptions of cyber-attacks, can be stored in a structured fashion in a cybersecurity knowledge graph. A cybersecurity knowledge graph can be paramount in aiding a security analyst to detect cyber threats because it stores a vast range of cyber threat information in the form of semantic triples which can be queried. A semantic triple contains two cybersecurity entities with a relationship between them. In this work, we propose a system to create semantic triples over cybersecurity text, using deep learning approaches to extract possible relationships. We assert the set of semantic triples generated by our system into a cybersecurity knowledge graph. Security analysts can retrieve this data from the knowledge graph and use it to form a decision about a cyber-attack.
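A minimal sketch of the kind of triple store such a knowledge graph implies, with wildcard queries over (entity, relation, entity) triples; the API is illustrative, not the paper's implementation:

```python
class TripleStore:
    """Toy cybersecurity knowledge graph: stores semantic triples
    and answers pattern queries."""
    def __init__(self):
        self.triples = set()

    def assert_triple(self, subj, rel, obj):
        """Assert one (entity, relation, entity) triple."""
        self.triples.add((subj, rel, obj))

    def query(self, subj=None, rel=None, obj=None):
        """Match triples against a pattern; None acts as a wildcard."""
        return [t for t in self.triples
                if (subj is None or t[0] == subj)
                and (rel is None or t[1] == rel)
                and (obj is None or t[2] == obj)]
```

An analyst-facing tool would answer questions like "what does Emotet drop?" by querying with the relation fixed and the object left as a wildcard.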
Today's cyber defenders are overwhelmed by a deluge of security alerts, threat intelligence signals, and shifting business context, creating an urgent need for AI systems to enhance operational security work. While Large Language Models (LLMs) have the potential to automate and scale Security Operations Center (SOC) operations, existing evaluations do not fully assess the scenarios most relevant to real-world defenders. This lack of informed evaluation impacts both AI developers and those applying LLMs to SOC automation. Without clear insight into LLM performance in real-world security scenarios, developers lack a north star for development, and users cannot reliably select the most effective models. Meanwhile, malicious actors are using AI to scale cyber attacks, highlighting the need for open source benchmarks to drive adoption and community-driven improvement among defenders and model developers. To address this, we introduce CyberSOCEval, a new suite of open source benchmarks within CyberSecEval 4. CyberSOCEval includes benchmarks tailored to evaluate LLMs in two tasks: Malware Analysis and Threat Intelligence Reasoning--core defensive domains with inadequate coverage in current benchmarks. Our evaluations show that larger, more modern LLMs tend to perform better, confirming the training scaling laws paradigm. We also find that reasoning models leveraging test time scaling do not achieve the same boost as in coding and math, suggesting these models have not been trained to reason about cybersecurity analysis, and pointing to a key opportunity for improvement. Finally, current LLMs are far from saturating our evaluations, showing that CyberSOCEval presents a significant challenge for AI developers to improve cyber defense capabilities.
No abstract available
Machine learning is becoming a key component to automatically detect malware-infected hosts by analyzing network logs in a security operations center (SOC). However, machine learning usually requires a large amount of labeled training data, which is difficult to acquire since labels are manually set by professional security analysts. On the other hand, abundant unanalyzed logs are kept stored in daily operation and stay unlabeled even though they could compensate for the lack of existing labeled training data. This paper proposes SILU, a novel semi-supervised learning method, which fully leverages unlabeled data and enhances detection capability without increasing manually labeled data. SILU learns from combined labeled and unlabeled training data to automatically augment the labeled training data and then generates a classifier through a screening process. Unlike most semi-supervised learning methods used in cyber security, which use test data as unlabeled training data, SILU does not require retraining every time the test data change, since it can use different datasets for unlabeled training and test data. This benefits SOC operations by keeping detection time low in practice. In addition, although SILU incorporates a supervised learning method, it does not depend on a specific one; SILU can therefore be added on to any type of supervised classifier. Moreover, SILU can suppress the deterioration of classification performance on test data through the screening process. We evaluated SILU using two types of real-world logs: proxy logs from a large enterprise and NetFlow from a large ISP. We demonstrated, by evaluating with different types of classifiers, that SILU always improves the detection capability of supervised learning methods. SILU also outperforms current semi-supervised methods.
As a whole, SILU works as an add-on to existing supervised learning methods with little overhead and performs better than conventional supervised learning methods. Our evaluation also shows that using NetFlow from ISP as unlabeled training data works better than using only labeled proxy logs in the same enterprise. These results suggest that SILU can extend detection capability more when different organizations, e.g., SOCs and ISPs, collaborate and share unlabeled data.
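A generic self-training loop with a confidence screen conveys the flavor of this approach; this is a simplified sketch with a toy nearest-centroid base learner, not SILU's actual algorithm:

```python
def nearest_centroid_fit(X, y):
    """Train a tiny nearest-centroid classifier (stand-in for any
    supervised base learner such a method could wrap)."""
    cents = {}
    for label in set(y):
        pts = [x for x, lab in zip(X, y) if lab == label]
        cents[label] = [sum(col) / len(pts) for col in zip(*pts)]
    return cents

def predict_with_margin(cents, x):
    """Predict a label and a distance margin used as a screening signal."""
    dists = sorted((sum((a - b) ** 2 for a, b in zip(c, x)) ** 0.5, lab)
                   for lab, c in cents.items())
    margin = dists[1][0] - dists[0][0]
    return dists[0][1], margin

def self_train(X_lab, y_lab, X_unlab, min_margin=0.5):
    """Pseudo-label unlabeled logs, screen out low-margin guesses,
    and retrain on the augmented set."""
    cents = nearest_centroid_fit(X_lab, y_lab)
    aug_X, aug_y = list(X_lab), list(y_lab)
    for x in X_unlab:
        label, margin = predict_with_margin(cents, x)
        if margin >= min_margin:          # keep only confident pseudo-labels
            aug_X.append(x)
            aug_y.append(label)
    return nearest_centroid_fit(aug_X, aug_y)
```

The screening threshold plays the role SILU's screening process does: ambiguous unlabeled samples are discarded rather than allowed to degrade the classifier.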
Recent self-propagating malware (SPM) campaigns compromised hundreds of thousands of victim machines on the Internet. It is challenging to detect these attacks in their early stages, as adversaries utilize common network services, use novel techniques, and can evade existing detection mechanisms. We propose PORTFILER (PORT-Level Network Traffic ProFILER), a new machine learning system applied to network traffic for detecting SPM attacks. PORTFILER extracts port-level features from the Zeek connection logs collected at the border of a monitored network, applies anomaly detection techniques to identify suspicious events, and ranks the alerts across ports for investigation by the Security Operations Center (SOC). We propose a novel ensemble methodology for aggregating individual models in PORTFILER that increases resilience against several evasion strategies compared to standard ML baselines. We extensively evaluate PORTFILER on traffic collected from two university networks, and show that it can detect SPM attacks with different patterns, such as WannaCry and Mirai, and performs well under evasion. Ranking across ports achieves precision over 0.94 and false positive rates below 8×10⁻⁴ in the top 100 highly ranked alerts. When deployed on the university networks, PORTFILER detected anomalous SPM-like activity on one of the campus networks, confirmed by the university SOC as malicious. PORTFILER also detected a Mirai attack recreated on the two university networks with higher precision and recall than deep-learning-based autoencoder methods.
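The port-level profiling idea can be illustrated with a toy z-score ranker over per-port connection counts; this is a drastic simplification of PORTFILER's features and models, and the field names are assumptions:

```python
def port_features(conn_log):
    """Aggregate Zeek-style connection records into per-port counts
    (a simplified stand-in for port-level feature extraction)."""
    counts = {}
    for rec in conn_log:
        counts[rec["dst_port"]] = counts.get(rec["dst_port"], 0) + 1
    return counts

def rank_anomalous_ports(window, baseline_windows):
    """Score each port by the z-score of its current connection count
    against its historical baseline, then rank for SOC triage."""
    scores = []
    for port, count in window.items():
        history = [w.get(port, 0) for w in baseline_windows]
        mean = sum(history) / len(history)
        var = sum((h - mean) ** 2 for h in history) / len(history)
        std = var ** 0.5 or 1.0           # avoid divide-by-zero
        scores.append((port, (count - mean) / std))
    return sorted(scores, key=lambda s: -s[1])
```

A self-propagating worm scanning SMB, for instance, would push port 445 far above its baseline and to the top of the ranked list.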
The Unmanned Aerial Vehicle (UAV) industry has experienced rapid growth and widespread adoption across various sectors. The increased affordability of UAVs has led to their extensive use, making them valuable assets for critical missions. However, the growing reliance on UAVs has also exposed them to various attacks. Multiple researchers have proposed frameworks based on different technologies to address these issues. Among these, AI-based frameworks and intrusion detection systems have emerged as prominent solutions for detecting UAV attacks. Given the existing vulnerabilities, it is crucial to implement additional security measures to protect UAV processes. In this regard, a responsive Security Operations Center (SOC) offers a suitable solution to bridge this gap. This paper explores the need for a SOC specifically tailored to UAVs, focusing on cyber awareness and control measures. We emphasize the importance of control mechanisms within a SOC to ensure the protection of UAVs. We propose a responsive security solution for UAVs by integrating a SOC within an AI-based framework. The detection and mitigation capabilities expected from combining a SOC with an AI-based IDS are promising, indicating the effectiveness of the chosen techniques for securing UAVs.
To counter recent diverse and sophisticated AI-based cyberattacks, more systematic security control policies are being applied. Attack types are becoming increasingly advanced, but malicious activities are still being detected using solutions such as security information and event management (SIEM). Many security experts perform control tasks from a security operations center (SOC). This paper describes the design and implementation of an AI-based monitoring support system in an SOC. The proposed system focuses on tickets, which are used in the primary process of performing security operations tasks in the SOC. They enable the AI to handle or assist with tasks directly according to the risk level of each ticket. As a result, a reduction in the workload of operations personnel is anticipated.
New opportunities in cloud computing have brought many new risks that require effective protection of dynamic distributed environments. Introducing a transformative new technology, generative AI, to cloud security has far-reaching benefits for automating threat detection, real-time incident response, and vulnerability management. This paper focuses on extending generative AI with cloud security tools such as AWS GuardDuty and Google Cloud Security Command Center, with the aim of enhancing accuracy and response efficiency. Through real applications such as SOAR systems, the study demonstrates how industry giants such as Netflix and JPMorgan Chase have used AI to minimize risk factors while increasing operational efficiency. The paper also discusses the significant improvement in response time, enhanced detection accuracy, and the shift to proactive security strategies brought by generative AI. Drawing attention to the opportunities of AI systems, the study examines the attendant issues connected with AI applications, including over-dependence on AI tools, adversarial risk to models, and the complex nature of decision-making in the context of AI systems. The study also highlights the importance of generative AI in strengthening the defense of the cloud environment while recognizing the significance of preventive efforts and planned action to manage these technologies efficiently.
As corporate strategic goals shift increasingly toward digitalization, new trends are emerging across all industries. To accomplish these goals, a corporation must protect its digital value chains against the accompanying threats. A Security Operations Center (SOC) can contribute effectively to this protection. AI can recommend SOC architecture models based on the corporation's inventories, its strategy, and the experience of its top management. Companies that make use of digital assistants may thus be able to achieve their strategic goals more effectively and cost-efficiently.
A Cloud Security Operations Center (SOC) enables cloud governance, risk, and compliance by providing insight, visibility, and control. A cloud SOC must triage high-volume, heterogeneous telemetry from elastic, short-lived resources while staying within tight budgets. In this research, we implement an AI-Augmented Security Operations Center (AISOC) on AWS that combines cloud-native instrumentation with ML-based detection. The architecture uses three Amazon EC2 instances: Attacker, Defender, and Monitoring. We simulate a reverse-shell intrusion with Metasploit, and Filebeat forwards Defender logs to an Elasticsearch and Kibana stack for analysis. We train two classifiers: a malware detector built on a public dataset and a log-anomaly detector trained on synthetically augmented logs that include adversarial variants. We calibrate and fuse the scores to produce multi-modal threat intelligence and triage activity into NORMAL, SUSPICIOUS, and HIGH_CONFIDENCE_ATTACK. On held-out tests the fusion achieves strong macro-F1 (up to 1.00) under controlled conditions, though performance will vary in noisier and more diverse environments. These results indicate that simple, calibrated fusion can enhance cloud SOC capabilities in constrained, cost-sensitive setups.
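The calibrated score fusion and three-level triage can be sketched as a weighted average with thresholds; the weights and cut-offs below are illustrative, not the values used in the paper:

```python
def fuse_and_triage(malware_score, log_anomaly_score,
                    w_malware=0.6, w_logs=0.4,
                    suspicious_at=0.4, attack_at=0.8):
    """Fuse two calibrated detector probabilities into one threat
    score and map it onto the three triage levels."""
    fused = w_malware * malware_score + w_logs * log_anomaly_score
    if fused >= attack_at:
        level = "HIGH_CONFIDENCE_ATTACK"
    elif fused >= suspicious_at:
        level = "SUSPICIOUS"
    else:
        level = "NORMAL"
    return fused, level
```

Because the inputs are calibrated probabilities, the thresholds have a direct operational reading: how much fused evidence is needed before an analyst is paged.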
A SIEM system enabled by Artificial Intelligence (AI) was proposed to solve the major challenges of current security monitoring practices. The proposed architecture incorporates artificial intelligence at all levels of the security operations center to transform traditional security operations from passive detection to active protection. The system design applies machine learning for pattern recognition, contextual analysis, and alert prioritization to overcome the major shortcomings of traditional rule-based SIEM solutions that produce numerous alerts. The detection efficiencies for complicated attack patterns, including APT and zero-day attacks, were found to be significantly higher than those of conventional systems in the experimental analysis. Integration with high-performance computing provides real-time security data analysis without compromising performance, while the mean time to detection and response is significantly reduced. The effectiveness of the system in detecting threats early, classifying them correctly, and recommending response actions was demonstrated in multiple case studies involving various attacks. The architecture is a major improvement over current security monitoring technologies, providing more effective protection against ever-increasing threats with less analyst workload through contextualized and automated alerts.
Security Operations Centers (SOCs) face significant challenges due to the large volume, diversity, and dynamics of incident events. Alarm fatigue, delayed initiation of response, and the high share of false positives or missed threats limit team effectiveness and increase organizational risk. This study presents a methodology for automated management of key performance indicators (KPIs) in an SOC environment through an Agentic AI architecture and machine learning. Within the project, 214 CSV files were processed, comprising over 8.6 million data rows extracted from SIEM, Incident Management, Task Tracking, and CRM systems. Sixteen specific indicators were used, grouped into four categories: detection and filtering (TTD, FNR, FPR), response and resolution (TTR, IRR, SIHR), recovery and operations (MTTR, OE), and satisfaction and risk management (CSR, SIER). The system includes ten specialized Agentic AI agents with clearly defined roles: monitoring time parameters, predicting false alarm probabilities, automatically triggering playbooks, calculating operational metrics, and analyzing customer satisfaction. Five machine learning models were trained: two XGBoost classifiers for FPR and FNR, two LightGBM regressors for TTR and MTTR, and a BERT model for textual feedback analysis. The results demonstrate reduced detection and response times, a lower rate of false alarms, and improved operational predictability in calculating KPI values. The methodology shows the applicability of Agentic AI for optimizing SOC processes on real and public data, without the need for manual intervention in most processing phases.
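A few of these KPI computations are straightforward to express directly; the field names and the subset of metrics below are illustrative:

```python
def soc_kpis(incidents):
    """Compute a sample of SOC KPIs from incident rows: mean time to
    detect (TTD), mean time to resolve (TTR), and false-positive
    rate (FPR). Timestamps are numeric (e.g. epoch seconds)."""
    ttd = [i["detected_at"] - i["occurred_at"] for i in incidents]
    resolved = [i for i in incidents if "resolved_at" in i]
    ttr = [i["resolved_at"] - i["detected_at"] for i in resolved]
    fps = sum(1 for i in incidents if i["verdict"] == "false_positive")
    return {
        "TTD": sum(ttd) / len(ttd),
        "TTR": sum(ttr) / len(ttr) if ttr else None,
        "FPR": fps / len(incidents),
    }
```

Agents monitoring these values could then trigger playbooks when a KPI crosses a configured bound.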
The increasing volume, velocity, and sophistication of cyber threats have placed immense pressure on modern Security Operations Centers (SOCs). Traditional rule-based and manual processes are proving insufficient, leading to alert fatigue, delayed responses, high false-positive rates, analyst dependency, and escalating operational costs. Recent advancements in Artificial Intelligence (AI) offer new opportunities to transform SOC workflows through automation and augmentation. Large Language Models (LLMs) and autonomous AI agents have shown strong potential in enhancing capabilities such as log summarization, alert triage, threat intelligence, incident response, report generation, asset discovery, and vulnerability management. This paper reviews recent developments in the application of LLMs and AI agents across these SOC functions, introducing a taxonomy that organizes their roles and capabilities within operational pipelines. While these technologies improve detection accuracy, response time, and analyst support, challenges persist, including model interpretability, adversarial robustness, integration with legacy systems, and the risk of hallucinations or data leakage. A detailed capability-maturity model outlines the levels of integration with SOC tasks. This survey synthesizes trends, identifies persistent limitations, and outlines future directions for trustworthy, explainable, and safe AI integration in SOC environments.
Cybersecurity operations are increasingly adopting agentic AI solutions due to the time-critical and complex decision-making in security operations centers (SOCs). While large language models (LLMs) are effective at summarization tasks and at interpreting structured and unstructured reports, real-world SOC workflows have additional requirements, such as access to original logs, reproducibility, and accountability, when triaging security incidents. For example, analysts routinely correlate alerts to understand the kill-chain of a cyber-attack and analyze event telemetry to identify the root-cause event, which may not have triggered an alert. Incorrect and incomplete automation in such settings can directly impact production systems and business operations. In this survey, we examine the architectural shifts from single-model assistants to tool-augmented agents, distributed multi-agent systems, and schema-constrained investigation pipelines. We introduce a five-generation taxonomy that represents the evolution of agentic AI systems and their limitations and risks across parameters such as reasoning depth, tool interaction, memory, reproducibility, and safety. We also review the emerging benchmarks for evaluating cyber-oriented agents and identify open challenges, including response validation, tool-use correctness, multi-agent coordination, long-horizon reasoning, and safeguards for high-impact actions. Finally, we discuss how these challenges influence deployment decisions in operational SOC environments. Our analysis provides a structured perspective on the current state of agentic AI in cybersecurity and highlights the technical and governance considerations necessary for its deployment.
Information and communication technology (ICT) has become a major global driver, but it also exposes organizations to frequent cyber threats, making asset protection increasingly difficult. Cyber threat intelligence (CTI) is essential for improving cybersecurity, especially when integrated into a security operations center (SOC) for real-time threat monitoring and analysis. This study proposes a real-time CTI framework within a SOC environment, hosted on Linode, which integrates the Malware Information Sharing Platform (MISP) and a Security Information and Event Management (SIEM) system to collect indicators of compromise (IoCs). The framework uses machine learning to detect fraud in mobile money transactions such as cash-in, cash-out, debit, payment, and transfer. Fraudulent activity often involves the use of stolen identity information for unauthorized transactions. The system generates detailed alert reports and provides predictive insights into potential threats, helping organizations strengthen user trust and protect their reputation. Experimental results using financial datasets show high performance: logistic regression achieved 98.83% accuracy, while the random forest model reached a test accuracy of 95.86% and cross-validation accuracy of 95.76%. The F1-score was 0.9586, and the ROC-AUC score was 0.9923, indicating strong classification capability.
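The logistic-regression scoring used for fraud detection has this general shape; the feature names and weights below are illustrative, not the coefficients learned in the study:

```python
import math

def fraud_probability(tx, weights, bias):
    """Logistic-regression-style fraud score for a mobile-money
    transaction: sigmoid of a weighted feature sum."""
    z = bias + sum(weights[k] * tx.get(k, 0.0) for k in weights)
    return 1.0 / (1.0 + math.exp(-z))
```

In a SOC deployment, transactions scoring above a tuned threshold would raise a SIEM alert enriched with the IoCs collected via MISP.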
Analysts in Security Operations Centers (SOCs) are often occupied with time-consuming investigations of alerts from Network Intrusion Detection Systems (NIDSs). Many NIDS rules lack clear explanations and associations with attack techniques, complicating the alert triage and the generation of attack hypotheses. Large Language Models (LLMs) may be a promising technology to reduce the alert explainability gap by associating rules with attack techniques. In this paper, we investigate the ability of three prominent LLMs (ChatGPT, Claude, and Gemini) to reason about NIDS rules while labeling them with MITRE ATT&CK tactics and techniques. We discuss prompt design and present experiments performed with 973 Snort rules. Our results indicate that while LLMs provide explainable, scalable, and efficient initial mappings, traditional machine learning (ML) models consistently outperform them in accuracy, achieving higher precision, recall, and F1-scores. These results highlight the potential for hybrid LLM-ML approaches to enhance SOC operations and better address the evolving threat landscape. By utilizing automation, the presented methods will enhance the analysis efficiency of SOC alerts, and decrease workloads for analysts.
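A prompt-construction and answer-parsing pair for this labeling task might look like the following sketch; the prompt wording and the constrained output format are assumptions, not the paper's exact prompts:

```python
def attack_mapping_prompt(rule_text):
    """Build a prompt asking an LLM to label a NIDS rule with
    MITRE ATT&CK tactics and techniques."""
    return (
        "You are a SOC analyst. Map the following Snort rule to MITRE "
        "ATT&CK.\n"
        "Respond with exactly two lines:\n"
        "Tactic: <tactic name>\n"
        "Technique: <technique ID and name>\n\n"
        "Rule:\n" + rule_text
    )

def parse_mapping(llm_reply):
    """Parse the constrained two-line reply into a dict."""
    out = {}
    for line in llm_reply.splitlines():
        if ":" in line:
            key, _, val = line.partition(":")
            out[key.strip().lower()] = val.strip()
    return out
```

Constraining the reply format is what makes the LLM's labels comparable, at scale, against ML-model baselines and ground-truth mappings.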
The growing sophistication of cyber threats and the exponential rise in alert volumes have exposed the limitations of traditional Security Operations Centers (SOCs), leading to analyst fatigue, high turnover, and inefficiencies in incident response. Conventional SOAR platforms struggle to address these issues due to their rigid rule-based logic and insufficient contextual awareness. Although large language model (LLM)-based solutions have shown potential, they often lack consistency in reasoning, effective tool orchestration, factual accuracy, and adaptability to emerging threats. In this work, we present an autonomous SOC agent that integrates the ReAct (Reasoning and Acting) framework with detection engineering principles to overcome these challenges. By embedding structured investigation logic and enriched alert metadata directly into the analysis workflow, our approach delivers domain-specific context to support accurate tool invocation and actionable remediation guidance. This integration fosters transparency and reliability throughout the alert lifecycle. Empirical evaluations demonstrate that our solution significantly enhances alert triage and incident response, offering a scalable path toward more resilient, AI-driven SOC operations.
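The ReAct pattern at the core of such an agent reduces to a thought-action-observation loop; in this sketch, `policy` and `tools` are stand-ins for the LLM and the SOC tool integrations, and the interfaces are assumptions:

```python
def react_investigate(alert, policy, tools, max_steps=5):
    """Minimal ReAct-style loop: the policy proposes a thought and an
    action; the matching tool runs and its observation feeds the
    next step, until the policy emits a 'finish' action."""
    transcript = []
    observation = alert
    for _ in range(max_steps):
        thought, action, arg = policy(observation, transcript)
        transcript.append(("thought", thought))
        if action == "finish":
            transcript.append(("answer", arg))
            break
        observation = tools[action](arg)
        transcript.append(("observation", observation))
    return transcript
```

Keeping the full transcript is what provides the transparency and accountability across the alert lifecycle that the paper emphasizes.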
The rising frequency and sophistication of cyberattacks have made real-time malicious IP detection a critical challenge for the modern Security Operations Center (SOC). Traditional solutions, such as static blacklists and manual IP reputation checks, are no longer sufficient in today's dynamic threat landscape. To overcome these constraints, we present IP SafeGuard, an AI-driven platform that incorporates multi-source threat intelligence, sophisticated feature engineering, and machine learning (ML) for real-time IP categorization. The framework collects data from AbuseIPDB, VirusTotal, and other sources to compute a Dynamic Threat Score (DTS) for each IP address. It leverages an XGBoost-based classification model to achieve high accuracy and low false-positive rates, even on skewed datasets. Experimental findings indicate the improved performance of IP SafeGuard, with an accuracy of 98.2%, a precision of 97.8%, and a recall of 98.5%. The average detection duration of 45 milliseconds makes it appropriate for real-time SOC integration, enabling automated incident response through Security Information and Event Management (SIEM) alerting and firewall blocking. The framework's modular design ensures scalability and adaptability, making it a vital tool for high-volume situations. By overcoming the limitations of older approaches and harnessing the power of ML, IP SafeGuard considerably boosts the efficiency and efficacy of current cybersecurity systems. Future work involves expanding the system to support new threat intelligence sources and studying federated learning for secure and privacy-preserving threat information exchange.
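The Dynamic Threat Score aggregation could be sketched as a weighted combination of per-source reputation scores; the score normalization to [0, 1] and the weighting scheme here are assumptions, not the paper's formula:

```python
def dynamic_threat_score(reports, weights=None):
    """Combine per-source reputation scores (each in [0, 1]) for an
    IP into one Dynamic Threat Score, normalizing by the weights of
    the sources that actually reported."""
    weights = weights or {"abuseipdb": 0.5, "virustotal": 0.5}
    total_w = sum(weights[s] for s in reports if s in weights)
    if total_w == 0:
        return 0.0
    return sum(weights[s] * score for s, score in reports.items()
               if s in weights) / total_w
```

The resulting score would feed the classifier as a feature and drive SIEM alerting or firewall-block automation above a threshold.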
Security Operations Centers (SOCs) face growing challenges in managing cybersecurity threats due to an overwhelming volume of alerts, a shortage of skilled analysts, and poorly integrated tools. Human-AI collaboration offers a promising path to augment the capabilities of SOC analysts while reducing their cognitive overload. To this end, we introduce an AI-driven human-machine co-teaming paradigm that leverages large language models (LLMs) to enhance threat intelligence, alert triage, and incident response workflows. We present a vision in which LLM-based AI agents learn from human analysts the tacit knowledge embedded in SOC operations, enabling the AI agents to improve their performance on SOC tasks through this co-teaming. We invite SOCs to collaborate with us to further develop this process and uncover replicable patterns where human-AI co-teaming yields measurable improvements in SOC productivity.
This technical article examines the growing implementation of artificial intelligence in cybersecurity operations, specifically focusing on threat intelligence platforms. Through empirical analysis and industry data, it demonstrates that organizations deploying AI-driven threat intelligence solutions experience significantly improved detection and response metrics compared to traditional Security Operations Center (SOC) models. It validates that AI integration leads to faster threat detection, more accurate classification, and reduced mean time to repair across various security incidents. The article explores the technical underpinnings of these systems, including machine learning models, behavioral analytics, and automated response frameworks, while also addressing implementation challenges and best practices. The article's findings provide compelling evidence that AI-driven approaches represent not merely an enhancement to existing security operations but a fundamental transformation in how organizations detect, analyze, and respond to sophisticated cybersecurity threats. It concludes by examining emerging technologies, such as federated learning, explainable AI, adversarial learning, and autonomous response capabilities, that will shape the future evolution of AI-driven threat intelligence.
This research introduces a Hybrid Intrusion Detection System (HIDS) that merges signature-based detection with AI-powered anomaly detection to enhance the accuracy and effectiveness of identifying cyber threats. The proposed HIDS demonstrates an ability to detect uncommon and sophisticated cyber threats with an accuracy rate of 90.37%. By combining Gradient Boosting and K-Nearest Neighbors (KNN) algorithms, the system improves detection precision, speeds up response times, and expands coverage across network traffic. This comprehensive approach overcomes the limitations of traditional methods by enabling timely threat responses while reducing false-positive rates. The study highlights the potential of integrating signature-based and AI-driven techniques to strengthen cybersecurity defences, emphasizing the benefits of this approach. When the system detects a potential threat, alerts are sent to the Security Operations Center (SOC) or Network Operations Center (NOC) with details such as the nature of the threat and the affected system. This study establishes a foundation for real-time cyber threat detection and intelligence sharing in cloud environments, with future Hybrid IDS versions operating across multiple hosts and supported by a web service for cross-platform compatibility and a centralized alert system.
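The hybrid decision logic, signature match first with the anomaly score as a fallback, can be sketched as follows; the thresholds, field names, and alert shape are illustrative:

```python
def hybrid_detect(event, signatures, anomaly_score, threshold=0.7):
    """Hybrid IDS decision (sketch): fire on a signature match OR a
    high anomaly score, and record which path fired so the SOC/NOC
    alert can explain itself."""
    sig_hits = [name for name, pattern in signatures.items()
                if pattern in event["payload"]]
    if sig_hits:
        return {"alert": True, "reason": "signature", "matched": sig_hits}
    if anomaly_score >= threshold:
        return {"alert": True, "reason": "anomaly", "score": anomaly_score}
    return {"alert": False}
```

Recording the firing path ("signature" vs "anomaly") is what lets the alert carry the nature-of-threat detail the paper describes.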
The accelerated digital transformation of the banking sector has enhanced the delivery of financial services but simultaneously expanded the cyberattack surface, exposing institutions to advanced persistent threats (APTs), zero-day exploits, and obfuscated malware. Conventional perimeter defenses, primarily Layer 3 and 4 firewalls and signature-based intrusion detection systems (IDS), offer insufficient protection against encrypted, evasive, and previously unknown cyberattacks, and frequently generate high false-positive rates that burden the Security Operations Center (SOC). This study proposes a multilayered adaptive cybersecurity framework that integrates a Layer 7 Next-Generation Firewall (NGFW), a hybrid Network and Host-based Intrusion Detection and Prevention System (NIDPS/HIDPS), and an AI-driven analysis engine. The framework employs a dual-stage detection architecture, combining a Convolutional Neural Network (CNN) for spatial representation learning and Random Forest (RF) classifiers for anomaly decisioning. The model was evaluated using a strategically consolidated dataset derived from CIC-IDS-2017 and UNSW-NB15, specifically isolating cyberattack vectors prevalent in financial infrastructures (e.g., SQL Injection, DDoS, and Brute Force). The model achieves 99.65% detection accuracy and a reduced false-positive rate of 0.35%, significantly outperforming classical SVM and standalone signature-based systems. The results demonstrate that the proposed architecture aligns with NIST and PCI-DSS standards and provides a defense-in-depth mechanism suitable for real-time, high-frequency financial environments.
AI-Driven Cybercrime Forensics has become increasingly important for predictive threat detection and investigative intelligence due to the rising complexity and volume of digital evidence. This quantitative study examined how AI-driven forensic capabilities predicted key investigative outcomes, including investigative efficiency, decision accuracy, and case documentation quality. Data were obtained from 210 valid respondents working in cybercrime-related roles. The sample included digital forensics analysts (24.8%), incident response specialists (21.0%), SOC analysts or threat hunters (18.1%), and cybersecurity managers (13.3%). Most respondents reported high familiarity with AI-based security tools (43.8%) and high exposure to multi-source forensic evidence (45.7%). Descriptive results showed high agreement for predictive threat detection effectiveness (M = 4.12, SD = 0.61) and investigative intelligence quality (M = 4.05, SD = 0.64). Workflow efficiency also scored strongly (M = 3.98, SD = 0.69). Explainability and trust calibration produced the lowest mean (M = 3.74, SD = 0.78), while evidence traceability and documentation integrity remained moderate-high (M = 3.81, SD = 0.73). Reliability analysis confirmed acceptable-to-strong internal consistency, with Cronbach's alpha values ranging from 0.78 to 0.89. Multiple regression results indicated that investigative intelligence quality was the strongest predictor of investigative efficiency (β = .41, t = 7.82, p < .001), decision accuracy (β = .34, t = 6.11, p < .001), and documentation quality (β = .29, t = 5.02, p < .001). Workflow efficiency significantly predicted investigative efficiency (β = .33, p = .001) and decision accuracy (β = .21, p = .018). The regression models explained 62% of the variance in investigative efficiency (R² = .62), 55% in decision accuracy (R² = .55), and 49% in documentation quality (R² = .49).
Overall, findings confirmed that AI-supported correlation and intelligence generation most strongly improved investigative outcomes.
Quantum Key Distribution (QKD) provides key exchange security grounded in quantum physics, but practical deployments still face constraints in scalability, noise tolerance, real-time operations, and implementation security. In parallel, machine learning (ML) has proven effective for adaptive control, prediction, and anomaly detection in complex cyber-physical systems. This paper reviews how AI/ML can be integrated into QKD to improve robustness and operational security without weakening core cryptographic guarantees. We present an AI-supported QKD architecture, summarize key performance metrics (QBER, secret key rate, and throughput), and outline how learning-based control can tune parameters dynamically under varying channel conditions. We also discuss security-operations integration (SOC visibility, alerting, and key lifecycle telemetry) and provide a reinforcement learning (RL) case study to illustrate adaptive parameter selection. Finally, we analyze threats introduced by AI and identify practical design constraints (fail-safe behavior, policy bounds, and verification) required to preserve quantum-security principles while enabling quantum-safe integration with post-quantum cryptography workflows.
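Learning-based parameter tuning of the kind described can be illustrated with an epsilon-greedy bandit over candidate settings, where the reward would be an observed metric such as secret key rate; this is a simplification of the paper's RL case study, and the class interface is an assumption:

```python
import random

class ParameterBandit:
    """Epsilon-greedy selection over candidate QKD settings (e.g.
    decoy-state intensity levels). Rewards are observed metrics
    such as secret key rate, normalized to [0, 1]."""
    def __init__(self, settings, epsilon=0.1, rng=None):
        self.settings = list(settings)
        self.epsilon = epsilon
        self.rng = rng or random.Random(0)
        self.counts = {s: 0 for s in self.settings}
        self.values = {s: 0.0 for s in self.settings}

    def choose(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.settings)              # explore
        return max(self.settings, key=lambda s: self.values[s])  # exploit

    def update(self, setting, reward):
        """Incremental running mean of observed reward per setting."""
        self.counts[setting] += 1
        n = self.counts[setting]
        self.values[setting] += (reward - self.values[setting]) / n
```

The fail-safe constraints the paper calls for would map to restricting `settings` to a pre-verified, policy-bounded set.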
Given the complexity of multi-tenant cloud environments and the growing need for real-time threat mitigation, Security Operations Centers (SOCs) must adopt AI-driven adaptive defense mechanisms to counter Advanced Persistent Threats (APTs). However, SOC analysts face challenges in handling adaptive adversarial tactics, requiring intelligent decision-support frameworks. We propose a Cognitive Hierarchy Theory-driven Deep Q-Network (CHT-DQN) framework that models interactive decision-making between SOC analysts and AI-driven APT bots. The SOC analyst (defender) operates at cognitive level-1, anticipating attacker strategies, while the APT bot (attacker) follows a level-0 policy. By incorporating CHT into DQN, our framework enhances adaptive SOC defense using Attack Graph (AG)-based reinforcement learning. Simulation experiments across varying AG complexities show that CHT-DQN consistently achieves higher data protection and lower action discrepancies compared to standard DQN. A theoretical lower bound further confirms its superiority as AG complexity increases. A human-in-the-loop (HITL) evaluation on Amazon Mechanical Turk (MTurk) reveals that SOC analysts using CHT-DQN-derived transition probabilities align more closely with adaptive attackers, leading to better defense outcomes. Moreover, human behavior aligns with Prospect Theory (PT) and Cumulative Prospect Theory (CPT): participants are less likely to reselect failed actions and more likely to persist with successful ones. This asymmetry reflects amplified loss sensitivity and biased probability weighting -- underestimating gains after failure and overestimating continued success. Our findings highlight the potential of integrating cognitive models into deep reinforcement learning to improve real-time SOC decision-making for cloud security.
No abstract available
This paper explores a novel approach that leverages LLMs to generate a dataset of realistic, synchronised and interlinked IR Process activities, incidents, and IR team members communication data logs. In cybersecurity, public real-world data is scarce due to privacy concerns, and since traditional anonymization methods fail to fully protect sensitive information, we must explore alternatives like LLMs for generating synthetic datasets, especially as IR process log datasets are even more scarce. We explore data augmentation by adding missing fields and textual data to an expurgated IR process log public dataset using few-shot learning, where ChatGPT was conditioned on samples from both the publicly available enhanced base dataset and IR playbooks to allow the generation of the IR process activities and communication data. The enhanced base dataset consists of field attributes from both the refined existing dataset and new required fields from data sources of typical IR tools used by Security Operations Center (SOC) and Computer Security Incident Response Team (CSIRT). The generated data seeks to reflect the natural variation in individual IR process paths to further enrich the dataset with realistic and contextually accurate examples that align with real-world IR scenarios. Synthetic IR dataset generation holds significant potential as a source of valuable resources for IR researchers and practitioners for research, training, and testing of tools.
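The few-shot conditioning step described above amounts to packing base-dataset samples and playbook context into one prompt; the field names and playbook text in this sketch are invented, and a real pipeline would send the resulting string to the LLM.

```python
# Hypothetical prompt-assembly step for few-shot synthetic IR log
# generation. Field names, playbook text, and example values are invented.
def build_fewshot_prompt(examples, playbook_step, target_fields):
    lines = ["You generate synthetic incident-response (IR) process log entries."]
    lines.append(f"Playbook context: {playbook_step}")
    for i, ex in enumerate(examples, 1):          # render each few-shot sample
        rendered = ", ".join(f"{k}={v}" for k, v in ex.items())
        lines.append(f"Example {i}: {rendered}")
    lines.append("Generate one new entry with fields: " + ", ".join(target_fields))
    return "\n".join(lines)

prompt = build_fewshot_prompt(
    examples=[{"activity": "triage_alert", "actor": "SOC_L1", "duration_min": 12}],
    playbook_step="Phishing playbook, step 2: confirm malicious attachment",
    target_fields=["activity", "actor", "duration_min", "comms_note"],
)
print("Example 1" in prompt and "comms_note" in prompt)  # -> True
```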
The Industrial Control System (ICS) environment encompasses a wide range of intricate communication protocols, posing substantial challenges for Security Operations Center (SOC) analysts tasked with monitoring, interpreting, and addressing network activities and security incidents. Conventional monitoring tools and techniques often struggle to provide a clear understanding of the nature and intent of ICS-specific communications. To enhance comprehension, we propose a software solution powered by a Large Language Model (LLM). The solution, currently focused on the BACnet protocol, processes packet file data and extracts context using a mapping database and contemporary context-retrieval methods for Retrieval Augmented Generation (RAG). The processed packet information, combined with the extracted context, serves as input to the LLM, which generates a concise packet file summary for the user. The software delivers a clear, coherent, and easily understandable summary of network activities, enabling SOC analysts to better assess the current state of the control system.
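The context-retrieval step of such a RAG pipeline can be sketched with simple token-overlap scoring against a mapping database; the BACnet service descriptions below are paraphrased illustrations, and a production system would use embedding-based retrieval rather than Jaccard similarity.

```python
# Toy retrieval step for the described RAG pipeline (our simplification):
# pick the mapping-database entry most similar to a parsed packet summary
# by token overlap. KB entries and the packet summary are invented.
def tokens(text):
    return set(text.lower().split())

def retrieve_context(packet_summary, knowledge_base):
    q = tokens(packet_summary)
    # Jaccard similarity between the packet summary and each KB entry
    score = lambda doc: len(q & tokens(doc)) / len(q | tokens(doc))
    return max(knowledge_base, key=score)

kb = [
    "WriteProperty changes a property value on a remote BACnet object",
    "WhoIs broadcasts a discovery request for BACnet devices",
]
pkt = "packet: WhoIs broadcast discovery request from 10.0.0.5"
print(retrieve_context(pkt, kb))  # the WhoIs entry wins on overlap
```

The retrieved description would then be concatenated with the decoded packet fields and handed to the LLM as grounding context.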
No abstract available
The evolving landscape of cybersecurity threats demands the modernization of Security Operations Centers (SOCs) to enhance threat detection, response, and mitigation. Security Orchestration, Automation, and Response (SOAR) platforms play a crucial role in addressing operational inefficiencies; however, traditional no-code SOAR solutions face significant limitations, including restricted flexibility, scalability challenges, inadequate support for advanced logic, and difficulties in managing large playbooks. These constraints hinder effective automation, reduce adaptability, and underutilize analysts’ technical expertise, underscoring the need for more sophisticated solutions. To address these challenges, we propose a hyper-automation SOAR platform powered by agentic-LLM, leveraging Large Language Models (LLMs) to optimize automation workflows. This approach shifts from rigid no-code playbooks to AI-generated code, providing a more flexible and scalable alternative while reducing operational complexity. Additionally, we introduce the IVAM framework, comprising three critical stages: (1) Investigation, structuring incident response into actionable steps based on tailored recommendations, (2) Validation, ensuring the accuracy and effectiveness of executed actions, (3) Active Monitoring, providing continuous oversight. By integrating AI-driven automation with the IVAM framework, our solution enhances investigation quality, improves response accuracy, and increases SOC efficiency in addressing modern cybersecurity threats.
A Security Operations Center (SOC) is a central technical unit responsible for monitoring, analyzing, assessing, and defending an organization’s security posture on an ongoing basis. The SOC staff works closely with incident response teams, security analysts, network engineers and organization managers, using sophisticated data processing technologies such as security analytics, threat intelligence, and asset criticality to ensure security issues are detected, analyzed and finally addressed quickly. These techniques are part of a reactive security strategy because they rely on the human factor, experience and the judgment of security experts, using supplementary technology to evaluate the risk impact and minimize the attack surface. This study proposes an active security strategy that adopts a vigorous method combining ingenuity, data analysis, processing and decision-making support to face various cyber hazards. Specifically, the paper introduces a novel intelligence-driven cognitive computing SOC that is based exclusively on progressive, fully automatic procedures. The proposed λ-Architecture Network Flow Forensics Framework (λ-NF3) is an efficient cybersecurity defense framework against adversarial attacks. It implements the Lambda machine learning architecture, which can analyze a mixture of batch and streaming data, using two accurate novel computational intelligence algorithms. Specifically, it uses an Extreme Learning Machine neural network with Gaussian Radial Basis Function kernel (ELM/GRBFk) for batch data analysis and a Self-Adjusting Memory k-Nearest Neighbors classifier (SAM/k-NN) to examine patterns from real-time streams. It is a big-data forensics tool that can enhance the automated defense strategies of SOCs to effectively respond to the threats their environments face.
A Security Operations Center (SOC) can be defined as an organized and highly skilled team that uses advanced computer forensics tools to prevent, detect and respond to cybersecurity incidents of an organization. The fundamental aspects of an effective SOC relate to the ability to examine and analyze the vast number of data flows and to correlate several other types of events from a cybersecurity perspective. The supervision and categorization of network flow is an essential process not only for the scheduling, management, and regulation of the network’s services, but also for attack identification and for the consequent forensic investigations. A serious drawback of the traditional software solutions used today for computer network monitoring, and specifically for effective categorization of encrypted or obfuscated network flows (which requires rebuilding message packets in sophisticated underlying protocols), is their heavy demand for computational resources. In addition, these software packages produce high false-positive rates because they lack accurate prediction mechanisms. For all the reasons above, in most cases, traditional software fails completely to recognize unidentified vulnerabilities and zero-day exploitations. This paper proposes a novel intelligence-driven Network Flow Forensics Framework (NF3), with low utilization of computing power and resources, for the Next Generation Cognitive Computing SOC (NGC2SOC), which relies solely on advanced, fully automated intelligence methods. It is an effective and accurate ensemble machine learning forensics tool for network traffic analysis, demystification of malware traffic, and encrypted traffic identification.
The integration of Large Language Models (LLMs) into Security Operations Centres (SOCs) presents a transformative, yet still evolving, opportunity to reduce analyst workload through human-AI collaboration. However, their real-world application in SOCs remains underexplored. To address this gap, we present a longitudinal study of 3,090 analyst queries from 45 SOC analysts over 10 months. Our analysis reveals that analysts use LLMs as on-demand aids for sensemaking and context-building, rather than for making high-stakes determinations, preserving analyst decision authority. The majority of queries are related to interpreting low-level telemetry (e.g., commands) and refining technical communication through short (1-3 turn) interactions. Notably, 93% of queries align with established cybersecurity competencies (NICE Framework), underscoring the relevance of LLM use for SOC-related tasks. Despite variations in tasks and engagement, usage trends indicate a shift from occasional exploration to routine integration, with growing adoption and sustained use among a subset of analysts. We find that LLMs function as flexible, on-demand cognitive aids that augment, rather than replace, SOC expertise. Our study provides actionable guidance for designing context-aware, human-centred AI assistance in security operations, highlighting the need for further in-the-wild research on real-world analyst-LLM collaboration, challenges, and impacts.
Security Operations Centers face massive, heterogeneous alert streams under minute-level service windows, creating the Alert Triage Latency Paradox: verbose reasoning chains ensure accuracy and compliance but incur prohibitive latency and token costs, while minimal chains sacrifice transparency and auditability. Existing solutions fail: signature systems are brittle, anomaly methods lack actionability, and fully cloud-hosted LLMs raise latency, cost, and privacy concerns. We propose AIDR, a hybrid cloud-edge framework that addresses this trade-off through constrained information-density optimization. The core innovation is gradient-based compression of reasoning chains to retain only decision-critical steps--minimal evidence sufficient to justify predictions while respecting token and latency budgets. We demonstrate that this approach preserves decision-relevant information while minimizing complexity. We construct compact datasets by distilling alerts into 3-5 high-information bullets (68% token reduction), train domain-specialized experts via LoRA, and deploy a cloud-edge architecture: a cloud LLM routes alerts to on-premises experts generating SOAR-ready JSON. Experiments demonstrate AIDR achieves higher accuracy and 40.6% latency reduction versus Chain-of-Thought, with robustness to data corruption and out-of-distribution generalization, enabling auditable and efficient SOC triage with full data residency compliance.
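The compression idea can be illustrated with a much simpler stand-in than AIDR's gradient-based method: greedily keep the highest information-density evidence bullets that fit a token budget. The bullets and scores below are invented.

```python
# Toy version of information-density compression (NOT AIDR's gradient
# method): keep evidence bullets with the best score-per-token until the
# budget is exhausted. Bullets and info scores are invented.
def compress_chain(bullets, budget_tokens):
    """bullets: list of (text, info_score). Greedy selection by density."""
    ranked = sorted(bullets, key=lambda b: b[1] / len(b[0].split()), reverse=True)
    kept, used = [], 0
    for text, _ in ranked:
        cost = len(text.split())        # crude token count: whitespace words
        if used + cost <= budget_tokens:
            kept.append(text)
            used += cost
    return kept

bullets = [
    ("powershell spawned by winword.exe", 4.0),
    ("user opened an email attachment this morning", 1.0),
    ("outbound beacon to rare domain every 60s", 3.5),
]
kept = compress_chain(bullets, budget_tokens=12)
print(len(kept), kept[0])  # the two high-density bullets survive the budget
```

The paper's 3-5 bullet distillation plays the same role: keep only decision-critical steps so the downstream expert model stays within latency and token limits.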
When a network is attacked, cyber defenders need to precisely identify which systems (i.e., computers or devices) were compromised and what damage may have been inflicted. This process is sometimes referred to as cyber triage and is an important part of the incident response procedure. Cyber triage is challenging because the impacts of a network breach can be far-reaching with unpredictable consequences. This highlights the importance of automating this process. In this paper we propose AutoCRAT, a system for quantifying the breadth and severity of threats posed by a network exposure, and for prioritizing cyber triage activities during incident response. Specifically, AutoCRAT automatically reconstructs what we call alert trees, which track network security events emanating from, or leading to, a particular computer on the network. We validate the usefulness of AutoCRAT using a real-world dataset. Experimental results show that our prototype system can reconstruct alert trees efficiently and can facilitate data visualization in both incident response and threat intelligence analysis.
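The alert-tree reconstruction described above can be sketched as a breadth-first traversal over alert edges emanating from a compromised host; the hosts and alert names below are invented, and AutoCRAT's actual reconstruction is more involved.

```python
# Hypothetical alert-tree reconstruction in the spirit of AutoCRAT:
# follow alert edges (src -> dst) outward from a compromised host.
# Host names and alert labels are invented.
from collections import deque

def alert_tree(root, edges):
    """edges: list of (src_host, dst_host, alert). Returns (reach set, tree)."""
    children = {}
    for src, dst, alert in edges:
        children.setdefault(src, []).append((dst, alert))
    tree, seen, q = {}, {root}, deque([root])
    while q:                                   # BFS from the root host
        host = q.popleft()
        for dst, alert in children.get(host, []):
            tree.setdefault(host, []).append((dst, alert))
            if dst not in seen:
                seen.add(dst)
                q.append(dst)
    return seen, tree

edges = [
    ("ws-12", "srv-db", "lateral-movement"),
    ("srv-db", "srv-backup", "suspicious-copy"),
    ("ws-47", "ws-12", "phishing-beacon"),     # inbound edge, not reached
]
reached, tree = alert_tree("ws-12", edges)
print(sorted(reached))  # -> ['srv-backup', 'srv-db', 'ws-12']
```

Trees rooted the other way (edges leading *to* the host) would be built symmetrically to answer "how did the attacker get here".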
This article presents a structured framework for Human-AI collaboration in Security Operations Centers (SOCs), integrating AI autonomy, trust calibration, and Human-in-the-loop decision making. Existing frameworks in SOCs often focus narrowly on automation, lacking systematic structures to manage human oversight, trust calibration, and scalable autonomy with AI. Many assume static or binary autonomy settings, failing to account for the varied complexity, criticality, and risk across SOC tasks considering Humans and AI collaboration. To address these limitations, we propose a novel autonomy tiered framework grounded in five levels of AI autonomy from manual to fully autonomous, mapped to Human-in-the-Loop (HITL) roles and task-specific trust thresholds. This enables adaptive and explainable AI integration across core SOC functions, including monitoring, protection, threat detection, alert triage, and incident response. The proposed framework differentiates itself from previous research by creating formal connections between autonomy, trust, and HITL across various SOC levels, which allows for adaptive task distribution according to operational complexity and associated risks. The framework is exemplified through a simulated cyber range that features the cybersecurity AI-Avatar, a fine-tuned LLM-based SOC assistant. The AI-Avatar case study illustrates human-AI collaboration for SOC tasks, reducing alert fatigue, enhancing response coordination, and strategically calibrating trust. This research systematically presents both the theoretical and practical aspects and feasibility of designing next-generation cognitive SOCs that leverage AI not to replace but to enhance human decision-making.
In today’s fast-evolving threat landscape, ransomware attacks have become more sophisticated, faster, and more destructive, leaving traditional Security Operations Center (SOC) workflows struggling to keep pace. Manual processes like alert triage, incident scoping, and containment often consume critical hours, giving adversaries ample opportunity to encrypt data, exfiltrate assets, and demand ransoms. AI-optimized SOC playbooks are redefining this paradigm by automating the entire investigation lifecycle. Leveraging machine learning, LLMs, and real-time telemetry analysis, these systems rapidly identify high-fidelity threats, enrich alerts with contextual intelligence, and scope incidents with minimal analyst input, reducing response time from hours to mere minutes. Generative AI further accelerates this shift by auto-generating attack summaries, mapping indicators to known threat tactics, and recommending or initiating containment actions such as isolation or credential revocation. These playbooks evolve continuously by learning from analyst feedback and past events, improving both accuracy and efficiency over time. The result is a measurable reduction in mean-time-to-detect (MTTD) and mean-time-to-respond (MTTR), while empowering SOC analysts to focus on strategic analysis over repetitive triage. As ransomware campaigns grow faster and more autonomous, adopting AI-driven SOC playbooks has become a mission-critical step for organizations seeking proactive, resilient security operations.
Artificial intelligence is revolutionizing Digital Forensics and Incident Response (DFIR) by transforming detection, investigation, and remediation capabilities across the security operations lifecycle. Integrating machine learning, behavioral analytics, and automated workflows has created unprecedented opportunities to address cyber threats' growing volume and complexity while improving operational efficiency. Security teams facing an overwhelming deluge of alerts can now leverage AI to rapidly identify genuine threats, prioritize responses, and accelerate investigations. This comprehensive article explores the multifaceted applications of AI across the DFIR domain, from automated threat detection and alert triage to sophisticated forensic analysis and orchestrated response capabilities. The technical considerations for successful implementation include data pipeline development, algorithm selection, and integration with existing security infrastructure. Equally important are the safeguards and ethical considerations for responsible AI adoption, encompassing data integrity, model security, bias mitigation, and human oversight. A structured framework for AI-driven incident response is presented, highlighting the critical balance between automation and human expertise throughout the detection, investigation, remediation, and continuous improvement phases. As the cybersecurity landscape evolves, this transformative approach promises substantial improvements in security posture and operational efficiency when implemented with appropriate governance and technical rigor.
This study examines whether AI-driven cyber risk analytics improve Cloud Security Posture Management (CSPM) in enterprise systems and through which organizational mechanisms. We reviewed 47 prior studies to ground constructs and hypotheses, then executed a quantitative, cross-sectional, multi-case design across 220 cloud or security-team cases drawn from medium-to-large enterprises, with 512 survey responses synchronized to a ninety-day export of objective CSPM metrics. The problem addressed is persistent misconfiguration and alert overload in elastic, multi-tenant clouds that blunt security performance; the purpose is to quantify how analytics capability relates to measurable posture and to test the roles of triage efficiency and governed automation. Key variables include AI analytics capability, alert-triage efficiency, automation level, and CSPM outcomes such as misconfigurations per 100 resources, percent of critical findings remediated within policy windows, compliance score, mean time to detect, and mean time to remediate, with firm size, cloud tenure, provider mix, regulatory intensity, and account topology as controls. The analysis plan comprised reliability and validity checks, descriptive statistics, correlation matrices, hierarchical multiple regression with heteroskedasticity-robust inference, non-parametric bootstrapped mediation, and interaction-term moderation tests, plus robustness diagnostics. Headline findings show analytics capability is positively associated with stronger posture after controls, part of this relationship is mediated by improved alert triage, and the association is amplified at higher automation levels, indicating that explainable analytics plus governed automation yield the largest posture gains. 
Implications for practice are to invest in coverage-rich, explainable analytics, set explicit triage throughput objectives, and codify safe policy-as-code and auto-remediation so prioritized insights reliably become timely fixes; for scholarship, the work advances a capability-to-process-to-outcome model of CSPM conditioned by automation.
ABSTRACT: DevSecOps approaches accelerate software delivery, yet security incident response often lags behind in speed and integration. This study offers a ChatOps platform driven by artificial intelligence to automate and simplify security incident response inside DevSecOps processes. We show how artificial intelligence (AI) combined with ChatOps (chat-centric operations) allows fast detection, collaborative investigation, and automated remediation of security problems in real time. The method embeds machine learning-based threat detection into team chat systems (e.g., Slack, Microsoft Teams), enabling security bots to alert teams to incidents with enhanced context, help in triage, and carry out containment operations using chat commands. We assess the advantages of this integration, including quicker reaction times, better teamwork, less human error, and ongoing incident learning. A banking-industry case study with automated documentation shows notable response-speed gains (minutes rather than hours) and compliance improvement. We discuss issues such as tool integration, false positives, and confidence in automation, and recommend future improvements such as more sophisticated language models for incident management. The findings show that AI-driven ChatOps may turn incident response into a proactive, rapid procedure in line with DevSecOps agility.
Cloud-based intrusion detection systems, such as AWS GuardDuty, provide effective threat detection but often lack transparency, which undermines analyst trust and incident response capabilities. This paper evaluates the integration of XAI techniques, specifically SHAP and LIME, into AWS GuardDuty alert triage workflow to enhance interpretability. An XGBoost model trained on the CIC-IDS2017 dataset was used to generate explanations for security alerts presented to twelve security professionals, divided into two groups: control and XAI-supported. Participants classified four attack types while providing confidence ratings and justifications for their decisions. Results show that analysts with XAI access achieved higher classification accuracy, with three participants correctly classifying all alerts, compared to none in the control group. They also demonstrated increased confidence levels and more elaborate reasoning. However, participants encountered varying difficulty interpreting SHAP and LIME visualizations, with LIME proving more immediately actionable for alert-specific analysis while SHAP’s global insights presented a steeper learning curve. These findings highlight both the potential and practical challenges of deploying XAI in operational cloud security environments.
No abstract available
Network-based intrusion detection systems (NIDSes) tend to output massive alert logs to cover all suspicious communications that deviate from normal network traffic. Due to the tremendous volume of these alert logs, real-time incident response, or even keeping pace with the alerts, sometimes turns out to be impractical for security operators who have to genuinely investigate each alert to verify whether immediate remedial action is necessary. This problem, known as the threat-alert fatigue problem, causes many unexplored alerts and hence deteriorates the quality of service (QoS). In order to reduce the massive number of alerts, we propose an alert screening scheme that can triage alerts on the basis of their threat potential. We leverage the fully unsupervised nature of the adopted isolation forest method. Our proposed scheme does not require any prior labeling information and is thus suitable for most NIDSes deployed in enterprise environments. Moreover, by taking advantage of the temporal information in the alerts, we observe that each period (currently set to one day) has its distinct characteristics, which can be exploited to isolate anomalies. This study demonstrates the advantages of unsupervised learning in reducing vast threat alerts and lays the groundwork for battling the alert fatigue problem.
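The unsupervised screening idea can be sketched directly with scikit-learn's Isolation Forest: fit on one day's alert features without labels and surface only the isolated alerts. The two-dimensional features below are invented stand-ins for real alert attributes.

```python
# Toy sketch of unsupervised alert screening with an Isolation Forest
# (assumes scikit-learn; alert features are invented 2-D stand-ins).
from sklearn.ensemble import IsolationForest

day_alerts = [[5, 1], [6, 1], [5, 2], [6, 2], [5, 1], [6, 2],
              [5, 2], [6, 1], [5, 1], [95, 40]]   # last alert is the oddball
model = IsolationForest(n_estimators=100, contamination=0.1, random_state=0)
labels = model.fit_predict(day_alerts)            # -1 = anomalous, 1 = normal
escalate = [i for i, lab in enumerate(labels) if lab == -1]
print(escalate)  # -> [9]
```

Refitting per day mirrors the paper's observation that each one-day period has distinct characteristics worth modeling separately.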
Analysts in Security Operations Centers (SOCs) are often occupied with time-consuming investigations of alerts from Network Intrusion Detection Systems (NIDS). Many NIDS rules lack clear explanations and associations with attack techniques, complicating the alert triage and the generation of attack hypotheses. Large Language Models (LLMs) may be a promising technology to reduce the alert explainability gap by associating rules with attack techniques. In this paper, we investigate the ability of three prominent LLMs (ChatGPT, Claude, and Gemini) to reason about NIDS rules while labeling them with MITRE ATT&CK tactics and techniques. We discuss prompt design and present experiments performed with 973 Snort rules. Our results indicate that while LLMs provide explainable, scalable, and efficient initial mappings, traditional Machine Learning (ML) models consistently outperform them in accuracy, achieving higher precision, recall, and F1-scores. These results highlight the potential for hybrid LLM-ML approaches to enhance SOC operations and better address the evolving threat landscape.
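To make the input/output shape of the rule-labeling task concrete, here is a deliberately naive keyword-lookup baseline; it is neither the paper's LLM prompting nor its ML models, and the keyword-to-technique mapping is invented and minimal.

```python
# Hypothetical keyword baseline for mapping NIDS rule messages to MITRE
# ATT&CK techniques. The mapping table is invented and far from complete;
# the paper's LLM and ML approaches replace this lookup entirely.
KEYWORD_TO_TECHNIQUE = {
    "brute force": "T1110 Brute Force",
    "sql injection": "T1190 Exploit Public-Facing Application",
    "dns tunnel": "T1071.004 Application Layer Protocol: DNS",
}

def label_rule(rule_msg):
    """Return all matching techniques for a rule message, else 'unmapped'."""
    msg = rule_msg.lower()
    return [t for k, t in KEYWORD_TO_TECHNIQUE.items() if k in msg] or ["unmapped"]

print(label_rule('msg:"ET SCAN SSH brute force login attempt"'))  # -> ['T1110 Brute Force']
print(label_rule('msg:"ICMP ping sweep"'))                        # -> ['unmapped']
```

The "unmapped" bucket is exactly where LLM reasoning earns its keep: rules whose messages never contain an obvious keyword.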
Security Operations Centers (SOCs) are overwhelmed by tens of thousands of daily alerts, with only a small fraction corresponding to genuine attacks. This overload creates alert fatigue, leading to overlooked threats and analyst burnout. Classical detection pipelines are brittle and context-poor, while recent LLM-based approaches typically rely on a single model to interpret logs, retrieve context, and adjudicate alerts end-to-end -- an approach that struggles with noisy enterprise data and offers limited transparency. We propose CORTEX, a multi-agent LLM architecture for high-stakes alert triage in which specialized agents collaborate over real evidence: a behavior-analysis agent inspects activity sequences, evidence-gathering agents query external systems, and a reasoning agent synthesizes findings into an auditable decision. To support training and evaluation, we release a dataset of fine-grained SOC investigations from production environments, capturing step-by-step analyst actions and linked tool outputs. Across diverse enterprise scenarios, CORTEX substantially reduces false positives and improves investigation quality over state-of-the-art single-agent LLMs.
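The division of labor among CORTEX's agents can be sketched with plain functions standing in for LLM agents; the heuristics, host names, and asset database here are invented, and the point is only the structure: specialized findings flow into one auditable decision.

```python
# Deliberately tiny, hypothetical version of a multi-agent triage split:
# behavior analysis, evidence gathering, and reasoning as separate stages.
# Heuristics and data are invented; real agents would be LLM-driven.
def behavior_agent(alert):
    # Flag suspicious process chains (toy heuristic).
    return {"suspicious_chain": "powershell" in alert["process_tree"]}

def evidence_agent(alert, asset_db):
    # "Query external systems": here, a local asset-criticality lookup.
    return {"asset_tier": asset_db.get(alert["host"], "unknown")}

def reasoning_agent(alert, findings):
    # Synthesize findings into an auditable decision with its evidence.
    escalate = findings["suspicious_chain"] and findings["asset_tier"] == "critical"
    return {"verdict": "escalate" if escalate else "close", "evidence": findings}

alert = {"host": "pay-srv-01", "process_tree": "winword.exe -> powershell"}
assets = {"pay-srv-01": "critical"}
findings = {**behavior_agent(alert), **evidence_agent(alert, assets)}
print(reasoning_agent(alert, findings)["verdict"])  # -> escalate
```

Keeping the evidence attached to the verdict is what makes the decision auditable, in contrast to a single end-to-end model emitting only a label.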
Endpoint Detection & Response (EDR) products detect threats by pattern matching endpoint telemetry against behavioral rules that describe potentially malicious behavior. However, EDR can suffer from high false positives that distract from actual attacks, leading to an “alert fatigue” problem. While provenance-based alert triage techniques have shown promise, historical provenance analysis is prohibitively slow when applied to the stream-based event processing pipelines that dominate industry today; provenance-based systems may take over a minute to inspect a single alert, while individual EDR customers can face tens of millions of alerts per day. At present, these approaches cannot scale to production environments. We present Carbon Filter, an automated alert triage mechanism that reduces false alerts by upwards of 82% and is already in use by thousands of Carbon Black EDR customers today. Our key insight is that the vast majority of false alerts are triggered by programs that share a common initiation context, and thus the specific false alerts associated with an initiation context can be identified. However, rather than turning to costly provenance analysis, we hypothesize that it is sufficient to use the command line arguments of alert-triggering processes as the initiation context. Through prioritizing speed for similarity-preserving hashing, clustering, and search, we demonstrate that our approach scales to millions of alerts per hour (>5K/sec). In evaluations on customer alert data, we demonstrate that Carbon Filter can identify 82% of false alerts, nearly a 6-fold improvement in signal-to-noise ratio. Further, when comparing to provenance-based approaches, we show that Carbon Filter (AUC = 0.94) actually outperforms NoDoze (AUC = 0.60) and RapSheet (AUC = 0.90) while reducing analysis time by 5,064x and 26,723x, respectively.
"Alert fatigue" is one of the biggest challenges faced by the Security Operations Center (SOC) today, with analysts spending more than half of their time reviewing false alerts. Endpoint detection products raise alerts by pattern matching on event telemetry against behavioral rules that describe potentially malicious behavior, but can suffer from high false positives that distract from actual attacks. While alert triage techniques based on data provenance may show promise, these techniques can take over a minute to inspect a single alert, while EDR customers may face tens of millions of alerts per day; the current reality is that these approaches aren't nearly scalable enough for production environments. We present Carbon Filter, a statistical learning based system that dramatically reduces the number of alerts analysts need to manually review. Our approach is based on the observation that false alert triggers can be efficiently identified and separated from suspicious behaviors by examining the process initiation context (e.g., the command line) that launched the responsible process. Through the use of fast-search algorithms for training and inference, our approach scales to millions of alerts per day. Through batching queries to the model, we observe a theoretical maximum throughput of 20 million alerts per hour. Based on the analysis of tens of millions of alerts from customer deployments, our solution resulted in a 6-fold improvement in the signal-to-noise ratio without compromising alert triage performance.
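The initiation-context idea shared by the two Carbon Filter abstracts above can be sketched in a few lines: normalize and hash the command line, then suppress alerts whose context was previously triaged as benign. The normalization rule and example command lines are invented simplifications of the paper's similarity-preserving hashing.

```python
# Simplified sketch of initiation-context triage (NOT the paper's actual
# hashing scheme): normalize volatile tokens in the command line, hash it,
# and suppress alerts from contexts previously marked benign.
import hashlib, re

def context_key(cmdline):
    # Crude normalization: lowercase and collapse digit runs, a stand-in
    # for similarity-preserving hashing over command-line tokens.
    norm = re.sub(r"\d+", "N", cmdline.lower())
    return hashlib.sha256(norm.encode()).hexdigest()[:16]

# A context an analyst already triaged as a false-positive trigger.
benign_contexts = {context_key(r"backup.exe /job:7 /verbose")}

def triage(alert_cmdline):
    return "suppress" if context_key(alert_cmdline) in benign_contexts else "review"

print(triage(r"backup.exe /job:42 /verbose"))    # -> suppress (same context)
print(triage(r"powershell -enc aGVsbG8="))       # -> review
```

Because the key depends only on the normalized command line, the lookup is O(1) per alert, which is what lets this class of approach reach millions of alerts per hour where provenance graph traversal cannot.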
Enterprise networks are growing ever larger with a rapidly expanding attack surface, increasing the volume of security alerts generated from security controls. Security Operations Centre (SOC) analysts triage these alerts to identify malicious activity, but they struggle with alert fatigue due to the overwhelming number of benign alerts. Organisations are turning to managed SOC providers, where the problem is amplified by context switching and limited visibility into business processes. A novel system, named AACT, is introduced that automates SOC workflows by learning from analysts' triage actions on cybersecurity alerts. It accurately predicts triage decisions in real time, allowing benign alerts to be closed automatically and critical ones prioritised. This reduces the SOC queue allowing analysts to focus on the most severe, relevant or ambiguous threats. The system has been trained and evaluated on both real SOC data and an open dataset, obtaining high performance in identifying malicious alerts from benign alerts. Additionally, the system has demonstrated high accuracy in a real SOC environment, reducing alerts shown to analysts by 61% over six months, with a low false negative rate of 1.36% over millions of alerts.
With growing sophistication and volume of cyber attacks combined with complex network structures, it is becoming extremely difficult for security analysts to corroborate evidence to identify multistage campaigns on their network. This work develops HeAT (Heated Alert Triage): given a critical indicator of compromise (IoC), e.g., a severe IDS alert, HeAT produces a HeATed Attack Campaign (HAC) depicting the multistage activities that led up to the critical event. We define the concept of "Alert Episode Heat" to represent the analyst's opinion of how much an event contributes to the attack campaign of the critical IoC, given their knowledge of the network and security expertise. Leveraging a network-agnostic feature set, HeAT learns the essence of the analyst's assessment of "HeAT" for a small set of IoCs, and applies the learned model to extract insightful attack campaigns for IoCs not seen before, even across networks, by transferring what has been learned. We demonstrate the capabilities of HeAT with data collected in the Collegiate Penetration Testing Competition (CPTC) and through collaboration with a real-world SOC. We developed HeAT-Gain metrics to demonstrate how analysts may assess and benefit from the extracted attack campaigns in comparison to common practices where IP addresses are used to corroborate evidence. Our results demonstrate the practical uses of HeAT: finding campaigns that span diverse attack stages, removing a significant volume of irrelevant alerts, and achieving coherency with the analysts' original assessments.
A Network Intrusion Detection System (NIDS) monitors networks for cyber attacks and other unwanted activities. However, NIDS solutions often generate an overwhelming number of alerts daily, making it challenging for analysts to prioritize high-priority threats. While deep learning models promise to automate the prioritization of NIDS alerts, the lack of transparency in these models can undermine trust in their decision-making. This study highlights the critical need for explainable artificial intelligence (XAI) in NIDS alert classification to improve trust and interpretability. We employed a real-world NIDS alert dataset from the Security Operations Center (SOC) of TalTech (Tallinn University of Technology) in Estonia, developing a Long Short-Term Memory (LSTM) model to prioritize alerts. To explain the LSTM model's alert prioritization decisions, we implemented and compared four XAI methods: Local Interpretable Model-Agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), Integrated Gradients, and DeepLIFT. The quality of these XAI methods was assessed using a comprehensive framework that evaluated faithfulness, complexity, robustness, and reliability. Our results demonstrate that DeepLIFT consistently outperformed the other XAI methods, providing explanations with high faithfulness, low complexity, robust performance, and strong reliability. In collaboration with SOC analysts, we identified key features essential for effective alert classification. The strong alignment between these analyst-identified features and those obtained by the XAI methods validates their effectiveness and enhances the practical applicability of our approach.
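The faithfulness criterion used to rank the XAI methods can be made concrete with an occlusion-style check: remove the features an explanation ranks highest and measure how much the model's output drops. The toy linear "alert scorer" below stands in for the paper's LSTM, and the metric is a simplified sketch of the evaluation idea, not the authors' exact framework.

```python
import numpy as np

def faithfulness(model, x, attributions, k=2, baseline=0.0):
    """Occlusion-style faithfulness: zero out the k features with the
    largest absolute attribution and measure the output drop.
    Larger drops indicate a more faithful explanation."""
    top = np.argsort(-np.abs(attributions))[:k]
    x_occluded = x.copy()
    x_occluded[top] = baseline
    return model(x) - model(x_occluded)

# Toy linear scorer: the true feature importances are the weights themselves
w = np.array([3.0, 0.1, -2.0, 0.05])
model = lambda x: float(w @ x)
x = np.ones(4)

good = faithfulness(model, x, attributions=w)        # correct ranking
bad = faithfulness(model, x, attributions=w[::-1])   # scrambled ranking
# occluding the truly important features (good) drops the output far more
```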
Insider threats represent a critical challenge in modern cybersecurity, often eluding traditional defenses due to their subtlety and legitimate access. This paper presents an AI-driven detection system integrating the open-source Wazuh SIEM platform with behavioral analytics and machine learning. Leveraging the CERT Insider Threat Dataset and real-time log ingestion, the system employs supervised learning models to identify anomalous behavior, assign dynamic risk scores, and provide actionable alerts. The modular architecture ensures scalability and effective threat visualization, demonstrating proactive detection capabilities with reduced false positives through continuous learning.
Behavioral analytics is a cutting-edge tool in the fight for financial cybersecurity. It uses advanced AI and machine learning to pinpoint dangers that outdated methods might miss. This study examines how well these AI-based tools work and the challenges encountered, especially when trying to mitigate and prevent security breaches in the digital currency world and financial markets. A case study analysis of three large-scale security incidents, namely a cryptocurrency exchange, a banking institution facing an advanced persistent threat (APT), and a DeFi platform, identified the current state of behavioral analytics implementation. Key findings show that while AI-based solutions can efficiently identify threats based on the volume and behavioral patterns of the underlying systems, they struggle with more refined attacks that exploit legitimate features. Consequently, these systems exhibit high false-positive rates and slow response times. The cross-case analysis indicates that behavioral correlations across domains and threshold settings for off-peak periods are not adequately addressed. The study offers recommendations for better implementation in algorithm development and data integration as well as policy formulation. The main contributions are: (1) common behavioral indicators can be derived from financial platforms; (2) human-AI cooperation is required for an effective identification process; and (3) security and operational continuity requirements can be balanced by adjusting threshold levels in real time.
The advent of cloud computing, remote work, and increasingly sophisticated cyberattacks has rendered perimeter-based security models insufficient, prompting a global transition toward Zero Trust Security (ZTS). Central to ZTS is the principle of "never trust, always verify", which underscores continuous authentication and dynamic access control. However, traditional Identity and Access Management (IAM) systems often lack the flexibility to address evolving behavioural anomalies and insider threats. This study proposes a comprehensive framework that integrates behavioural analytics and Artificial Intelligence (AI) to enhance adaptive IAM in Zero Trust environments. By leveraging user and entity behaviour analytics (UEBA) and machine learning models, the framework continuously monitors contextual signals, such as login patterns, device usage, and network activity, enabling proactive risk scoring and real-time access decisions. This study synthesises the existing literature, identifies the current limitations of Zero Trust IAM, and develops a layered architecture that combines behavioural monitoring with AI-driven decision-making to achieve continuous verification. The findings highlight the potential of AI-enhanced behavioural analytics to improve detection accuracy, reduce false positives, and automate the enforcement of adaptive policies. This research contributes to advancing secure, scalable, and context-aware zero-trust IAM strategies, offering a roadmap for implementation across enterprises, government systems, and multi-cloud infrastructures.
Insider threats remain a critical security challenge, necessitating advanced AI-driven behavioral analytics. However, the deployment of these systems faces two distinct but equally paralyzing hurdles: strict data protection regulations (such as GDPR and NDPR) which restrict the centralization of sensitive user logs, and the opaque "black box" nature of deep learning models which erodes the trust of security analysts. To resolve this dual conflict, this paper proposes a unified framework integrating Federated Learning (FL), Differential Privacy (DP), and Explainable AI (XAI). We employ an LSTM-based architecture where user data remains local, protected by the Laplace mechanism, while SHAP and LIME provide transparent model interpretations. Crucially, to test robustness beyond standard benchmarks, the framework is validated across two fundamentally different environments: the synthetic, user-centric CERT dataset and the real-world, cloud-native BETH dataset. Results demonstrate high adaptability, achieving F1-Scores of 0.88 on CERT and 0.86 on the complex BETH dataset - a minimal performance trade-off for guaranteed privacy. The XAI layer successfully demystified alerts across both environments, proving that high-accuracy detection, robust privacy, and actionable transparency can be achieved simultaneously in modern IT infrastructure.
Online Social Networks (OSNs) have become central to digital communication, yet they are increasingly vulnerable to security threats arising from user behavior, content sharing, and trust-based interactions. This paper proposes a behavior-aware security framework that leverages user activity patterns, content sensitivity, interaction frequency, and trust levels to dynamically assess risk and detect malicious behavior. The system architecture incorporates a real-time risk scoring model and an AI-driven behavioral analysis engine, enabling proactive threat identification and user-specific alerts. Simulation results across multiple training epochs demonstrate high detection accuracy (over 93%), low false positive rates, and improved response times. Additionally, the framework effectively engages users through awareness mechanisms and role-based access control. By integrating technical, behavioral, and psychological insights, the proposed model offers a scalable, intelligent solution for enhancing the security and resilience of OSNs in the face of evolving cyber threats. Concretely, the framework computes dynamic risk scores from four core parameters: behavioral pattern score, content sensitivity, friend interaction rate, and trust level. The proposed system was tested on a simulated OSN environment with 500 users and evaluated across 10 training epochs. It achieved a detection accuracy of 93.6%, precision of 91.4%, recall of 92.1%, and an F1-score of 91.7%, with an average detection time of 3.2 seconds and a false positive rate of just 5.8%. Over the simulation, 425 out of 450 high-risk users were correctly identified. The model also demonstrated improved learning over time, with risk classification stabilizing after Epoch 5 and detection latency reducing from 4.2 to 3.1 seconds.
These results confirm that the framework offers a robust, real-time solution for identifying threats and enhancing security awareness in OSNs, making it suitable for scalable deployment in real-world platforms.
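A weighted combination of the four core parameters is one plausible reading of the risk-scoring model described above. The weights, the inversion of trust, and the classification thresholds below are illustrative assumptions, not values taken from the paper.

```python
def risk_score(behavior, content_sensitivity, interaction_rate, trust,
               weights=(0.35, 0.25, 0.20, 0.20)):
    """Weighted risk score in [0, 1]; all inputs are normalized to [0, 1].
    Higher trust lowers risk, so trust enters inverted.
    Weights are illustrative, not from the paper."""
    w_b, w_c, w_i, w_t = weights
    return (w_b * behavior + w_c * content_sensitivity
            + w_i * interaction_rate + w_t * (1.0 - trust))

def classify(score, high=0.7, medium=0.4):
    """Bucket a risk score into a coarse label; thresholds are assumptions."""
    return "high" if score >= high else "medium" if score >= medium else "low"

# A hyperactive, low-trust account scores as high risk;
# a quiet, trusted one as low risk.
risky = classify(risk_score(1.0, 1.0, 1.0, 0.0))
safe = classify(risk_score(0.0, 0.0, 0.0, 1.0))
```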
The convergence of artificial intelligence and zero-trust security architecture represents a paradigm shift in cybersecurity defense strategies. This article explores the evolution of autonomous zero-trust systems enhanced by identity behavior analytics, moving beyond traditional static verification models to dynamic, self-adjusting security frameworks. It examines the core architectural components that enable real-time risk assessment and adaptive access control, including AI/ML engines, identity graphs, and policy-as-code enforcement mechanisms. By continuously analyzing behavioral patterns and contextual signals, these systems can detect anomalies, prevent credential theft, identify insider threats, and contain lateral movement without human intervention. The integration pathway from conventional security postures to fully autonomous enforcement is outlined, highlighting implementation strategies across various organizational environments. As organizations face increasingly sophisticated threat landscapes with expanding attack surfaces, this intelligent approach to zero trust provides enhanced protection while reducing operational burden, improving compliance readiness, and scaling effectively with evolving business requirements.
The Internet of Electric Vehicles (IoEV) envisions a tightly coupled ecosystem of electric vehicles (EVs), charging infrastructure, and grid services, yet it remains vulnerable to cyberattacks, unreliable battery-state predictions, and opaque decision processes that erode trust and performance. To address these challenges, we introduce a novel Agentic Artificial Intelligence (AAI) framework tailored for IoEV, where specialized agents collaborate to deliver autonomous threat mitigation, robust analytics, and interpretable decision support. Specifically, we design an AAI architecture comprising dedicated agents for cyber-threat detection and response at charging stations, real-time State of Charge (SoC) estimation, and State of Health (SoH) anomaly detection, all coordinated through a shared, explainable reasoning layer; develop interpretable threat-mitigation mechanisms that proactively identify and neutralize attacks on both physical charging points and learning components; propose resilient SoC and SoH models that leverage continuous and adversarial-aware learning to produce accurate, uncertainty-aware forecasts with human-readable explanations; and implement a three-agent pipeline, where each agent uses LLM-driven reasoning and dynamic tool invocation to interpret intent, contextualize tasks, and execute formal optimizations for user-centric assistance. Finally, we validate our framework through comprehensive experiments across diverse IoEV scenarios, demonstrating significant improvements in security and prediction accuracy. All datasets, models, and code will be released publicly.
In an increasingly digitalized and hyperconnected financial landscape, the complexity and frequency of cyber threats have grown exponentially, exposing financial institutions to real-time risks that conventional defense mechanisms struggle to mitigate. Traditional security frameworks, often reactive and siloed, lack the speed and contextual awareness required to protect dynamic finance ecosystems driven by automated trading, open banking, and decentralized financial services. This paper explores the emerging paradigm of Integrative Analytics for Autonomous Threat Response (IAATR), a strategic synthesis of artificial intelligence (AI), behavioral modeling, and real-time analytics to secure business processes within finance ecosystems. From a broad perspective, the integration of AI into cybersecurity presents transformative possibilities. Machine learning models trained on network telemetry, user behavior, and transaction anomalies can detect threats proactively, adapt to novel attack patterns, and initiate countermeasures with minimal human intervention. The paper discusses how autonomous systems, rooted in deep reinforcement learning and explainable AI, enhance threat triage, isolate compromised processes, and orchestrate secure workflow rerouting to minimize systemic disruption. Narrowing the focus to finance-specific applications, the paper examines use cases including algorithmic fraud detection, insider threat mitigation in payment systems, and AI-enabled compliance monitoring. Emphasis is placed on the design of feedback loops between security intelligence layers and business process management (BPM) engines, ensuring that threat responses remain aligned with regulatory standards and operational continuity. The study concludes with a discussion on governance, ethical risks, and the role of digital trust in advancing AI-secured business environments.
IAATR represents not just a technological leap, but a foundational shift toward anticipatory, resilient financial security architectures.
Ransomware has rapidly become one of the most hazardous cybersecurity threats due to its ability to encrypt users' data and disrupt business operations. Intelligent and adaptable solutions are needed because traditional rule-based and signature-based detection systems are often ineffective against new and polymorphic variants. This paper reports an AI-based ransomware detection framework built on a Long Short-Term Memory (LSTM) network that analyzes sequences of file operations to identify suspicious actions. To reproduce benign and ransomware-induced file system activity, i.e., creating, reading, writing, renaming and deleting files, a synthetic but realistic dataset was constructed. Every event is annotated with behavioral indicators such as file entropy, access frequency, file size, and intervals between timestamps. To capture the temporal dependencies among event sequences, the proposed model integrates normalized numerical features as well as embedded categorical features. Experimental analyses based on precision-recall analysis, ROC curves, and the confusion matrix confirm that the model achieves a high detection accuracy of more than 98% and generalizes well to unseen data. Visualizing entropy evolution also makes the distinction between ransomware-driven and ordinary processes more interpretable. The proposed LSTM-based system is effective because it captures sequential relationships that traditional machine learning algorithms miss, enabling early and proactive ransomware identification. This study indicates the potential of deep-learning-based behavioral intelligence for building strong, real-time cybersecurity defense systems.
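Of the behavioral indicators listed above, file entropy is the easiest to make concrete: ransomware writes high-entropy ciphertext, while ordinary files score much lower. A minimal sketch of that single indicator (not the paper's LSTM pipeline) follows; the sample data is illustrative.

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 to 8.0). Encrypted output
    written by ransomware sits near 8; repetitive text sits far lower."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

plain = b"status: ok\n" * 200      # repetitive, log-like content
random_like = os.urandom(2048)     # stand-in for ciphertext

# plain scores around 3 bits/byte; random_like scores close to 8
```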
This study examines how Artificial Intelligence can aid in identifying and mitigating cyber threats in the U.S. across four key areas: intrusion detection, malware classification, phishing detection, and insider threat analysis. Each of these problems has its quirks, demanding a different approach, so we matched the models to the shape of the problem. For intrusion detection, catching things like unauthorized access, we tested unsupervised anomaly detection methods. Isolation forests and deep autoencoders both gave us useful signals by picking up odd patterns in network traffic. When it came to malware detection, we leaned on ensemble models like Random Forest and XGBoost, trained on features pulled from files and traffic logs. Phishing was more straightforward. We fed standard classifiers (logistic regression, Random Forest, XGBoost) a mix of email and web-based features. These models handled the task surprisingly well; phishing turned out to be the easiest problem to crack, at least with the data we had. Insider threats were a different story. We utilized an LSTM autoencoder to identify behavioral anomalies in user activity logs. It caught every suspicious behavior but flagged a lot of harmless ones too. That kind of model makes sense when the cost of missing a threat is high and you're willing to sift through some noise. What we saw across the board is that performance wasn't about stacking the most complex model. What mattered was how well the model's structure matched the way the data behaved. When signals were strong and obvious, simple models worked fine. But for messier, more subtle threats, we needed something more adaptive: sequence models and anomaly detectors, though they brought their trade-offs. The takeaway here is clear: in cybersecurity, context drives the solution. There's no universal model that works for everything. The smart move is to build systems that fit the problem, and more importantly, evolve with it.
Threats don’t sit still, and neither should our defenses.
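The autoencoder approach the authors used for anomaly detection scores inputs by reconstruction error: a model fit on normal traffic reconstructs normal flows well and anomalous ones badly. A linear analogue (PCA reconstruction error) shows the same principle in a few lines of NumPy; the synthetic flow features below are illustrative, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal" network-flow features clustered along one direction
normal = rng.normal(0, 1, size=(500, 1)) @ np.array([[1.0, 0.5, 0.2]])
normal += rng.normal(0, 0.05, size=normal.shape)   # measurement noise

# Fit a 1-component linear "autoencoder" (PCA) on the normal traffic
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
pc = vt[:1]                                        # principal direction

def reconstruction_error(x):
    """Distance between x and its projection onto the learned subspace;
    large values flag flows that do not fit the normal profile."""
    centered = x - mean
    recon = centered @ pc.T @ pc
    return float(np.linalg.norm(centered - recon))

typical = np.array([2.0, 1.0, 0.4])   # lies on the normal direction
odd = np.array([0.0, 3.0, -3.0])      # off-manifold, anomalous flow
```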
No abstract available
Traditional security methods based on signatures are no longer adequate due to the exponential development of cyberattacks. A proactive and flexible response to changing cyberthreats is offered by intelligent threat detection, which is powered by artificial intelligence (AI), machine learning (ML), and security orchestration, automation, and response (SOAR). This study examines how AI and SOAR are transforming cybersecurity by facilitating automated reaction, quicker detection, and predictive defence tactics.
Purpose: The study aims to examine the synergistic effects of integrating Security Information and Event Management (SIEM), Security Orchestration, Automation, and Response (SOAR), and Artificial Intelligence (AI) technologies in enhancing cybersecurity frameworks. It explores how this combination can lead to a transformative era in cybersecurity, focusing on the improved efficacy of threat management and incident response. Methodology: An analytical approach was used to investigate the integration trends between SIEM and SOAR technologies, underpinned by advancements in AI. This method emphasizes accelerated incident detection and response, enriched threat intelligence collaboration, and fortified security strategies. Findings: The fusion of SIEM, SOAR, and AI technologies has led to a paradigm shift in cybersecurity, offering unparalleled efficiency in threat management and a significant reduction in the impacts of cyber incidents on entities. It highlights the accelerated detection and response to incidents and the enhancement of threat intelligence collaboration and security strategies. Unique Contribution to Theory, Practice, and Policy: This study contributes to the field by presenting invaluable insights for cybersecurity practitioners and entities aiming to strengthen their defenses against an evolving digital threat landscape. It advocates for a proactive orchestration of security measures, underlining the strategic implications of the SIEM-SOAR-AI triad for future cybersecurity endeavors. Recommendations are provided for entities to adopt this integrated approach to enhance their cybersecurity frameworks effectively.
The rapid escalation of cyber threats in both frequency and sophistication has outpaced the capacity of traditional Digital Forensics and Incident Response (DFIR) practices. Conventional manual investigation methods such as log examination, evidence extraction, and threat correlation are often too time-consuming and labor-intensive to meet the demands of real-time incident management. Consequently, organizations are increasingly turning to artificial intelligence (AI) and automation to enhance the speed, accuracy, and scalability of DFIR operations. This paper explores how AI-driven models and automation frameworks can transform digital forensics and incident response, enabling faster detection, investigation, and containment of cyberattacks. It examines the integration of machine learning, natural language processing (NLP), and robotic process automation (RPA) into DFIR workflows to automate evidence collection, pattern recognition, and anomaly detection. Moreover, the study discusses how AI-enabled SOAR (Security Orchestration, Automation, and Response) platforms streamline the decision-making process by automatically correlating multi-source data and executing predefined containment actions. The paper also highlights practical applications across enterprise and national defense contexts, showcasing how predictive forensics and adaptive response mechanisms reduce investigation time and operational fatigue. Despite these advancements, several challenges persist, including AI model bias, data imbalance, interpretability issues, and legal admissibility of AI-generated evidence. To address these concerns, the study emphasizes the need for explainable AI frameworks, standardized forensic data models, and cross-disciplinary training for DFIR professionals.
Ultimately, AI and automation do not aim to replace human expertise but to augment it enhancing investigative precision, improving incident readiness, and fostering a new generation of intelligent, resilient cyber defense systems.
The increasing frequency, sophistication, and speed of cyberattacks on critical infrastructure demand advanced, adaptive, and rapid incident response capabilities. AI-powered incident response automation offers a transformative approach to safeguarding essential sectors such as energy, transportation, water, healthcare, and communications by enabling real-time detection, analysis, and mitigation of threats. This study explores the integration of artificial intelligence with security orchestration, automation, and response (SOAR) platforms to enhance the efficiency, accuracy, and resilience of incident management in critical infrastructure environments. Leveraging machine learning, natural language processing, and deep learning models, AI-driven systems can automatically correlate threat indicators, analyze network anomalies, prioritize alerts, and execute predefined containment or remediation actions with minimal human intervention. By processing large volumes of heterogeneous security data, including logs, sensor readings, and operational technology (OT) telemetry, these systems reduce mean time to detect (MTTD) and mean time to respond (MTTR), thereby minimizing operational disruptions and potential safety hazards. The paper evaluates key AI capabilities such as predictive analytics for proactive threat hunting, reinforcement learning for adaptive response strategies, and explainable AI for transparent decision-making in regulated environments. Challenges including integration with legacy systems, false positives, adversarial AI risks, and compliance with sector-specific regulations are critically assessed. Case studies from power grid cybersecurity, intelligent transportation systems, and smart water management highlight real-world deployments, demonstrating measurable improvements in incident containment speed, threat neutralization rates, and operational continuity.
The findings indicate that AI-powered incident response automation not only strengthens cyber resilience but also aligns with national and international frameworks for critical infrastructure protection, such as NIST, ISO 27001, and sector-specific standards. Future research directions include developing interoperable AI models for multi-sector coordination, enhancing trust through AI explainability, and integrating AI with blockchain for secure audit trails. By bridging advanced analytics with automated security operations, AI-powered incident response emerges as a crucial enabler for safeguarding critical infrastructure in an era of increasingly complex and high-impact cyber threats.
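MTTD and MTTR, the metrics these abstracts credit AI automation with reducing, are simple means over incident timestamps. A minimal sketch, assuming each incident records its occurrence, detection, and resolution times:

```python
from datetime import datetime, timedelta

def mean_times(incidents):
    """MTTD = mean(detected - occurred); MTTR = mean(resolved - detected).
    Each incident is an (occurred, detected, resolved) datetime triple."""
    n = len(incidents)
    mttd = sum((d - o for o, d, _ in incidents), timedelta()) / n
    mttr = sum((r - d for _, d, r in incidents), timedelta()) / n
    return mttd, mttr

t0 = datetime(2025, 1, 1)
incidents = [
    (t0, t0 + timedelta(minutes=10), t0 + timedelta(minutes=40)),
    (t0, t0 + timedelta(minutes=20), t0 + timedelta(minutes=80)),
]
mttd, mttr = mean_times(incidents)   # 15 and 45 minutes respectively
```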
Cybersecurity threats are rapidly evolving, posing significant challenges to organizations seeking to protect critical digital assets. Traditional security approaches, such as rule-based detection and manual incident response, have proven inadequate in addressing the complexity and scale of modern cyber threats, particularly those involving zero-day vulnerabilities, ransomware, and advanced persistent threats (APTs). In response, scalable software automation frameworks have emerged as a critical solution for real-time threat detection and response. This paper presents a comprehensive study on designing scalable cybersecurity automation frameworks, integrating artificial intelligence (AI), machine learning (ML), cloud computing, and Security Orchestration, Automation, and Response (SOAR) systems to enhance security resilience. The study examines key architectural principles, including microservices-based security structures, cloud-native deployment models, AI-driven anomaly detection, and automated incident response mechanisms. Furthermore, the paper explores how real-time security monitoring, predictive analytics, and Zero Trust security models contribute to an adaptive cybersecurity defense strategy. To validate the effectiveness of scalable automation frameworks, the paper presents case studies of Google Chronicle, IBM Security QRadar, and Microsoft Azure Sentinel, analyzing their efficiency in automated threat intelligence, behavioral analytics, and cloud-based security operations. Additionally, we discuss major challenges associated with scalability, performance, AI explainability, and interoperability with legacy security infrastructures. The proposed framework offers an optimized cybersecurity automation model that enhances detection speed, minimizes false positives, and ensures seamless threat response automation. 
The findings indicate that integrating AI-enhanced SIEM and SOAR solutions into a cloud-native cybersecurity ecosystem significantly improves cyber threat mitigation, response times, and overall security posture. Future research should focus on advancing federated learning for distributed security intelligence, blockchain for decentralized security enforcement, and explainable AI (XAI) for more transparent cybersecurity decision-making. This study contributes to the growing body of cybersecurity research by providing a scalable, AI-driven, and cloud-integrated framework for organizations to enhance their security resilience in an increasingly complex threat landscape.
The widespread adoption of cloud applications, accelerated by remote work demands, introduces new security challenges. Traditional approaches struggle to keep pace with the growing volume of cloud applications, to keep track of their user activities, and to counter potential threats. This paper proposes a novel user access security system for cloud applications. The system leverages user activity tracking tied to user, device, and contextual identity data. By incorporating Identity Provider (IdP) information, Natural Language Processing (NLP), and Machine Learning (ML) algorithms, the system builds user baselines and tracks deviations, bubbling critical deviations up to the surface and proactively preventing further deterioration in real time, working in conjunction with security orchestration, automation, and response (SOAR) tools. Deviations from the baselines, which may indicate compromised accounts or malicious intent, trigger proactive interventions. This approach offers organizations superior visibility and control over their cloud applications, enabling proactive, real-time threat detection and data breach prevention. While real-time data collection from application vendors remains a challenge, near-real-time collection is feasible today. The system can also effectively utilize IdP logs and activity logs from proxies or firewalls. This research addresses the critical need for proactive security measures in the dynamic landscape of cloud application data security. The system needs a quarter (90 days) of learning time to ensure accurate detections based on historically gathered data and to form future baseline predictions for the user themselves as well as for their peers. This approach ensures the detection is contextually aware of the organization as a whole. The research redefines traditional thinking with decentralized intelligence distributed across a highly scalable microservice architecture.
The proposed solution is a uniquely intelligent system where both human and artificial intelligence coexist, with the ultimate overriding control lying with humans (admin). This way, the outcomes at every stage are effective, making the overall detection and proactive security effective.
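The baseline-deviation idea above can be sketched as a z-score check against a per-user learning window. The abstract's 90-day window is shortened to a 7-day toy here, and the metric, threshold, and data are illustrative assumptions.

```python
import statistics

def deviation_alert(history, today, z_threshold=3.0):
    """Flag today's activity count if it deviates from the user's learned
    baseline by more than z_threshold standard deviations.
    `history` holds per-day activity counts from the learning window."""
    mu = statistics.fmean(history)
    sigma = statistics.pstdev(history) or 1e-9   # avoid divide-by-zero
    z = (today - mu) / sigma
    return z > z_threshold, z

history = [40, 42, 38, 41, 39, 43, 40]   # e.g. daily file downloads
alert_normal, _ = deviation_alert(history, today=44)    # within baseline
alert_spike, _ = deviation_alert(history, today=400)    # possible exfiltration
```

A production system would maintain rolling windows per user and per peer group, as the abstract describes, rather than a single static list.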
The article is devoted to the study of critical cyber threats arising from the rapid digitization of railway automation and telemechanics systems (RAAS), in particular microprocessor centralization and dispatching centralization. The authors justify the need to use advanced technologies, such as artificial intelligence (AI), to ensure the stability and security of critical railway infrastructure. The article analyzes critical vulnerabilities of RAAS and provides a detailed analysis of cyberattack vectors on key systems. The vulnerability of systems to attacks typical of COTS components and standard protocols (TCP/IP) is investigated, as well as the threat of introducing malicious software into firmware to disrupt traffic safety logic, which can cause train collisions. It has been proven that AI provides the necessary level of monitoring and response that goes beyond traditional signature-based protection. Machine and deep learning mechanisms are considered, and the critical role of AI integration with SOAR (Security Orchestration, Automation, and Response) platforms is emphasized. It is concluded that artificial intelligence and automated SOAR response systems are key to transforming protection from reactive to proactive and preventive. It is proposed that further development of cyber protection for railway transport systems should integrate SOAR with the concept of a digital twin of the railway network in order to achieve fully autonomous cybersecurity.
The accelerating complexity and volume of cyber threats have necessitated the adoption of advanced Security Orchestration, Automation, and Response (SOAR) models in enterprise environments. This paper presents a conceptual framework for integrating SOAR processes to enhance the speed, accuracy, and efficiency of incident detection and response. The study explores automation-driven workflows, orchestration of multi-tool security environments, and the use of artificial intelligence (AI) to streamline threat analysis and mitigation. By evaluating current industry practices and theoretical models, the paper identifies key performance indicators (KPIs) for measuring operational improvements in cybersecurity posture. Empirical simulations demonstrate that automation reduces mean time to detect (MTTD) and mean time to respond (MTTR) while enhancing threat prioritization and resource allocation. The results highlight the critical role of integrating orchestration with predictive analytics and human-machine collaboration to achieve optimal security outcomes. This framework offers actionable guidance for enterprises seeking to implement robust, scalable, and adaptive security operations in increasingly complex digital landscapes. Keywords: SOAR, Cybersecurity, Incident Response, Automation, Orchestration, Threat Detection.
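The orchestration of automation-driven workflows described above can be sketched as a playbook dispatch table: alert categories map to ordered response steps, each a callable so real integrations (EDR, firewall, ticketing) can be slotted in. The categories, actions, and alert fields below are hypothetical, not from the paper.

```python
# Hypothetical response actions; real ones would call EDR/firewall/ticketing APIs
def isolate_host(alert):
    return f"isolated {alert['host']}"

def block_ip(alert):
    return f"blocked {alert['src_ip']}"

def open_ticket(alert):
    return f"ticket for {alert['id']}"

# Each category runs its steps in order; unknown categories fall back to a ticket
PLAYBOOKS = {
    "ransomware": [isolate_host, open_ticket],
    "brute_force": [block_ip, open_ticket],
}

def run_playbook(alert):
    """Execute the playbook for the alert's category, collecting an
    audit trail of the actions taken (useful for the paper's KPIs)."""
    steps = PLAYBOOKS.get(alert["category"], [open_ticket])
    return [step(alert) for step in steps]

trail = run_playbook({"id": "A-1", "category": "brute_force",
                      "src_ip": "203.0.113.7", "host": "ws-42"})
```

Timestamping each step of the trail would feed directly into the MTTD/MTTR measurements the paper uses as KPIs.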
The digital landscape faces unprecedented challenges as cyber threats targeting critical infrastructure evolve in complexity and frequency. Traditional security frameworks relying on static rule-based detection and perimeter defenses have proven insufficient against sophisticated attack vectors, including adversarial AI, polymorphic malware, and zero-day exploits. This article explores how AI-driven cybersecurity transforms protection strategies within modern data centers through autonomous threat detection, adaptive risk mitigation, and self-healing architectures. Integrating deep learning-powered Intrusion Detection and Prevention Systems (IDPS) with behavioral analytics enables the identification of subtle anomalies that conventional systems typically miss. Zero Trust Architecture enhanced by AI-driven continuous authentication establishes a security model where trust is never implicit, and access requires persistent verification. Security Orchestration, Automation, and Response (SOAR) frameworks leverage machine learning to correlate disparate events and automate response actions, dramatically reducing detection and remediation timeframes. As quantum computing emerges as a threat to traditional cryptographic standards, AI-optimized post-quantum cryptography presents viable solutions for maintaining security in the quantum era. The convergence of these technologies creates resilient cybersecurity ecosystems capable of adapting to emerging threats while maintaining operational continuity and preserving the confidentiality, integrity, and availability of critical systems and data.
No abstract available
The growing complexity of cyber threats necessitates creative solutions beyond conventional rule-based security systems. This study presents a new method for the incorporation of artificial intelligence (AI) into intrusion detection and prevention systems (IDPS) that facilitates real-time threat mitigation, adaptive learning, and autonomous response. Through the use of machine learning (ML), behavioural analytics, and generative AI, this solution overcomes the weaknesses of legacy systems while maximizing accuracy, scalability, and operational efficiency in cybersecurity. Key Words: Artificial Intelligence (AI); Cybersecurity; Intrusion Detection System (IDS); Intrusion Prevention System (IPS); Real-Time Threat Detection; Machine Learning (ML); Anomaly Detection; Behavioural Analytics; User and Entity Behaviour Analytics (UEBA); Automated Incident Response; Adaptive Learning; Generative Adversarial Networks (GANs); Adversarial AI; Zero-Day Attack Detection; Security Orchestration, Automation, and Response (SOAR); Autonomous Cybersecurity; Cyber Threat Mitigation; Network Security; Deep Learning; Explainable AI (XAI); Federated Learning; Threat Intelligence; Real-Time Analytics; Predictive Cybersecurity; AI-Driven Defence Systems; Proactive Defence Mechanisms; Self-Improving Systems; Quantum-Resistant AI; Zero Trust Architecture; Cyber-Physical Systems Security
Cybersecurity Operations Centers (CSOCs) act as one of the foremost tools for cyber defense. However, security analysts often face overwhelming stress owing to excessive volumes of alerts, frequent false positives flagged by most existing tools, and growing mental strain. Such challenges can delay threat detection and slow incident response, undermining overall security effectiveness. Common tools like Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) help automate tasks, but they often do not provide enough situational understanding, clear explanations, or real support for analysts making decisions. This paper presents a position on the role of AI-powered copilots in strengthening CSOCs. We argue that copilots integrating machine learning, large language models, and explainable AI (XAI) can (1) make analysts’ work easier by grouping similar alerts and highlighting the most important ones first, (2) make threat detection and incident triage more accurate by adding useful background information, (3) build analyst trust and increase adoption by giving clear and understandable explanations, and (4) improve the quality of decisions by helping humans and AI work together effectively. These points are supported by logical arguments and insights from the existing literature on interactive, human-in-the-loop security operations. The paper positions CSOCs as places where analysts work with AI support, showing how copilots can help human analysts move from merely reacting to threats to defending proactively and flexibly. It also examines the implications for research, everyday use, and guidelines to ensure AI copilots are used safely in CSOCs.
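Point (1) above, grouping similar alerts and surfacing the most important ones first, can be sketched as a simple fingerprint-and-rank step. The fingerprint fields and severity scale below are illustrative assumptions, not the paper's design:

```python
from collections import defaultdict

# Hypothetical alert records; the fingerprint fields (rule, host)
# and the 1-5 severity scale are invented for illustration.
alerts = [
    {"rule": "brute-force", "host": "srv-01", "severity": 3},
    {"rule": "brute-force", "host": "srv-01", "severity": 5},
    {"rule": "beaconing",   "host": "wks-07", "severity": 4},
]

def group_and_rank(alerts):
    """Group alerts that share a fingerprint, then rank groups by max severity."""
    groups = defaultdict(list)
    for a in alerts:
        groups[(a["rule"], a["host"])].append(a)
    # Highest-severity groups first, so analysts see them at the top.
    return sorted(groups.items(),
                  key=lambda kv: max(a["severity"] for a in kv[1]),
                  reverse=True)

ranked = group_and_rank(alerts)
```

A real copilot would cluster on learned embeddings rather than exact field matches, but the triage effect is the same: three raw alerts collapse into two ranked work items.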
This paper examines how AI-powered cybersecurity can be applied to protecting storage infrastructures, namely high-throughput NFS and S3 object stores. As data becomes more sensitive and volumes grow, conventional security is falling short, and AI/ML datasets are perhaps the most exposed. The research advocates behavior-based threat identification, applied to detecting ransomware, data exfiltration, insider threats, and other attacks before they fully develop. By studying the activities and actions of users and systems, AI can proactively identify anomalies and raise an alert on a possible breach. The article also discusses integrating AI systems with SIEM (Security Information and Event Management) and SOAR (Security Orchestration, Automation, and Response) tools, leveraging OpenTelemetry for seamless coordination and real-time threat response. Arguing for security measures appropriate to highly sensitive AI/ML datasets, the article highlights the flexibility and scalability of AI-enhanced cybersecurity as a solution to storage security in a dynamic environment.
In the evolving cyber threat landscape, enterprises employ multiple security solutions such as Endpoint Detection and Response (EDR), Security Information and Event Management (SIEM), and Security Orchestration, Automation, and Response (SOAR). Security analysts are inundated with millions of security event logs from these tools, making it increasingly difficult to manage and analyze such large volumes of data effectively. Moreover, dedicated and skilled personnel who can understand and analyze these security events are often unavailable. This paper proposes a novel approach based on generative AI using the state-of-the-art Mistral-7B language model to generate clear and actionable security response messages from these event logs. We demonstrate that this cutting-edge language model can translate complex logs into human-understandable security insights which can enhance analysts’ ability to prioritize and respond to threats.
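A minimal sketch of the kind of prompt assembly such a pipeline needs before invoking the language model; the event schema and wording are illustrative assumptions, and the actual Mistral-7B inference call is omitted:

```python
def build_triage_prompt(event_log: dict) -> str:
    """Assemble an instruction prompt asking an LLM to turn a raw event
    into an actionable response message. Field names are illustrative."""
    return (
        "You are a SOC assistant. Summarize this event for an analyst "
        "and recommend a next step.\n"
        f"source={event_log['source']} action={event_log['action']} "
        f"user={event_log['user']} host={event_log['host']}"
    )

# Toy EDR event; a real pipeline would normalize fields from the raw log.
event = {"source": "EDR", "action": "process_injection_blocked",
         "user": "svc-backup", "host": "db-02"}
prompt = build_triage_prompt(event)
# `prompt` would then be sent to the model, e.g. a locally hosted
# Mistral-7B endpoint (call omitted here).
```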
The increasing sophistication of cyber threats necessitates the adoption of advanced, autonomous defense mechanisms. Large Language Models (LLMs) have emerged as a powerful tool for automating cybersecurity workflows, enabling intelligent incident response. This paper explores integrating LLM-powered incident response using LangChain, a framework that enhances natural language processing capabilities, and Security Orchestration, Automation, and Response (SOAR) platforms like Tines for automated containment workflows. The proposed system leverages MITRE ATT&CK playbooks to train LLMs, ensuring contextual decision-making and threat mitigation. Furthermore, probabilistic graphical models (PGMs) validate LLM-driven decisions, enhancing reliability and reducing false positives. This approach minimizes response time and enhances cybersecurity resilience by automating threat detection, triage, and containment. The findings underscore the transformative potential of AI-driven cyber defense, offering a scalable and efficient solution for mitigating modern cyber threats.
Cybersecurity is becoming crucial as technology is no longer limited to computers and smartphones; it is entering everyday items such as home appliances and automobiles, opening a new door for people with malicious intent. As technology accelerates, dealing with such issues requires a quick response from security teams, and handling a huge variety of devices quickly requires some degree of automation. Automatically generating threat intelligence, including multilingual intelligence, further helps prevent well-known major attacks. We propose an AI-based SOAR system in which data from various sources, such as firewalls and IDS, is collected and each event is profiled using a deep-learning detection method. The first step is to convert the collected data into a standardized format, i.e., to categorize data gathered from different sources. The system then identifies true-positive alerts, for which the appropriate steps are taken, such as generating an Indicators of Compromise report and gathering additional evidence with the help of a Security Information and Event Management system. Security alerts are reported to security teams along with the degree of threat.
Modern enterprises increasingly rely on multi-cloud and hybrid computing environments, which introduce expanded attack surfaces and new security complexities. Traditional security monitoring and incident response struggle to cope with the scale and sophistication of threats across AWS, Azure, and GCP. This paper proposes an AI-driven, deep learning-based framework for autonomous threat hunting in multi-cloud and hybrid architectures. The objective is to proactively identify stealthy tactics such as lateral movement, privilege escalation, and persistence across diverse cloud platforms, using an integrated Security Orchestration, Automation, and Response (SOAR) approach. We develop a unified threat hunting system that ingests cloud telemetry (logs, network flows, identity events) and applies deep learning models (e.g., LSTM neural networks for sequential log analysis and graph neural networks for privilege graph modeling). The system’s design is benchmarked against NIST and ISO/IEC security frameworks to ensure controls compliance. In simulated evaluations, the AI-driven SOAR achieved higher detection rates (over 90% for complex attack scenarios) and significantly reduced response times (automated containment in minutes) compared to baseline rule-based systems. The framework maintained strong alignment with NIST SP 800-53 and ISO/IEC 27001 controls, demonstrating improved regulatory compliance. The proposed autonomous threat hunting framework enhances enterprise resilience by adaptively detecting advanced threats in multi-cloud environments with minimal human intervention. It empowers security teams with rapid, orchestrated responses while adhering to industry security standards. This work has implications for boosting organizational cyber defense maturity, meeting regulatory requirements, and guiding future research in explainable and federated AI security solutions.
Keywords: Threat Hunting, Deep Learning, Cloud Security, Multi Cloud, Hybrid Architecture, SOAR, NIST Compliance, ISO Standards.
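The sequential log analysis mentioned above typically begins by converting an ordered event stream into fixed-length training windows for the LSTM. A minimal sketch with an invented event vocabulary (the event IDs and window size are illustrative assumptions):

```python
def to_sequences(event_ids, window=4):
    """Slide a fixed-length window over an ordered stream of event-type IDs,
    producing (input sequence, next event) pairs for a sequence model."""
    pairs = []
    for i in range(len(event_ids) - window):
        pairs.append((event_ids[i:i + window], event_ids[i + window]))
    return pairs

# Toy stream: 0=login, 1=assume-role, 2=list-buckets, 3=copy-object
stream = [0, 1, 2, 3, 2, 3, 3]
pairs = to_sequences(stream, window=4)
```

Each pair becomes one training sample: the LSTM learns to predict the next event, and at hunt time a sequence whose observed continuation is highly improbable under the model is flagged as suspicious.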
This paper discusses how Security Orchestration, Automation, and Response (SOAR) systems, with the help of Artificial Intelligence (AI), can be used to improve incident response in healthcare settings. With growing cases of advanced cyberattacks on patient health records and the internet of medical devices, manual response systems are failing to address the challenge at healthcare facilities. Integrating SOAR and AI technologies, including machine learning and natural language processing, can help automate threat detection, simplify response processes, and reduce analyst burnout. This study reviews prior work to assess AI-SOAR models, highlight effective case studies, and determine the practical advantages for healthcare cybersecurity. Moreover, it identifies the main challenges, such as adversarial attacks, integration issues, and ethical concerns, and offers effective solutions such as adversarial training, standard APIs, and human-in-the-loop systems. The results imply that, although AI-SOAR systems considerably improve the resilience of healthcare cybersecurity, interoperability, explainability, and strong governance must be regarded as key requirements for successful implementation.
Satellite-based cloud computing cybersecurity threats have long posed significant challenges, particularly for cloud infrastructure operators. While prior research has partially addressed these issues by mitigating threats and enhancing human response efficiency, this paper proposes a novel AI-Driven Threat Analysis and Response (TAR) framework. The study progresses in three main phases: (1) redefining urgent threats through a novel formula; (2) implementing a triage and analysis framework using augmented Large Language Models (LLMs); and (3) automating incident response via a Security Orchestration, Automation, and Response (SOAR) platform. Our prototype, tested in a simulated public cloud environment using real production threats, demonstrated a 17% improvement in handling low- and medium-urgency threats. Experimental results show our approach achieves 97.8% coverage in automatic threat classification, significantly outperforming traditional manual methods, which achieve 77.8% coverage. With high recall and precision in managing low- and medium-urgency threats, our method enhances manual efficiency through SOAR-enabled automation. Furthermore, our augmented method surpasses the state-of-the-art GPT-4 Turbo model in addressing security threats containing Chinese characters.
Modern enterprise networks operate under persistent threats that exploit cloud-native misconfigurations, identity sprawl, and API vulnerabilities at machine speed. Existing security operations center (SOC) architectures remain largely reactive, signature-dependent, and incapable of predicting multi-stage lateral movement. This paper proposes the Cognitive Cyber Defense Digital Twin (CCDT), a unified architecture integrating federated learning (FL), graph neural network (GNN)-based attack-path forecasting, adversarially hardened detection models, and autonomous Security Orchestration, Automation, and Response (SOAR) with deception engineering. The CCDT constructs a continuously synchronized digital replica of organizational assets and employs reinforcement learning-based red agents to stress-test detection models. A federated intelligence mesh enables cross-organizational privacy-preserving gradient sharing. Experimental evaluations against CICIDS-2018 and LANL datasets demonstrate 52% faster attack-path detection, 41% reduction in false positive rate, and 60% reduction in mean-time-to-respond (MTTR) compared to traditional SOC baselines. Integrated Explainable AI (XAI) modules using SHAP values enable audit-ready compliance reporting. The CCDT represents a paradigm shift from reactive monitoring to predictive, autonomous, and privacy-preserving cyber defense for hybrid cloud environments.
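The GNN-based attack-path forecasting described above is far richer than plain reachability, but the underlying privilege-graph idea can be sketched with a breadth-first search over hypothetical escalation edges (all node names are illustrative):

```python
from collections import deque

# Hypothetical privilege-escalation edges: "A can reach or assume B".
edges = {
    "wks-07": ["svc-account"],
    "svc-account": ["jump-host"],
    "jump-host": ["domain-admin"],
    "domain-admin": [],
}

def shortest_attack_path(graph, src, dst):
    """Breadth-first search for the shortest privilege-escalation path."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no path: the asset is unreachable from this foothold

path = shortest_attack_path(edges, "wks-07", "domain-admin")
```

A GNN replaces this deterministic search with learned edge probabilities, letting the twin rank likely next hops instead of merely enumerating reachable ones.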
Security orchestration, automation, and response (SOAR) systems ingest alerts from security information and event management (SIEM) systems and then trigger relevant playbooks that automate and orchestrate the execution of a sequence of security activities. SOAR systems have two major limitations: (i) security analysts need to define, create, and change playbooks manually, and (ii) the choice between multiple playbooks that could be triggered is based on rules defined by security analysts. To address these limitations, recent studies in the field of artificial intelligence for cybersecurity suggested the task of interactive playbook creation. In this paper, we propose IC-SECURE, an interactive playbook creation solution based on a novel deep learning-based approach that provides recommendations to security analysts during the playbook creation process. IC-SECURE captures the context, in the form of alert data and the current status of the incomplete playbook, required to make a reasonable recommendation for the next module that should be included in the new playbook being created. We created three evaluation datasets, each of which involved a combination of a set of alert rules and a set of playbooks from a SOAR platform. We evaluated IC-SECURE under various settings and compared our results with two state-of-the-art recommender system methods. In our evaluation, IC-SECURE demonstrated superior performance, consistently recommending the correct security module and achieving precision@1 > 0.8 and recall@3 > 0.92.
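The reported metrics, precision@1 and recall@3, can be computed as follows; the playbook-module names and rankings are toy data, not from the paper's datasets:

```python
def precision_at_1(recommendations, truths):
    """Fraction of cases where the top-ranked module is the correct one."""
    hits = sum(1 for recs, t in zip(recommendations, truths) if recs[0] == t)
    return hits / len(truths)

def recall_at_k(recommendations, truths, k=3):
    """Fraction of cases where the correct module appears in the top k."""
    hits = sum(1 for recs, t in zip(recommendations, truths) if t in recs[:k])
    return hits / len(truths)

# Toy ranked suggestions for the next playbook module (names illustrative).
recs = [["isolate_host", "notify", "scan"],
        ["scan", "isolate_host", "notify"],
        ["notify", "scan", "block_ip"]]
truth = ["isolate_host", "isolate_host", "block_ip"]
p1 = precision_at_1(recs, truth)   # 1/3: only the first case is right at rank 1
r3 = recall_at_k(recs, truth, 3)   # 1.0: every truth appears in the top 3
```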
Sandia National Laboratories and Idaho National Laboratory deployed state-of-the-art cybersecurity technologies within a virtualized, cyber-physical wind energy site to demonstrate their impact on security and resilience. This work was designed to better quantify cost-benefit tradeoffs and risk reductions when layering different security technologies on wind energy operational technology networks. Standardized step-by-step attack scenarios were drafted for adversaries with remote and local access to the wind network. Then, the team investigated the impact of encryption, access control, intrusion detection, security information and event management, and security orchestration, automation, and response (SOAR) tools on multiple metrics, including physical impacts to the power system and termination of the adversary kill chain. We found, once programmed, the intrusion detection systems could detect attacks and the SOAR system was able to effectively and autonomously quarantine the adversary, prior to power system impacts. Cyber and physical metrics indicated network and endpoint visibility were essential to provide human defenders situational awareness to maintain system resilience. Certain hardening technologies, like encryption, reduced adversary access, but recognition and response were also critical to maintain wind site operations. Lastly, a cost-benefit analysis was performed to estimate payback periods for deploying cybersecurity technologies based on projected breach costs.
Identity Security Posture Management (ISPM) is a core challenge for modern enterprises operating across cloud and SaaS environments. Answering basic ISPM visibility questions, such as understanding identity inventory and configuration hygiene, requires interpreting complex identity data, motivating growing interest in agentic AI systems. Despite this interest, there is currently no standardized way to evaluate how well such systems perform ISPM visibility tasks on real enterprise data. We introduce the Sola Visibility ISPM Benchmark, the first benchmark designed to evaluate agentic AI systems on foundational ISPM visibility tasks using a live, production-grade identity environment spanning AWS, Okta, and Google Workspace. The benchmark focuses on identity inventory and hygiene questions and is accompanied by the Sola AI Agent, a tool-using agent that translates natural-language queries into executable data exploration steps and produces verifiable, evidence-backed answers. Across 77 benchmark questions, the agent achieves strong overall performance, with an expert accuracy of 0.84 and a strict success rate of 0.77. Performance is highest on AWS hygiene tasks, where expert accuracy reaches 0.94, while results on Google Workspace and Okta hygiene tasks are more moderate, yet competitive. Overall, this work provides a practical and reproducible benchmark for evaluating agentic AI systems in identity security and establishes a foundation for future ISPM benchmarks covering more advanced identity analysis and governance tasks.
Efforts have been recently made to construct ontologies for network security. The proposed ontologies are related to specific aspects of network security. Therefore, it is necessary to identify the specific aspects covered by existing ontologies for network security. A review and analysis of the principal issues, challenges, and the extent of progress related to distinct ontologies was performed. Each example was classified according to the typology of the ontologies for network security. Some aspects include identifying threats, intrusion detection systems (IDS), alerts, attacks, countermeasures, security policies, and network management tools. The research performed here proposes the use of three stages: 1. Inputs; 2. Processing; and 3. Outputs. The analysis resulted in the introduction of new challenges and aspects that may be used as the basis for future research. One major issue that was discovered identifies the need to develop new ontologies that relate to distinct aspects of network security, thereby facilitating management tasks.
We measure the association between generative AI (GAI) tool adoption and security operations center productivity. We find that GAI adoption is associated with a 30.13% reduction in security incident mean time to resolution. This result is robust to several modeling decisions. While unobserved confounders inhibit causal identification, this result is among the first to use observational data from live operations to investigate the relationship between GAI adoption and security worker productivity.
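As a quick sanity check on what a 30.13% reduction means in absolute terms, the figure can be applied to an illustrative baseline (the 10-hour baseline below is invented for the example, not from the study):

```python
def mttr_after_reduction(baseline_hours, reduction_pct):
    """Apply a percentage reduction to a baseline mean time to resolution."""
    return baseline_hours * (1 - reduction_pct / 100)

# Illustrative 10-hour baseline; 30.13% is the paper's reported estimate.
after = mttr_after_reduction(10.0, 30.13)   # about 6.99 hours
```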
Towards Autonomous Cybersecurity: An Intelligent AutoML Framework for Autonomous Intrusion Detection
The rapid evolution of mobile networks from 5G to 6G has necessitated the development of autonomous network management systems, such as Zero-Touch Networks (ZTNs). However, the increased complexity and automation of these networks have also escalated cybersecurity risks. Existing Intrusion Detection Systems (IDSs) leveraging traditional Machine Learning (ML) techniques have shown effectiveness in mitigating these risks, but they often require extensive manual effort and expert knowledge. To address these challenges, this paper proposes an Automated Machine Learning (AutoML)-based autonomous IDS framework towards achieving autonomous cybersecurity for next-generation networks. To achieve autonomous intrusion detection, the proposed AutoML framework automates all critical procedures of the data analytics pipeline, including data pre-processing, feature engineering, model selection, hyperparameter tuning, and model ensemble. Specifically, it utilizes a Tabular Variational Auto-Encoder (TVAE) method for automated data balancing, tree-based ML models for automated feature selection and base model learning, Bayesian Optimization (BO) for hyperparameter optimization, and a novel Optimized Confidence-based Stacking Ensemble (OCSE) method for automated model ensemble. The proposed AutoML-based IDS was evaluated on two public benchmark network security datasets, CICIDS2017 and 5G-NIDD, and demonstrated improved performance compared to state-of-the-art cybersecurity methods. This research marks a significant step towards fully autonomous cybersecurity in next-generation networks, potentially revolutionizing network security applications.
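The confidence-based ensemble step can be illustrated with a much-simplified stand-in for the paper's OCSE method: each base model casts a vote weighted by its confidence. The function and values below are illustrative assumptions, not the authors' algorithm:

```python
def confidence_weighted_vote(predictions):
    """Combine base-model outputs by summing confidence per class.
    `predictions` is a list of (predicted_class, confidence) pairs,
    one per base model. A simplified stand-in for a confidence-based
    stacking step."""
    scores = {}
    for cls, conf in predictions:
        scores[cls] = scores.get(cls, 0.0) + conf
    return max(scores, key=scores.get)

# Three tree-based base learners disagree; the confident pair wins.
label = confidence_weighted_vote([("attack", 0.9),
                                  ("benign", 0.6),
                                  ("attack", 0.7)])
```

The actual OCSE method additionally optimizes how the stacked meta-model uses these confidences, which this sketch omits.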
We seek to enable classic processing of continuous ultra-sparse spatiotemporal data generated by event-based sensors with dense machine learning models. We propose a novel hybrid pipeline composed of asynchronous sensing and synchronous processing that combines several ideas: (1) an embedding based on PointNet models -- the ALERT module -- that can continuously integrate new and dismiss old events thanks to a leakage mechanism, (2) a flexible readout of the embedded data that allows to feed any downstream model with always up-to-date features at any sampling rate, (3) exploiting the input sparsity in a patch-based approach inspired by Vision Transformer to optimize the efficiency of the method. These embeddings are then processed by a transformer model trained for object and gesture recognition. Using this approach, we achieve performances at the state-of-the-art with a lower latency than competitors. We also demonstrate that our asynchronous model can operate at any desired sampling rate.
Modern enterprise systems exhibit complex interdependencies that make observability and incident response increasingly challenging. Manual alert triage, which typically involves log inspection, API verification, and cross-referencing operational knowledge bases, remains a major bottleneck in reducing mean time to recovery (MTTR). This paper presents an agentic observability framework deployed within Adobe's e-commerce infrastructure that autonomously performs alert triage using a ReAct paradigm. Upon alert detection, the agent dynamically identifies the affected service, retrieves and analyzes correlated logs across distributed systems, and plans context-dependent actions such as handbook consultation, runbook execution, or retrieval-augmented analysis of recently deployed code. Empirical results from production deployment indicate a 90% reduction in mean time to insight compared to manual triage, while maintaining comparable diagnostic accuracy. Our results show that agentic AI enables an order-of-magnitude reduction in triage latency and a step-change in resolution accuracy, marking a pivotal shift toward autonomous observability in enterprise operations.
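The ReAct-style triage loop can be sketched as alternating tool calls and observations; the tools, their outputs, and the fixed policy below are illustrative stand-ins, not Adobe's implementation:

```python
# Minimal ReAct-style loop: the agent alternates between choosing a tool
# (the "act" step) and reading its observation (the "reason" step).
# In a real agent an LLM picks the next tool from prior observations;
# here a scripted policy stands in for that decision.

def fetch_logs(service):
    return f"{service}: error rate spiked after the latest deploy"

def consult_runbook(service):
    return f"runbook[{service}]: roll back the latest deploy"

TOOLS = {"fetch_logs": fetch_logs, "consult_runbook": consult_runbook}

def triage(alert, policy):
    """Run a sequence of tool calls, recording each (action, observation)."""
    trace = []
    for tool_name in policy:
        observation = TOOLS[tool_name](alert["service"])
        trace.append((tool_name, observation))
    return trace

trace = triage({"service": "checkout"}, ["fetch_logs", "consult_runbook"])
```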
Machine learning detection systems are increasingly deployed at Security Operations Centers (SOCs) to help analysts filter high volumes of security alerts. In practice, such systems tend to output probabilistic results or confidence scores that are ill-calibrated and hard to interpret under pressure. Prior qualitative and survey-based studies of SOC practice show that poor alert quality and alert overload greatly increase the burden on the analyst, especially when tool outputs are not coherent with decision requirements or are dominated by noise. One of the most significant limitations is that model confidence is usually displayed without reflecting the asymmetric costs of decision making, where false alarms are much less harmful than missed attacks. This paper presents a decision-aligned trust-signal framework for SOC alert triage. The framework combines calibrated confidence, lightweight uncertainty cues, and cost-sensitive decision thresholds into a coherent decision-support layer, without modifying the detection models. Calibration uses known post-hoc methods to enhance probabilistic consistency, and the uncertainty cues provide conservative protection in situations where model certainty is low. To measure the model-independent performance of the proposed approach, we apply Logistic Regression and Random Forest classifiers to the UNSW-NB15 intrusion detection benchmark. Simulation findings show that misaligned confidence displays greatly amplify false negatives, whereas decision-aligned trust signals reduce cost-weighted loss by orders of magnitude across models. Finally, we describe a human-in-the-loop study plan for empirically assessing analyst decision making under aligned and misaligned trust interfaces.
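A cost-sensitive decision threshold has a simple closed form under expected-cost minimization: raise an alert whenever the predicted attack probability p satisfies p * C_FN > (1 - p) * C_FP, i.e. p > C_FP / (C_FP + C_FN). A minimal sketch (the cost values are illustrative):

```python
def alert_threshold(cost_fp, cost_fn):
    """Expected-cost-minimizing probability threshold for raising an alert:
    escalate whenever p * cost_fn > (1 - p) * cost_fp, i.e.
    p > cost_fp / (cost_fp + cost_fn)."""
    return cost_fp / (cost_fp + cost_fn)

# With misses 20x costlier than false alarms, even low-confidence
# detections should be escalated: the threshold drops to about 0.048.
t = alert_threshold(cost_fp=1.0, cost_fn=20.0)
```

This is exactly why a raw 50% confidence display misleads: under asymmetric costs the rational cut-off sits far below 0.5, so a well-calibrated but decision-unaligned score still drives analysts to dismiss alerts they should escalate.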
AI agents that build user interfaces on the fly assembling buttons, forms, and data displays from structured protocol payloads are becoming common in production systems. The trouble is that a payload can pass every schema check and still trick a user: a button might say "View invoice" while its hidden action wipes an account, or a display widget might quietly bind to an internal salary field. Current defenses stop at syntax; they were never built to catch this kind of behavioral mismatch. We built AegisUI to study exactly this gap. The framework generates structured UI payloads, injects realistic attacks into them, extracts numeric features, and benchmarks anomaly detectors end-to-end. We produced 4000 labeled payloads (3000 benign, 1000 malicious) spanning five application domains and five attack families: phishing interfaces, data leakage, layout abuse, manipulative UI, and workflow anomalies. From each payload we extracted 18 features covering structural, semantic, binding, and session dimensions, then compared three detectors: Isolation Forest (unsupervised), a benign-trained autoencoder (semi-supervised), and Random Forest (supervised). On a stratified 80/20 split, Random Forest scored best overall (accuracy 0.931, precision 0.980, recall 0.740, F1 0.843, ROC-AUC 0.952). The autoencoder came second (F1 0.762, ROC-AUC 0.863) and needs no malicious labels at training time, which matters when deploying a new system that lacks attack history. Per-attack-type analysis showed that layout abuse is easiest to catch while manipulative UI payloads are hardest. All code, data, and configurations are released for full reproducibility.
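As a consistency check, the Random Forest F1 reported above follows directly from the reported precision and recall via the standard harmonic-mean definition:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Recomputing the Random Forest F1 from the reported precision/recall.
score = f1(0.980, 0.740)   # rounds to 0.843, matching the reported value
```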
The integration of Edge Artificial Intelligence (Edge AI) into electric vehicles (EVs) represents a significant advancement in intelligent battery management systems (BMS). Traditional BMS approaches, while foundational, are increasingly inadequate in addressing the demands of real-time processing, complex degradation patterns, and the evolving requirements of modern EV architectures. This paper explores how Edge AI overcomes these limitations through low-latency data processing, high-accuracy predictions enabled by advanced machine learning models, and improved scalability and security. We examine the performance of models such as Support Vector Machines (SVM), Long Short-Term Memory networks (LSTM), and hybrid architectures in estimating key battery parameters like State of Charge (SOC) and State of Health (SOH). Further, we assess the role of emerging edge hardware technologies, including low-power AI chipsets and real-time operating systems, in enabling efficient deployment of intelligent BMS. Techniques such as sensor fusion and Vehicle-to-Everything (V2X) communication are also analyzed for their potential to enhance EV connectivity and responsiveness. The paper concludes by outlining critical challenges and future directions in developing lightweight AI models, secure data protocols, and sustainable battery lifecycle solutions, which are essential for the widespread adoption of safe, efficient, and intelligent electric vehicles.
In the post-silicon validation process, different functionalities and boundaries of a system-on-chip (SoC) are tested, generating a large amount of data in the form of oscilloscope images, trace data, and log files. Oscilloscope images are used to visualize and analyze the digital I/O signals and play a crucial role in detecting anomalies. However, debugging the oscilloscope images requires extensive manual data analysis, which is time-consuming, inefficient, costly, and prone to errors. This paper proposes an artificial intelligence (AI) model to automatically detect anomalies in the oscilloscope images. Our proposed model uses a Convolutional Autoencoder (CAE), a neural network, which we train on real silicon data obtained from various post-silicon validation projects. While autoencoders have been used for anomaly detection before, this is their first use for detecting anomalies in oscilloscope images in post-silicon validation. Moreover, while state-of-the-art techniques use Reconstruction Error (RCE) as the anomaly detection metric, we show that a combination of RCE and Kernel Density Estimation (KDE) error metrics greatly reduces false negatives (by 68%) for the anomalous category and improves the recall metric from 62% to 88%, making our approach 41% better. In addition, our proposed model achieves 99% precision in categorizing not-anomalous data points. Furthermore, the proposed model has been deployed in the production environment, significantly reducing human effort.
The rapid global rise of Electric Vehicles (EVs) poses a major challenge to traditional power grids and calls for smart, sustainable charging systems. This paper introduces an Energy Management System (EMS) for an EV charging station that works with renewable energy sources (RES), mainly solar panels (PV), and a Battery Energy Storage System (BESS). The main goal of the EMS is to effectively manage the power flow between the PV system, BESS, utility grid, and EVs. The system focuses on charging vehicles directly from solar energy to maximize the use of renewables; any extra solar power is stored in the BESS for future use when energy production is low or electricity prices are high. The EMS uses smart control algorithms that take into account real-time data such as solar irradiance, grid electricity prices, the state-of-charge (SoC) of the BESS, and the charging needs of EVs. By carefully scheduling when to charge and discharge, the EMS aims to lower operational costs, ease peak-load stress on the utility grid, and reduce the charging station's carbon footprint. This setup offers a reliable, affordable, and eco-friendly solution, which is essential for incorporating EVs into a sustainable transportation system. Key Words: Energy Management System, Electric Vehicles, Renewable Energy Sources, Photovoltaic, Battery Energy Storage System, EV Charging Station, Smart Grid, Optimization, Power Flow Control, Vehicle-to-Grid.
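The dispatch priority this abstract describes (solar first, then BESS, then grid, with price-aware discharging) can be sketched as a rule-based step function. This is a hypothetical illustration, assuming per-step kW values and made-up thresholds (`price_high`, `soc_min`, `soc_max`); a real EMS would solve an optimization over a forecast horizon rather than apply greedy rules.

```python
def dispatch(pv_kw, ev_demand_kw, bess_soc, grid_price,
             soc_min=0.2, soc_max=0.9, price_high=0.30, bess_rate_kw=10.0):
    """Decide power flows (kW) for one control step."""
    # 1) Serve EV demand directly from solar first.
    pv_to_ev = min(pv_kw, ev_demand_kw)
    surplus = pv_kw - pv_to_ev
    deficit = ev_demand_kw - pv_to_ev
    # 2) Store surplus solar in the BESS while below its SoC ceiling.
    pv_to_bess = min(surplus, bess_rate_kw) if bess_soc < soc_max else 0.0
    # 3) Discharge the BESS toward the EVs only when grid prices are high.
    bess_to_ev = 0.0
    if deficit > 0 and grid_price >= price_high and bess_soc > soc_min:
        bess_to_ev = min(deficit, bess_rate_kw)
    # 4) Whatever remains is drawn from the utility grid.
    grid_to_ev = deficit - bess_to_ev
    return {"pv_to_ev": pv_to_ev, "pv_to_bess": pv_to_bess,
            "bess_to_ev": bess_to_ev, "grid_to_ev": grid_to_ev}

# Sunny, cheap grid: EVs run on solar, surplus charges the BESS.
print(dispatch(pv_kw=15, ev_demand_kw=10, bess_soc=0.5, grid_price=0.10))
```

Flipping `grid_price` above `price_high` while `pv_kw` is low shifts the deficit from the grid to the BESS, which is exactly the peak-shaving behavior the paper targets.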
AI-Driven Cache Coherence Verification with Graph Neural Networks in SoC-Based Shared Memory Systems
No abstract available
This paper explores the intricate relationship between artificial intelligence (AI) and integrated circuit (IC) design. With the relentless evolution of technology, IC design faces escalating challenges in complexity, cost, and verification. AI emerges as a transformative force, offering solutions to these issues. The paper investigates AI's potential in IC design through a qualitative study involving domain experts. The findings reveal a consensus on AI's inevitability in the field. As ICs become more intricate, AI-driven automation becomes indispensable, though opinions on the traditional interpretation of Moore's Law diverge. The impact of AI varies across IC design domains, with certain areas benefiting significantly from automation while others rely on human expertise. AI is expected to play a pivotal role in synthesis, System-on-Chip (SoC), ASICs, and memory ICs, whereas domains like analog and photonics may experience less direct AI influence. Notably, AI is poised to revolutionize IC design verification, addressing mounting complexity. AI integration extends across essential IC design domains, including architecture, verification, validation, physical design, and manufacturing. AI-driven approaches target areas like physical verification, simulation, design for manufacturing (DFM) analysis, and cost reduction. Furthermore, AI's potential in quantum computing ICs is recognized, enhancing computing power and resource distribution efficiency. Crucially, AI is seen as a complement, not a replacement, for human expertise. The synergy between human creativity and AI-driven automation is deemed vital for achieving optimal IC design. This paper underscores the transformative potential of AI in IC design, offering insights into its role, challenges, and the enduring importance of human ingenuity.
Amid the rising demand for efficient processors, the challenge has always been to reduce power consumption without compromising performance. FinFET technology has significantly reduced leakage power issues, but dynamic power consumption at lower nodes has re-emerged as a concern. Methodologies such as power/clock gating, DVFS, and voltage biasing have been proposed in the past; however, next-generation complex SoC designs at <7 nm technology require rethinking at finer granularity. The power gating methodologies proposed by You et al. and Roy et al., and the PG-instr algorithm [1], reduce power consumption but have performance drawbacks, causing at least a 2% drop in performance in terms of wake-up latency. Moreover, You et al. and Roy et al. considered only idle time, while PG-instr focuses solely on energy savings; none of them balances idle time and energy savings together, which is crucial for achieving the best performance with low power consumption.
Internet-of-Things (IoT) drives the demand for artificial intelligence (AI) system-on-chips (SoCs) for vast always-on ultra-low power applications such as human action recognition (HAR) for surveillance systems, face detection (FD) and recognition (FR) for home security, etc. Previous AI-IoT SoCs still face limited system efficiency caused by the high leakage power of SRAMs, heavy external memory access (EMA), and frequent on-chip data transfer. The proposed ultra-low power RISC-V embedded AI-IoT SoC is composed of 1) a novel bit-line (BL) segmented coupled nvSRAM macro with switchable working modes (SRAM, non-volatile memory (NVM), and NVM computing-in-memory (CIM)), performing pre-charge reuse, power gating, and local data swapping; 2) a hot-silent encoded (HSE) uDMA cluster with 1 MB multi-bank eMRAM to reduce on-chip transmission power and eliminate EMA power; 3) an event-driven wake-up unit (EDWU) for skipping unnecessary inference; and 4) a RISC-V core with a dedicated ISA extension for the switchable working modes. The proposed SoC achieves an energy efficiency of 20.3–35.5 TOPS/W for ResNet-20 (fixed-point-8, FXP8) inference, a 2.82×–3.69× efficiency improvement compared to previous state-of-the-art (SOTA) AI-IoT SoCs.
Electric vehicle (EV) batteries gradually degrade, influencing safety, performance, and overall reliability. This study introduces an AI-enabled predictive maintenance framework designed to assess battery health and anticipate servicing needs. A physics-based simulation using PyBaMM generates multivariate time-series data, including voltage, current, temperature, State of Charge (SOC), and State of Health (SOH) across repeated cycles. SOC is estimated using an LSTM network, while SOH progression is modeled through a hybrid CNN-LSTM architecture capable of capturing both short-term variations and long-term aging trends. These predictions, along with engineered features, are evaluated by a Random Forest classifier to categorize battery condition into four states: Good, Monitor, Schedule Service, and Replace. Experimental analysis confirms that the deep-learning models offer reliable SOC and SOH predictions, and the classifier provides consistent maintenance decisions. The framework shows strong potential for real-time battery diagnostics in advanced EV Battery Management Systems.
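The final stage of the pipeline above, mapping model outputs to one of the four maintenance states, can be illustrated with a simple threshold rule standing in for the paper's Random Forest classifier. The SOH cut-offs and the trend feature below are assumptions chosen for illustration only.

```python
def maintenance_state(soh, soh_trend_per_100_cycles=0.0):
    """Map a predicted State of Health (0..1) and its recent trend to one
    of four maintenance decisions (thresholds are illustrative, not the
    paper's learned classifier)."""
    # A steep downward SOH trend pushes the battery into a more cautious state.
    degrading_fast = soh_trend_per_100_cycles < -0.02
    if soh >= 0.90 and not degrading_fast:
        return "Good"
    if soh >= 0.80:
        return "Monitor"
    if soh >= 0.70:
        return "Schedule Service"
    return "Replace"

print(maintenance_state(0.95))         # healthy and stable -> Good
print(maintenance_state(0.95, -0.05))  # healthy but fading fast -> Monitor
print(maintenance_state(0.72))         # Schedule Service
```

The trained classifier in the paper learns such decision boundaries from the engineered features rather than hard-coding them, which lets it weigh voltage, current, and temperature history alongside SOH.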
No abstract available. Int. J. Soc. Sc. Manage., Vol. 13, Issue 1: 1-2.
Accurate predictions of battery State of Health (SoH) and State of Charge (SoC) are essential for reliability, safety, and longevity in energy storage systems, especially for electric vehicles and smart grids. This hybrid framework enables the successive application of machine learning and deep learning models to SOC and SOH estimation. A blend of Linear Regression, XGBoost, Recurrent Neural Networks (RNN), Bi-directional LSTM, and a hybrid LSTM-GRU is proposed to capture as many temporal and non-linear patterns in battery behavior as possible. K-Best feature selection is used to enhance model generalization by keeping only the most important input features. In contrast to existing black-box models, the approach leverages explainable AI methods (SHAP and LIME) to explain model decisions and the magnitude of each feature's impact. Experiments on real-world battery datasets underline the superiority of the approach over traditional methods in terms of both accuracy and interpretability. The paper's novelty lies in its hybrid modeling architecture, an interpretable learning pipeline, and its joint consideration of predictive ability and interpretability. The framework has strong potential to drive innovation in intelligent battery management systems.
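The K-Best feature-selection step mentioned above can be approximated with a correlation-based score, a stdlib stand-in for something like scikit-learn's `SelectKBest`; the scoring choice, feature names, and data here are hypothetical.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def select_k_best(features, target, k):
    """Keep the k feature names whose |correlation| with the target is largest."""
    scores = {name: abs(pearson(col, target)) for name, col in features.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

features = {
    "voltage":      [4.2, 4.0, 3.8, 3.6],  # tracks SOH closely
    "sensor_noise": [1.0, 5.0, 2.0, 4.0],  # weakly related
}
soh = [1.0, 0.9, 0.8, 0.7]
print(select_k_best(features, soh, k=1))  # -> ['voltage']
```

Pruning weak features this way shrinks the input space the downstream RNN/LSTM models must fit, which is the generalization benefit the paper attributes to K-Best selection.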
The rapid adoption of artificial intelligence (AI) across regulated and mission-critical industries has redefined the strategic role of Technical Product Managers (TPMs) in architecting compliant, scalable, and resilient AI-powered infrastructures. This review develops a compliance-driven framework that positions TPMs at the intersection of systems engineering, AI lifecycle orchestration, and enterprise governance. The paper examines how TPMs translate high-level regulatory requirements such as GDPR, HIPAA, NDPR, SOC 2, and emerging AI safety standards into actionable product architecture decisions, spanning data ingestion pipelines, model training workflows, MLOps automation, and post-deployment monitoring. It details TPM responsibilities across the AI lifecycle, including dataset curation oversight, model risk assessment, explainability prioritization, security-by-design enforcement, and continuous compliance validation within CI/CD and ML pipeline environments. Additionally, the review analyzes the TPM's role in cross-functional alignment, emphasizing coordination with data scientists, ML engineers, security teams, legal/compliance units, and infrastructure architects to maintain traceability, audit readiness, and technical feasibility at scale. Using evidence from high-stakes operational contexts such as healthcare AI systems, fintech anti-fraud engines, and autonomous decision-support tools, the paper highlights emerging challenges and best practices for TPM leadership in managing model drift, data governance bottlenecks, adversarial risk, and lifecycle documentation. The proposed framework provides TPMs with structured guidance for designing AI-enabled infrastructures that are not only high-performance and cost-optimized, but also ethically aligned, regulation-aware, and resilient to evolving compliance and security requirements.
This report ultimately organizes AI for SOC research into six major dimensions. The core evolutionary path clearly traces the leap from foundational deep-learning detection algorithms, through SIEM/SOAR automation integration, to the current frontier of LLM- and agent-driven autonomous operations. The research focus has shifted from purely improving detection accuracy toward mitigating alert fatigue, strengthening the explainability of human-AI collaboration, and adapting to complex architectures such as cloud-native and zero-trust environments. In addition, the report strictly distinguishes Security Operations Center (SOC) research from the homonymous work on system-on-chip hardware and battery state-of-charge management (SoC/SOC), ensuring the rigor and focus of the survey.