Emergency Management and Automated Classification Scenarios
Monitoring and Emergency Response Systems for Major Natural Disasters
These works focus on using remote sensing, unmanned aerial vehicles (UAVs), satellite imagery, and deep learning algorithms for on-site damage assessment, affected-area identification, and automated classification of natural disasters such as floods, earthquakes, and landslides.
- Automated Building Damage Classification using Deep Learning for Efficient Disaster Response and Recovery(Ch.Upendar Rao, B. Avinash, B. Roshini, Y. Kumar, M. Rishi, Sai Koushik, 2025, 2025 8th International Conference on Computing Methodologies and Communication (ICCMC))
- Patch Size Comparison of U-Net Deep Learning Model for Landslide Hazard Area Detection Using Drone Imagery(Chae Yeon Oh, Kye Won Jun, 2025, Crisis and Emergency Management: Theory and Praxis)
- Disaster Response Using Drones(T. Neha, Kavya Panchati, Vaishnavi Mallichetty, Sarika Boyapaty, M. Kausar, 2024, Nanotechnology Perceptions)
- Rapid earthquake damage assessment via hybrid LSTM-RNN with a quantum-inspired classification head based on Autonomous Perceptron Model APM(A. Alotaibi, Sattam Alharbi, Ahmed M. Elshewey, 2026, Scientific Reports)
- Automated building damage classification for the case of the 2010 Haiti earthquake(D. Dubois, R. Lepage, 2013, 2013 IEEE International Geoscience and Remote Sensing Symposium - IGARSS)
- Deep Learning Based Flood Severity Detection Using UAV Images(N. Reddy, Shaik Dileep, Jai Vardhan, J. S, Ansuman Mahapatra, 2025, 2025 International Conference on Artificial intelligence and Emerging Technologies (ICAIET))
- Deploying Rapid Damage Assessments from sUAS Imagery for Disaster Response(Thomas Manzini, Priyankari Perali, Robin R. Murphy, 2025, AAAI Conference on Artificial Intelligence)
- Landslide Assessment Classification Using Deep Neural Networks Based on Climate and Geospatial Data(Yadviga Tynchenko, Vladislav Kukartsev, V. Tynchenko, Oksana Kukartseva, Tatyana Panfilova, Alexey Gladkov, Van Nguyen, I. Malashin, 2024, Sustainability)
- Collaborative online planning for automated victim search in disaster response(Zoltán Beck, W. L. Teacy, A. Rogers, Nicholas R. Jennings, 2018, Robotics and Autonomous Systems)
- Deep-Learning-Based Aerial Image Classification for Emergency Response Applications Using Unmanned Aerial Vehicles(C. Kyrkou, T. Theocharides, 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW))
- LADOS: Aerial Imagery Dataset for Oil Spill Detection, Classification, and Localization Using Semantic Segmentation(Konstantinos Gkountakos, Maria Melitou, Konstantinos Ioannidis, Kostas Demestichas, S. Vrochidis, Y. Kompatsiaris, 2025, Data)
- Advancing Flood Disaster Risk Mapping Through Multi‐Sensor Fusion and Machine Learning(Jijing Sun, Rana Waqar Aslam, Dhouha Choukaier, Iram Naz, Danish Raza, N. J. Kavhiza, Yahia Said, 2026, Transactions in GIS)
- Urban-Hybrid-CDQNet: A Unified Deep Learning Framework for Semantic Change Detection and Quantification in Urban Monitoring and Disaster Response(Israa El Rifahi, Hussein F. Nasrallah, Hadi Noureddine, Ahmad Kobeissi, 2025, 2025 37th International Conference on Microelectronics (ICM))
- Research on natural flood disaster prediction and risk assessment based on data analysis and machine learning(Baonian Li, Xin Wang, 2025, 2025 International Conference on Artificial Intelligence and Engineering Management (ICAIEM))
- Low-AoI Data Collection for UAV-Assisted IoT With Dynamic Geohazard Importance Levels(Xiuwen Fu, Tianle Wang, P. Pace, G. Aloi, G. Fortino, 2025, IEEE Internet of Things Journal)
- UrbanFloodKG: An Urban Flood Knowledge Graph System for Risk Assessment(Yu Wang, Feng Ye, Binquan Li, Gaoyang Jin, Dong Xu, Feng Li, 2023, Proceedings of the 32nd ACM International Conference on Information and Knowledge Management)
- Deep Learning for Disaster Detection: A Framework for Automated Multimodal Data Classification(Nandhakumar C, M. J, M. S, Naveenkumar G, Sudharson T, 2025, 2025 6th International Conference on Mobile Computing and Sustainable Informatics (ICMCSI))
- Automating Building Damage Reconnaissance to Optimize Drone Mission Planning for Disaster Response(Da Hu, Shuai Li, Jing Du, Jiannan Cai, 2023, Journal of Computing in Civil Engineering)
- Automated Hurricane Damage Classification for Sustainable Disaster Recovery Using 3D LiDAR and Machine Learning: A Post-Hurricane Michael Case Study(Jackson Kisingu Ndolo, Ivan Oyege, Leonel E. Lagos, 2025, Sustainability)
- Creating xBD: A Dataset for Assessing Building Damage from Satellite Imagery(Ritwik Gupta, B. Goodman, Nirav Patel, Richard Hosfelt, Sandra Sajeev, Eric T. Heim, Jigar Doshi, Keane Lucas, H. Choset, Matthew E. Gaston, 2019, Carnegie Mellon University)
- Heat wave hazard classification and risk assessment using artificial intelligence fuzzy logic(I. Keramitsoglou, C. Kiranoudis, B. Maiheu, K. Ridder, I. Daglis, P. Manunta, M. Paganini, 2013, Environmental Monitoring and Assessment)
- Timing is Everything - Drought Classification for Risk Assessment(V. Graw, Gohar Ghazaryan, Jonas Schreier, Javier González, A. Abdel-Hamid, Y. Walz, Karen Dall, J. Post, A. Jordaan, O. Dubovyk, 2018, IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium)
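The patch-based damage-assessment pipelines above (e.g., the U-Net patch-size study and the building-damage classifiers) share a common preprocessing step: tiling a large aerial or satellite image into fixed-size patches and labeling each patch. Below is a minimal Python sketch of that step; the mean-intensity threshold stands in for a trained CNN/U-Net head, and all sizes, thresholds, and data are illustrative assumptions, not any paper's implementation:

```python
import numpy as np

def extract_patches(image, patch_size):
    """Tile an H x W image into non-overlapping square patches,
    discarding any partial patches at the right/bottom edges."""
    h, w = image.shape[:2]
    patches, coords = [], []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patches.append(image[y:y + patch_size, x:x + patch_size])
            coords.append((y, x))
    return patches, coords

def classify_patch(patch, threshold=0.5):
    """Placeholder for a trained network: flag a patch as 'damaged'
    when its mean normalized intensity exceeds a toy threshold."""
    return "damaged" if patch.mean() > threshold else "intact"

# Toy 64x64 'aerial image' with one bright (hypothetically damaged) quadrant.
img = np.zeros((64, 64))
img[:32, :32] = 1.0
patches, coords = extract_patches(img, patch_size=32)
labels = [classify_patch(p) for p in patches]
print(list(zip(coords, labels)))
```

In a real pipeline, the per-patch decision would come from a model trained on a labeled dataset such as xBD, and patch size itself is a tuning parameter, which is exactly what the drone-imagery patch-size comparison above investigates.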
Medical Emergency Decision-Making and Real-Time Intelligent Personal Health Monitoring
This group focuses on smart healthcare and personal care, covering wearable devices and IoT sensing for fall warning and vital-sign monitoring, as well as machine learning for clinical decision support, patient triage, and ICU risk prediction.
- IoT-Enabled Glove for Women’s Safety and Sign Language Translation(B. Bairwa, Seema Magadum, Tushar C. Salian, Vivek M. Bagali, S. Y., Matam Guna Sekhar, 2025, 2025 International Conference on Intelligent Computing and Knowledge Extraction (ICICKE))
- A Smart Wearable-Based Fall Detection and Health Monitoring System for Elderly Care Using IoT and Machine Learning(G. Santhanamari, S. Choudhary, Sankara Malai Mohan Ps, Gurusharan, V. L, 2025, 2025 International Conference on Next Generation Computing Systems (ICNGCS))
- Integrated Implementation of Hybrid Deep Learning Models and IoT Sensors for Analyzing Solider Health and Emergency Monitoring(S. Usharani, R. Rajmohan, P. Bala, D. Saravanan, P. Agalya, D. Raman, 2022, 2022 International Conference on Smart Technologies and Systems for Next Generation Computing (ICSTSN))
- Design and Implementation of an IoT-based Emergency Alert and GPS Tracking System using MQTT and GSM/GPS Module(P. Chinnasamy, Pranavdev P. S, Chintha Sivakrishnaiah, T. Sathiya, Irfan Alam, Divya Priya Degala, 2025, 2025 5th International Conference on Trends in Material Science and Inventive Materials (ICTMIM))
- SIPMS: IoT based Smart ICU Patient Monitoring System(Srinivasan C, V. R, Konireddy Sreelatha, Sakthipriya V, 2023, 2023 International Conference on Artificial Intelligence and Knowledge Discovery in Concurrent Engineering (ICECONF))
- Elderly Healthcare IoT through Data Analytics and Artificial Intelligence(R. Zamare, 2024, 2024 IEEE 4th International Conference on ICT in Business Industry & Government (ICTBIG))
- Developing IoT-LoRaWAn Ambulance Tracking System to Enhance Emergency Response(Nur Hayati, Dena Arifanto, Kunnu Purwanto, Magfirawaty, R. D. Mardian, 2024, 2024 International Conference on Information Technology and Computing (ICITCOM))
- Automated Risk Assessment of COVID-19 Patients at Diagnosis Using Electronic Healthcare Records(Felipe O. Giuste, Lawrence L. He, M. Isgut, Wenqi Shi, B. Anderson, May D. Wang, 2021, 2021 IEEE EMBS International Conference on Biomedical and Health Informatics (BHI))
- Lightweight Multi-Path CNN with Multi-Scale Feature Fusion for Interpretable Classification of Intracerebral Hemorrhage in CT Images(B. N. Rao, S. Sabut, 2025, 2025 International Conference on Cognitive, Green and Ubiquitous Computing (IC-CGU))
- Exploring Mortality and Prognostic Factors of Heart Failure with In-Hospital and Emergency Patients by Electronic Medical Records: A Machine Learning Approach(Cheng-Sheng Yu, Jenny L Wu, Chun-Ming Shih, Kuan-Lin Chiu, Yu-Da Chen, Tzu-Hao Chang, 2025, Risk Management and Healthcare Policy)
- Deep Learning for Ultrasound-Based Auxiliary Diagnosis of Emergency Ascites(Yaoting Wang, R. Ye, Qiwen Cai, Yingnan Wu, Liping Gong, Zhe Li, Zhen-zhu Sun, Yiying Ben, 2026, Ultrasound in Medicine & Biology)
- Cloud-based Digital Twin Framework and IoT for Smart Emergency Departments in Hospitals(Haider Q. Mutashar, Sawsan M. Mahmoud, Hiba A. Abu-Alsaad, 2025, Engineering, Technology & Applied Science Research)
- Intelligent Public Health Platform: A Multi-Stage Operational Process for Emergency Prevention and Health Management(L. Chen, Rudan Lin, Xiaopeng Li, Xiaopan Ding, 2025, International Journal of Electric Power and Energy Studies)
- AI-Enabled Ambulance Coordination System for Emergency Healthcare(Indira K.R, G. G, Pon Jeyashree V J, A. R, Akshaya B, 2025, 2025 IEEE 9th International Conference on Information and Communication Technology (CICT))
- A Hybrid Attention-Based Framework for Lung Disease Classification using Chest X-Ray Images(G. Suresh, Cheedella Kathyayani, Jamjam Mournitha, B. Nandini, Goli Manojkumar, 2026, International Journal of Data Science and IoT Management System)
- A QoS-Aware IoT Edge Network for Mobile Telemedicine Enabling In-Transit Monitoring of Emergency Patients(Adwitiya Mukhopadhyay, Aryadevi Remanidevi Devidas, Venkat Rangan, M. Ramesh, 2024, Future Internet)
- Enabling Precision Medicine With Digital Case Classification at the Point-of-Care(Patrick E. Obermeier, Susann Muehlhans, Christian Hoppe, Katharina Karsch, Franziska Tief, Lea D Seeber, Xi Chen, Tim Conrad, Sindy Boettcher, S. Diedrich, B. Rath, 2016, EBioMedicine)
- A multi-task deep learning pipeline for classification, detection, and weakly supervised 3D segmentation of intraparenchymal hematoma on brain CT(Cheng-En Juan, Hikam Muzakky, Chia-Ching Chang, Pen-Lin Chou, Ya-Hui Li, Tung-Yang Lee, Cheng Juan, Ming-Ting Tsai, Chun-Wen Chen, Chun-Jung Juan, 2026, Scientific Reports)
- Internet of Things (IoT)-Based Smart Healthcare System for Efficient Diagnostics of Health Parameters of Patients in Emergency Care(A. Balasundaram, Sidheswar Routray, A. Jerwin Prabu, P. Krishnan, Prince Priya Malla, Moinak Maiti, 2023, IEEE Internet of Things Journal)
- An Explainable IoT-based Framework for Anomaly Detection and Emergency Decision Management in Smart Healthcare(S. Divyabharathi, S. Madhavan, 2025, 2025 3rd International Conference on Sustainable Computing and Data Communication Systems (ICSCDS))
- Automated ECG Report as a Factor in the Clinical Decision Pathway for Acute Chest Pain in the Emergency Department(Ashok Kumar Sankaranarayanan, Firas AlNajjar, Anas Musa, M. W. Kuthbudeen, Afrah Ghayoor Abdul Wahab, 2026, Cureus)
- Application of Interpretable Machine Learning Algorithms to Predict Acute Kidney Injury in Patients with Cerebral Infarction in ICU(Xiaochi Lu, Yi Chen, Gongping Zhang, Xu Zeng, Linjie Lai, Chaojun Qu, 2024, Journal of Stroke and Cerebrovascular Diseases)
- Early Risk Prediction for Emergency Department Triage using Machine Learning Techniques(B. Lalithadevi, J. S. Prasanna, JG.Sutha Sri, 2026, 2026 International Conference on AI-Driven Smart Systems and Ubiquitous Computing (ICAUC))
- Human Centric Cloud Based Portable ICU for Advance Assistance System(Pooja Pimpalshende, A. Bagde, Aditya Atkare, Aman Parve, Deepak Khambalkar, 2025, International Journal on Advanced Computer Engineering and Communication Technology)
- Deep Learning Approaches for Multilabel Classification of Brain Hemorrhages in CT Imaging(Anusha Daivajnya, V. B, 2025, International Journal of Scientific Research in Engineering and Management)
- POTTER-ICU: An artificial intelligence smartphone-accessible tool to predict the need for intensive care after emergency surgery(A. Gebran, Annita Vapsi, L. Maurer, M. El Moheb, L. Naar, Sumiran S. Thakur, Robert D. Sinyard, D. Daye, G. Velmahos, D. Bertsimas, H. Kaafarani, 2022, Surgery)
- Machine Learning-Augmented Triage for Sepsis: Real-Time ICU Mortality Prediction Using SHAP-Explained Meta-Ensemble Models(Hülya Yılmaz Başer, Turan Evran, Mehmet Akif Cifci, 2025, Biomedicines)
- Implementing IoT Technologies for Improved Patient Outcomes and Faster Response Times in Emergency Care: The Internet of Healthcare Things (IoHT)(B. Dhevi, V. Pandi, R. Deepa, G. Karthikeyan, Sravanthi, M. Y. Al-Safarini, 2025, 2025 IEEE International Conference on Blockchain and Distributed Systems Security (ICBDS))
- Predictive Modelling of Critical Vital Signs in ICU Patients by Machine Learning: An Early Warning System for Improved Patient Outcomes(S. S, Kumaragurubaran T, V. S R, Vigneshwaran R, 2024, 2024 3rd International Conference for Innovation in Technology (INOCON))
- MediServe: An IoT-Enhanced Deep Learning Framework for Personalized Medication Management for Elderly Care(Smita Kapse, G. Yenurkar, V. Nyangaresi, Gunjan Balpande, Shravani Kale, Manthan Jadhav, Sahil Lawankar, Vikrant Jaunjale, 2025, Computers, Materials & Continua)
- SETU: Revolutionizing Emergency Medical Response(S. Mathur, Ujjawal Mishra, Anurag Gutte, Keerthikrishna Jog, 2025, 2025 IEEE 5th International Conference on ICT in Business Industry & Government (ICTBIG))
- An IoT-Enabled Health Monitoring and Emergency Hospital Navigation System Using Disease Classification and Predictive Modelling(S. Delsi Robinsha, B. Amutha, D. Vanusha, A. Rege, 2025, 2025 2nd International Conference on Computing and Data Science (ICCDS))
- Prediction of Respiratory Tract Infections Using IoT and RNN Techniques(Latika Pinjarkar, S. Sagayamary, R. P., Sivaprasad Lebaka, Porandla Srinivas, Rajendar Sandiri, Jayabharathi Ramasamy, Srinivasan C., 2025, Engineering, Technology & Applied Science Research)
- Developing an IoT Based Wheelchair: Biomedical Data Logging & Emergency Contingency Services(Tahmidul Ashraf, Nadia Islam, Shanto Lawrence Costa, Md. Shamsul Arefin, A. Azad, 2021, 2021 IEEE International Conference on Consumer Electronics (ICCE))
- Wireless Sensor Networks for fall incident detection: a smart wearable approach using Kalman Filter and k-NN with LoRa WAN, Node Red, and Telegram integration(Moh Samudro, Ahmad Firdausi, G. Hakim, Umaisaroh Umaisaroh, 2025, International Journal of Electronics and Telecommunications)
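Several of the wearable fall-detection systems above (e.g., the Kalman-filter/k-NN LoRaWAN approach) reduce the problem to classifying the resultant acceleration magnitude: falls show a sharp impact peak, while daily activities stay near 1 g. A toy sketch of that idea, using synthetic 1-D features and invented training labels rather than any published dataset:

```python
import math

def magnitude(sample):
    """Resultant acceleration from a 3-axis accelerometer reading (in g)."""
    ax, ay, az = sample
    return math.sqrt(ax * ax + ay * ay + az * az)

def knn_classify(feature, train, k=3):
    """1-D k-NN over acceleration-magnitude features; labels are
    'fall' / 'adl' (activity of daily living)."""
    dists = sorted((abs(feature - m), label) for m, label in train)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# Hypothetical training set: falls peak near 3 g, normal motion near 1 g.
train = [(1.0, "adl"), (1.1, "adl"), (0.9, "adl"),
         (2.8, "fall"), (3.2, "fall"), (3.0, "fall")]

reading = (0.2, 2.9, 1.1)   # simulated impact sample
print(knn_classify(magnitude(reading), train))
```

A deployed system would add filtering (e.g., a Kalman filter) before feature extraction and trigger an IoT alert on a "fall" decision; this sketch only shows the classification core.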
Public Transportation, Facility Safety, and Industrial Environment Monitoring
For critical public infrastructure, these studies apply computer vision, IoT, and sensor networks to traffic collision detection, fire early warning, industrial equipment condition monitoring, and cybersecurity incident response.
- An Intelligent IoT based Advanced Accident Detection and Sensor Fusion Categorization System(K. Pradeep, P. Tamilvani, P. Palanisamy, M. Mohammadha Hussaini, S. Ragul, N. Selvam, 2023, 2023 2nd International Conference on Automation, Computing and Renewable Systems (ICACRS))
- IoT-Based Car Safety System With Airbag Notification for Emergency Assistance(Dr. L. Ramalingam, Dr. Umamagewaran Jambulingam, Dr. S. Muthumarilakshmi, N. Malathi, M. Venkatesh, 2023, 2023 Second International Conference On Smart Technologies For Smart Nation (SmartTechCon))
- Portable Smart Emergency System Using Internet of Things (IOT)(Batool Jamal, Muneera Alsaedi, Parag Parandkar, 2023, Mesopotamian Journal of Big Data)
- Vehicle Sound Recognition Assistance in IoT Systems for Hearing-Impaired Drivers(Osman Salem, A. Mehaoua, R. Boutaba, 2025, IEEE Internet of Things Magazine)
- Human Scream Detection and Analysis for Controlling Crime Rate(Saikumar Birru, 2025, International Journal of Scientific Research in Engineering and Management)
- Ground-Based Cloud Type Classification for Aviation Weather Hazard Detection Using Deep Learning(M. Cote, M. Splitt, S. Lazarus, Ryan T. White, Cecilia G. Baker, 2025, IEEE Access)
- A Deep Learning-Based Fire Detection System for Ships Using CNN-LSTM Networks(Van Nguyen Thanh, Hung Nguyen Van, 2025, 2025 International Russian Automation Conference (RusAutoCon))
- A Surveillance Camera Based Fire Detection and Localization(R. Deepika, S. M., V. K, Skanda Ganesh G, 2025, 2025 International Conference on Computing and Communication Technologies (ICCCT))
- A Novel Approach for Emergency Vehicle Detection(S. Shrivastava, Vaidehi Vinchurkar, S. Raghuram, Nupur Agarwal, P. H. Prasad, 2023, 2023 IEEE 20th India Council International Conference (INDICON))
- Internet of Things-Based Accident Detection and Hazard Response System for Intelligent Transportation(Anis ur Rehman, Mohammad J. Sanjari, Bo Du, Junwei Lu, 2025, 2025 IEEE 28th International Conference on Intelligent Transportation Systems (ITSC))
- AI-Powered System Cybersecurity Operations and Incident Response(Pranita Chaudhary, Varad Sawant, Anurag Karpe, Lokesh Kad, Pratham Kubetkar, 2025, 2025 9th International Conference on Computing, Communication, Control and Automation (ICCCBEA))
- Edge-Collaborative Intelligent Monitoring System for Distribution IoT: An Integrated Design for Multi-Source Data Processing and Bayesian Fault Early Warning(Yunhai Song, Xianbiao Chen, Liwei Wang, Dianrui Yu, Yaohui Xiao, Wenrong Li, Junsong Yu, 2025, 2025 International Conference on Big Data and Data Mining (BDDM))
- Integrated Traffic Incident Classification using SegFormer and Faster R-CNN: A Multi-Stage Approach for Enhanced Detection and Analysis(Sankar Ganesh Karuppasamy, Sajeev Ram Arumugam, S. P, Divya Muralitharan, T. S., G. K, 2025, 2025 3rd International Conference on Sustainable Computing and Data Communication Systems (ICSCDS))
- From Traffic Analysis to Real-Time Management: A Hazard-Based Modeling for Incident Durations Extracted Through Traffic Detector Data Anomaly Detection(D. Pan, Samer H. Hamdar, 2023, Transportation Research Record: Journal of the Transportation Research Board)
- Real-Time Traffic Monitoring and Ambulance Prioritization Using YOLOv9 and Deep Learning(P. Kaladevi, B. M, G. D, G. S, 2025, 2025 3rd International Conference on Communication, Security, and Artificial Intelligence (ICCSAI))
- AI-Driven Pedestrian and Accident Detection with Real-Time Emergency Response and Safety System Using IoT(A. S. Mahajan, P. Deshmukh, 2025, 2025 1st International Conference on Data Science and Intelligent Network Computing (ICDSINC))
- TAERM: Traffic accident emergency response management framework for detection and classification using IoT and YOLOv9(Ayman Noor, Hanan Almukhalfi, Talal H. Noor, R. Ranjan, 2026, Future Generation Computer Systems)
- Enhancing Sustainable Transportation Infrastructure Management: A High-Accuracy, FPGA-Based System for Emergency Vehicle Classification(Pemila Mani, Pongiannan Rakkiya Goundar Komarasamy, N. Rajamanickam, Mohammad Shorfuzzaman, W. Abdelfattah, 2024, Sustainability)
- Traffic Collision Detection Using DenseNet(Daniel Kaluza, Marco Seiler, Rasha Kashef, 2023, 2023 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM))
- An IoT Enabled Vehicular Decision Fusion Framework for Accident Detection and Classification(Nikhil Kumar, D. Acharya, Divya Lohani, 2020, Int. J. Next Gener. Comput.)
- Emergency vs Non-Emergency Vehicle Classification: Enhancing Intelligent Traffic Management Systems(M. S, Kishan Shetty, Ashwini Kodipalli, Trupthi Rao, K. S, 2023, 2023 International Conference on Network, Multimedia and Information Technology (NMITCON))
- An IoT-Based Vehicle Accident Detection and Classification System Using Sensor Fusion(Nikhil Kumar, D. Acharya, Divya Lohani, 2021, IEEE Internet of Things Journal)
- Advanced Traffic Incident Detection and Classification with Real-Time Computer Vision(Kolhe Parag Namdeo, Dr. Rajesh Keshavrao Deshmukh, 2024, International Journal of Scientific Research in Science and Technology)
- Green Lights Ahead: An IoT Solution for Prioritizing Emergency Vehicles(Soham Methul, Saket Kaswa, 2023, Journal of Ubiquitous Computing and Communication Technologies)
- Privacy Preserving Image Encryption with Optimal Deep Transfer Learning Based Accident Severity Classification Model(U. Sirisha, B. Chandana, 2023, Sensors)
- Resource-aware On-device Deep Learning for Supermarket Hazard Detection(M. G. Sarwar Murshed, James J. Carroll, Nazar Khan, Faraz Hussain, 2020, 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA))
- Smart Buildings and Digital Twin to Monitoring the Efficiency and Wellness of Working Environments: A Case Study on IoT Integration and Data-Driven Management(G. Piras, Sofia Agostinelli, F. Muzi, 2025, Applied Sciences)
- Automated Drilling and Production Event Detection Using Advanced Time-Series Pattern Recognition Techniques(Abraham C. Montes, K. Sudyodprasert, Yuxing Wu, P. Ashok, E. van Oort, 2025, SPE/IADC International Drilling Conference and Exhibition)
- Intelligent IoT Traffic Classification Using Novel Search Strategy for Fast-Based-Correlation Feature Selection in Industrial Environments(Santiago Egea, Albert Rego Mañez, B. Carro, Antonio J. Sánchez-Esguevillas, Jaime Lloret, 2018, IEEE Internet of Things Journal)
- Live Event Detection For Public Safety Using Sparse LSTM Networks In Hazard Monitoring Systems(Pooja Barve, S. Bere, Dinesh Hanchate, 2025, International Journal For Multidisciplinary Research)
- Optimizing Laboratory Security with IoT-Based Emergency Monitoring(Erika Loniza, Kurnia Chairunnisa, Febryadi Mokodompis, Sigit Widadi, Irvan Eko Kris Maryanto, Bambang Untara, 2025, 2025 International Conference on Advancement in Data Science, E-learning and Information System (ICADEIS))
- ToN_IoT: The Role of Heterogeneity and the Need for Standardization of Features and Attack Types in IoT Network Intrusion Data Sets(Tim M. Booij, Irina Chiscop, Erik Meeuwissen, Nour Moustafa, F. D. Hartog, 2021, IEEE Internet of Things Journal)
- SakhiSuraksha: an AI and IoT-Based Intelligent Emergency Response System for Women's Safety(Mallegowda M, S. S, Yash Suresh Gavas, Venella Rudraraju, Anita Kanavalli, 2026, 2026 International Conference on Intelligent and Innovative Technologies in Computing, Electrical and Electronics (IITCEE))
- IoT and AI-Enhanced Autonomous Fire-Fighting Robot with Real-Time Hazard Monitoring(A. P, S. Ravindran, Suresh Sundaram, Faiz Mohammad Karobari, K. E, M. K., 2025, 2025 9th International Conference on Computational System and Information Technology for Sustainable Solutions (CSITSS))
- A Comprehensive Model for Human Factor Risk Assessment: HFACS-FFT-ANN(Yu Liu, Yang Liu, Xiaoxue Ma, Weiliang Qiao, 2019, Proceedings of the 5th Annual International Conference on Management, Economics and Social Development (ICMESD 2019))
- Real-Time AI-Driven Hazard Detection: Integrating Computer Vision and Sensor Networks for Enhanced Mining Safety(Vivekananda Reddy Uppaluri, 2025, International Journal of Scientific Research in Computer Science, Engineering and Information Technology)
- MAC Scheduling and Traffic Control Prioritization with Object Detection Using Artificial Intelligence in IoT-SDN based Smart Agriculture(K. Malathi, T. S. Murthy, M. Y. Al-Safarini, Sameer Kumar, S. N. Kumar, Vijilius Helena Raj, 2025, 2025 International Conference on Recent Innovation in Science Engineering and Technology (ICRISET))
- Smart-Sec: DL-based Cyber Threat Detection for Autonomous Smart Home System to Enhance Human Life Expectancy(Naman Jain, Manas Patel, Fenil Ramoliya, Rajesh K. Gupta, Sudeep Tanwar, Deepak Garg, 2024, 2024 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT))
- Bus-Centric Temporal Graph Neural Network Framework for Fault Localization and Risk Profiling Using PMU Time Series Data(Kunal Samad, Arunkumar Patil, Amarendra Matsa, 2025, 2025 13th International Conference on Smart Grid (icSmartGrid))
- Prototype of an Emergency Response System Using IoT in a Fog Computing Environment(Iván Ortiz-Garcés, R. Andrade, Santiago Sánchez-Viteri, W. Villegas-Ch., 2023, Computers)
- AI-Based Detection of Fire and Smoke in Healthcare Facilities(Ali Rhayem, Mostafa Rizk, Jad Abou Chaaya, Abbass Nasser, 2025, 2025 IEEE International Conference on Emerging Trends in Engineering and Computing (ETECOM))
- Real-Time IoT enabled Vehicle Collision Detection and Emergency Response System(S. Bhuvaneswari, Hanushka Srinivasan, J. G., Srihari M M, Thamizh Mullai R A, Natchatraa V, 2025, 2025 International Conference On Emerging Computation and Information Technologies (ICECIT))
- Transforming Incident Management: Leveraging Artificial Intelligence for Enhanced Detection, Classification, and Resolution(Naveen Kumar Chandu, 2025, Journal of Computer Science and Technology Studies)
- Smart Bicycle Helmet for Rider Safety with Accident Detection and Turn Indicators(S. S, N. N, Vijendra Babu D, 2025, 2025 International Conference on Emerging Technologies in Computing and Communication (ETCC))
- Cyber-Physical Systems in Disaster Management: Real-Time Data Collection and Analysis for Improved Response(Pavan Chaudhary, Pachayappan R, Ishika Soni, Anmol Pattanaik, Pradeep Marwaha, Gadug Sudhamsu, 2025, 2025 International Conference on Automation and Computation (AUTOCOM))
- The Social Impact of IoT in Disasters(Haoming Xiang, Take Itagaki, 2025, 2024 International Conference on IT Innovation and Knowledge Discovery (ITIKD))
- Developing real-time IoT-based public safety alert and emergency response systems(Han Zhang, Runze Zhang, Jiamanzhen Sun, 2025, Scientific Reports)
- Deep Learning Enabled Smart Surveillance System for Accident Severity Classification and Emergency Response Optimization(M. Revathi, J. Gold, Beulah Patturose, 2025, 2025 International Conference on Recent Innovation in Science Engineering and Technology (ICRISET))
- An IoT System for Social Distancing and Emergency Management in Smart Cities Using Multi-Sensor Data(R. Fedele, M. Merenda, 2020, Algorithms)
- Artificial intelligence integration in cyber incident response teams to enable faster containment, forensic accuracy, and resilient business continuity(Kwaku Boamah, A. Asante, Ashley Tieman, Kwadwo Fining Okai, 2025, International Journal of Science and Research Archive)
- Integrated Image Processing and Gas Sensing for Enhanced Hazard Detection.(Prateek Buthale, Vaishali Savale, Aditya Suryawanshi, Omprakash Suryawanshi, Siddhesh Waghmare, 2024, International Journal For Multidisciplinary Research)
- An Emergency Rescue Framework through Smart IoT LPWAN(K. Jain, H. Saini, 2023, 2023 International Conference on Advancement in Computation & Computer Technologies (InCACCT))
- IoT-Based Vibration Sensor Data Collection and Emergency Detection Classification using Long Short Term Memory (LSTM)(C. I. Nwakanma, F. Islam, Mareska Pratiwi Maharani, Dong-Seong Kim, Jae-Min Lee, 2021, 2021 International Conference on Artificial Intelligence in Information and Communication (ICAIIC))
- Design and Development of an Intelligent Quadruped Rescue Robot for Disaster Response(Meng Yuan, 2025, 2025 5th International Conference on Sensors and Information Technology)
- AI-Based Security Surveillance and Hazard Detection for Train Platform Safety(Alvaro Aparicio Serna, Xinrui Yu, J. Saniie, 2024, 2024 IEEE International Conference on Electro Information Technology (eIT))
- IoT based smart emergency response system (SERS) for monitoring vehicle, home and health status(A. S. Mohsin, Munyem Ahammad Muyeed, 2024, Discover Internet of Things)
- Automated Anomaly Detection and Threat Classification in Network Traffic(Krishnaja Venkata Naga Sri Lasya P, Munisaiteja Sharan Tupakula, Naresh Sammeta, 2025, 2025 2nd International Conference on Research Methodologies in Knowledge Management, Artificial Intelligence and Telecommunication Engineering (RMKMATE))
- Fault-Tolerant Scheduling of Heterogeneous UAVs for Data Collection of IoT Applications(Hui Yan, Weidong Bao, Xiaoqing Li, Xiaomin Zhu, Yaohong Zhang, Ji Wang, Ling Liu, 2024, IEEE Internet of Things Journal)
- Real-time incident reporting and intelligence framework: Data architecture strategies for secure and compliant decision support(Shamnad Mohamed Shaffi, Jezeena Nikarthil Sidhick, 2025, World Journal of Advanced Research and Reviews)
- Near-Incident Detection in Railroad Environments: Lateral Distance Estimation from Train-Mounted Monocular Camera(Yilei Wang, Giacomo D'Amicantonio, Egor Bondarev, 2025, 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW))
- Heuristic Optimal Scheduling for Road Traffic Incident Detection Under Computational Constraints(Hao Wu, Jiahao Yang, Ming Yuan, Xin Li, 2024, Sensors)
- IoT-Enabled Smart Helmet and Wearable System with Real-Time Sleep Monitoring, Alcohol Detection, and Emergency Alerting Using AI(Nikkil Nishanth. S, Kameshwaran. S, Shyam. J, Adchaya Saran. P, L.Hubert Mary, 2025, 2025 2nd International Conference on Artificial Intelligence and Knowledge Discovery in Concurrent Engineering (ICECONF))
- Disturbance Classification and Hybrid Simulation Analysis for Extreme Events(Yurou Jiang, Kai Jiang, Zhongliang Han, Nian Liu, Jie Huang, Zhen Wang, 2025, 2025 IEEE 9th Conference on Energy Internet and Energy System Integration (EI2))
- Detecting Human Trafficking: Automated Classification of Online Customer Reviews of Massage Businesses(Ruoting Li, Margaret Tobey, M. Mayorga, Sherrie Caltagirone, Osman Y. Özaltın, 2023, Manufacturing & Service Operations Management)
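The vehicular sensor-fusion classifiers above typically combine several in-vehicle signals (impact force, speed change, airbag state) into a severity class that drives the emergency response. A rule-based toy version of such fusion; the thresholds and score bands are invented for illustration, whereas the cited systems learn them from crash data:

```python
def classify_severity(impact_g, speed_drop_kmh, airbag_deployed):
    """Fuse three in-vehicle signals into a coarse severity class
    via an additive score (all cut-offs are illustrative)."""
    score = 0
    score += 2 if impact_g > 4.0 else (1 if impact_g > 2.0 else 0)
    score += 2 if speed_drop_kmh > 50 else (1 if speed_drop_kmh > 20 else 0)
    score += 2 if airbag_deployed else 0
    if score >= 5:
        return "severe"
    if score >= 2:
        return "moderate"
    return "minor"

print(classify_severity(5.1, 60, True))    # high-energy crash, airbag fired
print(classify_severity(2.5, 25, False))   # moderate impact, no airbag
print(classify_severity(1.0, 5, False))    # hard braking, no collision
```

The additive-score structure mirrors how decision-fusion frameworks weight heterogeneous sensors; swapping the hand-set rules for a trained classifier over the same features gives the learned variants in the papers above.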
Organizational Business Resilience and Disaster Recovery Management
This cluster addresses organization-level emergency management processes, examining how AI-driven decision support, automated business continuity planning (BCP), and collaborative risk assessment help organizations respond to incidents and improve system robustness.
- Exploring the Feasibility of Automated Data Standardization using Large Language Models for Seamless Positioning(M. Lee, Ju Lin, Li-Ta Hsu, 2024, 2024 14th International Conference on Indoor Positioning and Indoor Navigation (IPIN))
- Development of an AI-Driven Operational Assistant for Disaster Preparedness and Response in Quezon, Nueva Ecija(Mariecris A. Cairlan, Rachel T. Alegado, Rolaida L. Sonza, 2026, International Journal of Innovative Science and Research Technology)
- Towards Energy-Efficient Data Collection by Unmanned Aerial Vehicle Base Station With NOMA for Emergency Communications in IoT(Shu Fu, Xiaohui Guo, Fang Fang, Zhiguo Ding, Ning Zhang, Ning Wang, 2023, IEEE Transactions on Vehicular Technology)
- Machine Learning for Crowd-Sourcing a Social Media Data Source to Improve Response and Recovery After the Earthquake Disaster(Büsra Yesilbas, I. B. Parlak, T. Acarman, 2024, 2024 10th International Conference on Control, Decision and Information Technologies (CoDIT))
- Crowdsourced Disaster Management(Farhat A. Patel, Adnan Memon, Soham Nikam, A. Marathe, Shrushty Meshram, 2025, International Research Journal on Advanced Engineering Hub (IRJAEH))
- Incident Response and Disaster Recovery in Cloud Computing(Naga Sai Krishna Mohan, 2025, Journal of Artificial Intelligence & Cloud Computing)
- Hybrid Intelligent System of Crisis Assessment using Natural Language Processing and Metagraph Knowledge Base(Anton Kanev, V. Terekhov, Maria Kochneva, Valery Chernenky, M. Skvortsova, 2021, 2021 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (ElConRus))
- Understanding How Intelligent Process Automation Impacts Business Continuity: Mapping IEEE/2755:2020 and ISO/22301:2019(José Cascais Brás, Ruben Filipe de Sousa Pereira, Sérgio Moro, I. Bianchi, Rui Ribeiro, 2023, IEEE Access)
- Automated Business Continuity and Disaster Recovery in Hybrid Cloud Enterprise Architectures(Vinay Chowdary Duvvada, 2025, European Modern Studies Journal)
- Enhancing Disaster Recovery and Business Continuity in Cloud Environments through Infrastructure as Code(O. Abieba, C. Alozie, O. Ajayi, 2025, Journal of Engineering Research and Reports)
- Exploring Information Systems for Business Continuity Planning in IT-Driven Organizations Post-Pandemic: Insight into Enhancing Future Resilience(Sunish Vengathattil, 2023, International Journal For Multidisciplinary Research)
- Predicting Workplace Hazard, Stress and Burnout Among Public Health Inspectors: An AI-Driven Analysis in the Context of Climate Change(Ioannis Adamopoulos, A. Valamontes, P. Tsirkas, G. Dounias, 2025, European Journal of Investigation in Health, Psychology and Education)
- Business Continuity Planning Enhanced by AI-Driven Cyber Threat Intelligence(Geetha Manoharan, V.K. Elavarasi, Prashant Panwar, Melanie Lourens, Figo Martin D, Shishir Kumar Gujrati, 2025, 2025 World Skills Conference on Universal Data Analytics and Sciences (WorldSUAS))
- Adaptive Business Continuity Planning Using Self-Organizing Maps and Scenario-Based Generative AI Models(R. P. K, N. Bhasin, Anita Gorkhe, V. Srinivas, Sajiv G, M. Karthik, 2025, 2025 IEEE Pune Section International Conference (PuneCon))
- Lifecycle Management, Business Continuity and Disaster Recovery Planning for the LHCb Experiment Control System Infrastructure(P. Cifra, F. Sborzacchi, N. Neufeld, L. Cardoso, 2024, EPJ Web of Conferences)
- The Role of Artificial Intelligence Technology in Predictive Risk Assessment for Business Continuity: A Case Study of Greece(Stavros Kalogiannidis, D. Kalfas, Olympia Papaevangelou, Grigoris Giannarakis, F. Chatzitheodoridis, 2024, Risks)
- Supervised Learning in Business Continuity Planning(N. K., Binita Nanda, Shivani Sharma, Ajay Sidana, Prashant A. Patil, N. Rajas, 2025, 2025 World Conference on Cutting-Edge Science and Technology (WCCEST))
- Enhancing Cybersecurity through Integrated Business Continuity Management and Cyber Threat Intelligence(G. Kokkinis, Genny Dimitrakopoulou, Ludovico Tortora, Demos Doumenis, Eirini Keremidou, Fabbio Rizzoni, Nikolaos Kapsalis, Sotirios Spantideas, Pasquale Mari, 2025, 2025 6th International Conference in Electronic Engineering & Information Technology (EEITE))
- A method to analysis the risk factors in information system(Ning Yang, G. Li, 2022, International Conference on Network Communication and Information Security (ICNIS 2021))
- Assessment of Aeronautical Information Business Continuity Capability Based on D-ANP(震亚 苏, 2024, Software Engineering and Applications)
- Research on Automated Traffic Scheduling Technology for Dual Active Disaster Recovery Centers in Two Regions(Xiaoliang Zhang, Jiaqi Duan, Shunming Lv, Mei Yan, Fenggang Lai, Liang Chen, 2025, 2025 International Conference on Electronics and Computing, Communication Networking Automation Technologies (ICEC2NT))
- Space engineering risk analysis from risk assessment matrix using text mining(Ning Wang, S. An, Q. Mai, 2016, 2016 International Conference on Management Science and Engineering (ICMSE))
Disaster Decision Support, Multimodal Information Extraction, and Data Model Frameworks
This body of literature studies how social media analysis, multimodal data integration, knowledge graph construction, and large language models (LLMs) enable automated classification of emergency information, situational awareness, and coordinated emergency response.
- Research on Automated Classification Method of Network Attacking Based on Gradient Boosting Decision Tree(Ren Wen, Kaiwen Zhang, 2022, 2022 International Conference on Machine Learning and Knowledge Engineering (MLKE))
- Exploring CNN and XAI-based Approaches for Accountable MI Detection in the Context of IoT-enabled Emergency Communication Systems(Helene Knof, Prachi Bagave, Michell Boerger, Nikolay Tcholtchev, A. Ding, 2023, Proceedings of the International Conference on the Internet of Things)
- A Screening and Prioritization Method for Urban Road Hazard Points: Large-Scale Validation Analysis in 18 Cities(Wanfu Liu, Jinguang Liu, Shuai Dai, Herui Hao, 2026, IEEE Access)
- Forest/rural road network detection and condition monitoring based on satellite imagery and deep semantic segmentation(Dimitrios Kelesakis, Konstantinos Marthoglou, Eleni Tokmaktsi, Emmanouel Tsiros, A. Karteris, Anastasia Stergiadou, G. Kolkos, P. Daras, Nikos Grammalidis, 2024, ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences)
- An Ethical Framework for Message Prioritization in Disaster Response(Grace Diehl, J. Adams, 2021, 2021 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR))
- A Taxonomy of Semantic Information in Robot-Assisted Disaster Response(Tianshu Ruan, Hao Wang, Rustam Stolkin, Manolis Chiou, 2022, 2022 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR))
- A collaborative taxonomy of social media indicators for localised disaster response(Priscilla Carvalho, Zainab Akhtar, M. Nowbuth, Y. Boafo, Ebenezer Forkuo Amankwaa, C. Spataru, Ferda Ofli, Muhammad Imran, 2025, Jàmbá Journal of Disaster Risk Studies)
- Development of IoT-Based Automated Dynamic Emergency Response System Against Fire Incidents in Academic Building(Syed Mohammed Zakaria Al-Hady, Md. Rafiqul Islam, M. Rashid, 2023, International Journal of Engineering Materials and Manufacture)
- UAV Emergency Rescue System Based on Combined LoRa and NB-IoT Communication Technologies(Xu Chen, Shangzhong Jin, Sun-Den Chen, Zhuo Fang, 2025, Proceedings of the 2025 4th International Conference on Big Data, Information and Computer Network)
- Transforming disaster response: The role of agentic AI in crisis management(Eswaran Ushaa, J. Suman, Jaishree, Manisha, Jeevitha, Kowshika, Kesavan, 2024, i-manager's Journal on Structural Engineering)
- AI and NLP in Social Good: Enhancing Disaster Response and Crisis Management with Text Analytics(Radhakrishnan P, R. B, Haftom Gebregziabher, Zatin Gupta, A. Pandey, Mukesh Soni, 2025, 2025 World Conference on Cutting-Edge Science and Technology (WCCEST))
- Automated Generation of Disaster Response Networks through Information Extraction(Yitong Li, Duoduo Liao, Jundong Li, Wenying Ji, 2021, International Conference on Information Systems for Crisis Response and Management)
- Reducing Urban Emergency Fatalities: A Holistic AI-Driven Rescue Model(Amit Vishwakarma, A. Vishwakarma, Ravindra Chauhan, 2025, International Journal For Multidisciplinary Research)
- Intelligent Emergency Evacuation System for Industrial Environments Using IoT-Enabled WSNs(V. Agarwal, S. Tapaswi, P. Chanak, Neeraj Kumar, 2023, IEEE Transactions on Instrumentation and Measurement)
- Leveraging AI-Powered Small Language Models for Real-Time Disaster Communication and Response Optimization(Vamshi Paili, 2025, Universal Library of Innovative Research and Studies)
- Research on the Construction of a Low-Altitude Security System for Major Event Security(Jianchun Li, 2025, Journal of Modern Education and Culture)
- Iot For Disaster Preparedness: Real-Time Monitoring and Response Systems(Dr. Hiroko Yamashita, 2023, American Journal Of Internet Of Things)
- Hybrid IoT System for Emergency Responders(Vidushi Jain, Kaikai Liu, 2023, 2023 11th IEEE International Conference on Mobile Cloud Computing, Services, and Engineering (MobileCloud))
- An event classification schema for evaluating site risk in a multi-unit nuclear power plant probabilistic risk assessment(S. Schroer, M. Modarres, 2013, Reliability Engineering & System Safety)
- A Deep Learning Method to Accelerate the Disaster Response Process(Vyron Antoniou, C. Potsiou, 2020, Remote Sensing)
- RAPID: Resilient Automated Planning for Intelligent Disaster Response(Md. Monjurul Islam, Sabah Binte Noor, Fazlul Hasan Siddiqui, 2025, 2025 2nd International Conference on Next-Generation Computing, IoT and Machine Learning (NCIM))
- Neural Network-Based Sentiment Analysis and Anomaly Detection in Crisis-Related Tweets(Josip Katalinic, Ivan Dunđer, 2025, Electronics)
- An automated end-to-end pipeline for identifying fine-grained waterlogging locations from Chinese social media(Jinxiao Ji, Yi Tan, Jingru Li, Xiaoling Wang, 2025, Geomatics, Natural Hazards and Risk)
- Developing an Automated Analytical Process for Disaster Response and Recovery in Communities Prone to Isolation(Byungyun Yang, Minjun Kim, Chang-Boong Lee, Su-Au Hwang, Jinmu Choi, 2022, International Journal of Environmental Research and Public Health)
- Twitter-Based Disaster Response Using Recurrent Nets(Rabindra Lamsal, T. Kumar, 2021, International Journal of Sociotechnology and Knowledge Development)
- Multi-Modal Deep Learning Framework for Disaster Response(Ifthekhar Hussain, Protik Barua, Arafath Al Fahim, Sahariar Reza, 2024, 2024 IEEE International Conference on Computing, Applications and Systems (COMPAS))
- AI-Integrated Disaster Risk Reduction System for Real-Time Response and Prevention in Vulnerable Regions(R. Ahila, Bekmirzayev Mirjalol Xusanboy ugli, Haayder M. Abbas, Jainish Roy, S. Balambigai, P. Haridha, Dilli Ganesh V, D. B., 2025, 2025 International Conference on Intelligent Systems and Pioneering Innovations in Robotics and Electric Mobility (INSPIRE))
- Enhancing Disaster Response with Automated Text Information Extraction from Social Media Images(H. Firmansyah, J. Fernandez-Marquez, J. Cerquides, Valerio Lorini, Carlo Alberto Bono, Barbara Pernici, 2023, 2023 IEEE Ninth International Conference on Big Data Computing Service and Applications (BigDataService))
- An Intelligent Indoor Emergency Evacuation System Using IoT-Enabled WSNs for Smart Buildings(Archana Ojha, Anshul Jindal, P. Chanak, 2024, IEEE Internet of Things Journal)
- Improving Disaster Response by Combining Automated Text Information Extraction from Images and Text on Social Media(H. Firmansyah, Carlo Alberto Bono, Valerio Lorini, Jesús Cerquides, J. Fernandez-Marquez, 2023, Frontiers in Artificial Intelligence and Applications)
- A novel emergency situation awareness machine learning approach to assess flood disaster risk based on Chinese Weibo(H. Bai, Hualong Yu, Guang Yu, Xing Huang, 2020, Neural Computing and Applications)
- Relevancy assessment of tweets using supervised learning techniques: Mining emergency related tweets for automated relevancy classification(Matthias Habdank, N. Rodehutskors, R. Koch, 2017, 2017 4th International Conference on Information and Communication Technologies for Disaster Management (ICT-DM))
- BERT-based chinese text classification for emergency management with a novel loss function(Zhongju Wang, Long Wang, Chao Huang, Shutong Sun, Xiong Luo, 2022, Applied Intelligence)
- A knowledge-informed large language model framework for U.S. nuclear power plant shutdown initiating event classification for probabilistic risk assessment(Min Xian, Tao Wang, Sai Zhang, Fei Xu, Zhegang Ma, 2024, Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability)
- AI-Driven Hierarchical Taxonomy Generation from Emergency Call Transcripts(Juan Gabriel Flores Sanchez, Marcos Orellana, Patricio Santiago García-Montero, Jorge Luis Zambrano-Martinez, 2026, Journal of the Brazilian Computer Society)
- Increasing Emergency Response Time: A Smart GPS and IoT-Based Approach for Ambulance Optimization(P. V, Rahul Iv, S. A, Chayadevi Ml, 2024, 2024 Second International Conference on Advances in Information Technology (ICAIT))
- A Spiking Neural Networks Model with Fuzzy-Weighted k-Nearest Neighbour Classifier for Real-World Flood Risk Assessment(Mohd Hafizul Afifi Abdullah, Muhaini Othman, S. Kasim, Shaznoor Shakira Saharuddin, S. A. Mohamed, 2020, Advances in Intelligent Systems and Computing)
- Binary Classification for Failure Risk Assessment.(Ali Foroughi pour, I. Loveless, G. Rempała, Maciej Pietrzak, 2021, Methods in Molecular Biology)
- Cloud for Humanitarian Aid: How Automated Systems Are Transforming Disaster Response(Sumanth Kadulla, 2025, Technix International Journal for Engineering Research)
- Towards Automated Adaptation of Disaster Response Processes - An Approach to InsertTransport Activities(Marlen Hofmann, 2014, Multikonferenz Wirtschaftsinformatik)
- A critical view of severity classification in risk assessment methods(A. Pasquini, Simone Pozzi, L. Save, 2011, Reliability Engineering & System Safety)
- Dynamic Emergency Transit Forecasting with IoT Sequential Data(Bin Sun, Renkang Geng, T. Shen, Yuan Xu, Shuhui Bi, 2022, Mobile Networks and Applications)
- Scenario clustering and dynamic probabilistic risk assessment(D. Mandelli, Alper Yilmaz, T. Aldemir, K. Metzroth, R. Denning, 2013, Reliability Engineering & System Safety)
- Global Social Event Extraction and Analysis by Processing Online News(B. Zhu, Yu Wang, Cheng-Long He, 2016, 2016 International Conference on Information System and Artificial Intelligence (ISAI))
- Towards the Application of Machine Learning in Emergency Informatics(S. R. N. Kalhori, 2022, Studies in Health Technology and Informatics)
This report systematically surveys the literature on emergency management and automated classification, dividing the research into five core directions: 1) real-time monitoring and damage assessment for major natural disasters; 2) clinical emergency care, triage, and real-time intelligent patient monitoring; 3) safety monitoring and early warning for public transport, industry, and infrastructure; 4) organizational business resilience and disaster recovery planning; 5) collaborative emergency response mechanisms based on multimodal information extraction, knowledge graphs, and intelligent decision-making. The findings show that the field is moving from traditional single-point sensing toward intelligent, end-to-end automated emergency management systems that integrate IoT, edge computing, deep learning, and large language models.
A total of 190 relevant references.
Due to its dense population, India frequently experiences traffic congestion, which endangers lives by trapping emergency vehicles such as ambulances, police cars, and fire trucks. It is therefore crucial to give these vehicles priority and allow their easy passage, yet it is challenging or even impossible for traffic police to handle such situations effectively. There is thus a need for an automated system that can locate emergency vehicles in high-traffic areas and either alert the controller or act autonomously to direct other vehicles to make room. This study proposes an automated method for identifying emergency vehicles in CCTV footage using deep convolutional neural networks (CNNs). The objective is to identify and classify emergency vehicles in real time, leveraging advanced object detection techniques. With an accuracy of 91.73% and a loss of 0.2120, the proposed technique outperformed existing optimizers in recognizing and classifying emergency vehicles.
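As a rough illustration of what such a CNN computes internally, the sketch below implements one convolution-plus-ReLU layer followed by global average pooling in plain Python; the tiny "image" and the vertical-stripe filter are invented for the example and are not taken from the paper:

```python
def conv2d(image, kernel):
    """Valid 2-D cross-correlation over a grayscale image, with ReLU."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(max(0.0, s))          # ReLU activation
        out.append(row)
    return out

def global_avg_pool(fmap):
    """Collapse a feature map to one scalar, as before a softmax head."""
    vals = [v for row in fmap for v in row]
    return sum(vals) / len(vals)

# A toy 5x5 "image" with a bright vertical stripe (e.g. a light bar).
img = [[0, 0, 1, 0, 0]] * 5
edge_kernel = [[-1, 2, -1]] * 3              # responds to vertical stripes
feature = global_avg_pool(conv2d(img, edge_kernel))
```

A real detector stacks many such learned filters and feeds the pooled features to a classification head.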
Background Electrocardiographic analysis algorithms have consistently evolved, becoming essential tools for physicians in diverse settings, particularly in assessing patients with acute chest pain. Moving forward, it is crucial to classify unstructured automated ECG reports into clinically relevant outcomes using advanced large language models. This approach holds significant potential to enhance an accelerated clinical decision pathway in clinical settings. Objective This study aims to integrate automated electrocardiogram algorithms with advanced machine learning techniques, enhancing the classification of ECG reports within emergency department settings. Specifically, it investigates how natural language processing can augment traditional methods to accelerate the electrocardiographic-directed management of acute chest pain. Methods Employing a retrospective observational dataset from Rashid Hospital, Dubai, spanning from June 2022 to August 2022, we analyzed 860 ECGs from patients presenting with acute chest pain. The ECGs were categorized into four classes, namely, STEMI, NSTEMI, normal ECG, and new arrhythmia using a hybrid model that combines the established Glasgow algorithm with a large language model, GPT-4. The Glasgow algorithm produced structured text inputs, which were then classified by GPT-4 using few-shot prompting (temperature = 0.2, top_p=1.0). Results The model demonstrates high predictive accuracy for normal ECGs, achieving an F1 score of 0.93, followed by STEMI with an F1 score of 0.80. New arrhythmias, however, present more challenges, reflected by the lowest F1 score of 0.45. Notably, the model excels in discriminating between STEMI and normal ECGs (AUC=0.92) and between STEMI and new arrhythmias (AUC=0.91). 
Overall accuracy was 85.9% (95% CI: 0.816-0.895). Conclusion The findings suggest that leveraging deep learning alongside traditional algorithms can significantly improve the rapid classification of ECGs, supporting accelerated decision-making pathways in clinical practice.
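The hybrid pipeline above feeds structured Glasgow-algorithm text to GPT-4 with few-shot prompting. A minimal sketch of how such a few-shot chat prompt can be assembled is shown below, assuming the common chat-message schema; the example reports and the helper name are illustrative, and the actual model call (with temperature = 0.2, top_p = 1.0 as in the study) is omitted:

```python
CLASSES = ["STEMI", "NSTEMI", "normal ECG", "new arrhythmia"]

def build_few_shot_messages(report, examples):
    """Assemble a chat-style few-shot prompt: a system rule, labeled
    examples, then the unlabeled Glasgow-algorithm report to classify."""
    messages = [{
        "role": "system",
        "content": ("Classify the ECG report into exactly one of: "
                    + ", ".join(CLASSES) + ". Answer with the label only."),
    }]
    for text, label in examples:
        messages.append({"role": "user", "content": text})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": report})
    return messages

# Illustrative labeled examples; real ones would be Glasgow outputs.
examples = [
    ("ST elevation in V1-V4, reciprocal depression inferiorly.", "STEMI"),
    ("Sinus rhythm, normal intervals, no acute changes.", "normal ECG"),
]
msgs = build_few_shot_messages("Irregularly irregular rhythm, no P waves.",
                               examples)
```

The assembled `msgs` list would then be sent to the chat-completion endpoint, and the returned label mapped to the four clinical outcomes.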
No abstract available
Unmanned Aerial Vehicles (UAVs) equipped with camera sensors can facilitate enhanced situational awareness for many emergency response and disaster management applications, since they are capable of operating in remote and difficult-to-access areas. In addition, by utilizing an embedded platform and deep learning, UAVs can autonomously monitor a disaster-stricken area, analyze images in real time, and raise alerts in the presence of various calamities such as collapsed buildings, flood, or fire, in order to mitigate their effects on the environment and the human population faster. To this end, this paper focuses on automated aerial scene classification of disaster events from on board a UAV. Specifically, a dedicated Aerial Image Database for Emergency Response (AIDER) applications is introduced and a comparative analysis of existing approaches is performed. Through this analysis a lightweight convolutional neural network (CNN) architecture is developed, capable of running efficiently on an embedded platform and achieving ~3x higher performance compared to existing models, with minimal memory requirements and less than 2% accuracy drop compared to the state-of-the-art. These preliminary results provide a solid basis for further experimentation towards real-time aerial image classification for emergency response applications using UAVs.
Abstract Social media-based waterlogging locations identification provides a timely, cost-effective solution for urban flood emergency management. However, Chinese toponymic complexities challenge waterlogging location extraction from social media. This study proposes a pipeline integrating reverse filtering, text classification, sentence segmentation, Named Entity Recognition (NER), and Waterlogging Location Refinement (WLR) to identify Fine-grained waterlogging locations (Fg_wls). The WLR algorithm innovatively combines language rules with a Chinese NER model, enhancing completeness, accuracy, and granularity while avoiding time-consuming dataset annotation. Using Shenzhen as a case study, 622 Fg_wls were extracted from 7,243 Weibo posts between 2018 and 2022, including 395 point-level, 146 line-level, and 81 polygon-level locations. The WLR algorithm helps improve the accuracy of fine-grained location identification from 59.4% when using only the PFR-NER model to 92.2% when adding WLR. The proposed pipeline delivers urban-level flood risk information to emergency responders, enabling precise disaster mitigation.
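The WLR stage described above combines language rules with NER output. As a loose, English-language stand-in for that rule stage (the actual pipeline operates on Chinese text with the PFR-NER model), the sketch below extends capitalized toponyms with trailing qualifiers using a single illustrative regex:

```python
import re

# Illustrative stand-in for the pipeline's rule stage: candidate
# toponyms are extended with trailing qualifiers ("Road", "Bridge",
# "Station", ...) to obtain finer-grained locations.
ROAD_PATTERN = re.compile(r"\b([A-Z][a-z]+(?: [A-Z][a-z]+)* "
                          r"(?:Road|Avenue|Bridge|Station|Tunnel))\b")

def refine_locations(post):
    """Return fine-grained waterlogging locations found in one post."""
    return ROAD_PATTERN.findall(post)

post = ("Heavy rain again, Shennan Avenue near Keyuan Station is flooded, "
        "cars stuck under Binhe Bridge.")
locs = refine_locations(post)
```

The real pipeline additionally filters irrelevant posts, segments sentences, and merges NER spans with such rules before mapping locations to point, line, or polygon geometries.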
As a serious neurological emergency, intracranial hemorrhage requires an early and precise diagnosis to improve the patient's chances of recovery. Using the RSNA Intracranial Hemorrhage Challenge dataset, this study introduces a deep learning system for the automated detection and categorization of brain hemorrhages from computed tomography (CT) scans. To improve the data and strengthen the model, preprocessing methods such as contrast improvement, scaling, augmentation, and intensity normalization were used. There were five different kinds of hemorrhages: subarachnoid, intraventricular, subdural, intraparenchymal, and epidural. To improve the generalization abilities and lessen the overfitting tendencies of ResNet and DenseNet121, two sophisticated convolutional neural network architectures, we used regularization techniques like dropout, label smoothing, and data augmentation. The results of the experiment showed that both models performed well across all classifications and correctly differentiated between different types of bleeding. The findings imply that decision-making in radiology workflows can be aided by deep learning algorithms. By detecting cerebral hemorrhage more rapidly and precisely, this could result in a quicker diagnosis, less disagreement among observers, and better emergency care. Keywords: Intracranial Hemorrhage, Deep Learning, CT Imaging, ResNet, DenseNet121, Medical Image Classification.
This article presents a case study on hierarchical topic modeling for emergency call transcripts from Ecuador's ECU 911 service. We introduce a hybrid methodology that first generates a taxonomy from unlabeled data using BERTopic and agglomerative clustering, and then employs embedding-based similarity for multi-label classification. By leveraging multilingual embeddings (LaBSE) and clustering algorithms (UMAP & HDBSCAN), we identified 23 coherent topics, demonstrating a practical balance between accuracy and operational applicability. The key result is a significant reduction in Hamming Loss and an F1-score of 0.4951, achieved without the need for pre-labeled data. This underscores the method's primary practical significance: offering a scalable, automated solution for emergency management centers to rapidly categorize complex incidents, thereby enhancing situational awareness and resource allocation. The integration of LLaMA 3 for automated label generation further optimized semantic interpretation, highlighting the potential of language models in critical, resource-constrained domains.
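The embedding-based similarity step described above can be sketched as plain cosine similarity against topic centroids. The toy 3-dimensional vectors and the 0.5 threshold below are illustrative; the study uses 768-dimensional multilingual LaBSE embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def assign_topics(embedding, topic_centroids, threshold=0.5):
    """Multi-label assignment: attach every topic whose centroid is
    within `threshold` cosine similarity of the transcript embedding."""
    return sorted(t for t, c in topic_centroids.items()
                  if cosine(embedding, c) >= threshold)

# Toy 3-d "embeddings" standing in for cluster centroids.
topics = {"traffic accident": [1.0, 0.1, 0.0],
          "medical":          [0.0, 1.0, 0.1],
          "fire":             [0.0, 0.1, 1.0]}
call = [0.9, 0.8, 0.05]   # a crash with injuries: two relevant topics
labels = assign_topics(call, topics)
```

Because a transcript can exceed the threshold for several centroids at once, this naturally yields the multi-label behavior the taxonomy requires.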
A potentially fatal neurological emergency, intracerebral hemorrhage (ICH) is caused by bleeding into the brain tissue and frequently leads to increased intracranial pressure and rapid neurological deterioration. Prompt and precise identification on non-contrast CT scans is essential for efficient clinical management, but manual interpretation of CT scans is time-consuming and subject to inter-observer variability. Deep learning (DL) based computer-aided diagnosis (CAD) systems offer an efficient and scalable alternative for automated detection of ICH. This study proposes a lightweight multi-path Convolutional Neural Network (CNN) architecture for the classification of CT images into hemorrhagic and normal categories. The model captures hierarchical and multi-scale features by using progressive down-sampling across various input streams. Features are fused with concatenation and 1×1 convolutions, and a dense layer performs the final classification. The model achieved high classification accuracy, with a sensitivity of 99.7% and a specificity of 100%, demonstrating its diagnostic robustness. For model transparency, we employed Gradient-weighted Class Activation Mapping (Grad-CAM) to highlight attention regions. For hemorrhagic cases, regardless of lesion size, the superimposed Grad-CAM heatmaps aligned correctly with lesion locations in the original images, whereas for normal cases the model produced low-activation, unfocused attention. The Grad-CAM visualizations confirm that the model is lesion-aware and clinically interpretable. Given its high diagnostic accuracy and visual explainability, the proposed model can support radiologists in emergency diagnosis.
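For reference, the Grad-CAM computation mentioned above reduces to two steps: each channel's weight is the spatial mean of its gradients, and the heatmap is the ReLU of the weighted sum of the activation maps. A minimal sketch on toy 2×2 feature maps (all values invented for the example):

```python
def grad_cam(activations, gradients):
    """Grad-CAM heatmap: channel weights are the spatial mean of the
    gradients; the map is the ReLU of the weighted activation sum."""
    maps = []
    for act, grad in zip(activations, gradients):
        n = sum(len(row) for row in grad)
        w = sum(v for row in grad for v in row) / n   # channel weight
        maps.append([[w * v for v in row] for row in act])
    h, wd = len(activations[0]), len(activations[0][0])
    return [[max(0.0, sum(m[i][j] for m in maps)) for j in range(wd)]
            for i in range(h)]

# Two toy channels: the first supports the class (positive gradients),
# the second argues against it (negative gradients).
acts  = [[[1.0, 0.0], [0.0, 0.0]],
         [[0.0, 2.0], [0.0, 0.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]],
         [[-1.0, -1.0], [-1.0, -1.0]]]
heat = grad_cam(acts, grads)
```

In practice the heatmap is upsampled to the CT slice's resolution and overlaid on the image, which is how the lesion-alignment check described above is performed.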
Emergency departments face major challenges such as crowding, long waiting times, and sometimes inconsistent triage. This article introduces an automated predictive triage and queue management system that uses machine learning to predict patient severity and prioritize treatment delivery. The system employs the gradient boosting algorithms XGBoost and LightGBM, with Random Forest classification as the baseline model for performance comparison. Based on patient vital signs and symptoms, the predictive classification produces an immediate priority-based patient queue for effective emergency treatment. Results show that the boosting algorithms outperform the baseline model in classification accuracy, precision, and F1-score. To aid interpretation and improve doctor-patient communication for effective treatment, SHAP-based interpretability has been implemented in the system. Future extensions include automated monitoring of patient vitals and integration with Electronic Health Record (EHR) services.
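The severity-prediction-plus-queuing idea above can be sketched with a priority heap; the rule-based `predict_severity` below is a transparent stand-in for the trained XGBoost/LightGBM model, with invented vital-sign thresholds:

```python
import heapq
from itertools import count

# Stand-in for the trained gradient-boosting severity model described
# above: a simple rule on vital signs (thresholds illustrative only).
def predict_severity(vitals):
    score = 0
    if vitals["spo2"] < 92:        score += 2
    if vitals["heart_rate"] > 120: score += 1
    if vitals["sys_bp"] < 90:      score += 2
    return score                   # higher = more urgent

class TriageQueue:
    """Max-severity patient queue; ties broken by arrival order."""
    def __init__(self):
        self._heap, self._arrival = [], count()

    def admit(self, name, vitals):
        sev = predict_severity(vitals)
        # heapq is a min-heap, so negate severity for max-first popping.
        heapq.heappush(self._heap, (-sev, next(self._arrival), name))

    def next_patient(self):
        return heapq.heappop(self._heap)[2]

q = TriageQueue()
q.admit("A", {"spo2": 97, "heart_rate": 80,  "sys_bp": 120})  # severity 0
q.admit("B", {"spo2": 88, "heart_rate": 130, "sys_bp": 85})   # severity 5
q.admit("C", {"spo2": 93, "heart_rate": 125, "sys_bp": 100})  # severity 1
```

Swapping the rule for a trained model's predicted severity class leaves the queuing logic unchanged, which is the appeal of separating prediction from prioritization.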
OBJECTIVE The Focused Assessment with Sonography for Trauma (FAST) enables rapid detection of free intraperitoneal fluid, facilitating timely management of internal hemorrhage. This study aims to develop a Transformer-based model for automated detection on FAST images and evaluate its feasibility in assisting non-specialist operators. METHODS Between June 2019 and June 2024, 1829 FAST-positive images demonstrating free intraperitoneal fluid and 303 FAST-negative images without fluid were retrospectively collected from Zhejiang Provincial People's Hospital. A Transformer-based model integrating segmentation and classification modules was developed and internally validated using five-fold cross-validation. External validation was performed on 848 images (424 positive/424 negative) from Hubei Provincial People's Hospital. Three operators with varying expertise (a junior sonographer, a clinician, and a non-clinical operator) evaluated all external images before and after model assistance to compare segmentation performance. RESULTS Five-fold cross-validation yielded the following segmentation metrics: mean intersection over union (IoU) 0.671 ± 0.009, Dice coefficient 0.799 ± 0.013, and pixel accuracy (PA) 0.809 ± 0.010. Classification performance showed a mean accuracy of 0.922 ± 0.013, sensitivity of 0.938 ± 0.015, and specificity of 0.905 ± 0.015. External validation demonstrated accuracy 0.883, sensitivity 0.901, specificity 0.837, and AUC 0.871, with segmentation IoU 0.683, Dice coefficient 0.812, and PA 0.781. The model performed comparably to the junior sonographer and outperformed the non-specialists. After model assistance, all groups improved and inter-group differences disappeared (all p > 0.05). CONCLUSION The Transformer model delivers diagnostic performance comparable to a junior sonographer for automated free intraperitoneal fluid detection in FAST examinations and significantly improves detection accuracy among non-specialist operators.
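The IoU and Dice segmentation metrics reported above are straightforward to compute from binary masks; a minimal sketch on flattened toy masks:

```python
def iou_and_dice(pred, truth):
    """Intersection-over-Union and Dice coefficient for flat binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    iou = inter / union if union else 1.0
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    return iou, dice

# Toy 6-pixel masks: 2 pixels overlap, 3 predicted, 3 true.
pred  = [1, 1, 1, 0, 0, 0]
truth = [1, 1, 0, 1, 0, 0]
iou, dice = iou_and_dice(pred, truth)
```

Note that Dice = 2·IoU / (1 + IoU), which is why the Dice scores above consistently sit above the matching IoU values.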
In India, lung diseases are a major health crisis, with pulmonary fibrosis and pneumonia being the most prevalent, making the country a global hotspot for respiratory issues. Recent global health data indicate that respiratory infections like pneumonia remain a leading cause of mortality, accounting for over 2.4 million deaths annually, while interstitial lung diseases such as pulmonary fibrosis show a rising prevalence of approximately 30 to 70 cases per 100,000 people. The need for automated diagnostic systems is paramount in emergency departments and rural clinics, where 24/7 access to expert thoracic radiologists is often unavailable. Such applications provide a vital second-opinion tool that can triage urgent cases of pneumonia or track the progression of chronic conditions like pulmonary fibrosis in resource-limited settings. When classifying chest X-ray (C-XR) images as Normal, Pneumonia, or Pulmonary Fibrosis, traditional manual interpretation is frequently hindered by high inter-observer variability and a significant risk of human fatigue during high-volume shifts. Furthermore, subtle early-stage lesions or complex fibrotic patterns can be easily overlooked by non-specialists, leading to delayed treatment or misdiagnosis. The proposed methodology utilizes C-XR datasets, which provide a robust collection of labelled images for the Normal, Pneumonia, and Pulmonary Fibrosis classes. The proposed system implements a Vision Transformer (ViT) for feature extraction, which discards traditional convolutional layers in favour of a self-attention mechanism. This approach breaks the C-XR image into fixed-size patches and uses an encoder to capture global spatial dependencies and long-range relationships between distant lung regions.
Existing benchmarks that rely on a Multi-Layer Perceptron (MLP), Random Forest Classifier (RFC), or Extreme Gradient Boosting (XGB) showed lower performance, so this research proposes integrating a Light Gradient Boosting Machine (LGBM). It is selected for its leaf-wise tree growth strategy and histogram-based binning, which significantly reduce training time and memory consumption while maintaining superior accuracy on the high-dimensional feature vectors produced by the Transformer.
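The patch-tokenization front end of the ViT described above can be sketched as follows; the 4×4 toy image and patch size of 2 are illustrative (real C-XR inputs are typically 224×224 with 16×16 patches):

```python
def extract_patches(image, patch):
    """Split an H x W image into flattened, non-overlapping
    patch x patch tokens, row-major, as a ViT front end does."""
    h, w = len(image), len(image[0])
    tokens = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            tokens.append([image[i + a][j + b]
                           for a in range(patch) for b in range(patch)])
    return tokens

img = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 toy image
tokens = extract_patches(img, 2)   # four tokens, each of length 4
```

Each token is then linearly projected, given a positional embedding, and passed to the self-attention encoder, which is what lets distant lung regions attend to one another.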
This study developed a multi-task deep learning pipeline for the automated assessment of acute intracranial hemorrhage and perihematomal edema on non-contrast brain computed tomography. Acute hemorrhage remains a major neurological emergency, and rapid and reliable image interpretation is essential for timely management. To address this clinical need, the proposed framework combined three complementary tasks in a single workflow: classification to identify the presence and subtype of hemorrhage, detection of perihematomal edema, and three-dimensional segmentation of intraparenchymal hematoma (IPH) using a weakly supervised strategy with pseudo-labels derived from edema masks. A total of 10,922 images were analyzed: 6,000 RSNA images, 728 PHE-SICH-CT-IDS slices from 120 patients, and 4,194 BHSD images. These models achieved high sensitivity (0.9584 ± 0.0068) for hemorrhage detection, robust performance (AUC, 0.9330 ± 0.0072) in differentiating subtypes, and reliable identification of edema (detection rate, 0.9873). Segmentation accuracy was excellent for IPH (Dice Similarity Coefficient [DSC], 0.9803 ± 0.0069) and moderate for edema (0.4569 ± 0.1809), with external validation confirming generalizability across centers (0.7549 ± 0.1143) for IPH. By integrating classification, detection, and segmentation, this pipeline demonstrates the potential of deep learning to provide accurate and scalable support for clinical decision-making, enhance diagnostic confidence, and streamline clinical workflow in the acute care setting.
Effective accident management acts as a vital part of emergency and traffic control systems. In such systems, accident data can be collected from different sources (unmanned aerial vehicles, surveillance cameras, on-site people, etc.), and images are considered a major source. Accident site photos and measurements are the most important evidence, and attackers may steal this data and breach personal privacy, causing untold costs. The massive number of images commonly employed poses a significant challenge to privacy preservation, and image encryption can be used to accomplish secure cloud storage and image transmission. Automated severity estimation using deep-learning (DL) models becomes essential for effective accident management. Therefore, this article presents a novel Privacy Preserving Image Encryption with Optimal Deep-Learning-based Accident Severity Classification (PPIE-ODLASC) method. The primary objective of the PPIE-ODLASC algorithm is to securely transmit accident images and classify accident severity into different levels. The presented PPIE-ODLASC technique involves two major processes, namely encryption and severity classification (i.e., high, medium, low, and normal). For accident image encryption, the multi-key homomorphic encryption (MKHE) technique with a lion swarm optimization (LSO)-based optimal key generation procedure is involved. In addition, the PPIE-ODLASC approach involves a YOLO-v5 object detector to identify the region of interest (ROI) in the accident images. Moreover, the accident severity classification module encompasses an Xception feature extractor, bidirectional gated recurrent unit (BiGRU) classification, and Bayesian optimization (BO)-based hyperparameter tuning. The proposed PPIE-ODLASC algorithm is experimentally validated on accident images, and the outcomes are examined using multiple measures.
The comparative examination revealed that the PPIE-ODLASC technique showed an enhanced performance of 57.68 dB over other existing models.
Emergency vehicles such as ambulances or police vehicles require special treatment in traffic scenarios. To this end, their detection needs an automated approach that can scale to large urban areas. Vision-based systems have recently become a key enabler in traffic management, and in this paper we propose a novel approach for emergency vehicle detection with fewer false positives and false negatives than existing vision-based approaches. The key idea is to detect the siren on top of the vehicle rather than the vehicle itself, which gives two main advantages: first, a vehicle of any type is still detected as an emergency vehicle irrespective of its shape; second, non-emergency vehicles of similar make and model are not detected. We have created, and made publicly available, a dataset using image searches and traditional enhancement techniques to increase its size, and have trained the classification layer of the popular YOLO architecture on this enhanced dataset. We obtain a training accuracy of 77.1% for detecting the siren and show with multiple examples how false positives and false negatives are avoided. This approach hence provides a reliable methodology for emergency vehicle detection, an important component of traffic management systems within the smart cities objective.
This research introduces a comprehensive crowdsourced disaster management system utilizing artificial intelligence to enhance real-time response, decision-making, and disaster mitigation. The system integrates deep learning models for disaster detection, categorization, and prediction, leveraging cloud-based AWS services for scalability, reliability, and accessibility. The methodology includes real-time data gathering from social media platforms, IoT sensors, governmental databases, and user-generated reports, ensuring a robust and multi-source approach for situational awareness. By actively involving community participation through mobile and web-based applications, the system strengthens resilience and ensures immediate response to emergency situations. The project addresses critical challenges such as misinformation filtering, automatic classification of disaster severity, automated response recommendations, and infrastructure scalability. With advancements in AI-driven data analytics, the platform ensures efficient disaster response by optimizing resource allocation, reducing response time, and improving the coordination between emergency services and affected populations. The paper highlights the transformative potential of AI in disaster preparedness, mitigation, and response through intelligent automation and crowdsourced intelligence.
Emergency care is one of the cornerstone parts of the world health organization's action plan. Rapid response and immediate care are considered in agile emergency care. Artificial intelligence (AI) and informatics have been applied to fulfill these requirements through automated emergency technology. Machine learning (ML) is one of the main parts of some of these proposed technologies. There are various ML algorithms and techniques which are potentially applicable for different purposes of emergency care. AI-based approaches using classification and clustering algorithms, natural language processing, and text mining are some of the possible techniques that could prove useful for investigating models of emergency prevention and management and proposing improved procedures for handling such critical situations. ML is known as a field of AI which attempts to automatically learn from data and applies that learning to make better decisions. Decision-support tools can apply the results of either supervised or various semi-supervised or unsupervised learning methods to tackle the how decisions about emergency situations are typically handled by the best professionals at the scene of an emergency, in the pre-hospital, and in healthcare facility settings. Enhanced and rapid communication at the moment of emergency, with the most effective decision making for triaging to estimate the acute nature of injuries and possible complications, how to keep a patient stable on the way to the care facility, and also avoiding adverse drug reactions, are some of the possible directions for exploring how ML can help to gather the data and to make emergency management more efficient and effective. 
The wide range of scenarios present in emergency situations and the complexity of different legal and ethical constraints on what responding personnel are allowed to perform on an injured subject before reaching a hospital makes for a most challenging set of problems for investigating the components of "intelligent" decision support that could help in these highly interactive and humanly tragic situations.
No abstract available
Infectious and inflammatory diseases of the central nervous system are difficult to identify early. Case definitions for aseptic meningitis, encephalitis, myelitis, and acute disseminated encephalomyelitis (ADEM) are available, but rarely put to use. The VACC-Tool (Vienna Vaccine Safety Initiative Automated Case Classification-Tool) is a mobile application enabling immediate case ascertainment based on consensus criteria at the point-of-care. The VACC-Tool was validated in a quality management program in collaboration with the Robert-Koch-Institute. Results were compared to ICD-10 coding and retrospective analysis of electronic health records using the same case criteria. Of 68,921 patients attending the emergency room in 10/2010–06/2013, 11,575 were hospitalized, with 521 eligible patients (mean age: 7.6 years) entering the quality management program. Using the VACC-Tool at the point-of-care, 180/521 cases were classified successfully and 194/521 ruled out with certainty. Of the 180 confirmed cases, 116 had been missed by ICD-10 coding, 38 misclassified. By retrospective application of the same case criteria, 33 cases were missed. Encephalitis and ADEM cases were most likely missed or misclassified. The VACC-Tool enables physicians to ask the right questions at the right time, thereby classifying cases consistently and accurately, facilitating translational research. Future applications will alert physicians when additional diagnostic procedures are required.
Landslides are natural disasters that cause severe damage to human life and socio-economic infrastructure, making precise spatial delineation and early detection of hazardous areas essential for disaster management. This study compares automated classification performance of landslide hazard zones using U-Net deep learning architecture with different patch sizes (128×128 and 256×256). The study area is a steep slope in Bogok-ri, Hyoja-myeon, Yecheon-gun, Gyeongbuk Province, where a high-resolution DEM (0.1 m) was constructed using drone imagery and point cloud data. The 128×128 patch model achieved 70.5% accuracy with F1-scores of 0.582 and 0.599 for non-hazard and hazard classes, respectively, indicating balanced performance. In contrast, the 256×256 patch model yielded 65.96% accuracy, much lower F1-scores of 0.248 and 0.007, and a higher NoData F1-score of 0.786, reflecting predictions focused on NoData zones over hazard areas. Results suggest smaller patch sizes better capture local terrain variability and enhance landslide detection performance.
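The patch-size comparison above comes down to how a raster is tiled before being fed to U-Net: smaller patches yield more training samples and capture more local terrain variability. A minimal tiling sketch (non-overlapping tiles, incomplete edge tiles dropped; the grid representation is illustrative, not the study's data format):

```python
def tile_counts(height, width, patch):
    """Number of full, non-overlapping patch x patch tiles a raster yields."""
    return (height // patch) * (width // patch)

def tile(grid, patch):
    """Split a 2-D list into non-overlapping patch x patch tiles (edges dropped)."""
    h, w = len(grid), len(grid[0])
    return [[[row[c:c + patch] for row in grid[r:r + patch]]
             for c in range(0, w - patch + 1, patch)]
            for r in range(0, h - patch + 1, patch)]
```

For a 512×512 raster, 128×128 patches give 16 samples where 256×256 patches give only 4, which is one reason the smaller patch size can train a more balanced model.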
The growing complexity and frequency of incidents across many fields, particularly cybersecurity, healthcare, critical infrastructure, and emergency response, highlight the pressing need for automated, intelligent, and effective frameworks for incident reporting. Traditional manual methods often face constraints regarding latency, vulnerability to errors, and lack of analytical insights that are vital to supporting timely decision-making. This research explores the conceptual model and implementation of an Automated Incident Reporting and Intelligence Framework that enhances the speed, accuracy, and strategic value of incident management processes. The system proposed in this research leverages cutting-edge technologies like machine learning, natural language processing, decision support systems, real-time analytics, and Artificial Intelligence to support the detection, classification, and reporting of incidents. It also includes predictive intelligence and contextual analysis to develop actionable insights to aid stakeholders in prioritization of interventions and prevention of future incidents. The system architecture presented in this paper emphasizes scalability, interoperability, and modularity to cater to a diversity of organizational types while ensuring protection, confidentiality, and compliance with local and international regulations and standards. By integrating literature, technological innovations, and empirical case studies, this paper outlines fundamental design principles, deployment strategies, and assessment metrics essential to the effectiveness of an automated incident reporting system.
This research presents an automated human scream detection system designed to enhance public safety and contribute to crime reduction initiatives. The system utilizes deep learning techniques to accurately distinguish human screams from other environmental sounds, offering a potential early warning mechanism for emergency situations. The methodology employs Mel-frequency cepstral coefficients (MFCCs) for audio feature extraction and a bidirectional long short-term memory (BiLSTM) neural network architecture for classification. A dataset comprising labeled scream and non-scream audio samples was used to train and validate the model, achieving 92.5% accuracy on test data. Additionally, a graphical user interface was developed to facilitate real-time scream detection and visualization of audio waveforms. The system demonstrates potential for integration with existing surveillance infrastructure to expedite emergency response times. This research contributes to the growing field of acoustic event detection with specific applications in public safety, crime prevention, and smart city initiatives. The findings suggest that automated scream detection systems can serve as a valuable supplementary tool for law enforcement agencies to monitor high-risk areas and respond more efficiently to potential criminal activities. Keywords: scream detection, audio analysis, deep learning, BiLSTM, crime prevention, acoustic surveillance, public safety, MFCC, neural networks
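Before MFCCs can be computed, the audio stream is sliced into short overlapping analysis windows; each window later yields one MFCC feature vector for the BiLSTM. A minimal framing sketch (the window and hop sizes below are common illustrative values, not the paper's stated parameters):

```python
def frame_count(n_samples, win, hop):
    """Number of fixed-length analysis frames a signal yields."""
    return max(0, 1 + (n_samples - win) // hop)

def frames(signal, win, hop):
    """Slice a 1-D signal into overlapping windows (precedes MFCC extraction).

    win: window length in samples; hop: step between window starts.
    """
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, hop)]
```

With a 16 kHz signal, a 25 ms window (400 samples) and 10 ms hop (160 samples) are typical choices; each resulting frame would feed the MFCC stage.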
Flood severity detection enables rapid, accurate, and automated assessment of flood impact for effective disaster response and resource allocation. Aerial imagery obtained from Unmanned Aerial Vehicles (UAVs) is utilized to assess flood severity by categorizing the impacted regions into two classes: “High Floods” and “Low Floods”. This research introduces a hybrid deep learning methodology for post-flood severity assessment utilizing two advanced models: EfficientNetB2 and Vision Transformer (ViT). Data augmentation methods are utilized to enhance the model's generalization ability. The proposed technique attains 93% accuracy and an F1-score of 0.93, signifying strong classification efficacy. This methodology can assist emergency responders and planners in making informed decisions on catastrophe management and recovery.
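One common way to hybridize two classifiers like EfficientNetB2 and ViT is to average their class-probability outputs and take the argmax; the sketch below shows that combination rule under the assumption of simple averaging (the paper does not specify its exact fusion scheme, so the probabilities and weighting here are illustrative):

```python
def ensemble_predict(p_effnet, p_vit, classes=("Low Floods", "High Floods")):
    """Average two models' class-probability vectors and take the argmax.

    p_effnet, p_vit: per-class probabilities from the two backbones.
    Returns (predicted_class, averaged_probabilities).
    """
    avg = [(a + b) / 2 for a, b in zip(p_effnet, p_vit)]
    return classes[avg.index(max(avg))], avg
```

Averaging tends to smooth out cases where one backbone is overconfident on an ambiguous scene.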
Enhancing urban traffic management requires viable solutions for high-density traffic, which can cause congestion, delays, and emergencies. In this paper, an automated traffic monitoring system designed for real-time applications utilizes computer vision and deep learning for vehicle detection and classification while giving priority to emergency vehicles. Traffic data from webcams is continuously processed by the YOLOv9 model, chosen for its speed and accuracy in dynamic environments such as roads and streets, enabling vehicle detection and counting across different lanes with performance guarantees. A deep reinforcement learning (DRL) system with spatiotemporal convolutional neural networks underpins the program: light timings across lanes are adjusted dynamically through an intelligent lane allocation mechanism aimed at reducing congestion and minimizing response time for emergency vehicles. The DRL agent trains on historical records and real-time feedback to improve overall flow and emergency-vehicle prioritization via traffic light synchronization using computer vision. Real-time information comprising standard parameters such as total lane counts, average speed, and traffic flow is furnished to operators. Once any lane is identified as an emergency lane, it instantly receives a “high-priority” tag, permitting clearance for ambulances and similar vehicles by coordinating traffic light signals with nearby traffic systems. Overall, the system makes emergency prioritization efficient and improves urban traffic flow, addressing current traffic issues well.
Sustainable forest and emergency management require comprehensive data on the forest road network and its condition. This paper presents the final framework of the INFOROAD project (https://inforoad.karteco.gr/), which integrates cutting-edge remote sensing and machine learning technologies for automated periodic extraction and monitoring of the forest road network. The framework includes gravel road extraction, road graph extraction, and gravel road condition monitoring, with a focus on the periurban forest in Thessaloniki. The road extraction employs the U-TAE network architecture, with a proposed modification using inverted residual blocks for improved accuracy. Road graph extraction involves creating a graph from road segmentation output or OSM data, enabling efficient road segment analysis. Gravel-road width calculation utilizes road segmentation results and a series of image processing steps, while road condition monitoring employs ML/AI classification algorithms. Worldview 3 high-resolution satellite images and various auxiliary data sources (e.g. DEM) are used as input, including field measurements for the training of classification algorithms. Results showcase the effectiveness of the proposed framework, with gravel road extraction accuracy improved by the modified U-TAE model. Regarding gravel road condition monitoring, algorithms achieving satisfactory results are identified, despite the challenges that arise, due to the significant surface and texture variations in forest and agricultural roads. A WebGIS platform facilitates information presentation, user interaction, and management of geospatial information, supporting various functionalities such as layer management and spatial data visualization. The INFOROAD project represents a significant advancement in leveraging technology for sustainable forest road management and emergency preparedness. Future steps may involve further enhancements and adaptations for improvement of results.
There has been a sharp rise in the number of fatalities and injuries caused by traffic accidents in urban areas. Cities often have video and image resources that can be analyzed manually using operators to address this problem. This paper introduces an automated collision detection system that utilizes publicly available images captured by Toronto's traffic camera system. The system is based on a Deep Learning model, specifically a DenseNet-161, employed to classify accidents and non-accidents. The results of this classification are then displayed on a graphical user interface. The primary aim of this study is to reduce medical response time and ultimately save lives by issuing automatic alerts. The proposed system has the potential to minimize the severity of accidents and decrease the number of fatalities by notifying emergency services once an accident is detected.
No abstract available
Traffic congestion is a prevalent problem in modern civilizations worldwide, affecting both large cities and smaller communities. Emergency vehicles tend to group tightly together in these crowded scenarios, often masking one another. For traffic surveillance systems tasked with maintaining order and executing laws, this poses serious difficulties. Recent developments in machine learning for image processing have significantly increased the accuracy and effectiveness of emergency vehicle classification (EVC) systems, especially when combined with specialized hardware accelerators. The widespread use of these technologies in safety and traffic management applications has led to more sustainable transportation infrastructure management. Vehicle classification has traditionally been carried out manually by specialists, which is a laborious and subjective procedure that depends largely on the expertise that is available. Furthermore, erroneous EVC might result in major problems with operation, highlighting the necessity for a more dependable, precise, and effective method of classifying vehicles. Although image processing for EVC involves a variety of machine learning techniques, the process is still labor intensive and time consuming because the techniques now in use frequently fail to appropriately capture each type of vehicle. In order to improve the sustainability of transportation infrastructure management, this article places a strong emphasis on the creation of a hardware system that is reliable and accurate for identifying emergency vehicles in intricate contexts. The ResNet50 model’s features are extracted by the suggested system utilizing a Field Programmable Gate Array (FPGA) and then optimized by a multi-objective genetic algorithm (MOGA). A CatBoost (CB) classifier is used to categorize automobiles based on these features. 
Surpassing the previous state-of-the-art accuracy of 98%, the ResNet50-MOP-CB network achieved a classification accuracy of 99.87% for four primary categories of emergency vehicles. In tests conducted on tablets, laptops, and smartphones, it demonstrated excellent accuracy, fast classification times, and robustness for real-world applications. On average, each image was classified in 0.9 nanoseconds with a 96.65% accuracy rate.
The time and location of occurrence, among other characteristics of incidents, are needed to distribute traveler information and dispatch emergency services for traffic congestion and safety mitigation purposes. However, the corresponding clearance/recovery time is an important, yet under-studied, subject of research. Existing studies on incident duration modeling mainly utilize duration data gathered and provided by authorities through the efforts of individual agents. This type of manual data gathering may suffer from coverage and consistency issues. In addition, the explanatory variables considered in these studies are static, that is, observed at the incident formation stage and assumed to be constant over the incident period. This setup, however, overlooks traffic flow dynamics: time-varying traffic variables during the incident episode affect incident clearance and disruption characteristics. Motivated by these limitations, the objective of this study is to utilize hazard-based modeling to explain incident durations mined from traffic detector data while factoring in traffic flow dynamics. The 2014–2016 Virginia statewide traffic detector data were utilized to extract incident durations by implementing a machine-learning-based incident detection algorithm followed by a semi-parametric proportional hazard function. Such a function accounts for time-varying traffic-descriptive covariates in incident duration modeling. The resulting monotonically decreasing baseline hazard reveals the snowball effect and the inertia effect of disruptions induced by incidents, which leads to the advice that incidents should be detected and mitigated within a 30 min duration. The normalized flow, density, and speed measures, and their temporal differences, are all shown to be significant covariates scaling the hazard up or down.
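The proportional hazard structure with time-varying covariates has the form h(t | x(t)) = h0(t) · exp(β · x(t)), where h0 is the baseline hazard and β the covariate coefficients. A minimal numeric sketch, with an illustrative exponentially decreasing baseline standing in for the study's estimated (monotonically decreasing) baseline; all constants here are made up for illustration:

```python
import math

def hazard(t, x_t, beta, h0):
    """Proportional hazard with a time-varying covariate vector x(t):
    h(t | x(t)) = h0(t) * exp(beta . x(t))."""
    return h0(t) * math.exp(sum(b * x for b, x in zip(beta, x_t)))

# Illustrative monotonically decreasing baseline (the qualitative shape the
# study reports; the constants 0.1 and 0.05 are arbitrary).
h0 = lambda t: 0.1 * math.exp(-0.05 * t)
```

Because the covariates enter multiplicatively, a positive coefficient on, say, normalized flow scales the clearance hazard up at every point in time, while the decreasing baseline captures the "snowball" effect: the longer an incident lasts, the less likely it is to clear soon.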
This article presents a comprehensive analysis of real-time hazard detection systems in mining operations through the integration of computer vision and sensor networks. The article explores how artificial intelligence and advanced monitoring technologies are transforming traditional mining safety protocols, introducing innovative solutions for early hazard detection and emergency response. The article examines the implementation of sophisticated model architectures for video analytics, multilayered sensor networks, and data integration frameworks that enable precise tracking of worker behavior, equipment proximity, and environmental conditions. Through detailed investigation of system performance metrics, implementation challenges, and validation processes, this article demonstrates the significant impact of AI-driven safety systems on reducing workplace incidents and improving operational efficiency. The article also addresses critical challenges in underground mining environments, including environmental factors, technical constraints, and data quality management, while providing insights into future developments and best practices for industry adoption. This comprehensive approach to mining safety represents a significant advancement in protecting worker safety while maintaining productive operations.
This study's real-time audio categorization system uses a Sparse Long Short-Term Memory (LSTM) network to detect potentially harmful sounds. The system processes both live-recorded audio and pre-labeled datasets with different sound classes. To guarantee high-quality input for the model's training and real-time predictions, both kinds of data are cleaned and preprocessed. By extracting temporal features from the audio input, the Sparse LSTM network, designed to reduce computing costs, enables accurate categorization. During live prediction, the system analyzes incoming audio and sounds an alert if it detects "Danger"; if no threat is recognized, it terminates the procedure without further action. This framework offers a quick and effective solution for audio-based hazard identification, making it well suited to safety monitoring and alarm systems.
A traffic incident refers to any occurrence or situation that interrupts the regular movement of vehicle traffic or presents a hazard to road users. These incidents comprise a wide range of vehicle accidents, including rear-end, head-on, and side-impact crashes, which can lead to trauma, destruction, or property damage, as well as vehicle breakdowns caused by mechanical failures, such as engine problems or flat tires, which render cars immobile. The proposed paper presents a state-of-the-art method for categorizing traffic incidents that combines two sophisticated computer vision models: SegFormer and Faster R-CNN. SegFormer, an advanced semantic segmentation model, creates detailed pixel-by-pixel classification maps of traffic scenes, facilitating the clear distinction of various traffic actors and components; the final segmentation result offers a broad understanding of the spatial arrangement and corresponding information within the image. In addition, Faster R-CNN, an object detector with outstanding performance in detecting and categorizing distinct items such as automobiles and pedestrians, improves the detection and classification of specific traffic-related items within the segmented regions. The efficiency of the proposed approach is measured using a collection of traffic images, showing improved ability in identifying and classifying several forms of traffic hazards. The findings demonstrate that this hybrid method greatly surpasses conventional single-model techniques, providing a more robust and complete solution for the analysis and control of traffic incidents.
General aviation pilots who encounter hazardous weather face a heightened risk of fatal accidents compared to those in other sectors of aviation. To help avoid unplanned weather encounters, accurate information on cloud type and sky conditions can enhance situational awareness and hazard recognition. Among available weather information resources, ground-based webcam networks are growing for aviation meteorology and other interests such as wildfire monitoring. These webcams can provide near-real-time visual weather information to pilots, especially in regions of complex terrain where traditional weather observation methods may lack adequate spatial and temporal coverage. To harness the benefits from webcams while reducing the need for manual interpretation, transfer learning is applied using off-the-shelf convolutional neural networks on a newly constructed meteorological dataset. This dataset, consisting of more than 15 500 rigorously labeled images from public webcam networks, is categorized into nine cloud types and weather conditions relevant to general aviation. Leveraging a five-model ensemble approach with the Inception-v3 architecture, a validation accuracy of 97.1% is achieved. Dataset classes are grouped into those that are typically considered nonhazardous or hazardous to general aviation operations, and a hazard-based classification accuracy of 99.5% is attained with the ensemble model.
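The two-step decision described above — majority vote across the five-model ensemble, then collapse of the nine fine-grained classes into a hazardous/nonhazardous grouping — can be sketched as follows. The class names and their hazard grouping below are hypothetical placeholders; the paper defines its own nine categories:

```python
from collections import Counter

# Hypothetical grouping: which fine-grained classes count as hazardous
# to general aviation (placeholder names, not the paper's taxonomy).
HAZARDOUS = {"cumulonimbus", "fog", "precipitation", "low_ceiling"}

def hazard_label(cloud_class):
    """Collapse a fine-grained class into the binary hazard grouping."""
    return "hazardous" if cloud_class in HAZARDOUS else "nonhazardous"

def ensemble_hazard(votes):
    """Majority vote over the five models' per-image predictions,
    then map the winning class to hazardous/nonhazardous."""
    winner, _ = Counter(votes).most_common(1)[0]
    return hazard_label(winner)
```

Grouping after voting explains why the hazard-level accuracy (99.5%) can exceed the nine-class accuracy (97.1%): models that disagree on the exact cloud type often still agree on the hazard grouping.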
Safety on train platforms is a common concern in railway systems around the world. There are grave risks associated with the tracks and the crowds that often result in injuries or deaths. Traditional approaches for ensuring safety in train transport networks rely on passive closed-circuit television (CCTV) monitoring that needs constant human attention. This paper proposes an AI-based alternative for detecting anomalies in surveillance videos that is more efficient and cost-effective than traditional methods. Through Video Instance Segmentation (VIS), this work detects common risks such as overcrowding on the platform, people or objects standing on the edge of the platform, individuals or objects falling onto the tracks, the presence of firearms, and the presence of unattended baggage. The proposed algorithm combines state-of-the-art models like YOLOv8, ByteTrack, and the Segment Anything Model (SAM) to classify, track, and segment object detections, respectively. Additionally, this paper presents a custom-trained YOLOv8 model for gun detection. The results show that the system can successfully analyze video, create surveillance annotations, detect hazardous situations to alert authorities, and help prevent accidents and incidents on train platforms.
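Risks like "standing on the edge of the platform" reduce, after detection and tracking, to a geometric test between each tracked object's box and a calibrated danger zone. A minimal sketch using axis-aligned rectangles (a simplification; the paper works with instance segmentation masks, and the zone/ID formats here are assumed):

```python
def boxes_overlap(a, b):
    """Axis-aligned overlap test; boxes are (x1, y1, x2, y2)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def risky_detections(tracked, edge_zone):
    """Return IDs of tracked objects whose boxes intersect the
    platform-edge danger zone (tracked: {track_id: bbox})."""
    return [obj_id for obj_id, box in tracked.items()
            if boxes_overlap(box, edge_zone)]
```

Because ByteTrack supplies stable track IDs, an alert can be raised only when the same ID stays inside the zone for several consecutive frames, suppressing flicker from single-frame detections.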
In industrial and public safety applications, detecting hazardous situations in a timely manner is critical for averting accidents and maintaining human safety. This work offers a novel method for identifying dangerous environments by using cutting-edge image processing algorithms. Using sophisticated image processing algorithms, the technology extracts pertinent information from visual data that is recorded by cameras or other sensors. Gas sensors are simultaneously used to track the chemical composition of the surrounding air, with a special emphasis on chemicals that may pose a threat. The integration of gas and optical sensor data allows for a more thorough analysis, which enhances the system's precision in identifying and categorizing dangerous situations.
This research explores how artificial intelligence can fundamentally change the way enterprises handle incident management. As companies face increasingly complex IT environments, traditional manual approaches to spotting, categorizing, and fixing problems have struggled to keep pace with operational demands. By bringing together AI capabilities like machine learning for spotting unusual patterns, natural language processing for understanding automated reports, and predictive analytics for assessing problem severity, we can dramatically improve how efficiently incidents are managed. This investigation examines how AI-based approaches enable systems to automatically generate incident reports, make smart severity assessments, calculate changing impacts in real-time, and create thorough documentation through meeting transcription and summarization. While highlighting the benefits of increased speed, better accuracy, and improved scalability, this work also addresses real-world implementation hurdles including data quality concerns, potential algorithmic biases, and the complexities of integrating with existing systems. Companies that successfully implement AI-powered incident management solutions stand to gain stronger operational resilience, faster problem resolution times, and happier customers, giving them a competitive edge in our increasingly digital business landscape.
This research addresses key challenges in accident detection, location tracking and hazard response systems through Internet of Things (IoT). The system is composed of four integrated modules: (1) a fire detection unit that automatically identifies fire incidents, (2) a ventilation system for raised carbon monoxide levels, (3) an automatic braking mechanism for emergencies and (4) accident detection and location tracking module that immediately identifies the accident site for emergency response. The complete prototype is implemented on an Android-controlled electric vehicle robot, utilising ESP32 microcontrollers, global positioning system modules and various sensors. Each component is individually tested and calibrated and experimental results demonstrate the effectiveness of the proposed prototype in real-time accident detection, hazard management, and precise location-based alerting.
With the increasing number of incidents in complex rail-road environments, there is an urgent need for automated systems that can predict dangerous events through accurate detection and distance measurement between trains and various hazards. To address these challenges, we introduce the Near-Miss Detector (NMD), an integrated framework that leverages specialized models to accurately detect possible collisions between trains and people or objects in the railroad environment via monocular cameras installed on the front of a train. NMD constructs a comprehensive, 3-dimensional view of a given scene via object detection, instance segmentation, and depth estimation. In this view, the risk of an accident is measured through the distance between the moving train and detected objects in the scene. In order to apply NMD in real-world scenarios, we present a novel depth-calibration mechanism based on constant geometrical properties of a railroad environment, such as the gauge of the rail track. To validate our work, we collected a dataset of measurements from multiple train stations in order to accurately represent the diversity and challenges of a complex railroad environment. NMD demonstrates robust performance in object detection, track segmentation, and distance measurement while maintaining suitable processing latency. This work contributes to the field of automated railway safety monitoring by showing the feasibility of monocular vision-based distance measurement in complex railway environments, offering a cost-effective solution for improving railway safety systems.
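The gauge-based calibration idea rests on the pinhole camera model: for an object of known real width W appearing w pixels wide, depth is Z = f · W / w. Since the rail gauge is a known constant, measuring its apparent pixel width at a given row calibrates the scale. A minimal sketch (standard gauge assumed; the paper's exact calibration procedure is more involved):

```python
STANDARD_GAUGE_M = 1.435  # real-world width of standard-gauge track, metres

def depth_from_gauge(focal_px, gauge_width_px):
    """Pinhole model Z = f * W / w, using the rail gauge as the known width."""
    return focal_px * STANDARD_GAUGE_M / gauge_width_px

def object_distance(focal_px, obj_width_m, obj_width_px):
    """Same relation for any object of known real width."""
    return focal_px * obj_width_m / obj_width_px
```

With a focal length of 1000 px, track rails spanning 100 px place that part of the scene about 14.35 m ahead; this per-row scale can then anchor a monocular depth map in metric units.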
Falls are a common problem in many environments and affect people of all ages. Although some falls are minor incidents, they can have serious consequences, especially for vulnerable groups like the elderly and stroke survivors. This study aimed to develop a system for detecting falls in patients using sensor fusion and machine learning methods to accurately identify fall positions. The system combines data from accelerometers and gyroscopes using the Kalman filter to categorize falls into four types: supine, prone, left, and right. The system uses the k-Nearest Neighbors (k-NN) algorithm with thresholded fall-motion detection to reduce false detections. A detected fall triggers the system to send the position data via LoRaWAN communication, making the data accessible through Node-RED and Telegram. The system's performance was evaluated through several tests: MPU6050 sensor measurements to calibrate the accelerometer and gyroscope and verify their Euler-angle response, Kalman filter measurements, threshold fall detection with the k-NN algorithm, and LoRaWAN communication performance. The results showed that calibrating the MPU6050 sensor effectively minimized sensor drift and noise, the Kalman filter successfully reduced noise in the sensor readings, the k-NN algorithm provided optimal system values and performance, and data transmission via LoRaWAN to Node-RED and Telegram was effective.
To constructively enhance traffic safety measures in Saudi Arabia, a significant number of AI-based traffic surveillance technologies have emerged over the past years, including the widely known Saher system. Rapid detection of vehicle incidents is crucial for improving the response speed of incident management, which in turn minimizes road injuries resulting from accidents. To meet the growing demand for road traffic security and safety, this paper presents a real-time traffic incident detection and alert system based on a computer vision approach. The proposed framework comprises three models, each integrated within a prototype interface to fully visualize the system’s overall architecture. Vehicle Detection and Tracking Model: This model uses the YOLOv5 object detector combined with the DeepSORT tracker to detect and track vehicle movements, assigning a unique identification number (ID) to each vehicle. The model achieved a mean average precision (mAP) of 99.2%, ensuring high accuracy in vehicle detection and tracking. Traffic Accident and Severity Classification Model: Utilizing the YOLOv5 algorithm, this model detects and classifies the severity level of traffic accidents. It attained a mAP of 83.3%. Upon detecting a severe accident, the system sends an immediate alert message to the nearest hospital, ensuring timely medical response. Fire Detection Model: This model employs the ResNet152 algorithm to detect fire ignition following an accident. It achieved an accuracy rate of 98.9%. If a fire is detected, an automated alert is sent to the fire station, facilitating quick firefighting response. An innovative parallel computing technique was employed to reduce the overall complexity and inference time of the AI-based system, enabling the proposed system to operate concurrently and in parallel. 
This parallel processing capability ensures that the detection, classification, and alerting processes occur swiftly and efficiently, enhancing the overall effectiveness of the system. By integrating these advanced AI models, the real-time traffic incident detection and alert system significantly contributes to improving traffic safety and incident management in Saudi Arabia. The system not only detects and tracks vehicles with high precision but also classifies the severity of accidents and identifies subsequent hazards like fires, ensuring comprehensive and timely responses to traffic incidents. This innovative approach sets a new benchmark for AI-driven traffic safety solutions and offers a scalable model that can be adapted to other regions and contexts.
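The alert logic described above (a severe accident notifies the nearest hospital, a fire notifies the fire station) can be sketched as a toy dispatcher. The class labels and confidence threshold here are illustrative assumptions, not the paper's implementation.

```python
def route_alerts(detections, severe_thresh=0.5):
    """Toy dispatcher for the described alert flow: decide which
    facilities to notify from a list of (label, confidence) pairs.
    Labels and the threshold are illustrative assumptions."""
    alerts = []
    for label, conf in detections:
        if conf < severe_thresh:
            continue  # ignore low-confidence detections
        if label == "severe_accident":
            alerts.append("hospital")
        elif label == "fire":
            alerts.append("fire_station")
    return alerts

alerts = route_alerts([("minor_accident", 0.9),
                       ("severe_accident", 0.8),
                       ("fire", 0.7)])
# → ["hospital", "fire_station"]
```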
The intelligent monitoring of road surveillance videos is a crucial tool for detecting and predicting traffic anomalies, swiftly identifying road safety risks, rapidly addressing potential hazards, and preventing accidents or secondary incidents. With the vast number of surveillance cameras in operation, conducting traditional real-time video analysis across all cameras at once requires substantial computational resources. Alternatively, methods that employ periodic camera patrol analysis frequently overlook a significant number of anomalous traffic events, thereby hindering the effectiveness of traffic event detection. To overcome these challenges, this paper introduces a heuristic optimal scheduling approach designed to enhance traffic event detection efficiency while operating within limited computational resources. This method leverages historical data and prior knowledge to compute a weighted event feature value for each camera, providing a quantitative measure of its detection efficiency. To optimize resource allocation, a cyclic elimination mechanism is implemented to exclude low-performing cameras, enabling the dynamic reallocation of resources to higher-performing cameras, thereby enhancing overall detection performance. Finally, the effectiveness of the proposed method is validated through a case study conducted in a representative region of a major metropolitan city in China. The results revealed a substantial improvement in traffic event detection efficiency, with increases of 40%, 28%, 17%, and 28% across different time periods when compared to the pre-optimized state. Furthermore, the proposed method outperformed existing resource scheduling algorithms in terms of average load degree, load balance degree, and computational resource utilization.
By avoiding the common issues of resource wastage and insufficiency often found in static allocation models, this approach offers greater flexibility and adaptability in computational resource scheduling, thereby effectively addressing the practical demands of traffic anomaly detection and early warning systems.
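The scheduling idea above can be sketched as follows: each camera's weighted event feature value is taken as an exponentially weighted event rate, the lowest scorers are cyclically eliminated each round, and the freed slots probe the least-explored idle cameras. The scoring formula and all parameters are illustrative stand-ins, not the paper's exact method.

```python
def schedule_cameras(observe, cameras, capacity, rounds, drop_frac=0.25, alpha=0.5):
    """Cyclic-elimination scheduler sketch. observe(cam) returns the
    events a camera produced this round; capacity is the number of
    concurrent analysis slots. All constants are illustrative."""
    scores = {c: 0.0 for c in cameras}  # weighted event feature values
    tried = {c: 0 for c in cameras}     # how often each camera was analyzed
    active = list(cameras)[:capacity]
    for _ in range(rounds):
        for cam in active:
            # Exponentially weighted event rate as the feature value.
            scores[cam] = alpha * observe(cam) + (1 - alpha) * scores[cam]
            tried[cam] += 1
        n_drop = max(1, int(capacity * drop_frac))
        # Cyclic elimination: keep the best, free the remaining slots.
        survivors = sorted(active, key=lambda c: scores[c], reverse=True)[:capacity - n_drop]
        # Freed slots probe the least-explored idle cameras.
        idle = sorted((c for c in cameras if c not in survivors),
                      key=lambda c: (tried[c], -scores[c]))
        active = survivors + idle[:n_drop]
    return sorted(active, key=lambda c: scores[c], reverse=True)

# Six cameras with fixed per-round event rates; three analysis slots.
rates = {"A": 5, "B": 0, "C": 1, "D": 4, "E": 0, "F": 3}
busiest = schedule_cameras(lambda c: rates[c], list("ABCDEF"),
                           capacity=3, rounds=4)
```

After four rounds the scheduler converges on the three busiest cameras, illustrating how resources migrate away from low-yield feeds.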
The number of vehicular accidents has increased significantly compared to previous years; an estimated seventeen crashes occur every hour, and two-wheeler accidents account for a sizable proportion of all incidents because two-wheeled vehicles lack many of the safety measures found on cars and larger vehicles. This paper outlines methods for accident mitigation through an effective system that quickly identifies incidents and alerts the appropriate authorities and individuals. An accelerometer assesses potential accidents by measuring head tilt, while vibration sensors detect collisions and issue timely warnings about potential hazards. When the engine is turned off, the MQ3 monitor sends a text message to the rider’s mobile device. Since failure to wear a helmet significantly increases the risk of brain injury during an accident, helmet use can help reduce the severity of head injuries. A fuel-level monitor, with a theft-detection sensor located at the fuel tank’s aperture, helps prevent fuel theft. The speed-regulating subsystem functions as an independent turbine: the spinning of the turbine motor indicates the vehicle’s speed, and an audible "overspeed beep" notifies the rider when a predetermined velocity threshold is exceeded. Because overloading has caused incidents, a load cell sensor may be installed to reduce the risk of vehicle overload. Data on distance, velocity, fuel consumption, and mileage are collected and stored using the Internet of Things (IoT).
The increasing severity of climate-related workplace hazards challenges occupational health and safety, particularly for Public Health and Safety Inspectors. Exposure to extreme temperatures, air pollution, and high-risk environments heightens immediate physical threats and long-term burnout. This study employs Artificial Intelligence (AI)-driven predictive analytics and secondary data analysis to assess hazards and forecast burnout risks. Machine learning models, including eXtreme Gradient Boosting (XGBoost 3.0), Random Forest, Autoencoders, and Long Short-Term Memory (LSTMs), achieved 85–90% accuracy in hazard prediction, reducing workplace incidents by 35% over six months. Burnout risk analysis identified key predictors: physical hazard exposure (β = 0.76, p < 0.01), extended work hours (>10 h/day, +40% risk), and inadequate training (β = 0.68, p < 0.05). Adaptive workload scheduling and fatigue monitoring reduced burnout prevalence by 28%. Real-time environmental data improved hazard detection, while Natural Language Processing (NLP)-based text mining identified stress-related indicators in worker reports. The results demonstrate AI’s effectiveness in workplace safety, predicting, classifying, and mitigating risks. Reinforcement learning-based adaptive monitoring optimizes workforce well-being. Expanding predictive-driven occupational health frameworks to broader industries could enhance safety protocols, ensuring proactive risk mitigation. Future applications include integrating biometric wearables and real-time physiological monitoring to improve predictive accuracy and strengthen occupational resilience.
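As a hedged illustration of how the reported predictors could combine into a risk score, the sketch below plugs the abstract's coefficients (β = 0.76 for hazard exposure, β = 0.68 for inadequate training, +40% relative risk above 10 h/day) into an assumed logistic form. The intercept and the functional form are assumptions for illustration, not the study's fitted model.

```python
import math

def burnout_risk(hazard_exposure, hours_per_day, training_adequacy):
    """Illustrative logistic combination of the abstract's predictors.
    Coefficients come from the reported betas; the intercept (-2.0)
    and the multiplicative long-hours adjustment are assumptions."""
    z = -2.0                               # assumed intercept
    z += 0.76 * hazard_exposure            # physical hazard exposure (beta = 0.76)
    z += 0.68 * (1.0 - training_adequacy)  # inadequate training (beta = 0.68)
    p = 1.0 / (1.0 + math.exp(-z))
    if hours_per_day > 10:
        p = min(1.0, p * 1.4)              # +40% relative risk for >10 h/day
    return p

low = burnout_risk(hazard_exposure=0.0, hours_per_day=8, training_adequacy=1.0)
high = burnout_risk(hazard_exposure=2.0, hours_per_day=12, training_adequacy=0.2)
```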
Supermarkets need to implement safety measures to create a safe environment for shoppers and employees. Many of these injuries, such as falls, are caused by a lack of safety precautions. Such incidents are preventable by timely detection of hazardous conditions such as undesirable objects on supermarket floors. In this paper, we describe EdgeLite, a new lightweight deep learning model specifically designed for local and fast inference on edge devices which have limited memory and compute power. We show how EdgeLite was deployed on three different edge devices for detecting hazards in images of supermarket floors. On our dataset of supermarket floor hazards, EdgeLite outperformed six state-of-the-art object detection models in terms of accuracy when deployed on the three small devices. Our experiments also showed that energy consumption, memory usage, and inference time of EdgeLite were comparable to that of the baseline models. Based on our experiments, we provide recommendations to practitioners for overcoming resource limitations and execution bottlenecks when deploying deep learning models in settings involving resource-constrained hardware.
Oil spills on the water surface pose a significant environmental hazard, underscoring the critical need for developing Artificial Intelligence (AI) detection methods. Utilizing Unmanned Aerial Vehicles (UAVs) can significantly improve the efficiency of oil spill detection at early stages, reducing environmental damage; however, there is a lack of training datasets in the domain. In this paper, LADOS is introduced, an aeriaL imAgery Dataset for Oil Spill detection, classification, and localization that incorporates both liquid and solid classes in low-altitude images. LADOS comprises 3388 images annotated at the pixel level across six distinct classes, including the background. In addition to a general oil class describing various oil spill appearances, LADOS provides a detailed categorization by including emulsions and sheens. A detailed examination of both instance and semantic segmentation approaches is presented to validate the dataset’s performance and significance to the domain. The results on the test set demonstrate an overall performance exceeding 66% mean Intersection over Union (mIoU), with specific classes such as oil and emulsion surpassing 74% IoU in parts of the experiments.
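The evaluation metric above, mean Intersection over Union, can be computed from paired label maps as follows. This is a generic pure-Python sketch of the metric, not LADOS tooling.

```python
def class_iou(pred, truth, n_classes):
    """Per-class IoU and mIoU over flattened label maps: for each
    class, intersection counts pixels where both maps agree on that
    class, union counts pixels where either map uses it."""
    inter = [0] * n_classes
    union = [0] * n_classes
    for p, t in zip(pred, truth):
        if p == t:
            inter[p] += 1
            union[p] += 1
        else:
            union[p] += 1
            union[t] += 1
    ious = [inter[c] / union[c] if union[c] else 0.0 for c in range(n_classes)]
    return ious, sum(ious) / n_classes

# Two classes over four pixels: class 0 -> IoU 1/2, class 1 -> IoU 2/3.
ious, miou = class_iou([0, 0, 1, 1], [0, 1, 1, 1], 2)
```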
The novel Surveillance Camera-Based Fire Detection System maximizes the utilization of existing surveillance infrastructure. Using state-of-the-art computer vision and AI algorithms, it can instantly identify flames, smoke, and other fire-related phenomena in real-time video feeds from strategically placed security cameras. By merging information from several cameras, it decreases blind spots and raises the possibility of early detection. When a fire event is identified, it immediately alerts both nearby monitoring centers and on-site staff, providing vital details about the location and seriousness of the situation. The system adapts quickly to various environments and integrates with fire suppression systems to trigger automatic responses. It also retains historical data for post-incident analysis and hazard prediction based on historical trends. This cutting-edge technology improves fire safety by offering thorough real-time monitoring and proactive fire prevention while utilizing current infrastructure for cost-effective deployment.
Fire incidents pose severe threats to human life, property, and the environment, demanding rapid and efficient intervention. Traditional firefighting methods expose personnel to extreme hazards, including high temperatures, toxic gases, and structural collapses. This paper presents an IoT and AI-enhanced autonomous fire-fighting robot, designed to detect, navigate, and extinguish fires with minimal human involvement. The proposed system integrates flame, temperature, and gas sensors with an Arduino-based control unit and an AI-assisted flame classification model to minimize false alarms. Real-time environmental data is transmitted via IoT to a remote monitoring station, enabling both autonomous and manual override modes. The navigation system employs ultrasonic sensors and obstacle avoidance algorithms, while the fire suppression mechanism uses a servo-controlled water or CO2-based extinguisher. Experimental results demonstrate a detection accuracy of 96.5% and a fire extinguishing time under 25 seconds in controlled environments. The proposed solution significantly enhances firefighting efficiency, reduces human risk, and offers scalable applications for industrial, commercial, and residential safety systems.
Fire outbreaks pose a significant threat to lives and property, making early detection crucial for minimizing damage. Traditional fire detection methods often rely on manual monitoring or conventional image analysis techniques, which can lead to delayed detection and lower accuracy. To address these challenges, this project implements an AI-powered fire detection system using the YOLOv8 object detection model. The model has been trained on a dataset of 2,509 images, with 1,004 used for training, 754 for validation, and 751 for testing. The system processes video input in real time, detecting fire and marking affected areas with a bounding box and confidence score. Detection details, including the timestamp, fire status, and confidence level, are logged in a CSV file for record-keeping. Additionally, an automated alert system is integrated using Twilio’s SMS service, which immediately notifies designated authorities upon fire detection. The model achieves a mean Average Precision (mAP) of 91.3%, a precision of 90.3%, and a recall of 86.9%, demonstrating high reliability in identifying fire incidents. With its ability to detect fire efficiently and provide real-time alerts, this system offers a scalable and effective solution for fire monitoring and prevention.
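The CSV logging step described above might look like the following sketch, which records a timestamp, fire status, and confidence per detection; the exact field order and status strings are assumptions.

```python
import csv
import io
from datetime import datetime

def log_detection(writer, fire_detected, confidence):
    """Append one detection record in the format the abstract
    describes: timestamp, fire status, confidence level. The field
    order and labels are assumptions for illustration."""
    writer.writerow([datetime.now().isoformat(timespec="seconds"),
                     "FIRE" if fire_detected else "NO_FIRE",
                     f"{confidence:.2f}"])

# In-memory demo; a real deployment would open a file in append mode.
buf = io.StringIO()
w = csv.writer(buf)
w.writerow(["timestamp", "status", "confidence"])
log_detection(w, True, 0.913)
```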
This study addresses critical challenges in urban road network safety management—specifically, the inefficiency of manual hazard screening and its heavy reliance on empirical judgment. We propose a screening and prioritization method for urban road hazard points, facilitating a paradigm shift from reactive, post-incident mitigation to proactive, comprehensive pre-evaluation. This framework provides an evidence-based foundation for prioritizing hazard remediation according to severity and urgency. First, a three-layer screening framework is established, comprising 1) a road network topology layer, 2) a traffic operation layer, and 3) a conflict identification layer. Guided by the principles of accessibility, representability, and computability, this architecture systematically integrates 10 indicators across these layers. Second, leveraging both static physical infrastructure data and dynamic traffic operational data from urban roads, we develop classification and grading rules for hazard identification. Each indicator is categorized on a four-level rating scale (0–3 points) reflecting hazard levels. Subsequently, a scenario-based weighted summation of these indicators facilitates a graded screening and prioritization of urban road networks. High-priority locations and segments for safety interventions are identified using predetermined percentile thresholds. For the top-tier hazard points, management suggestions can be formulated based on their performance in the conflict identification indicator set. Finally, high-priority intersections are pinpointed through a large-scale case study encompassing 18 major Chinese cities. The analysis reveals recurrent hazard patterns and typical scenarios, thereby offering targeted support for addressing critical issues at urban road junctions.
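The scenario-based weighted summation and percentile screening can be sketched as below; the indicator weights, the site ratings, and the percentile cutoff are illustrative assumptions, not the paper's calibrated values.

```python
def prioritize(sites, weights, top_percentile=0.9):
    """Weighted-summation screening sketch: each site carries 0-3
    ratings per indicator; sites whose weighted score reaches the
    chosen percentile of all scores are flagged high priority."""
    scores = {name: sum(w * r for w, r in zip(weights, ratings))
              for name, ratings in sites.items()}
    ranked = sorted(scores.values())
    cutoff = ranked[min(len(ranked) - 1, int(top_percentile * len(ranked)))]
    return {name: s for name, s in scores.items() if s >= cutoff}

# Four hypothetical sites rated on three indicators (0-3 each).
sites = {"A": [3, 2, 3], "B": [1, 0, 1], "C": [2, 3, 3], "D": [0, 1, 0]}
flagged = prioritize(sites, weights=[0.5, 0.3, 0.2])
```

With the default 90th-percentile cutoff only the top-scoring site is flagged; lowering `top_percentile` widens the intervention list.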
The intensity of cyber security hazards is increasing, requiring smart, integrated solutions that can immediately identify, classify, and respond to threats. This paper presents a comprehensive AI-powered cyber security system that integrates modules for malware detection, network intrusion detection, and email classification, together with simplified monitoring and a centralized dashboard for coordination. To increase threat detection accuracy and reduce false positives, the system employs machine learning and deep learning methods. The email classification module uses supervised learning and natural language processing (NLP) to effectively identify spam and phishing emails. The malware detection module employs a neural network classifier with static and dynamic analysis methods to identify malicious programs. The network intrusion detection system (NIDS) examines network traffic behavior through anomaly detection models to identify suspicious activities. The dashboard aggregates insights from each module, providing real-time analytics, alert management, and user response tools to increase system resilience. Experimental findings show encouraging classification accuracy across the modules, with the email classifier reaching more than 90% accuracy. Although latency and throughput were not evaluated, the modular and scalable design supports flexible deployment in various network settings. This study highlights the potential of AI-driven cyber security systems to proactively identify threats in corporate environments and improve incident management.
During crises, people use X to share real-time updates. These posts reveal public sentiment and evolving emergency situations. However, changing sentiment in tweets coupled with anomalous patterns may indicate significant events, misinformation, or emerging hazards that require timely detection. Employing deep learning techniques for crisis observation, this study proposes a pipeline for sentiment analysis and anomaly detection in crisis-related tweets. The authors used pre-trained BERT to classify tweet sentiment. For sentiment anomaly detection, autoencoders and recurrent neural networks (RNNs) with an attention mechanism were applied to capture sequential relationships and identify irregular sentiment patterns that deviate from typical crisis discourse. Experimental results show that neural networks are more accurate than traditional machine learning methods for both sentiment categorization and anomaly detection, with higher precision and recall for identifying sentiment shifts in the public. This study indicates that neural networks can be used for crisis management and the early detection of significant sentiment anomalies, which could benefit emergency responders and policymakers and support data-driven decisions.
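As a simplified stand-in for the paper's autoencoder/RNN detectors (explicitly not its model), a trailing-window z-score already captures the core idea of flagging sentiment that deviates sharply from recent crisis discourse. The window size and threshold are illustrative.

```python
import statistics

def sentiment_anomalies(series, window=5, z_thresh=2.5):
    """Flag each point whose sentiment score deviates from the
    trailing window's mean by more than z_thresh standard deviations.
    A simplified stand-in for learned anomaly detectors."""
    flags = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu = statistics.fmean(hist)
        sd = statistics.pstdev(hist) or 1e-9  # guard against zero variance
        flags.append(abs(series[i] - mu) / sd > z_thresh)
    return flags

# Mildly positive sentiment, then a sudden negative spike.
flags = sentiment_anomalies([0.1, 0.2, 0.15, 0.1, 0.2, 0.15, -0.9])
```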
The Smart Helmet system uses Arduino and ESP32 microcontroller components to create a real-time hazard detection system that protects riders on the road. The system is equipped with a vibration sensor to detect impacts and an ultrasonic sensor to identify nearby objects, with a buzzer that activates when risks approach. The ESP32 works with a GPS module that allows the system to generate precise location information during impact incidents. Over Wi-Fi, the system transmits data by means of its built-in SMTP client, delivering emails to pre-programmed recipients containing Google Maps location information. The device incorporates an LED turn indicator controlled by a DPDT switch for safer road operation. Operating costs and power usage are low, the design is budget-efficient, and there are development opportunities for solar power integration. The paper presents an affordable IoT-based safety system with modular design elements, showing how embedded systems together with wireless connectivity can decrease emergency response times and encourage safer riding behavior.
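The email payload described above might be assembled as follows. The function name and message wording are hypothetical; the Google Maps query-URL form is the standard `maps?q=lat,lon` scheme.

```python
def crash_alert_body(lat, lon, rider="rider"):
    """Build an alert email body with a Google Maps link derived from
    GPS coordinates. Wording is illustrative; a real deployment would
    hand this string to the ESP32's SMTP client."""
    link = f"https://www.google.com/maps?q={lat:.6f},{lon:.6f}"
    return (f"Impact detected for {rider}.\n"
            f"Last known location: {link}")

body = crash_alert_body(12.971599, 77.594566)
```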
In the maritime field, major ship fire incidents cause significant property damage and seriously threaten crew members’ safety. Therefore, deploying early fire detection and warning systems is necessary to prevent fire spread in the particularly harsh environmental conditions on ships. This paper proposes an intelligent fire detection model for shipboard applications using a deep learning architecture combining Convolutional Neural Networks (CNN) and Long Short-Term Memory networks (LSTM). The CNN module extracts spatial features from image sensor data, while the LSTM component captures temporal patterns, enhancing the ability to detect fires early. The training dataset is based on a 3D ship model in Unity, allowing testing in realistic scenarios while ensuring safety. The experimental results show that the CNN-LSTM model fully meets the requirements for fire hazard detection compared to conventional methods, increasing accuracy and fire safety on ships in the maritime environment.
Fire incidents pose severe risks for patients and staff in healthcare facilities especially when these facilities lack automated early detection systems. In complex and large hospital settings, traditional smoke and fire detection technologies suffer from significant delays and cannot work in real time. This paper proposes an artificial intelligence (AI)-based fire and smoke hazard detection system that exploits the video streams of existing surveillance camera networks and applies emerging deep learning methods for prompt and effective fire recognition. This research work focuses on the advances in object detection, particularly with YOLOv11, a cutting-edge model from You Only Look Once (YOLO) family. A custom data set consisting of 17,525 images is created with 27,314 annotations of fires and smoke for both indoor and outdoor scenes for training and evaluation. All variants of YOLOv11 (from nano to xlarge) are trained and evaluated. The experiments carried out show that all variants achieved high detection performance with a mean average precision of around 90% for the medium model. In addition, the results highlight the trade-offs between the size of the model, detection accuracy and inference speed, emphasizing the practical implications for deploying AI-based fire detection systems in real-world healthcare environments.
Effective disaster management hinges on prompt, informed decisions, where social media has emerged as a real-time information source. However, current artificial intelligence (AI) systems for disaster response rely on universal taxonomies that assume information relevance is consistent across geographical and cultural contexts – an assumption that fails to account for regional variations in disaster types, response capabilities and local priorities. This study questions the ‘one-size-fits-all’ approach by developing context-specific social media indicator taxonomies through participatory engagement with 104 stakeholders across Ghana and Mauritius. We developed a taxonomy of 39 social media indicators across four categories: urgent needs, impact assessment, situational awareness and vulnerable populations. Our findings reveal significant regional variations in disaster information priorities that contradict assumptions underlying existing universal frameworks. While impact assessment indicators showed convergence between countries, other categories revealed that there are still important areas for future research on incorporating local stakeholder knowledge into AI system design. Our participatory methodology provides a replicable framework for developing adaptive, context-aware machine learning classifiers that can transform static universal categorisations into dynamic systems aligned with unique regional priorities and operational contexts. Contribution: We suggest future research areas that span across developing transfer learning approaches that leverage pre-trained multilingual models while incorporating region-specific context, creating active learning frameworks with local validation loops, implementing feedback mechanisms and establishing fair human-in-the-loop annotation processes that maintain quality.
An efficient disaster response plan is necessary for affected areas to recover quickly and with fewer fatalities. Automated planning is an area of artificial intelligence (AI) that finds a plan efficiently, considering relevant aspects of a problem. This work advances the field of automated planning for disaster response by offering a systematic and computationally effective framework for organizing resilience efforts during disasters. We formalize a domain using the planning domain definition language (PDDL) to use automated planning in various disaster resilience scenarios. Our proposed domain includes important activities like evacuation, rescue, medical support, and resource distribution. The domain allows prioritization of affected areas and considers various constraints such as the accessibility of a location and the capacity of different vehicles. The domain uses elegant PDDL components, for example, quantified preconditions and conditional effects, to incorporate those constraints. This domain is flexible to apply in various types of crises, such as floods and cyclones. We found plans with up to 279 actions for various instances of our domain with differing levels of complexity within a time limit of 30 minutes using a numeric planner, which explored over 1 million states to find the plan. A numeric planner with the proposed domain can generate plans that reduce response time and operational cost for disaster response and resilience planning.
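A toy state-space planner conveys the flavor of the PDDL formulation above (preconditions, add effects, delete effects) on a miniature flood-response instance. Real numeric planners are vastly more capable; the action and predicate names here are invented for illustration.

```python
from collections import deque

def plan(start, goal, actions):
    """Breadth-first search over STRIPS-style actions given as
    (name, preconditions, add-effects, delete-effects) tuples, with
    states represented as frozensets of ground facts."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, pre, add, rem in actions:
            if pre <= state:
                nxt = frozenset((state - rem) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None  # no plan within the reachable state space

# Hypothetical instance: drive to the flooded zone, rescue, evacuate.
acts = [
    ("drive(truck,depot,zone)", {"at(truck,depot)"}, {"at(truck,zone)"}, {"at(truck,depot)"}),
    ("rescue(victim,zone)", {"at(truck,zone)", "victim(zone)"}, {"aboard(victim)"}, {"victim(zone)"}),
    ("evacuate(victim)", {"aboard(victim)", "at(truck,zone)"}, {"safe(victim)"}, set()),
]
steps = plan({"at(truck,depot)", "victim(zone)"}, {"safe(victim)"}, acts)
```

The paper's domain adds numeric fluents (vehicle capacities, priorities) that a plain BFS like this cannot express, which is why a numeric planner is used there.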
In order to provide timely response and recovery operations after a disaster event, executing precise building damage estimations is crucial. Traditional damage assessment techniques mainly depend on on-site inspections, which are inefficient and subjective. This study explores the application of deep learning, and more specifically Convolutional Neural Networks (CNNs), to the automatic classification of building damage from remote sensing images. The CNN model analyzes aerial or satellite images and assigns building damage levels as no damage, moderate, or severe. Unlike customary methods, the approach proposed in the study enables more rapid and precise damage evaluation. This acceleration in damage evaluation can be very beneficial to disaster management teams. With accurate and timely damage evaluations performed by the system, decisions such as how to allocate resources and how to plan response and recovery efforts become easier and more reliable. This study illustrates how disaster management operations can be improved with deep learning models and how timely recovery after an event can be supported. The method automates the assessment process, mitigating the need for manual evaluations and providing a comprehensive solution for assessing damages in areas affected by mass disasters.
No abstract available
During disasters, social media can serve as a valuable source of real-time information about the impacts on people and infrastructure. However, due to the lack of geographical information in most social media posts, this information is often underutilized by first responders. Previous research has attempted to estimate the location of individual social media posts using text and image analysis, but limitations still exist in fine-grained disaster area mapping. To address this issue, this paper explores the feasibility of automatically extracting textual information from social media images to enhance the creation of a disaster area map. This paper evaluates the effectiveness of the proposed approach and its potential impact on improving the geolocation of social media information during a disaster.
During disasters, social media can serve as a valuable source of real-time information about the impacts on people and infrastructure. However, due to the lack of geographical information in most social media posts, this information is often underutilized by first responders. Previous research has attempted to estimate the location of individual social media posts using text and image analysis, but limitations still exist in fine-grained disaster area mapping. To address this issue, this paper analyses the performance of combining text from social media posts with textual information from the images to improve the geolocation of social media information during a disaster.
Today, unpredictable damage can result from extreme weather such as heat waves and floods. This damage makes communities that cannot respond quickly to disasters more vulnerable than cities. Thus, people living in such communities can easily become isolated, which can cause unavoidable loss of life or property. In the meantime, many disaster management studies have been conducted, but studies on effective disaster response for areas surrounded by mountains or with weak transportation infrastructure are very rare. To fill the gap, this research aimed at developing an automated analysis tool that can be directly used for disaster response and recovery by identifying in real time the communities at risk of isolation using a web-based geographic information system (GIS) application. We first developed an algorithm to automatically detect communities at risk of isolation due to disaster. Next, we developed an analytics module to identify buildings and populations within the communities and efficiently place at-risk residents in shelters. In sum, the analysis tool developed in this study can be used to support disaster response decisions regarding, for example, rescue activities and supply of materials by accurately detecting isolated areas when a disaster occurs in a mountainous area where communication and transportation infrastructure is lacking.
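The isolation-detection step above can be sketched as graph connectivity: remove damaged road links and flag communities no longer reachable from a relief base. This is a generic illustration of the idea, not the paper's GIS implementation.

```python
from collections import deque

def isolated_communities(roads, damaged, base):
    """Return communities cut off from the relief base once damaged
    road links are removed, via BFS over the surviving road graph."""
    graph = {}
    for a, b in roads:
        if (a, b) in damaged or (b, a) in damaged:
            continue  # this link is impassable
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    reachable, frontier = {base}, deque([base])
    while frontier:
        node = frontier.popleft()
        for nxt in graph.get(node, ()):
            if nxt not in reachable:
                reachable.add(nxt)
                frontier.append(nxt)
    nodes = {n for edge in roads for n in edge}
    return sorted(nodes - reachable)

# Hypothetical mountain road network: the A-B link is washed out.
cut_off = isolated_communities(
    roads=[("base", "A"), ("A", "B"), ("base", "C")],
    damaged={("A", "B")}, base="base")
# → ["B"]
```

In the paper this result would then feed the analytics module that counts buildings and residents inside each isolated community.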
This paper proposes a taxonomy of semantic information in robot-assisted disaster response. Robots are increasingly being used in hazardous environment industries and emergency response teams to perform various tasks. Operational decision-making in such applications requires a complex semantic understanding of environments that are remote from the human operator. Low-level sensory data from the robot is transformed into perception and informative cognition. Currently, such cognition is predominantly performed by a human expert, who monitors remote sensor data such as robot video feeds. This engenders a need for AI-generated semantic understanding capabilities on the robot itself. Current work on semantics and AI lies towards the relatively academic end of the research spectrum, hence relatively removed from the practical realities of first responder teams. We aim for this paper to be a step towards bridging this divide. We first review common robot tasks in disaster response and the types of information such robots must collect. We then organize the types of semantic features and understanding that may be useful in disaster operations into a taxonomy of semantic information. We also briefly review the current state-of-the-art semantic understanding techniques. We highlight potential synergies, but we also identify gaps that need to be bridged to apply these ideas. We aim to stimulate the research that is needed to adapt, robustify, and implement state-of-the-art AI semantics methods in the challenging conditions of disasters and first responder scenarios.
Owing to advances in technology and the growth of social media, user-generated content now contributes widely to decision-making processes, including crisis management and disaster response for man-made and climate-related events, helping to provide optimal solutions. Experiences and reports from recent crisis management efforts highlight the need for innovative and resilient decision-support systems built on communities' real-time data. The aim of this paper is to use artificial intelligence and natural language processing to enhance disaster response and crisis management through text analytics for social good. By analyzing the textual content available on social media using AI and MLP techniques, the required solution can be provided. The study focuses on understanding contextual information from posts shared on social media during disasters in order to develop a taxonomy for effective categorization and classification across a wide range of disaster-related topics. The dataset comprises disaster-related social media posts that were collected and preprocessed to ensure reliability and data quality.
Rapid urbanization and disaster-induced transformations necessitate automated, interpretable, and quantitative monitoring frameworks. This paper presents Urban-Hybrid-CDQNet, a unified deep learning architecture that integrates pixel-level change detection, semantic object classification, and quantitative assessment from bi-temporal satellite imagery. At its core, a Siamese Transformer U-Net (STransUNet), enhanced with Conditional Random Fields (CRF), generates refined change maps by suppressing pseudo-changes and sharpening boundaries. These maps guide a Mask R-CNN module for object-level classification into five urban categories: buildings, damaged buildings, roads, damaged roads, and vehicles. A dedicated quantification module further computes both global and per-class statistics, yielding interpretable measures of urban transformation. Experiments on the LEVIR-CD benchmark and a custom damage dataset curated from the Gaza Strip in Palestine demonstrate robust performance (peak validation F1-score = 0.8379; IoU = 0.7211), with qualitative results confirming effectiveness under complex urban conditions. The proposed framework provides a rigorous, reproducible, and actionable methodology for urban change assessment, supporting disaster response, infrastructure planning, and sustainable development.
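A minimal version of the quantification module's per-class statistics might look like the sketch below, operating on flattened label maps; the class subset and the counting scheme are illustrative assumptions.

```python
def change_stats(before, after, classes):
    """Compare two flattened label maps: report the global fraction of
    changed pixels plus per-class pixel counts in the later map.
    A toy stand-in for the paper's quantification module."""
    changed = sum(b != a for b, a in zip(before, after))
    per_class = {c: after.count(i) for i, c in enumerate(classes)}
    return changed / len(before), per_class

# Four pixels over a subset of the paper's categories: one building
# becomes a damaged building, one road stays, one road is damaged.
frac, counts = change_stats(
    before=[0, 0, 2, 2], after=[0, 1, 2, 1],
    classes=["building", "damaged_building", "road"])
```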
In recent years, intelligent robotic arms have attracted considerable attention due to their precision and flexibility in automated tasks, especially in dynamic environments that require high-precision operation. Building on an existing model, this paper proposes an innovative system that integrates high-precision servo control, computer vision, and optimized motion planning. The robotic arm uses six high-torque serial-bus servos (25 kg·cm, 0–300°), which are precisely controlled via independent IDs, and integrates an RGB camera with a Jetson Nano to achieve “hand-eye collaboration” perception. ArUco-marker distance estimation and an inverse kinematics optimization algorithm are introduced to obtain the target position and attitude in real time and to accurately calculate the joint rotation angles. In addition, the combination of color filtering, target recognition, and distance estimation improves the accuracy of autonomous grasping. Experimental results show that the model achieves high-precision, stable control in complex environments and provides technical support for the application of intelligent robotic arms in industry, rescue, and human-computer interaction.
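The joint-angle computation that the abstract describes can be sketched, under heavy simplification, as planar two-link inverse kinematics; the real arm has six servo joints, and the link lengths and target used here are illustrative assumptions.

```python
import math

# Toy 2-link planar inverse-kinematics sketch of the joint-angle step.
# Link lengths and the target point are illustrative; the actual arm has
# six serial-bus servo joints, not two.

def two_link_ik(x, y, l1, l2):
    """Return (shoulder, elbow) angles in radians for target (x, y)."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(c2)                 # elbow-down solution
    k1 = l1 + l2 * math.cos(elbow)
    k2 = l2 * math.sin(elbow)
    shoulder = math.atan2(y, x) - math.atan2(k2, k1)
    return shoulder, elbow

s, e = two_link_ik(1.0, 1.0, 1.0, 1.0)
# sanity check: forward kinematics must land back on the target (1.0, 1.0)
fx = math.cos(s) + math.cos(s + e)
fy = math.sin(s) + math.sin(s + e)
```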
Natural disasters such as earthquakes and floods are becoming more frequent and severe due to climate change and urban expansion. The ability to quickly detect and assess the severity of such events is a necessary requirement for timely disaster response and mitigation. Traditionally, disaster detection relied on isolated data sources like weather reports, seismic sensors, and ground reports. However, the availability of multi-modal data has opened up opportunities for more holistic disaster monitoring systems. Traditional disaster detection methods remain limited by the sheer volume and complexity of this data, making automated systems a necessity for real-time analysis. This project explores deep learning frameworks for automating the classification and analysis of multimodal data in disaster scenarios. Deep learning (DL) provides powerful tools to automatically examine and classify such enormous, heterogeneous data. Therefore, with suitable DL algorithms and architectures, extraction of meaningful patterns from multimodal data becomes feasible, enabling rapid and accurate disaster detection.
Accurate mapping of hurricane-induced damage is essential for guiding rapid disaster response and long-term recovery planning. This study evaluates the Three-Dimensional Multi-Attributes, Multiscale, Multi-Cloud (3DMASC) framework for semantic classification of pre- and post-hurricane Light Detection and Ranging (LiDAR) data, using Mexico Beach, Florida, as a case study following Hurricane Michael. The goal was to assess the framework’s ability to classify stable landscape features and detect damage-specific classes in a highly complex post-disaster environment. Bitemporal topo-bathymetric LiDAR datasets from 2017 (pre-event) and 2018 (post-event) were processed to extract more than 80 geometric, radiometric, and echo-based features at multiple spatial scales. A Random Forest classifier was trained on a 2.37 km² pre-hurricane area (Zone A) and evaluated on an independent 0.95 km² post-hurricane area (Zone B). Pre-hurricane classification achieved an overall accuracy of 0.9711, with stable classes such as ground, water, and buildings achieving precision and recall exceeding 0.95. Post-hurricane classification maintained similar accuracy; however, damage-related classes exhibited lower performance, with debris reaching an F1-score of 0.77, damaged buildings 0.58, and vehicles recording a recall of only 0.13. These results indicate that the workflow is effective for rapid mapping of persistent structures, with additional refinements needed for detailed damage classification. Misclassifications were concentrated along class boundaries and in structurally ambiguous areas, consistent with known LiDAR limitations in disaster contexts. These results demonstrate the robustness and spatial transferability of the 3DMASC–Random Forest approach for disaster mapping.
Integrating multispectral data, improving small-object representation, and incorporating automated debris volume estimation could further enhance classification reliability, enabling faster, more informed post-disaster decision-making. By enabling rapid, accurate damage mapping, this approach supports sustainable disaster recovery, resource-efficient debris management, and resilience planning in hurricane-prone regions.
No abstract available
To address the tension between real-time traffic scheduling and cost control in dual-active disaster recovery centers at two sites, this study proposes the AutoFlow DR automated scheduling framework, which achieves intelligent management of cross-center traffic by constructing a three-layer “monitoring–decision–execution” architecture. The study designs a multidimensional health assessment model that integrates link status and business priority to dynamically adjust routing weights, combined with an MP2P aggregation-flow optimization algorithm and a P2MP distribution-flow cost model, to solve the problems of high fault-switching delay and low resource utilization in traditional solutions. Experiments show that the framework keeps fault-switching delay within 50 ms, raises cross-center link resource utilization to 75.2%, and reduces bandwidth costs by 41.7%, effectively balancing disaster recovery reliability and operational economy. The results provide an engineering solution for automated traffic scheduling in large-scale dual-active disaster recovery scenarios, with practical value for improving critical business continuity.
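The health-assessment idea can be sketched as a weighted blend of link metrics scaled by business priority; the field names, coefficients, and link data below are illustrative assumptions, not AutoFlow DR's actual model.

```python
# Hypothetical sketch of a multidimensional health score that blends link
# metrics, then scales it by business priority to pick a route. Field
# names and coefficients are illustrative assumptions.

def link_health(latency_ms, loss_rate, utilization):
    """Higher is healthier; each term is normalized to [0, 1]."""
    latency_score = max(0.0, 1.0 - latency_ms / 100.0)
    loss_score = 1.0 - min(loss_rate, 1.0)
    util_score = 1.0 - min(utilization, 1.0)
    return 0.5 * latency_score + 0.3 * loss_score + 0.2 * util_score

def routing_weight(health, business_priority, max_priority=3):
    """Scale link health by business priority (1 = low .. max = critical)."""
    return health * (business_priority / max_priority)

def pick_link(links, priority):
    """Choose the link with the highest priority-adjusted weight."""
    return max(links,
               key=lambda l: routing_weight(link_health(*l[1:]), priority))[0]

links = [("primary", 20, 0.01, 0.9),   # (name, latency_ms, loss, utilization)
         ("backup", 40, 0.00, 0.3)]
best = pick_link(links, priority=3)
# "backup" wins here: its low utilization outweighs the extra latency
```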
Collaboration is essential for effective performance by groups of robots in disaster response settings. Here we are particularly interested in heterogeneous robots that collaborate in complex scenarios with incomplete, dynamically changing information. In detail, we consider an automated victim search setting, where unmanned aerial vehicles (UAVs) with different capabilities work together to scan for mobile phones and find and provide information about possible victims near these phone locations. The state of the art for such collaboration is robot control based on independent planning for robots with different tasks and typically incorporates uncertainty with only a limited scope. In contrast, in this paper, we take into account complex relations between robots with different tasks. As a result, we create a joint, full-horizon plan for the whole robot team by optimising over the uncertainty of future information gain using an online planner with hindsight optimisation. This joint plan is also used for further optimisation of individual UAV paths based on the long-term plans of all robots. We evaluate our planner’s performance in a realistic simulation environment based on a real disaster and find that our approach finds victims 25% faster compared to current state-of-the-art approaches.
Twitter is a social networking and news delivery website that takes on significant importance at the time of any disaster by providing valuable information related to the event. The analysis of social media has become crucial in monitoring and gathering important information during disasters, including reports on the impact of the event as well as updates on injuries and damage to infrastructure. This information is valuable for disaster management teams in their response efforts. During a disaster, an automated system that can retrieve relevant information from massive Twitter data could help disaster relief volunteers complete their duties efficiently amidst the chaos. The majority of researchers use a unimodal (only images or text) approach, and the use of a single modality frequently results in the loss of important insights. Though some researchers have used a multimodal approach to classify disaster types, most works relied on pre-trained CNN models such as VGG16 and InceptionV3 for image classification, which can suffer from the vanishing gradient problem, and on CNN- or RNN-based models for textual analysis, which do not work for longer input sequences and do not focus on the critical parts of the text, before combining the two models to classify six different damage types. In our proposed model, we use a multi-head attention-based pre-trained BERT model for text feature extraction, which works on long input sequences and attends to the specific portions of the text, and the ResNet50 model for image feature extraction, which addresses the vanishing gradient problem. We then use an early fusion approach to concatenate these two networks, followed by a softmax classifier. We achieve a 93.16% weighted F1-score in classifying the different damage types.
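The early-fusion step can be illustrated with a toy sketch: concatenate a text-feature vector and an image-feature vector, apply a linear layer, and take a softmax. The vectors and weights below are stand-ins for real BERT/ResNet50 outputs, not the paper's trained model.

```python
import math

# Toy illustration of early fusion: concatenate text and image feature
# vectors, apply one linear layer, then softmax over damage classes.
# Features and weights are invented stand-ins for BERT/ResNet50 outputs.

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def early_fusion_classify(text_feat, image_feat, weights, biases):
    fused = text_feat + image_feat          # concatenation = early fusion
    logits = [sum(w * x for w, x in zip(row, fused)) + b
              for row, b in zip(weights, biases)]
    return softmax(logits)

text_feat = [0.2, 0.9]        # pretend BERT sentence embedding
image_feat = [0.7, 0.1]       # pretend ResNet50 pooled features
W = [[1.0, 0.0, 0.0, 1.0],    # one row per damage class (2 classes here)
     [0.0, 1.0, 1.0, 0.0]]
b = [0.0, 0.0]
probs = early_fusion_classify(text_feat, image_feat, W, b)
# probs sum to 1; the class with the larger logit gets the higher probability
```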
Current research and development efforts at DLR's Center for Satellite Based Crisis Information (ZKI) focus on deploying automated image analysis methods as part of rapid mapping processing routines. The use of machine learning methods enables processing of large amounts of heterogeneous satellite, aerial and drone images at varying spatial scales and temporal frequencies. In this work, we introduce an automated and scalable image processing chain for rapid building damage assessment, optimize it for inference on different hardware and provide application examples from recent natural disasters. We show the scalability of the method from high-frequency live-mapping with drones on a laptop to large-scale processing of satellite and aerial images on a high-performance computing cluster.
Disaster management is one of the applications where cyber-physical systems play a significant role by allowing real-time data collection and analysis. Such systems consist of physical, computational and communication elements and enable the automatic exchange of information between machines and humans and decision-making in disaster situations. This technology can change the method of disaster response to a more automated one, and it could reduce the amount of manual data collection and analysis. In disaster management, CPS consists of a set of interconnected devices, including sensors, drones and mobile devices monitoring different factors of the disaster: assessing damage, environmental conditions and survivor whereabouts. This information is sent to a central hub for processing and real-time analysis. This information is then fed into models to assess risk, identify hot spots and guide rescue and relief operations. CPS enables disaster response teams to utilize live data, which means decisions can be made faster and more accurately.
In vulnerable regions, disasters present a significant threat, and such areas need advanced risk reduction systems that can provide real-time response and prevention. The Adaptive AI-Driven Disaster Resilience Framework (AIDR-F), an innovative framework of AI-integrated systems, has been proposed to predict, respond efficiently, and allocate resources to earthquake disasters. This framework leverages a multimodal AI model for real-time disaster forecasting, edge computing for low-latency decision-making, and blockchain for secure, transparent coordination. AIDR-F combines a self-taught AI-based early warning system with continual improvement of prediction accuracy through ongoing integration of real-time sensor data, satellite imagery, and historical records. Furthermore, an automated response network employing autonomous UAVs, IoT-enabled infrastructure monitoring, and dynamic evacuation route optimisation is also used. It relies on smart contracts that operate on a blockchain-enabled coordination layer, enabling equitable resource distribution and rapid financial aid deployment. The system incorporates edge AI, equipping it to make real-time decisions in connectivity-constrained environments. Simulation results show that AIDR-F significantly reduces response time, increases preparedness for disasters, and improves coordination efficiency. This research highlights the benefits and applications of an AI-decentralised, community-based disaster management system for mitigating risks and increasing resilience in disaster-prone countries. Future work will expand toward large-scale deployment and policy integration in global disaster management.
This paper investigates the use of SLMs in automated response planning and real-time communication during disasters, in scenarios where bandwidth is extremely limited and communication is scarce. The need to alert the population in the case of a disaster is pointed out. The article establishes the relevance of the topic: the growing frequency and scale of disasters render the speed and reliability of alerting systems critically important, whereas large cloud-hosted LLMs are impractical due to their substantial bandwidth and energy requirements. The objective of this study is to assess the feasibility and operational value of SLMs within post-disaster communication networks and to formulate governance-informed implementation practices for their deployment. The novelty of this work lies in its architectural and empirical contribution: a systematic comparison of architectures and prototype validation, combining a literature review with experimental case studies, demonstrates the feasibility of local SLM inference (Llama-3 8B, Qwen-2.5 7B) on single-board accelerators (Jetson Orin AGX) with INT4 quantization and parameter-efficient fine-tuning (LoRA/LoRI). The research spans power usage and latency, document semantic trust normalization, misinformation detection, hybrid BLE–LoRa networking, and Delay-Tolerant Store-and-Forward routing. The assessment indicates that, for primary response purposes, SLMs can deliver the accuracy needed in the first hour of response at practically zero cost. This logic will prove helpful to AI practitioners solving operational problems in assistance and rescue, architects of emergency communication systems, and disaster planners.
This paper presents the first AI/ML system for automating building damage assessment in uncrewed aerial systems (sUAS) imagery to be deployed operationally during federally declared disasters (Hurricanes Debby and Helene). In response to major disasters, sUAS teams are dispatched to collect imagery of the affected areas to assess damage; however, at recent disasters, teams collectively delivered between 47GB and 369GB of imagery per day, representing more imagery than can reasonably be transmitted or interpreted by subject matter experts in the disaster scene, thus delaying response efforts. To alleviate this data avalanche encountered in practice, computer vision and machine learning techniques are necessary. While prior work has been deployed to automatically assess damage in satellite imagery, there is no current state of practice for sUAS-based damage assessment systems, as all known work has been confined to academic settings. This work establishes the state of practice via the development and deployment of models for building damage assessment with sUAS imagery. The model development involved training on the largest known dataset of post-disaster sUAS aerial imagery, containing 21,716 building damage labels, and the operational training of 91 disaster practitioners. The best performing model was deployed during the responses to Hurricanes Debby and Helene, where it assessed a combined 415 buildings in approximately 18 minutes. This work contributes documentation of the actual use of AI/ML for damage assessment during a disaster and lessons learned to the benefit of the AI/ML research and user communities.
One revolutionary step in redefining disaster response procedures is the use of agentic AI in crisis management. Conventional methods of disaster management depend mostly on human judgement, which is frequently sluggish, prone to mistakes, and overpowered by the complexity of ever-changing emergency situations. A new paradigm for handling such difficulties is provided by agentic AI, which is distinguished by its capacity for autonomous decision-making, adaptive learning, and real-time data processing. This paper examines how agentic AI can be incorporated into disaster response systems, emphasising how it can automate crucial decision-making, maximise resource allocation, and offer real-time insights in emergency scenarios. We explore the underlying technologies, including natural language processing (NLP), machine learning, and multi-agent systems, and show how they can be used to improve situational awareness, coordination, and the precision of decisions. We offer experimental data demonstrating the effectiveness of agentic AI in enhancing resource distribution efficiency and disaster response times using mathematical modelling. Furthermore, we provide case studies from both man-made and natural disasters to highlight the practical benefits and difficulties of implementing such systems. We describe the possible development of AI-driven crisis management systems by discussing prospective trends, touching on scalability and ethical issues. With insights into its real-world uses and future potential to provide more robust, efficient, and effective disaster response frameworks, this paper provides a thorough understanding of how agentic AI might reinvent crisis management.
Background: This project aims to revolutionize disaster response by deploying drones equipped with high-resolution cameras and real-time image recognition technology to quickly identify and locate individuals in disaster areas. By integrating custom drones, advanced software, and reliable communication systems, the project seeks to enhance rescue speed and accuracy. Methods: Drone Technology Integration: Custom-designed drones with high-resolution cameras capture detailed images in varied and challenging conditions, enabling thorough aerial surveillance. Human Recognition and Location Identification: Machine learning and deep learning algorithms allow the system to identify human figures, distinguishing them from debris or environmental elements. Communication and Coordination: Identified humans are immediately relayed to rescue teams through integrated communication systems, allowing for coordinated and rapid deployment. Custom Infrastructure for Resilience: The drones and software are built for robustness, with specialized hardware and secure protocols to ensure functionality in adverse weather or limited visibility conditions. Findings: Initial testing indicates that the drones can accurately detect individuals and relay their locations quickly to ground teams. The system performs reliably across varied conditions, enhancing communication between aerial and ground units for effective rescue coordination. Novelty: This approach introduces significant improvements in disaster response by automating search and rescue tasks, increasing accuracy with advanced algorithms, and providing adaptable, scalable infrastructure. It ultimately optimizes response time, making a critical difference in life-saving efforts and reducing disaster impacts on affected communities.
In this paper, a methodology for detecting rescue messages extracted from social media data is presented. Rescue messages originated after an earthquake; they are tweets that may also deliver information about position and time. A massive amount of social media data was extracted after the two earthquake disasters of magnitudes Mw 7.7 and Mw 7.6 that occurred on February 6, 2023 in Türkiye. The procedures for manual labelling and automated labelling are presented. For labelling purposes, nine BERT language models, which are based on attention and transformers, were used. Supervised learning methods were applied to assess the precision of the labels and perform classification. Furthermore, the dataset was processed with deep learning methods: Convolutional Neural Networks, Deep Neural Networks, and Long Short-Term Memory. The accuracy of the methods in detecting rescue and non-rescue tweets is compared. Keywords are extracted to determine hazard situations and emergency needs for coordination purposes, including spatio-temporal information when provided by tweets. The deep learning and BERT models detect the rescue and non-rescue classes with recall of 0.8972 and 0.9808, respectively.
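The keyword-screening idea can be sketched as a simple check over tweet text; the keyword list below is an illustrative assumption, and the paper's actual classifiers are BERT-based and CNN/DNN/LSTM models.

```python
# Simplified sketch of rescue-message screening via keyword matching.
# The keyword list is an illustrative assumption; the paper's actual
# classifiers are BERT-based models and CNN/DNN/LSTM networks.

RESCUE_KEYWORDS = {"trapped", "help", "under rubble", "rescue", "injured"}

def screen_tweet(text):
    """Return (label, matched keywords) for a single tweet."""
    lower = text.lower()
    hits = [kw for kw in RESCUE_KEYWORDS if kw in lower]
    return ("rescue" if hits else "non-rescue"), hits

label, hits = screen_tweet("We are trapped under rubble, please send help!")
# label == "rescue"; hits include "trapped" and "under rubble"
```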
Disaster preparedness of Local Government Units (LGUs) in the Philippines is hindered by a combination of disorganized systems, delays in reporting processes, and a scarcity of current (real-time) raw data. These restrictions often inhibit the ability of LGUs to make timely decisions and coordinate effectively when an emergency event occurs. This study created an AI-Driven Operational Assistant for Disaster Preparedness and Response in Quezon, Nueva Ecija. The Assistant provides real-time access to crucial disaster information, performs risk analysis, and generates automated responses to guide LGU staff in the management of disaster operations. A mixed-methods evaluation conducted with IT experts and LGU end-users, using structured assessment tools, revealed high and very high system quality and user acceptability ratings. Ultimately, the findings of this research indicate that the AI-driven Operational Assistant enhances the speed of decision-making, improves how easily disparate data sources are integrated, and improves the overall efficiency of disaster response compared to traditional manual systems.
No abstract available
Twitter has become the major source of data for the research community working on the social computing domain. The microblogging site receives millions of tweets every day on its platform. Earlier studies have shown that during any disaster, the frequency of tweets specific to an event grows exponentially, and these tweets, if monitored, processed, and analyzed, can contain actionable information relating to the event. However, during disasters, the number of tweets can be in the hundreds of thousands thereby necessitating the design of a semi-automated artificial intelligence-based system that can extract actionable information based on which steps can be taken for effective disaster response. This paper proposes a Twitter-based disaster response system that uses recurrent nets for training a classifier on a disaster specific tweets dataset. The proposed system would enable timely dissemination of information to various stakeholders so that timely response and proactive measures can be taken in order to reduce the severe consequences of disasters. Experimental results show that the recurrent nets outperform the traditional machine learning algorithms with regard to accuracy in classifying disaster-specific tweets.
Disaster response robots show promise, but risk adding significant load to already overburdened communication networks. Previous work addressed this problem by prioritizing certain messages; however, the ethical implications of message prioritization in disaster response have not been studied comprehensively. This manuscript proposes an ethical framework for evaluating message prioritization mechanisms. Additionally, a taxonomy of the existing message prioritization approaches is introduced, highlighting the ethical principles that most require further study for each approach.
No abstract available
The increasing frequency and intensity of natural disasters underscore the need for advanced technologies that enable real-time monitoring and rapid response. The Internet of Things (IoT) offers a transformative solution to disaster preparedness by deploying sensor networks, automated alert systems, and cloud-based data analytics. This paper explores the application of IoT in various phases of disaster management—ranging from early warning systems to post-disaster recovery. We analyze the technological infrastructure, operational frameworks, and integration challenges of IoT-based disaster response systems, drawing on global case studies. The findings highlight how IoT enhances decision-making, mitigates risk, and strengthens community resilience.
This paper presents an end-to-end methodology that can be used in the disaster response process. The core element of the proposed method is a deep learning process which enables helicopter landing site analysis through the identification of soccer fields. The method trains a deep learning autoencoder with the help of volunteered geographic information and satellite images. The process is mostly automated; it was developed to be applied in a time- and resource-constrained environment and keeps the human factor in the loop in order to control the final decisions. We show that through this process the cognitive load (CL) for an expert image analyst will be reduced by 70%, while the process will successfully identify 85.6% of the potential landing sites. We conclude that the suggested methodology can be used as part of a disaster response process.
Cloud computing has transformed IT service delivery with a pay-as-you-go model that simplifies software creation, deployment, and maintenance. It has also reshaped how businesses address security challenges, particularly in Incident Response (IR) and Disaster Recovery (DR). IR is a proactive approach to detecting, containing, and mitigating security risks, while DR focuses on restoring systems after failures caused by cyberattacks, system errors, or natural disasters. Unlike traditional on-premises IT environments, where organizations have full control, cloud-based environments rely on third-party providers, introducing new processes and responsibilities for managing IR and DR. Cloud security is now a shared responsibility between providers and customers, requiring close collaboration to ensure effective protection. This paper analyzes how cloud security management differs from traditional approaches, focusing on key principles and best practices for incident response and disaster recovery from a business perspective. It also examines a real-world cloud security breach to highlight the challenges businesses face in responding to incidents and recovering from disruptions. Additionally, it explores the latest advancements in automated disaster recovery, which enhance resilience and reliability. By understanding these concepts, businesses can strengthen their security posture, improve response strategies, and ensure seamless business continuity.
Identifying and classifying shutdown initiating events (SDIEs) is critical for developing shutdown probabilistic risk assessment for nuclear power plants. Existing computational approaches cannot achieve satisfactory performance due to the challenges of unavailable large, labeled datasets, imbalanced event types, and label noise. To address these challenges, we propose a hybrid pipeline that integrates a knowledge-informed machine learning model to prescreen non-SDIEs and a large language model (LLM) to classify SDIEs into four types. In the prescreening stage, we propose a set of 44 SDIE text patterns consisting of the most salient keywords and phrases from six SDIE types. Text vectorization based on the SDIE patterns generates feature vectors that are highly separable using a simple binary classifier. The second stage builds a Bidirectional Encoder Representations from Transformers (BERT)-based LLM, which learns generic English language representations from self-supervised pretraining on a large dataset and adapts to SDIE classification by fine-tuning on an SDIE dataset. The proposed approaches are evaluated on a dataset with 10,928 events using precision, recall ratio, F1-score, and average accuracy. The results demonstrate that the prescreening stage can exclude more than 97% of non-SDIEs, and the LLM achieves an average accuracy of 95.1% for SDIE classification.
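The prescreening stage can be illustrated with a toy sketch: vectorize an event description against keyword patterns and apply a simple binary rule. The three patterns below are invented stand-ins for the paper's 44 SDIE text patterns.

```python
# Toy sketch of pattern-based prescreening: build a binary feature vector
# from keyword-pattern matches, then pass the event to the LLM stage only
# if any pattern fires. These three patterns are invented stand-ins for
# the paper's 44 SDIE text patterns.

PATTERNS = ["reactor trip", "loss of offsite power", "manual shutdown"]

def vectorize(text):
    """Binary feature vector: one entry per matched pattern."""
    lower = text.lower()
    return [1 if p in lower else 0 for p in PATTERNS]

def is_candidate_sdie(text):
    """Prescreen: forward the event to the classification stage if any hit."""
    return any(vectorize(text))

vec = vectorize("Manual shutdown initiated after fire alarm")
# vec == [0, 0, 1] -> candidate SDIE
```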
With the intensification of global climate change, the frequency and intensity of natural floods are increasing, causing significant impacts on human society, economy, and environment. Traditional flood prediction and risk assessment methods often rely on physical models, which have limitations such as high data requirements, complex computational processes, and poor real-time performance. This paper aims to use data analysis and machine learning techniques to build efficient flood disaster prediction and risk assessment models, thereby improving prediction accuracy and evaluation efficiency, providing a scientific basis for flood disaster management. By introducing an improved K-means clustering method and a random forest classifier, precise classification and feature selection of flood event risks have been achieved. A flood disaster prediction model based on multiple machine learning algorithms has been constructed, and its effectiveness and reliability have been verified through practical case studies.
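The clustering step above can be illustrated with a minimal one-dimensional k-means over a single flood feature (here, assumed peak water levels); the paper's improved K-means and random forest stages are not reproduced.

```python
# Minimal 1-D k-means over one flood feature, illustrating the risk-
# clustering step. Initialization is deterministic (evenly spaced over
# the sorted values) so the result is reproducible; the feature values
# are invented.

def kmeans_1d(values, k=2, iters=20):
    srt = sorted(values)
    centers = [srt[i * (len(srt) - 1) // (k - 1)] for i in range(k)]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

levels = [1.0, 1.2, 0.9, 4.5, 5.0, 4.8]   # toy peak water levels (m)
centers, clusters = kmeans_1d(levels, k=2)
# clusters[0] holds the low-risk events near 1 m,
# clusters[1] the high-risk events near 4.8 m
```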
No abstract available
No abstract available
Increasing numbers of people live in flood-prone areas worldwide. With continued development, urban floods will become more frequent, causing casualties and property damage. Researchers have dedicated considerable effort to urban flood risk assessment in recent years. However, current research still faces the challenges of multi-modal data fusion and knowledge representation of urban flood events. Therefore, in this paper, we propose an Urban Flood Knowledge Graph (UrbanFloodKG) system that enables a KG to support urban flood risk assessment. The system consists of a data layer, graph layer, algorithm layer, and application layer; it implements knowledge extraction and storage functions and integrates knowledge representation learning models and graph neural network models to support link prediction and node classification tasks. We conduct model comparison experiments on link prediction and node classification tasks based on urban flood event data from Guangzhou, and demonstrate the effectiveness of the models used. Our experiments show that the accuracy of risk assessment can reach 91% when using GEN, which provides a promising research direction for urban flood risk assessment.
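The link-prediction task on such a graph can be illustrated with a toy common-neighbours score; the UrbanFloodKG system itself uses knowledge representation learning and graph neural network models, and the entities below are invented.

```python
# Toy link-prediction sketch on a small flood knowledge graph using a
# common-neighbours score: the more neighbours two nodes share, the more
# likely a missing link between them. Entities are invented; the actual
# system uses KG embeddings and graph neural networks.

def neighbours(edges, node):
    out = set()
    for a, b in edges:
        if a == node:
            out.add(b)
        if b == node:
            out.add(a)
    return out

def common_neighbour_score(edges, u, v):
    return len(neighbours(edges, u) & neighbours(edges, v))

edges = [("Rainstorm", "DistrictA"), ("Rainstorm", "DistrictB"),
         ("DrainageFailure", "DistrictA"), ("DrainageFailure", "DistrictB"),
         ("DistrictA", "HighRisk")]
score = common_neighbour_score(edges, "DistrictA", "DistrictB")
# score == 2 -> DistrictB plausibly shares DistrictA's "HighRisk" link
```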
Drought is one of the most severe natural disasters with the highest risk for human livelihoods. Remote-sensing-based drought indices can identify dry periods using, e.g., precipitation or vegetation information. Besides frequency, duration, and intensity, the timing of a drought onset is an important variable for measuring drought risk. This study classifies drought events based on the timing of drought onsets and their duration. Drought and non-drought seasons are analyzed in two study sites, in South Africa and Ukraine, where drought characteristics differ. In South Africa, drought depends strongly on the starting point and duration of rainfall, whereas in Ukraine soil moisture and temperature play key roles. A weighted linear combination is applied, based on vulnerable growing stages in the seasonal phenology, to classify droughts. By integrating socio-economic information, this hazard information supports the quantification of the actual risk of a drought event.
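The weighted-linear-combination step can be sketched as follows; the growing-stage names, weights, and decision threshold are illustrative assumptions, not the study's calibrated values.

```python
# Sketch of the weighted-linear-combination step: combine per-growing-stage
# drought indicators, weighting the most vulnerable stages highest. Stage
# names, weights, and the 0.5 threshold are illustrative assumptions.

def drought_score(stage_indices, stage_weights):
    """Weighted linear combination of per-stage drought indices in [0, 1]."""
    assert abs(sum(stage_weights.values()) - 1.0) < 1e-9
    return sum(stage_indices[s] * w for s, w in stage_weights.items())

weights = {"emergence": 0.2, "flowering": 0.5, "grain_fill": 0.3}
season = {"emergence": 0.1, "flowering": 0.8, "grain_fill": 0.4}
score = drought_score(season, weights)
label = "drought" if score >= 0.5 else "non-drought"
# score == 0.54 -> "drought"
```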
This study presents a method for classifying landslide triggers and sizes using climate and geospatial data. The landslide data were sourced from the Global Landslide Catalog (GLC), which identifies rainfall-triggered landslide events globally, regardless of size, impact, or location. Compiled from 2007 to 2018 at NASA Goddard Space Flight Center, the GLC includes various mass movements triggered by rainfall and other events. Climatic data for the 10 years preceding each landslide event, including variables such as rainfall amounts, humidity, pressure, and temperature, were integrated with the landslide data. This dataset was then used to classify landslide triggers and sizes using deep neural networks (DNNs) optimized through genetic algorithm (GA)-driven hyperparameter tuning. The optimized DNN models achieved accuracies of 0.67 and 0.82 in the trigger and size multiclass classification tasks, respectively. This research demonstrates the effectiveness of GAs in enhancing landslide disaster risk management.
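The GA-driven search can be illustrated with a minimal genetic algorithm maximizing a toy fitness function that stands in for DNN validation accuracy; the population size, operators, and bounds are illustrative assumptions.

```python
import random

# Minimal genetic-algorithm sketch of hyperparameter search: evolve a
# population of candidate values toward maximum fitness via selection,
# crossover, and mutation. The fitness function is a toy stand-in for
# DNN validation accuracy (the paper tunes real DNN hyperparameters).

def fitness(x):
    return -(x - 7) ** 2              # toy objective, peak at x == 7

def ga_optimize(lo=0, hi=20, pop_size=20, generations=40, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2                   # crossover (averaging)
            child += rng.gauss(0, 0.5)            # mutation
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    return max(pop, key=fitness)

best = ga_optimize()
# best converges near the optimum x == 7
```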
COVID-19 causes significant morbidity and mortality, and early intervention is key to minimizing deadly complications. Available treatments, such as monoclonal antibody therapy, may limit complications, but only when given soon after symptom onset. Unfortunately, these treatments are often expensive, in limited supply, require administration within a hospital setting, and should be given before the onset of severe symptoms. These challenges have created the need for early triage of patients likely to develop life-threatening complications. To meet this need, we developed an automated patient risk assessment model using a real-world hospital system dataset with over 17,000 COVID-positive patients. Specifically, for each COVID-positive patient, we generate a separate risk score for each of four clinical outcomes: death within 30 days, mechanical ventilator use, ICU admission, and any catastrophic event (a superset of dangerous outcomes). We hypothesized that a deep learning binary classification approach can generate these four risk scores from electronic healthcare record data at the time of diagnosis. Our approach achieves significant performance on the four tasks, with an area under the receiver operating characteristic curve (AUROC) for any catastrophic outcome, death within 30 days, ventilator use, and ICU admission of 86.7%, 88.2%, 86.2%, and 87.8%, respectively. In addition, we visualize the sensitivity and specificity of these risk scores to allow clinicians to customize their usage for different clinical outcomes. We believe this work fulfills a clear clinical need for early detection of objective clinical outcomes and can be used for early screening for treatment intervention.
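The AUROC figures reported in abstracts like this one can be computed from raw scores with the standard rank-based estimator (the probability that a random positive outranks a random negative, counting ties as half). A minimal reference implementation:

```python
def auroc(labels, scores):
    """AUROC as P(positive score > negative score), ties counted as 0.5."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This O(P x N) form is fine for illustration; production code would use a sort-based O(n log n) routine such as scikit-learn's `roc_auc_score`.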
Rapid post-event assessment of earthquake damage is essential for resilient emergency response and risk mitigation. We present a multi-scenario deep learning framework that uses stacked LSTM and a hybrid LSTM-RNN to (i) forecast structural response variables (displacement u, velocity v, acceleration a, and Damage Index DI), (ii) classify damage status, (iii) conditionally estimate the weight factor w for damaged cases (DI ≥ 1), and (iv) identify features most associated with negligible damage via GridSearchCV. In addition, we benchmark a quantum-inspired Activation-based Probabilistic Machine (APM) classifier head attached to the shared sequence encoder to probe whether compact state encodings and Hadamard-style interactions can improve damage discriminability under the same leakage-safe pipeline. Models were trained on 40-step windows for 100 epochs with Adam, using linear heads for regression and Softmax for classification (APM and standard heads share the global training schedule). Across sequence-regression tasks, the LSTM-RNN consistently outperformed the stacked LSTM: R² improved from 98.37% to 99.42% for u, 89.57% to 97.58% for v, 97.69% to 99.8% for a, and 99.68% to 99.97% for DI. For DI, error metrics were markedly lower with LSTM-RNN (MAE 0.0031 vs. 0.0132, RMSE 0.0047 vs. 0.0163, MAPE 1.51 vs. 5.25, MedAE 0.0022 vs. 0.0118), indicating tighter tracking of the damage signal. The conditional w estimation and feature-ranking scenario offers practical levers for risk-informed prioritization, while the APM head provides a compact, quantum-inspired alternative for damage classification within the same framework. Overall, the results support sequence models, particularly LSTM-RNNs, as an effective basis for rapid, data-driven earthquake damage modeling and decision support, with quantum-inspired heads as a complementary option.
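The error metrics used to compare the DI predictors (MAE, RMSE, MAPE, MedAE) are standard; a minimal reference implementation (MAPE here assumes nonzero targets, and is reported in percent as in the abstract):

```python
def damage_metrics(y_true, y_pred):
    """MAE, RMSE, MAPE (percent), and median absolute error."""
    errs = sorted(abs(t - p) for t, p in zip(y_true, y_pred))
    n = len(errs)
    mae = sum(errs) / n
    rmse = (sum(e * e for e in errs) / n) ** 0.5
    mape = 100 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / n
    # Median of the sorted absolute errors.
    medae = errs[n // 2] if n % 2 else (errs[n // 2 - 1] + errs[n // 2]) / 2
    return mae, rmse, mape, medae
```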
No abstract available
No abstract available
With the development of technology and reliability, human factors are becoming the largest contributing cause of accidents. In the present study, a comprehensive human factor analysis model, combining the Human Factors Analysis and Classification System (HFACS), Fuzzy Fault Tree Analysis (F-FTA), and an Artificial Neural Network (ANN), is proposed to assess the human factors involved in an accident or risk event. Under the framework of the proposed model, the fuzzy fault tree derived from HFACS is mapped into the ANN based on fuzzy theory, which may be beneficial for the prediction and assessment of human factors.
No abstract available
The identification of the well/rig state in time is a key component in the construction of accurate risk-assessment, event-detection, and efficiency-tracking tools in drilling and production operations. Traditionally, this state identification has relied on insufficient rule-based systems, which often produce inaccurate predictions and lead to unreliable risk-assessment tools, imprecise event detection, and biased efficiency estimates. This paper compares three state-of-the-art, scalable methods for automatically identifying the well/rig state and presents three use cases in the drilling and production stages. Identifying the well/rig state is a time-series multi-class classification problem, in which data is collected at high frequency (typically 0.02–1 Hz) by sensors installed inside the well, on the wellhead, or on the equipment intervening in the well, such as a rig or a coiled-tubing unit. This paper presents three solutions to this classification problem, namely a first-order logic inference system, a recurrent neural network (RNN) classifier, and a transformer-based classifier. We implement and compare these methods in three applications in drilling and production operations, including the detection of stuck-pipe incidents and abnormal pressure trends. The models were evaluated on a withheld, pre-labeled test dataset consisting of 75 hours of 1-Hz drilling data and 1072 hours of 0.17-Hz production data. This evaluation showed that the transformer-based classifier outperformed the other two methods in all three applications. Additionally, we observed that the deep learning-based classifiers were only slightly more computationally expensive than the inference system, making all models suitable for real-time prediction. In the test, we also investigated the value of accurate well/rig state identification in the assessment of drilling and well-integrity risks.
The test revealed that a lack of accuracy in this identification task can severely bias the risk assessment, for instance, by overestimating the risk of pipe sticking and generating spurious predictions of abnormal pressure trends. This, in turn, can result in unnecessary, costly preventative measures. The applicability of the analyzed methods is not limited to the examples provided here; they are applicable to any task requiring the classification of time-series data and can thus be employed in other well operations/stages and applications of event detection. The novelty of this paper is twofold. First, it is the first study to compare various methods for automatically identifying the well/rig state, including one not evaluated before: a transformer model. Second, it is the first to propose an automated engine for well state identification during the production stage. This paper also analyzes the advantages and shortcomings of well/rig state identification models through three different applications.
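For context, the rule-based baseline this paper argues against can be caricatured as a handful of threshold rules over sensor channels. The channel names, units, and thresholds below are purely illustrative assumptions, not the authors' rules:

```python
def rig_state(sample):
    """Toy rule-based well/rig state classifier over one sensor sample.
    Thresholds (e.g. hook load > 500 kN) are invented for illustration."""
    hook_load = sample["hook_load_kN"]
    rpm = sample["rpm"]
    flow = sample["flow_lpm"]
    if flow > 0 and rpm > 0 and hook_load > 500:
        return "drilling"       # rotating with circulation under load
    if flow > 0:
        return "circulating"    # pumping without rotation
    if hook_load > 500:
        return "tripping"       # moving pipe, no circulation
    return "idle"
```

The paper's point is precisely that such hand-set rules are brittle, which is why learned sequence classifiers (RNN, transformer) over windows of samples perform better.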
With the opening of low-altitude airspace and the proliferation of drone technology, low-altitude security has become a critical component of the security system for major events. Addressing the diversified and dynamic characteristics of contemporary low-altitude threats, this paper systematically constructs a low-altitude security system for major event security. By defining the conceptual connotation and layered architecture of the low-altitude security system, it proposes a core model based on Cyber-Physical Systems and system resilience theory. The paper further analyzes the classification characteristics and behavioral patterns of multi-source, heterogeneous low-altitude threats, establishing a dynamic risk assessment framework that incorporates spatiotemporal constraints. Building upon this foundation, it designs a technical pathway for collaborative perception and intelligent response via a multi-dimensional sensor network, enabling closed-loop management from threat detection to response decision-making. The research demonstrates that this system can significantly enhance the real-time perception capability, risk assessment accuracy, and response efficiency regarding low-altitude threats, providing theoretical support and technical solutions for low-altitude security during major events.
No abstract available
The rapid growth of renewable energy penetration presents substantial challenges in maintaining supply and demand balance in China's power spot market, which has emerged as a key mechanism for determining system operation. To address this issue, this study reviews representative extreme-event cases and classifies balance disturbances from the perspective of the Cyber–Physical–Social System of Energy (CPSSE). A comprehensive spot market model for joint energy and reserve clearing is established. By incorporating both source- and load-side characteristics suitable for market-based environments, a set of risk assessment indicators is developed. Numerical simulations are conducted to systematically evaluate the balance risks of conventional market mechanisms under hybrid disturbance scenarios.
In modern power systems, rapid and accurate detection of dynamic events at the bus level is critical for ensuring grid reliability and operational resilience. The paper proposes a novel bus-centric event detection framework that integrates a Temporal Graph Neural Network (TGNN) with unsupervised clustering and supervised classification for event localization and risk assessment using Phasor Measurement Unit (PMU) data. A rich set of twelve statistical and dynamic features per bus, including voltage/frequency derivatives, phase angle trends, and rolling statistics, is extracted to represent temporal-spatial behaviour within power grid nodes. A bus-wise risk index is formulated by combining voltage, frequency, and angle deviations with regression-based R² stability trends, enabling the ranking of critical buses. Furthermore, the performance of the proposed framework is rigorously evaluated using comprehensive classification metrics, including Accuracy, F1-score, and the Matthews Correlation Coefficient (MCC). The proposed framework offers a scalable and interpretable solution for real-time contingency analysis, with strong implications for preventive control in smart grids.
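The bus-wise risk index is described as a weighted combination of deviations plus an R²-based stability trend. A minimal sketch of such an index, with weights that are our illustrative assumptions (the abstract does not publish a weighting):

```python
def bus_risk_index(v_dev, f_dev, ang_dev, r2_trend,
                   weights=(0.4, 0.3, 0.2, 0.1)):
    """Toy per-bus risk index: weighted voltage, frequency, and angle
    deviations, with (1 - R²) penalizing an unstable regression trend.
    Weights are illustrative, not from the paper."""
    wv, wf, wa, wr = weights
    return wv * v_dev + wf * f_dev + wa * ang_dev + wr * (1.0 - r2_trend)

# Rank two hypothetical buses: a disturbed one and a healthy one.
disturbed = bus_risk_index(v_dev=0.08, f_dev=0.5, ang_dev=0.2, r2_trend=0.3)
healthy = bus_risk_index(v_dev=0.01, f_dev=0.05, ang_dev=0.02, r2_trend=0.95)
ranking = sorted([("bus_7", disturbed), ("bus_2", healthy)],
                 key=lambda kv: kv[1], reverse=True)
```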
No abstract available
Flood disaster risk assessment in large river basins remains a critical challenge due to sensor limitations, cloud contamination, and insufficient integration of hazard, exposure, and vulnerability indicators. This study presents an AI-driven multi-sensor flood risk mapping framework applied to the extreme 2020 flood event, integrating Synthetic Aperture Radar (SAR), optical remote sensing indices, and machine learning-based land cover information. A hybrid flood detection algorithm was developed using Sentinel-1 SAR VV backscatter (−17 dB threshold), the Normalized Difference Water Index (NDWI), and the Modified Normalized Difference Water Index (MNDWI) through logical OR fusion, ensuring robust flood delineation under persistent cloud cover. Agricultural damage was assessed using the NDVI and EVI indices, while land cover dynamics and exposure were analyzed through Google's Dynamic World near-real-time classification products. Results indicate extensive flood inundation, with widespread expansion of surface water and significant disruption of cropland areas during the 2020 monsoon season. Vegetation condition analysis revealed marked declines in NDVI across flooded agricultural zones, indicating severe crop stress and productivity loss. Land cover transitions showed substantial temporary conversion of cropland and built-up areas into flooded water classes, with total land cover area declining from 185,942.3 to 166,870.0 km², highlighting high exposure in low-lying agricultural corridors. The integrated flood risk model, combining hazard intensity, land cover exposure, and vegetation vulnerability, successfully identified high-risk agricultural and peri-urban hotspots across the central and eastern areas. The proposed framework demonstrates a scalable, automated approach for flood risk and impact assessment, offering valuable insights for disaster preparedness, agricultural resilience, and flood management strategies in flood-prone regions.
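The logical-OR fusion rule is concrete enough to sketch pixel-wise. The abstract states only the −17 dB SAR cut-off; the NDWI/MNDWI cut-offs of 0 below (water when the index is positive) are a common convention and are our assumption:

```python
def ndwi(green, nir):
    """McFeeters NDWI from green and near-infrared reflectance."""
    return (green - nir) / (green + nir)

def flood_mask(sar_vv_db, ndwi_vals, mndwi_vals):
    """Per-pixel OR fusion: flooded if SAR VV backscatter < -17 dB,
    OR NDWI > 0, OR MNDWI > 0 (index thresholds assumed, not stated)."""
    return [(s < -17.0) or (n > 0.0) or (m > 0.0)
            for s, n, m in zip(sar_vv_db, ndwi_vals, mndwi_vals)]
```

The OR combination is what lets SAR fill in pixels the optical indices lose to cloud cover, and vice versa.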
Information systems have become the main target of network attacks in various fields. Attacks on information systems have become important events that damage national security, political stability, economic lifelines, and citizen security. A risk factor is a weak link in an information system that may be exploited under threat; once it is successfully used, it may cause damage to assets. Although existing research on vulnerability management and scientific, standardized risk assessment is relatively mature, its scope is not sufficient to support and cover the assessment of information system risk factors. In this paper, half a year of alarm data from an information system is analyzed and studied, and a standard method for risk factor analysis of information systems is proposed, which can provide an important reference for risk factor classification of information systems in the finance, public communication, and energy industries.
No abstract available
A practical task in risk assessment is ranking news items according to the degree of conflict in a situation. One metric for ranking international events is the Goldstein scale, which assigns each event a numerical value of conflict or cooperation. The meaning of key phrases may be lost if texts are analyzed with a bag-of-words model. Therefore, the authors propose to search for phrases in texts using a hybrid intelligent system consisting of a natural language processing pipeline and a metagraph knowledge base. A modified TF-IDF metric is used to classify texts on the Goldstein scale: it counts occurrences of knowledge-base concepts instead of words. The paper compares the precision of text classification using words versus concepts extracted by natural language processing. The study was performed on the OpenCorpora dataset.
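Counting knowledge-base concepts instead of words leaves the TF-IDF arithmetic unchanged. A minimal sketch over pre-extracted concept lists (the concept extraction itself would be the NLP pipeline's job; the concept names are invented):

```python
import math
from collections import Counter

def concept_tfidf(docs_concepts):
    """TF-IDF where each document is a list of knowledge-base concept IDs
    rather than tokens. Returns one {concept: weight} dict per document."""
    n = len(docs_concepts)
    df = Counter(c for doc in docs_concepts for c in set(doc))  # doc freq
    out = []
    for doc in docs_concepts:
        tf = Counter(doc)
        out.append({c: (tf[c] / len(doc)) * math.log(n / df[c]) for c in tf})
    return out

weights = concept_tfidf([["flood", "evacuation", "flood"],
                         ["flood", "protest"]])
```

A concept appearing in every document ("flood" here) gets zero weight, exactly as a ubiquitous word would in plain TF-IDF.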
No abstract available
No abstract available
Disasters, both natural and man-made, pose significant global threats, causing loss of life, economic damage, and social disruption. The rising frequency and severity of such events highlight the urgent need for effective disaster management. The Internet of Things (IoT) offers transformative potential to meet these challenges, particularly by improving early warning systems, enhancing emergency responses, and facilitating post-disaster recovery. This paper explores the role of the IoT in disaster management, highlighting its architecture and applications. It covers benefits such as improved response times, enhanced resource allocation, and reduced casualties, while also discussing challenges, namely communication reliability in harsh environments, data security, and standardization issues. Additionally, the paper emphasizes the need for region-specific solutions, particularly in areas like Chongqing and Sichuan in China, which confront unique geological and meteorological risks, suggesting approaches for future research.
Autonomous Smart Home (ASH) systems incorporate various sensors and Internet of Things (IoT) modules to automate and enhance residential functionality. ASH represents an IoT communication paradigm for decision-making, data analysis, task automation during triggered events, and remote accessibility. However, the connectivity of modules via wired and wireless channels can introduce cyber-security challenges, including data privacy concerns, device tampering, network weaknesses, lack of standardization, and risks associated with firmware and software vulnerabilities. Cyber breaches in ASH can have catastrophic effects, such as unauthorized control of critical home and medical systems, interference with emergency response, automated lock-system failures, and sabotage of critical home appliances. To address this concern, we propose Smart-Sec, which leverages a deep learning-based Convolutional Neural Network (CNN) architecture. The performance of Smart-Sec was evaluated using various optimization algorithms and assessed via accuracy comparison, loss curves, confusion matrices, precision, recall, and F1-score. Among all optimizers, our one-dimensional CNN architecture performed best with RMSProp.
For emergency communications in an Internet of Things (IoT) network, a large number of gateways are distributed to gather data traffic. Considering the practical difficulty of deploying multiple territorial base stations (TBSs) over a wide range, an unmanned aerial vehicle base station (UAV-BS) can fly to a specific point and hover there to collect data traffic from the gateways. In this paper, we aim to maximize the UAV-BS energy efficiency under constraints on the total serving delay, the UAV-BS flying speed, the maximum available transmitting power of the gateways, etc. First, we propose a distributed gateway cluster (GC) algorithm to group gateways into multiple GCs based on the distances among them. Next, the UAV-BS flies to and hovers above each GC, where the gateways in the GC simultaneously transmit data to the UAV-BS via non-orthogonal multiple access (NOMA). By analyzing the NOMA structure, we propose theorems that optimize the UAV-BS hovering height to minimize the transmitting power of the gateway with the maximum transmitting power in a GC. Based on these theorems, we formulate a joint optimization problem to maximize the UAV-BS energy efficiency with only the UAV-BS flying speed and the serving time for each GC as variables. The optimization problem is effectively solved by the geometric programming (GP) method. Finally, we verify the effectiveness of the proposed algorithms through extensive simulation results.
The Internet of Things (IoT) is reshaping our connected world as the number of lightweight devices connected to the Internet is rapidly growing. Therefore, high-quality research on intrusion detection in the IoT domain is essential. To this end, network intrusion data sets are fundamental, as many attack detection strategies have to be trained and evaluated using such data sets. In this article, we introduce the description, statistical analysis, and machine learning evaluation of the novel ToN_IoT data set. A comparison to other recent IoT data sets shows the importance of heterogeneity within these data sets, and how differences between data sets may have a huge impact on detection performance. In a cross-training experiment, we show that the inclusion of different data collection methods and a large diversity of the monitored features are of crucial importance for IoT network intrusion data sets to be useful for the industry. We also explain that the practical application of IoT data sets in operational environments requires the standardization of feature descriptions and cyberattack classes. This can only be achieved with a joint effort from the research community.
We propose a feasibility study for real-time automated data standardization leveraging Large Language Models (LLMs) to enhance seamless positioning systems in IoT environments. By integrating and standardizing heterogeneous sensor data from smartphones, IoT devices, and dedicated systems such as Ultra-Wideband (UWB), our study ensures data compatibility and improves positioning accuracy using the Extended Kalman Filter (EKF). The core components include the Intelligent Data Standardization Module (IDSM), which employs a fine-tuned LLM to convert varied sensor data into a standardized format, and the Transformation Rule Generation Module (TRGM), which automates the creation of transformation rules and scripts for ongoing data standardization. Evaluated in real-time environments, our study demonstrates adaptability and scalability, enhancing operational efficiency and accuracy in seamless navigation. This study underscores the potential of advanced LLMs in overcoming sensor data integration complexities, paving the way for more scalable and precise IoT navigation solutions.
Emergency response time has always been one of the most important factors in saving lives, yet many healthcare systems struggle with delays and resource inefficiencies. This paper proposes the use of cloud-based digital twin technology, integrated with an Internet of Things (IoT) hub, to enhance patient care and optimize emergency department workflows. The system utilizes an Azure IoT virtual hub and an Azure Digital Twins model for real-time data transmission and processing of bed availability and capacity in a hospital emergency department. Azure Digital Twins can create virtual models of any physical environment, such as an emergency department, which is essential for creating an automated decision-making system using advanced and modern technologies. Unlike previous solutions that lack coherence in resource allocation systems or real-time decision-making mechanisms, our system dynamically updates emergency department bed availability in real-time through embedded system devices in hospitals, and makes automated decisions based on both patient location and resource availability. The system is implemented using a mobile application and validated using a case study. The case study includes data from multiple emergency departments in hospitals within a specific area. The system provides the best options for emergency patients based on the hospital location and bed availability.
A well-optimized emergency response system can prove lifesaving by reducing response time during a state of emergency. This paper proposes an IoT-based emergency alert and GPS tracking system implemented on an ESP32 microcontroller and an A9G GSM/GPS module. The system keeps a continuous log of the user's position and the device battery level, and pushes data at intervals through MQTT to the Adafruit IO platform for remote monitoring. To maximize battery life, the A9G module enters a low-power mode when idle while keeping GPS monitoring running. Another highlighted feature is the SOS button, which triggers an emergency by sending the user's location as an SMS and calling a pre-configured contact number. The system also maintains data integrity by automatically reconnecting to the MQTT broker when the connection is lost. Low-power and battery-portable, the system is applicable to personal protection, home health care for the elderly, outdoor adventure, and remote-area surveillance. The article explains the hardware design, the software process, MQTT-based data communication, and performance testing of the system in terms of precision, network reliability, and emergency response. The results verify the system's reliability and cost-effectiveness in real-time monitoring and reporting of emergencies. With its energy-saving and usable design, it contributes to the development of IoT-backed safety solutions for real-time tracking and emergency response when needed.
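The telemetry/SOS split described above amounts to publishing a small JSON payload on one of two MQTT topics. A host-side sketch of the payload assembly; the topic and field names are our illustration, not the authors' schema (Adafruit IO defines its own feed topics):

```python
import json
import time

def build_telemetry(lat, lon, battery_pct, sos=False):
    """Assemble a (topic, payload) pair for periodic position/battery
    reports, switching to an SOS topic when the panic button fires."""
    msg = {
        "ts": int(time.time()),   # epoch seconds at send time
        "lat": lat,
        "lon": lon,
        "battery": battery_pct,
    }
    topic = "device/sos" if sos else "device/telemetry"
    return topic, json.dumps(msg)

topic, payload = build_telemetry(12.97, 77.59, battery_pct=85, sos=True)
```

On the ESP32 the same structure would be serialized and handed to an MQTT client with automatic-reconnect enabled, matching the broker-reconnection behaviour the abstract describes.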
The quality and efficiency of the work environment are essential to the well-being, health, and productivity of employees. Despite the increasing focus on these aspects, many workplaces currently do not fully meet the needs and expectations of employees, with negative consequences for their well-being and productivity. The research aims to develop a system based on the Smart Building and Digital Twin paradigms, focusing on the implementation of various IoT components, the creation of automation flows for energy-efficient lighting, HVAC, and indoor air quality control systems, and decision support through real-time data visualization enabled by user interfaces and dashboards integrating the geometric and information model (BIM). The system also aims to provide a tool for both monitoring and simulation/planning/decision support through the processing and development of machine learning (ML) algorithms. In relation to emergency management, real-time data can be acquired, allowing information to be shared with users and building managers through dashboards and visual analysis. After defining the functional requirements and identifying all the monitorable quantities that can be translated into requirements, the system architecture is described, the implementation of the case study is illustrated, and the preliminary results of the first data collection campaign and initial estimates of future forecasts are presented.
The research proposes an affordable, real-time, IoT-enabled framework for health status monitoring, disease categorization, and navigation to an emergency room. Live sensors provide physiological data such as ECG, temperature, pulse, blood pressure, and SpO2, which a machine learning classifier trained on real patient data interprets. The system identifies traffic and weather conditions with the assistance of the Google Maps and OpenWeather APIs when forecasting routes. A logistic regression model incorporates real-time contextual factors to estimate the likelihood of reaching a hospital within half an hour of an emergency. The system demonstrates an ingenious integration of edge-connected health monitoring, real-time geospatial analytics, and evidence-based triage decision support with a Flask web front end. The architecture focuses on scalability, real-time applicability, and clinical validity, filling the gaps left by rule-based medical diagnosis models and siloed data modeling.
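The half-hour-reachability estimate is a plain logistic regression over contextual features. A sketch with made-up, unfitted weights and feature names (a real deployment would learn these coefficients from labeled trips):

```python
import math

def reach_probability(dist_km, traffic_factor, rain_mm,
                      w=(-0.4, -1.2, -0.05), b=6.0):
    """Hypothetical logistic model for P(hospital reachable in 30 min).
    Weights/bias are illustrative placeholders, not fitted values:
    longer distance, heavier traffic, and rain all lower the odds."""
    z = b + w[0] * dist_km + w[1] * traffic_factor + w[2] * rain_mm
    return 1.0 / (1.0 + math.exp(-z))

p_near = reach_probability(dist_km=2, traffic_factor=1, rain_mm=0)
p_far = reach_probability(dist_km=20, traffic_factor=2, rain_mm=10)
```

The triage layer would then rank candidate hospitals by this probability together with bed availability.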
Laboratory safety is paramount to guarantee efficient operations and the safety of users. This study establishes an emergency monitoring tower system utilizing the Internet of Things (IoT) in conjunction with a mobile application to facilitate real-time monitoring of laboratory conditions. The system is engineered to identify potential crises, including fires, gas leaks, and equipment failures, utilizing temperature, smoke, and gas sensors linked to a central tower. Sensor data is transmitted to a mobile application used by security professionals to receive alerts and respond promptly to incidents. System testing was performed in the Transmission Electron Microscopy (TEM) Laboratory at Universitas Muhammadiyah Yogyakarta. The test results indicate that the system can identify and transmit alerts within an average of 1960.75 ms. The application offers a simple interface featuring visual and auditory notifications to improve user awareness of emergencies. This research advances the creation of an IoT-based system designed for early hazard detection, expediting emergency response, and improving safety in laboratory settings. These findings are anticipated to be implemented on a broader scale for risk avoidance in diverse high-mobility facilities.
As the usage of IoT technology in health care has grown, real-time patient monitoring has become more feasible. However, existing systems usually lack accurate anomaly detection, timely emergency response, and interpretable decision support. To address these limitations, this paper proposes a Decision-Integrated Anomaly Detection and Emergency Management framework with eXplainable models (DIADEM-X). DIADEM-X employs physiological data from IoT sensors to identify unusual health patterns using an XGBoost classifier augmented with SHapley Additive exPlanations (SHAP) for clinical interpretability. An innovative Emergency Decision Fusion Engine combines several sensor readings and patient history to create priority-ordered alerts, reducing false alarms and alert fatigue. For edge deployment, the system runs with low latency under constrained resources, making it deployable in hospital wards, home care, and mobile health units. Comparison with SVM, CNN, and Bi-LSTM models shows that DIADEM-X achieves 96.1% accuracy, a 94% F1-score, 370 ms alert latency, and a 3.5% false alarm rate, a considerable improvement over current methods. The system's reliability, speed, and explainability make it a candidate for next-generation intelligent health care monitoring. The paper identifies the possibilities of using machine learning, explainable AI, and edge computing to achieve trusted, efficient, and clinically relevant IoT-based health systems.
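The Emergency Decision Fusion Engine is described only at a high level; a toy fusion rule in its spirit, combining a model anomaly score with patient-history risk and cross-sensor agreement. All weights and thresholds here are our assumptions, not the paper's:

```python
def alert_priority(anomaly_score, history_risk, n_sensors_flagging):
    """Toy decision fusion: blend the classifier's anomaly score with a
    patient-history risk prior, and require agreement from >= 2 sensors
    before escalating, which is one way to suppress false alarms."""
    fused = 0.6 * anomaly_score + 0.4 * history_risk
    if fused > 0.8 and n_sensors_flagging >= 2:
        return "critical"
    if fused > 0.5:
        return "warning"
    return "normal"
```

Note how a high fused score with only one flagging sensor stays at "warning": single-sensor spikes are the classic source of alert fatigue the paper targets.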
The destruction of infrastructure after a disaster greatly hinders rescue efforts; a timely response and accurate on-site information are crucial for a successful rescue. In this paper, we propose a UAV emergency rescue system based on LoRa (Long Range Radio) and NB-IoT (Narrow Band Internet of Things) technologies. Leveraging LoRa's long-range, low-power, and self-organizing network characteristics, infrared-imaging UAVs can collect disaster-area information online, deliver it to a cloud platform for storage and analysis, and allow rescue personnel to operate remotely.
In this paper, we discuss how the Internet of Health Things (IoHT) works in the domain of emergency care and can augment patient treatment by enabling real-time data collection, predictive modelling, and automated decision-making. IoHT facilitates communication between patients, EMS, and hospital systems through IoHT-enabled wearable sensors, smart medical devices, and cloud-based platforms, allowing proactive interventions that can reduce critical delays. Remote patient monitoring (RPM), real-time location systems (RTLS), and the AI-driven systems that form part of the IoHT are examined to determine their influence on emergency response efficiency. Emergency care providers can harness IoHT for real-time monitoring of vital signs, early identification of signs of deterioration, and timely mobilization of response efforts. Furthermore, it aids intelligent triaging and dynamic ambulance routing by supporting integrated GPS and AI-based prioritization systems that ensure the timely delivery of appropriate care to patients. IoHT is promising, but data privacy concerns, cybersecurity threats, intermittent power supply, interoperability, and infrastructure limitations are among the top barriers to successful adoption. This research identifies best practices for optimized, secure, and scalable implementations of IoHT systems and gives a detailed specification for a framework that harmonizes technological advancement with ethical and regulatory factors. Through case studies and real-world applications, this paper emphasizes the potential of IoHT to transform the future of emergency medicine. The results indicate that adoption of IoHT improves emergency response systems, lowers death rates, and increases the effectiveness of health care resources.
Moreover, IoHT algorithms need refinement, data security needs to be fortified, and standardized protocols need to be established to ensure seamless integration across the healthcare ecosystem.
No abstract available
No abstract available
In this paper, we used a vibration sensor known as the G-Link 200 to collect real-time vibration data. The sensor is connected through an internet gateway, and a Long Short-Term Memory (LSTM) network is used to classify the sensor data. The classification distinguishes normal from anomalous activity, which allows an emergency to be triggered. This is implemented in smart homes where privacy is a concern; examples of such places are toilets, bedrooms, and dressing rooms. It can also be applied in smart factories, where detecting excessive or abnormal vibration is of critical importance to factory operation. The system eliminates the discomfort of video surveillance for the user. The data collected is also useful for the research community in related areas of sensor data enhancement. MATLAB R2019b was used to develop the LSTM. The results showed that the LSTM achieves 97.39% accuracy, outperforming other machine learning algorithms, and is reliable for emergency classification.
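The paper's classifier is an LSTM built in MATLAB; as a lightweight illustrative stand-in (plainly a different, far simpler technique), a rolling z-score can flag abnormal vibration samples. The window size, threshold, and data below are assumptions for the sketch, not the paper's values:

```python
# Rolling z-score anomaly flagging: a sample is anomalous when it deviates
# more than z_thresh standard deviations from the trailing window.
from statistics import mean, stdev

def flag_anomalies(samples, window=5, z_thresh=3.0):
    flagged = []
    for i in range(window, len(samples)):
        hist = samples[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(samples[i] - mu) / sigma > z_thresh:
            flagged.append(i)
    return flagged

vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 1.02, 9.5, 1.0]  # spike at index 6
anomalies = flag_anomalies(vibration)
```

An LSTM additionally learns temporal patterns (e.g. gradual drifts) that a fixed-window statistic cannot, which is the motivation for the paper's choice.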
This paper proposes a novel approach to enhance ambulance response times and streamline patient transportation during emergencies by leveraging a smart GPS-based system integrated with Internet of Things (IoT) technology at traffic signals. Traditional methods, relying on camera detection or manual traffic clearance, often delay granting priority to ambulances. In contrast, our system pre-empts traffic signals in real time, ensuring swift passage for ambulances by communicating with IoT devices installed at traffic control boxes. The proposed system optimizes ambulance routes using real-time traffic data and machine learning algorithms, dynamically adjusting paths to avoid congestion. Components include a smart GPS navigation system, IoT devices at traffic signals, robust communication protocols, and centralized control systems. What makes this design effective is its ability to clear intersections ahead of emergency vehicles, reducing the time spent responding to medical calls while improving the prospects for better patient outcomes.
Emergencies pose a significant threat to individual lives and societal stability, necessitating prompt intervention to prevent escalation. Ambulances, serving as medical transport vehicles, are essential for transferring sick or injured patients to healthcare facilities for further treatment. The effectiveness of these crucial ambulance services can be enhanced through Internet of Things (IoT) technology. This study presents a design for an IoT and Long Range Wide Area Network (LoRaWAN)-based ambulance tracking system. Leveraging LoRaWAN technology offers an extended communication range. The system integrates a Global Positioning System (GPS) sensor with a microcontroller to track the ambulance's location. GPS data is transmitted via LoRa to a gateway, which then forwards the data to a cloud server for display in an application. Accuracy tests compared GPS readings from the microcontroller-integrated system with Google Maps data. Results indicate an average error of 0.00055736% for latitude and 0.000015648% for longitude coordinates. Additionally, the developed smartphone application displays real-time GPS data with an average delay of 22 seconds.
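The accuracy metric reported above is a percent error between a device GPS fix and a reference (Google Maps) coordinate. A minimal sketch of that computation, with invented sample coordinates of roughly the same error magnitude as the paper's figures:

```python
# Percent error between a measured GPS coordinate and a reference value,
# matching the metric style reported in the abstract (coordinates invented).
def percent_error(measured, reference):
    return abs(measured - reference) / abs(reference) * 100

lat_err = percent_error(measured=-6.2000345, reference=-6.2000000)
lon_err = percent_error(measured=106.8167017, reference=106.8167000)
```

Note that percent error relative to the raw coordinate value understates physical distance error near the equator/prime meridian; a haversine distance in meters would be a more deployment-relevant metric.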
With the wheelchair being a common medium of transport among disabled people, this paper focuses on the development of an electrical wheelchair. Both sensor and IoT implementations are considered for a motorized wheelchair. This project was a collaborative effort, with the Centre for the Rehabilitation of the Paralyzed (CRP) and the Control & Applications Research Centre (CARC), Brac University, to achieve independent operation of the electrical wheelchair. The objective of this paper is to provide modules for proximity sensing, heart rate sensing, torque sensing, GPS, and posture detection for the patient. The heart rate sensor helps the patient keep their pulse in check, while the posture detection system reminds them to change their lower-body posture periodically. Proximity sensors attached to the wheelchair detect incoming danger and alert the passenger, and in case of danger the user can call for help by pressing the SOS button. The GPS location is received through an online platform by the caretaker. The torque sensor system allows the wheelchair to function for longer hours using mechanical energy rather than depending solely on battery power. The availability of such additional features makes the existing wheelchair safer and more reliable for patients, encouraging a more independent lifestyle with assured safety for outdoor activities such as going to work, shopping, and attending educational institutions. In supporting CRP to provide medical treatment and rehabilitation for disabled people, essential requirements are met.
An IoT System for Social Distancing and Emergency Management in Smart Cities Using Multi-Sensor Data
Smart cities need technologies that can be practically applied to raise the quality of life and the environment. Among the possible solutions, Internet of Things (IoT)-based Wireless Sensor Networks (WSNs) have the potential to satisfy multiple needs, such as offering real-time plans for emergency management (due to accidental events or inadequate asset maintenance) and managing crowds and their spatiotemporal distribution in highly populated areas (e.g., cities or parks) to face biological risks (e.g., from a virus) using strategies such as social distancing and movement restrictions. Consequently, the objective of this study is to present an IoT system, based on an IoT-WSN and on algorithms (a Neural Network, NN, and Shortest Path Finding) that are able to recognize alarms, available exits, assembly points, the safest and shortest paths, and overcrowding from real-time data gathered by sensors and cameras exploiting computer vision. Subsequently, this information is sent to mobile devices using a web platform and Near Field Communication (NFC) technology. The results refer to two different case studies (i.e., emergency and monitoring) and show that the system is able to provide customized strategies and to face different situations, and that this also applies in the case of a connectivity shutdown.
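The Shortest Path Finding component can be sketched as Dijkstra's algorithm over a graph of corridors, with edges that sensors or cameras report as blocked simply omitted before routing. The graph, node names, and weights below are invented for illustration:

```python
# Dijkstra shortest path over a corridor graph: pop the cheapest frontier
# node from a priority queue until a goal (exit) is reached.
import heapq

def shortest_path(graph, start, goal):
    """graph: {node: [(neighbor, cost), ...]}. Returns (cost, path)."""
    pq = [(0, start, [start])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

corridors = {"hall": [("stairs", 2), ("atrium", 1)],
             "atrium": [("exitB", 4)],
             "stairs": [("exitA", 1)]}
cost, route = shortest_path(corridors, "hall", "exitA")
```

In the deployed system the edge weights would reflect distance and crowding, and the NN output would decide which exits and edges remain usable.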
To address the intelligent operation and maintenance requirements of the distribution Internet of Things, this paper proposes an intelligent condition monitoring method integrating edge computing, multi-source data standardization, conflict-optimized evidence theory, and Bayesian change point detection, which realizes multi-modal data fusion and improves the accuracy and response efficiency of distribution network condition perception. A PCA (Principal Component Analysis)-optimized evidence theory is employed to fuse multi-source heterogeneous data, on which a temporal monitoring model is built to achieve online fault detection and hierarchical warning. Simulation results demonstrate that the multi-source data standardization and fusion method can effectively fuse multi-modal data, and the fusion process is stable and reliable; the proposed Bayesian online change point detection algorithm can accurately identify the moments when abrupt changes occur in the data distribution.
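The core of the evidence-fusion step can be illustrated with plain Dempster's rule combining two sensors' basic probability assignments over fault hypotheses. This sketch omits the paper's PCA-based conflict optimization, and the masses are invented:

```python
# Dempster's rule of combination: multiply masses of every pair of focal
# elements, keep the intersecting mass, and renormalize away the conflict.
def dempster_combine(m1, m2):
    """m1, m2: {frozenset(hypotheses): mass}. Returns combined masses."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    k = 1.0 - conflict  # normalization: discard conflicting mass
    return {h: m / k for h, m in combined.items()}

F, N = frozenset({"fault"}), frozenset({"normal"})
sensor1 = {F: 0.7, N: 0.3}
sensor2 = {F: 0.6, N: 0.4}
fused = dempster_combine(sensor1, sensor2)
```

When conflict is high, plain renormalization becomes unstable, which is exactly the problem the paper's conflict-optimized variant targets.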
After geohazards occur, conducting rapid and sustainable secondary geohazard monitoring plays a crucial role in reducing secondary geohazard risks. However, geohazard situations vary across different areas and dynamically change with the development of geohazards. Therefore, ensuring timely data collection and the ability to dynamically adjust to changes in geohazards poses significant challenges in geohazard monitoring scenarios. This article proposes a low-latency data collection scheme considering data importance levels (LLDCL), which prioritizes data collection from high-importance sensor nodes (SNs) while still collecting data from lower importance SNs. Given the potential for sudden events in geohazard monitoring scenarios that may require adjustments to the emergency levels of monitoring points, this article introduces a deep reinforcement learning (DRL) algorithm for unmanned aerial vehicles (UAVs) path planning based on weighted age of information (DRL-WAoI). This algorithm enables UAVs to respond quickly to dynamic environments by adjusting their flight paths in real time. Furthermore, considering the limited battery capacity of UAVs, this article establishes a token-based energy trading model between UAVs and the base station (BS) to facilitate UAV recharging. Simulation experiments show that the LLDCL scheme can effectively adapt to the dynamically changing conditions of geohazard monitoring scenarios, providing a viable solution for UAV data collection and transmission.
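The weighted Age-of-Information criterion behind DRL-WAoI can be sketched as: the UAV next visits whichever sensor node maximizes importance-weighted data staleness. The node data below is invented, and the DRL policy itself is not reproduced:

```python
# Greedy weighted-AoI target selection: AoI = now - last collection time,
# scaled by each node's importance weight.
def next_node(nodes, now):
    """nodes: {name: (last_collected, importance_weight)}."""
    def weighted_aoi(item):
        _, (last, w) = item
        return w * (now - last)  # weighted age of information
    return max(nodes.items(), key=weighted_aoi)[0]

sensor_nodes = {"slope_A": (10.0, 3.0),  # high-importance, fairly stale
                "slope_B": (2.0, 1.0),   # low-importance, very stale
                "river_C": (18.0, 2.0)}
target = next_node(sensor_nodes, now=20.0)
```

The DRL agent generalizes this greedy rule by also accounting for flight distance, battery, and sudden changes in node emergency levels.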
The Internet of Things (IoT) has been instrumental in bringing about several advancements and innovations in the domain of healthcare. Healthcare professionals are essentially life savers when it comes to handling emergency cases, such as accidents, heart attacks, etc. Emergency cases are generally characterized only by the patient's vital parameters, and doctors must wait for additional details for a complete diagnosis. As a result, treatment processes and procedures sometimes get hastened and, in turn, put patients' lives at risk. It would always be helpful for doctors to be equipped with the medical requirements in advance for deciding the right course of action, thereby increasing the scope and chances of recovery. In this work, multimodal IoT (MMIoT) devices are deployed to monitor and collect health data from different body parts simultaneously. The healthcare data comprises signals and imagery captured from the MMIoT devices. Both a U-Net model and an LSTM model are used to analyze the data automatically. The data processing is carried out by the server connected to the MMIoT network. All the medical IoT devices used in this work are interconnected using a 5G network for optimal data transmission. The outputs of the U-Net and the LSTM are channelled through a dense layer to classify health anomalies accurately. This would not only assist medical professionals but also prepare them to handle unseen and atypical cases confidently in the future. It can improve the overall quality of treatment and save lives with the best available resources.
The ageing European population and the expected increasing number of medical emergencies put pressure on the medical sector and existing emergency infrastructures, which calls for new innovative digital solutions. In parallel, the increasing utilization of the Internet of Things (IoT) has enabled the collection of real-time data, allowing for the autonomous detection of acute medical emergencies. In this context, this paper presents two distinct machine learning (ML) models that leverage electrocardiogram (ECG) sensor data to autonomously detect Myocardial Infarctions (MI), a leading cause of emergencies. These models are intended to be integrated into an IoT-enabled next-generation emergency communications system (NG112) capable of detecting emergencies, initiating emergency calls (eCalls), and providing relevant information to emergency call takers, which reduces response time. To realize this, two disparate models working on fundamentally different data structures are proposed and compared: A one-dimensional convolutional neural network (CNN) operating on the raw ECG signals and a GoogLeNet-based model trained on ECG images. The PTB-XL dataset is used to evaluate the proposed models, and the results indicate the 1D CNN exhibits a favourable trade-off between precision and recall for the eCall use case. Finally, the paper also discusses applying eXplainable AI (XAI) methods to achieve explainability for the ML models, paving the way for an accountable and reliable implementation in safety-critical systems.
With the growing population, the number of vehicles on the streets is expanding. Because of insufficient rescue services and increased road traffic, road accidents are rising, leading to loss of life and property. These outcomes raise the annual death rate, which is a serious issue for sustaining human life. Although vehicles are embedded with modern technology, the accident count rises day by day because of delays in communicating information to the concerned individuals or to rescue teams. This paper introduces a LoRa (Long Range) design with low power consumption. Cloud technology is used for accident detection and for emergency ambulance transportation from the scene of the accident to the closest hospital where emergency healthcare can be delivered. Likewise, emergency data can be sent to the cloud immediately, alerting the surroundings and notifying the appropriate hospital. The proposed model builds an effective smart vehicular system involving GPS for identifying the accident spot and reaching the scene early, and impact sensors for detecting obstacles.
The Industrial Internet of Things (IIoT) uses smart sensors to monitor an industrial environment. These sensors transmit the data through wireless mediums and form wireless sensor networks (WSNs). However, industrial environments are prone to accidents like leakage of harmful gases, fires, and boilers bursting, which is very dangerous for the people working there. Existing emergency evacuation systems suffer from low response time, uneven distribution, and longer or less safe paths. This article presents an intelligent emergency evacuation system (IEES) using Internet of Things (IoT)-enabled WSNs. In this article, hybrid reinforcement learning (RL) and the multiobjective gray wolf optimization (MO-GWO) algorithm are proposed to optimize the evacuation path for each evacuee jointly. Initially, the hardware modules are uniformly deployed in the monitoring environment, and the optimal paths are identified using a RL algorithm. During an emergency, the hardware modules collect real-time data and transmit it to the gateway node for further processing. In addition, safety layers are formed near the hazardous region using the transformed pooling layers with the breadth-first search (TPOOL-BFS) emulations. Finally, optimal paths are computed using the MO-GWO algorithm to find the optimal path for each evacuee. Extensive simulations show that the proposed scheme outperformed the existing state-of-the-art algorithm.
Unmanned aerial vehicle (UAV)-enabled data collection is considered a promising paradigm of emergency data transmission for IoT applications when the communication infrastructure is damaged. UAV scheduling for data collection, as the critical technology, has attracted widespread attention. Most studies assume the absolute reliability of UAVs for data collection, yet UAV faults in flight are inevitable, and it is unacceptable if critical data is lost because of them. Therefore, we research fault-tolerant scheduling for data collection enabled by heterogeneous UAVs. First, a three-layer data collection motivation scenario is proposed, in which the fault tolerance issue is addressed for the first time. Then, we propose a utility-based fault tolerance model (UBFT) to balance the reliability and efficiency of data collection. UAV fault-tolerant scheduling is modeled as a multiobjective optimization problem to concurrently optimize the data throughput and load balancing of data collection. Combining the characteristics of the optimization objectives, an alternating coordinate optimization method, ACTOR, is presented to solve this problem efficiently. Numerous simulation and real-machine experiments demonstrate that ACTOR-UBFT achieves excellent performance in fault tolerance, data throughput, adaptation, and algorithm complexity.
Currently, the internet of things (IoT) is a technology entering various areas of society, such as transportation, agriculture, homes, smart buildings, power grids, etc. The internet of things has a wide variety of devices connected to the network, which can saturate the central links to cloud computing servers. IoT applications that are sensitive to response time are affected by the distance over which data is sent to be processed for actions and results. This work aims to create a prototype application focused on emergency vehicles using a fog computing infrastructure. This technology makes it possible to reduce response times and send only the necessary data to cloud computing. The emergency vehicle contains a wireless device that sends periodic alert messages, known as an in-vehicle beacon. Beacon messages can be used to turn traffic lights green toward the destination. The prototype contains fog computing nodes interconnected close to the vehicle using the low-power wide-area network (LPWAN) protocol known as long-range wide area network (LoRaWAN). The fog computing nodes also run a graphical user interface (GUI) application to manage the nodes. In addition, a comparison is made between fog computing and cloud computing, considering the response time of these technologies.
The ongoing advancement of architectural and structural designs, including high-ceiling and special-purpose spaces, has made fire disasters increasingly diverse and difficult to predict, demanding improved firefighting systems. This study addresses that need in academic buildings by proposing the development of an IoT-based automated emergency response website. The proposed system leverages IoT technology and wireless and Bluetooth sensor networks to gather real-time data from various sensors and devices installed on site, and uses machine learning algorithms to predict and prevent potential fire incidents. The system also includes an emergency response website that allows users to access real-time information about a fire incident, its location, severity, and evacuation instructions. Additionally, the proposed system incorporates Building Information Modelling (BIM) to optimize evacuation and rescue routes, providing early detection and accurate alarm capabilities, evacuation guidance for endangered individuals, and guidance for firefighters. The integration of BIM allows the system to provide a three-dimensional visualization of the site, enabling a more efficient and effective response to fire incidents. Overall, the proposed system aims to improve safety and security through real-time monitoring and response capabilities. By leveraging the power of IoT technology, machine learning algorithms, and BIM, the proposed system aims to reduce the impact of fire disasters by providing accurate and timely information, optimizing routes, and facilitating effective evacuation and rescue efforts. Keywords: Building Information Modeling, IoT, Sensor, Dijkstra Algorithm, Simulation
This research presents an improved solution addressing the impact of IoT-enabled real-time traffic management on emergency vehicle response times. The study was conducted using real-time traffic data and IoT sensors to monitor the flow of traffic and the movement of emergency vehicles. The results show that the integration of IoT technology improves emergency response times by enabling more efficient navigation of traffic. The benefits of IoT-enabled traffic management for emergency services include reduced response times, improved safety, and a more efficient use of resources. The results of this study have implications for the wider adoption of IoT-enabled traffic management in cities and other areas, and suggest the need for further research to explore the potential benefits and limitations of this technology.
One of the critical factors for a successful emergency mission is reliable communication between emergency responders and the community. In today’s world, there is an increasing need to collect and provide meaningful information to communities and their fire service organizations in real-time to assist with informed decision-making and early hazard alerts. Our proposed solution to this problem is a multi-modal emergency communication system based on NB-IoT, C-V2X, and nearby networking technologies. Our system is designed to converge various services, such as data collection from sensors deployed in remote industrial and residential areas, wildfire detection and monitoring, nearby communication for first responders, vehicle-to-vehicle communication for fire trucks, and ground-to-aerial communication for UAVs. We utilize the Container Registry provided by the Azure platform for over-the-air updates and to deploy new modules and updates. In addition to the hardware systems, we propose a cloud dashboard to manage all devices and offer real-time data visualization, device tracking, asset management, and live-video streaming. These features facilitate highly reliable communication among responders with low delay, enabling them to respond quickly and efficiently to emergencies.
The development of Internet of Things (IoT) technology has opened the door to creative applications in many fields, including car safety systems. This research presents an IoT-based automotive safety system that aims to improve passenger safety during accidents by providing rapid airbag deployment notification and assistance in an emergency. The system uses a network of in-car communication modules, actuators, and sensors. These sensors continually monitor several parameters, including impact intensity, deceleration, and acceleration. The technology senses the force of a collision and activates the airbags, significantly lowering the potential for serious injury. Additionally, the IoT-based system instantly notifies emergency response agencies of the collision, including the vehicle's position, the collision's severity, and whether the vehicle is occupied. This makes it possible for first responders to respond swiftly and provide the occupants with quick assistance. The technology uses Wi-Fi communication protocols to send data between the car and emergency response services, ensuring a reliable and real-time connection. The collected data is securely transmitted and stored on a cloud-based platform, enabling effective analysis and historical accident tracking. Reduced response times, increased occupant safety, and better accident data collection for further research and preventive measures are just some of the advantages of the proposed IoT-based automobile safety system with airbag notification for emergency help. IoT technology in automobile safety systems brings new benefits for enhancing general road safety and emergency response capabilities.
Recently, the Internet of Things (IoT) has played a vital role in emergency evacuation systems for smart buildings. Existing emergency evacuation systems do not consider future fire scenes, which leads to highly stretched paths or even trapping of individuals in the hazardous region. This article proposes a dynamic emergency evacuation system for shortest-safe path navigation (DESSN). The proposed work computes the shortest path for an individual toward a safe exit by considering the future spread of the fire region over time. The proposed approach creates a fireMap and a routeMap to show the fire spread and find a safe evacuation path. A modified Dijkstra algorithm is used to find the shortest path to a safe exit. This system is implemented using an IoT-enabled WSN with sensor nodes, which are equipped with different types of sensors. Deployed sensor nodes communicate with the base station (BS) to plan every individual's shortest and safe evacuation path. Sensor nodes are used to detect fire within the monitored public infrastructure. Furthermore, the BS is used to compute all the logical and arithmetical operations on real-time data. The proposed approach finds the shortest-safe path considering the future fire spread, which enables quick evacuation of evacuees during an emergency. It also helps to avoid detours. Simulation results show that the proposed approach outperforms the existing state-of-the-art approaches.
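The fire-aware routing idea can be sketched as Dijkstra over the building graph where a node is traversable only if the evacuee would reach it before the fire is predicted to arrive there. The graph and the predicted fire arrival times (a stand-in for the fireMap) are invented for illustration:

```python
# Time-aware Dijkstra: an edge into a node is usable only if the arrival
# time beats that node's predicted fire arrival time (fire_eta).
import heapq

def safe_shortest_path(graph, fire_eta, start, exits):
    """graph: {node: [(nbr, travel_time)]}. Returns (arrival_time, path)."""
    pq = [(0.0, start, [start])]
    best = {}
    while pq:
        t, node, path = heapq.heappop(pq)
        if node in exits:
            return t, path
        if node in best and best[node] <= t:
            continue
        best[node] = t
        for nbr, dt in graph.get(node, []):
            if t + dt < fire_eta.get(nbr, float("inf")):  # still safe then
                heapq.heappush(pq, (t + dt, nbr, path + [nbr]))
    return float("inf"), []

graph = {"room": [("hallA", 1.0), ("hallB", 1.0)],
         "hallA": [("exit1", 1.0)],
         "hallB": [("exit2", 3.0)]}
fire_eta = {"hallA": 1.5, "exit1": 1.8, "hallB": 10.0, "exit2": 10.0}
t, route = safe_shortest_path(graph, fire_eta, "room", {"exit1", "exit2"})
```

A plain Dijkstra would send the evacuee to exit1 (arrival 2.0), but the fire reaches exit1 at 1.8, so the fire-aware search detours through hallB; this is the trapped-path failure mode the abstract describes.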
The portable smart emergency system is a prototype tool, implemented using a set of modern devices and technologies to monitor a patient's health. The tool can send reports on the patient to the treating doctor as well as to the patient's relatives and close friends in real time. Health parameters of the patient, viz. heart rate, blood oxygen, and temperature, are monitored using electronic devices (WEMOS D1, MAX30100, DS18B20, SIM808), displayed on an LCD screen, and stored in a MySQL database. A PHP script connects to the MySQL database for easy tracking and analysis of the medical data. Doctors can monitor health updates in real time and communicate them to the patient and their relatives and close friends through a dynamic web site built with HTML, CSS, and JavaScript. As a value addition, an Android-based mobile app is also developed using App Inventor to further enable patients, family members, and close friends to monitor sensor data, receive messages, and access medical history details, all in real time. In critical cases, where the sensor readings are alarmingly high or low, the web-enabled computing system sends a high-alert message with a warning sound to the doctor and also communicates the patient's location via text message to enable immediate help. By using Wi-Fi technology and the SIM808 module, the patient's location can be monitored in emergency situations, and a text message containing the patient's geographical location can be sent to the treating doctor. The application also includes an option to enter the patient's medical history into the database using a PHP script.
Addressing the inadequacy of medical facilities in rural communities and the high number of patients affected by ailments that need immediate treatment is of prime importance for all countries. Recent healthcare emergencies bring out the importance of telemedicine and demand rapid transportation of patients to nearby hospitals with the available resources to provide the required medical care. Many current healthcare facilities and ambulances are not equipped to provide real-time risk assessment for each patient and dynamically provide the required medical interventions. This work proposes an IoT-based mobile medical edge (IM2E) node to be integrated with wearable and portable devices for the continuous monitoring of emergency patients transported via ambulances, and it delves deeper into the existing challenges, such as (a) the lack of a simplified patient risk scoring system, (b) the need for an architecture that enables seamless communication under dynamically varying QoS requirements, and (c) the need for context-aware knowledge of the effect of end-to-end delay and the packet loss ratio (PLR) on the real-time monitoring of health risks in emergency patients. The proposed work builds a data path selection model to identify the most effective path through which to route the data packets. The signal-to-interference-plus-noise ratio and the fading in the path are chosen to analyze the suitable path for data transmission.
This article delves into how automated Business Continuity and Disaster Recovery (BCDR) solutions have evolved within hybrid cloud enterprise setups. Organizations now spread vital workloads across physical infrastructure and various cloud platforms, creating challenges that conventional recovery methods struggle to handle. The paper scrutinizes the distinctive complexities found in hybrid settings, from synchronizing data across different platforms to managing network dependencies, security demands, and coordination intricacies. Infrastructure-as-code and policy-as-code approaches emerge as cornerstones for contemporary BCDR tactics, allowing recovery objectives to be defined programmatically while ensuring uniform deployment across varied environments. Several architectural models for robust implementations receive thorough examination, including geographically distributed active-active setups, recovery paths from cloud to on-premises systems, orchestration across multiple cloud providers, and approaches using immutable infrastructure principles. The integration possibilities with native BCDR services from major cloud vendors are explored alongside the hurdles of managing recovery processes spanning multiple platforms. The concluding sections highlight cutting-edge technologies set to reshape the BCDR landscape: artificial intelligence for predicting failures, architectures capable of self-repair, and frameworks that optimize economic aspects by adjusting protection based on business needs.
Business Continuity Planning (BCP) is essential to maintaining continuity of operations in the face of disruptions and will be vital in today's sophisticated cyber threat environment. Traditional approaches to BCP commonly fail to keep up with the rapidly changing nature of cyber risks. This paper examines how AI-powered Cyber Threat Intelligence (CTI) improves BCP through real-time threat documentation, predictive risk management, and dynamic response procedures. Using AI capabilities such as ML, NLP, and anomaly detection, organizations can proactively identify vulnerabilities, anticipate attack vectors, and update continuity strategies dynamically. The study introduces a framework that utilizes AI to automate threat analysis, prioritize risks by impact, and aid decision-making when a crisis occurs. Case studies and simulation results show enhanced resilience, decreased downtime, and better allocation of resources. The research concludes that AI-enhanced CTI is capable not only of strengthening cyber defenses but also of turning BCP from a reactive into a proactive discipline, making it a critical initiative for modern enterprise risk management.
In an increasingly unpredictable and data-driven business landscape, ensuring seamless continuity during disruptions has become vital. Traditional Business Continuity Planning (BCP) methods are often static, expert-dependent, and incapable of responding to real-time changes in risk environments. These limitations lead to delayed scenario updates, poor scalability, and limited interpretability. To overcome these challenges, this study introduces an Adaptive BCP framework that combines Self-Organizing Maps (SOM) for pattern recognition with Scenario-Based Generative AI (GAI) for dynamic scenario generation. The core aim is to enable automated detection of anomalous risk clusters and generate realistic continuity responses based on real-time data. A multi-agent system feeds live data—news feeds, logs, and alerts—into the model, enabling adaptive retraining and scenario regeneration. The proposed SOM-GAI-BCP model achieves superior performance with a clustering accuracy of 95.6%, scenario relevance of 89.2%, and update latency under 3 seconds, supported by high interpretability through U-Matrix visualizations and expert review. Tools such as Python, TensorFlow, and Matplotlib are employed for implementation and evaluation. This hybrid, cognitive computing approach ensures timely, intelligent decision-making and paves the way for next-generation business continuity solutions that are scalable, interpretable, and aligned with modern risk dynamics.
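The SOM stage of such a pipeline can be illustrated with a minimal, dependency-free sketch. Everything here is invented for illustration (the grid size, learning-rate schedule, and synthetic "normal" cluster are assumptions, not the paper's implementation, which uses TensorFlow and live multi-agent feeds). An input whose quantization error, its distance to the best-matching unit, is large relative to normal traffic can be treated as an anomalous risk cluster:

```python
import math
import random

def bmu(w, x):
    """Grid coordinates of the best-matching unit (closest weight vector) for x."""
    g = len(w)
    return min(((i, j) for i in range(g) for j in range(g)),
               key=lambda ij: sum((a - b) ** 2
                                  for a, b in zip(w[ij[0]][ij[1]], x)))

def train_som(data, grid=4, dim=2, epochs=200, seed=0):
    """Train a tiny Self-Organizing Map on `dim`-dimensional points."""
    rng = random.Random(seed)
    w = [[[rng.random() for _ in range(dim)] for _ in range(grid)]
         for _ in range(grid)]
    for t in range(epochs):
        lr = 0.5 * (1 - t / epochs)                       # decaying learning rate
        radius = max(1.0, (grid / 2) * (1 - t / epochs))  # shrinking neighbourhood
        x = rng.choice(data)
        bi, bj = bmu(w, x)
        for i in range(grid):
            for j in range(grid):
                d2 = (i - bi) ** 2 + (j - bj) ** 2
                if d2 <= radius ** 2:
                    h = math.exp(-d2 / (2 * radius ** 2))  # Gaussian neighbourhood
                    w[i][j] = [wk + lr * h * (xk - wk)
                               for wk, xk in zip(w[i][j], x)]
    return w

def quant_error(w, x):
    """Distance from x to its best-matching unit; large values flag anomalies."""
    i, j = bmu(w, x)
    return math.dist(w[i][j], x)

# One "normal" operating cluster around (0.2, 0.2); a risk indicator far from
# every well-fit unit has a much larger quantization error.
rng = random.Random(1)
normal = [[rng.gauss(0.2, 0.05), rng.gauss(0.2, 0.05)] for _ in range(200)]
som = train_som(normal)
print(quant_error(som, [0.2, 0.2]), quant_error(som, [0.9, 0.9]))
```

A production system would retrain the map as new feeds arrive and inspect cluster boundaries via a U-Matrix, as the abstract describes.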
Cyber incident response teams operate in increasingly complex and fast-evolving threat environments where adversaries leverage automation, polymorphic malware, and distributed attack vectors to maximize impact and evade detection. Traditional response workflows, often sequential, manual, and labor-intensive, struggle to keep pace, resulting in prolonged dwell times, reduced forensic clarity, and heightened operational risk. Integrating Artificial Intelligence (AI) into incident response frameworks provides a transformative pathway for strengthening organizational cyber resilience. AI-driven analytics can continuously monitor network behavior, detect subtle anomalies, and rapidly correlate multi-source indicators of compromise, enabling earlier detection and prioritization of high-severity alerts. Machine learning-based triage accelerates containment by recommending or executing predefined mitigation playbooks, while natural language processing and reasoning agents support investigators in evidence classification, root-cause determination, and adversary attribution. Beyond immediate detection and remediation benefits, AI enhances forensic accuracy by ensuring systematic logging, timeline reconstruction, and integrity preservation across complex environments, including cloud and hybrid infrastructures. This capability strengthens legal, regulatory, and insurance-driven reporting requirements. Additionally, AI-supported simulation environments can model attack propagation, evaluate defensive posture, and guide training scenarios, empowering incident response teams to anticipate adversarial behavior rather than merely react. As organizations increasingly prioritize continuity and operational resilience, AI-enabled cyber incident response is emerging as a strategic capability rather than a supplementary tool.
However, successful implementation requires cohesive governance, human-centered oversight, transparent model explainability, and alignment with ethical and regulatory frameworks. This work underscores a shift toward hybrid human-machine incident response teams capable of faster containment, higher forensic fidelity, and sustained business continuity amid evolving cyber threats.
The aeronautical information business is an important part of ensuring the safe operation of the air transport system. Addressing the issue of business continuity in aeronautical information business, based on a systematic analysis of data sources and data classification, the study first establishes a classification structure for aeronautical information business, describing the impact relationships
LHCb (Large Hadron Collider beauty) is one of the four large particle physics experiments aimed at studying differences between particles and anti-particles and very rare decays in the charm and beauty sector of the standard model at the LHC. The Experiment Control System (ECS) is in charge of the configuration, control, and monitoring of the various subdetectors as well as all areas of the online system, and it is built on top of hundreds of Linux virtual machines (VM) running on a Red Hat Enterprise Virtualisation cluster. For such a mission-critical project, it is essential to keep the system operational; it is not possible to run the LHCb’s Data Acquisition without the ECS, and a failure would likely mean the loss of valuable data. In the event of a disruptive fault, it is important to recover as quickly as possible in order to restore normal operations. In addition, the VM’s lifecycle management is a complex task that needs to be simplified, automated, and validated in all of its aspects, with a particular focus on deployment, provisioning, and monitoring. The paper describes the LHCb’s approach to this challenge, including the methods, solutions, technology, and architecture adopted. We also show limitations and problems encountered, and we present the results of tests performed.
Problem definition: Approximately 11,000 alleged illicit massage businesses (IMBs) exist across the United States hidden in plain sight among legitimate businesses. These illicit businesses frequently exploit workers, many of whom are victims of human trafficking, forced or coerced to provide commercial sex. Academic/practical relevance: Although IMB review boards like Rubmaps.ch can provide first-hand information to identify IMBs, these sites are likely to be closed by law enforcement. Open websites like Yelp.com provide more accessible and detailed information about a larger set of massage businesses. Reviews from these sites can be screened for risk factors of trafficking. Methodology: We develop a natural language processing approach to detect online customer reviews that indicate a massage business is likely engaged in human trafficking. We label data sets of Yelp reviews using knowledge of known IMBs. We develop a lexicon of key words/phrases related to human trafficking and commercial sex acts. We then build two classification models based on this lexicon. We also train two classification models using embeddings from the bidirectional encoder representations from transformers (BERT) model and the Doc2Vec model. Results: We evaluate the performance of these classification models and various ensemble models. The lexicon-based models achieve high precision, whereas the embedding-based models have relatively high recall. The ensemble models provide a compromise and achieve the best performance on the out-of-sample test. Our results verify the usefulness of ensemble methods for building robust models to detect risk factors of human trafficking in reviews on open websites like Yelp. 
Managerial implications: The proposed models can save countless hours in IMB investigations by automatically sorting through large quantities of data to flag potential illicit activity, eliminating the need for manual screening of these reviews by law enforcement and other stakeholders. Funding: This work was supported by the National Science Foundation [Grant 1936331]. Supplemental Material: The online appendix is available at https://doi.org/10.1287/msom.2023.1196 .
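The lexicon-based stage of this screening pipeline can be sketched as a weighted keyword matcher. The phrases and weights below are invented placeholders, not the authors' expert-built lexicon, and a real system would ensemble this with the BERT/Doc2Vec embedding models the paper describes:

```python
# Illustrative lexicon only; real entries were curated by domain experts.
LEXICON = {
    "full service": 3,
    "happy ending": 3,
    "table shower": 2,
    "tip extra": 1,
}

def risk_score(review: str) -> int:
    """Sum lexicon weights for each key phrase appearing in a review."""
    text = review.lower()
    return sum(w for phrase, w in LEXICON.items() if phrase in text)

def flag(review: str, threshold: int = 3) -> bool:
    """Flag a review for human follow-up once its score crosses the threshold."""
    return risk_score(review) >= threshold

reviews = [
    "Great deep-tissue massage, very professional staff.",
    "Asked for a table shower and got the full service, tip extra.",
]
print([flag(r) for r in reviews])  # → [False, True]
```

Lexicon matching explains the high precision reported: a flag can always be traced back to the exact phrases that triggered it, which matters when investigators must justify follow-up.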
This study examined the efficacy of artificial intelligence (AI) technologies in predictive risk assessment and their contribution to ensuring business continuity. This research aimed to understand how different AI components, such as natural language processing (NLP), AI-powered data analytics, AI-driven predictive maintenance, and AI integration in incident response planning, enhance risk assessment and support business continuity in an environment where businesses face a myriad of risks, including natural disasters, cyberattacks, and economic fluctuations. A cross-sectional design and quantitative method were used to collect data for this study from a sample of 360 technology specialists. The results of this study show that AI technologies have a major impact on business continuity and predictive risk assessment. Notably, it was discovered that NLP improved the accuracy and speed of risk assessment procedures. The integration of AI into incident response plans was particularly effective, greatly decreasing company interruptions and improving recovery from unforeseen events. It is advised that businesses invest in AI skills, particularly in fields such as NLP for automated risk assessment, data analytics for prompt risk detection, predictive maintenance for operational effectiveness, and AI-enhanced incident response planning for crisis management.
This paper explores the transformative impact of Infrastructure as Code (IaC) on disaster recovery and business continuity in cloud environments. Infrastructure as Code is defined as the practice of managing and provisioning infrastructure through machine-readable code, facilitating automation, consistency, and scalability. The relevance of IaC in disaster recovery is highlighted, demonstrating how it enhances operational efficiency and resilience by automating key processes such as backup, failover, and restoration. Furthermore, the paper discusses the importance of business continuity, emphasizing IaC’s role in maintaining and quickly restoring critical services. The advantages of using IaC tools and practices to enforce continuity plans are examined, alongside a set of best practices for successful implementation. Ultimately, the paper concludes that adopting Infrastructure as Code is essential for organizations seeking to enhance their disaster recovery and business continuity strategies in an increasingly complex digital landscape.
Critical infrastructure sectors are essential to societal well-being and technological progress but face escalating cyber threats that can cause cascading system failures. The EU-funded DYNAMO project addresses this challenge by integrating Business Continuity Management with Cyber Threat Intelligence into a unified platform to ensure operational continuity. This paper presents a situational awareness framework that supports decision-making across the resilience cycle: preparation, prevention, protection, response, and recovery. By incorporating Human-Centered Interaction Design within the Double Diamond Framework, the DYNAMO project follows four iterative stages: Discover, Define, Develop, and Deliver. These stages enable end-user collaboration, co-creation of solutions, and validation of the framework’s effectiveness through the elaboration of scenarios and comprehensive use cases, co-created by end-users and pertinent stakeholders. This approach is piloted in the maritime, healthcare, and energy sectors, enabling end-users to identify critical assets, mitigate cyber risks, safeguard data, and refine incident response strategies. DYNAMO provides threat detection coupled with automated incident response and real-time situational awareness. Collaboration among relevant stakeholders, from technological actors such as industry operators and cybersecurity experts to regulatory authorities, ensures regulatory and legal compliance, asset protection, and operational stability.
In today’s business world, where everything is going digital, cybersecurity threats pose a major risk to the continuity of operations. To create efficient Business Continuity Plans (BCPs), cyberattacks must be classified and predicted in a timely manner. This paper explores applying supervised machine learning models (Random Forest, Gradient Boosting, and Logistic Regression) to identify and categorize cyber threats on a real-world cybersecurity attack dataset with 39,999 instances and 25 features. The dataset was prepared using preprocessing methods such as treatment of null values, feature encoding, and a train-test split. The evaluation demonstrated that all models performed roughly equally, with Logistic Regression yielding the best, albeit still poor, accuracy of 33.86%. Although current model performance leaves much to be desired, the study surfaces crucial findings concerning data imbalance and feature importance. These results indicate that, with better data preprocessing, class balancing, and feature engineering, supervised learning models could play an important role in automated real-time cyber threat response mechanisms, making business continuity planning more reliable.
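The preprocessing steps this abstract names (null-value treatment, feature encoding, train-test split) can be sketched without any ML library. The row schema and column names below are made up for illustration; assume records arrive as dictionaries with a `label` target field:

```python
import random

def preprocess(rows, target="label", seed=42, test_frac=0.2):
    """Null imputation, label encoding of categoricals, and a train/test split."""
    cols = [c for c in rows[0] if c != target]
    # 1. Impute missing numeric values with the column mean.
    for c in cols:
        vals = [r[c] for r in rows if isinstance(r[c], (int, float))]
        mean = sum(vals) / len(vals) if vals else 0.0
        for r in rows:
            if r[c] is None:
                r[c] = mean
    # 2. Encode string categories as integers (sorted for determinism).
    for c in cols:
        if any(isinstance(r[c], str) for r in rows):
            codes = {v: i for i, v in enumerate(sorted({r[c] for r in rows}))}
            for r in rows:
                r[c] = codes[r[c]]
    # 3. Shuffled train/test split.
    rng = random.Random(seed)
    rows = rows[:]
    rng.shuffle(rows)
    cut = int(len(rows) * (1 - test_frac))
    return rows[:cut], rows[cut:]

data = [
    {"bytes": 120.0, "proto": "tcp",  "label": "normal"},
    {"bytes": None,  "proto": "udp",  "label": "ddos"},
    {"bytes": 80.0,  "proto": "tcp",  "label": "normal"},
    {"bytes": 950.0, "proto": "icmp", "label": "ddos"},
    {"bytes": 100.0, "proto": "tcp",  "label": "normal"},
]
train, test = preprocess(data)
print(len(train), len(test))  # → 4 1
```

In practice, the class-balancing step the authors recommend (e.g. oversampling the minority attack classes) would slot in between steps 2 and 3, applied to the training portion only.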
With the continuous expansion of network scale and the ongoing development of network applications, network attacks are becoming increasingly rampant, posing huge potential risks to network security. This paper studies neural-network-based methods for addressing these problems. Its main research contribution is to detect anomalous network traffic and classify different types of network attacks using a multilayer neural network, which forms the core of the intrusion detection module. The classification learner is constructed from several machine learning algorithms, namely Random Forest, K-means, and Gradient Boosting Decision Tree (GBDT). The model is evaluated using public datasets drawn from real business scenarios. Experimental results show that the GBDT-based model monitors network anomalies more accurately and efficiently, with finer and more precise granularity of network attack types. It can effectively and proactively target network attacks, reduce the noise of different attribute feature domains in network traffic, eliminate correlations between them, improve the detection rate, correctness, and accuracy of attack detection, and provide better network security defense for the field of big data.
Organizations need automated detection technology as security threats grow increasingly complex. This work establishes an automated anomaly detection framework that gives organizations real-time anomaly detection over their network traffic records. The study benchmarks four machine learning systems, identifying the Random Forest (RF), Logistic Regression, Decision Tree (DT), and K-Nearest Neighbors (KNN) classifiers as suitable options for threat classification frameworks. Accuracy, precision, recall, and F1-score serve as the main evaluation metrics throughout testing. The system includes an automated log reader that handles unlabeled test data for threat identification, with integrated security threat detection capabilities. Because security requirements differ across practical situations, performance varies significantly with each model’s strengths and weaknesses; the top-performing model shows detection ability and operational reliability sufficient for real-time corporate deployment. The findings indicate that businesses should select security models according to their specific requirements, and they help practitioners choose machine learning approaches that strengthen corporate cybersecurity infrastructure across intrusion detection, malware recognition, and network security operations.
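The evaluation metrics shared by this and the preceding studies reduce to simple counts over (true, predicted) label pairs. A minimal sketch, using a toy two-class example rather than real traffic logs:

```python
from collections import Counter

def prf1(y_true, y_pred, positive="attack"):
    """Precision, recall, F1 for one positive class, plus overall accuracy."""
    c = Counter(zip(y_true, y_pred))
    tp = c[(positive, positive)]
    fp = sum(v for (t, p), v in c.items() if p == positive and t != positive)
    fn = sum(v for (t, p), v in c.items() if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return precision, recall, f1, accuracy

y_true = ["attack", "benign", "attack", "attack", "benign"]
y_pred = ["attack", "benign", "benign", "attack", "attack"]
p, r, f, a = prf1(y_true, y_pred)
print(round(p, 2), round(r, 2), round(f, 2), round(a, 2))  # → 0.67 0.67 0.67 0.6
```

Reporting F1 alongside accuracy matters here because intrusion datasets are heavily imbalanced: a model that labels everything "benign" can score high accuracy while its recall on attacks is zero.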
Organizations have been responding to possible disruptions while also trying to increase customer satisfaction by digitising their processes. Intelligent Process Automation (IPA) has thus attracted growing attention for its ease of use with data and requirements compliance; it is a step above regular automation in that it mimics human behaviours and thought patterns to intelligently streamline workflows and business processes. However, the continual introduction of technology via process automation can affect business continuity both positively and negatively, and these impacts need to be addressed. Although recent best practices, frameworks, guidelines, and standards exist, few studies focus on the relationship between these realms. This study examines the relationship between two sets of requirements: IEEE 2755.2-2020, an implementation practice and management methodology for intelligent software-based process automation, and ISO 22301:2019, which specifies a management system to protect against disruptions, reduce their likelihood, and prepare for, respond to, and recover from outages when they arise. This research contributes to an emerging area investigating the interplay between intelligent process automation and the continuity of business operations. Both standards are analysed and explained so that users can develop intelligent software-based process automation that complies with both frameworks, embedding continuity practices in an organization while optimizing business processes with IPA. The study provides a bi-directional mapping between IEEE 2755.2:2020 and ISO 22301:2019, along with a visual model to enhance their utility, offering versatile applications that benefit a wide range of stakeholders.
Business continuity management provides holistic and proactive approaches for businesses to effectively manage the potential disruption caused by unforeseen events. Despite the benefits, most organizations are reluctant to adopt effective systems for BCP due to implementation technicalities, complacency, lack of tools and resources, and lack of training. This explorative study leverages case studies to examine and critically analyze how information systems can be leveraged to create, test, execute, and maintain BCPs in IT-driven companies. The case studies focus on the information systems leveraged by a purposively selected sample of Fortune 500 companies. The chosen companies include Walmart, Amazon, Apple, Microsoft, Toyota, General Motors, and IBM. The diversity and credibility of the findings are enhanced by incorporating systems used by a selected sample of large IT-driven companies not listed in the Fortune 500 list. The recommended strategies for using systems to reduce the cost of BCP include adopting cloud-based solutions, automation, virtualization, risk assessment, and standardization. The recommendations to educate organizations on the risks of operating without robust BCPs include awareness campaigns, real-world case studies, compliance requirements, and financial incentives. Implementing the proposed strategies enables organizations to leverage cost-effective, scalable, and reliable information systems to automate BCPs and operations, enhancing competitiveness and resilience to disruptions.
This paper presents the design and evaluation of a real-time IoT-based emergency response and public safety alert system tailored for rapid detection, classification, and dissemination of alerts during critical incidents. The proposed architecture combines a distributed network of heterogeneous sensors (e.g., gas, flame, vibration, and biometric), edge computing nodes (Raspberry Pi, ESP32), and cloud platforms (AWS IoT, Firebase) to ensure low-latency and high-availability operations. Communication is facilitated using secure MQTT over TLS, with fallback to LoRa for rural or low-connectivity environments. A prototype was implemented and tested across four emergency scenarios (fire, traffic accident, gas leak, and medical distress) within a smart city simulation testbed. The system achieved consistent alert latency under 450 ms, detection accuracy exceeding 95%, and scalability supporting over 12,000 concurrent devices. A comprehensive comparison against seven state-of-the-art systems confirmed superior performance in latency, reliability (99.1% alert success), and uptime (99.8%). These results underscore the system’s potential for deployment in urban, industrial, and infrastructure-vulnerable environments, with future work aimed at incorporating AI-driven prediction and federated learning for cloudless operation.
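The edge-side detection step in a system like this can be sketched as a prioritized rule table over one fused sensor sample. The thresholds, field names, and rule ordering below are invented for illustration, not the paper's calibrated values:

```python
import time

# Hypothetical thresholds; a real deployment calibrates per sensor and site.
RULES = [
    ("fire",     lambda s: s["flame"] or s["temp_c"] > 60),
    ("gas_leak", lambda s: s["gas_ppm"] > 300),
    ("accident", lambda s: s["vibration_g"] > 4.0),
    ("medical",  lambda s: s["heart_bpm"] < 40 or s["heart_bpm"] > 180),
]

def classify(sample: dict):
    """Return (incident_type or None, decision latency in ms) for one sample."""
    t0 = time.perf_counter()
    incident = next((name for name, rule in RULES if rule(sample)), None)
    latency_ms = (time.perf_counter() - t0) * 1000
    return incident, latency_ms

sample = {"flame": False, "temp_c": 24.0, "gas_ppm": 420,
          "vibration_g": 0.3, "heart_bpm": 72}
incident, ms = classify(sample)
print(incident)  # → gas_leak
```

Classifying at the edge keeps the decision itself well inside the reported 450 ms end-to-end budget; the remaining latency is spent publishing the alert over MQTT/TLS (or LoRa fallback) to the cloud tier.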
No abstract available
This paper proposes an IoT-based smart belt and wearable device capable of real-time sleep monitoring, alcohol detection, and AI-based emergency alerting. The hardware includes ear-EEG, skin-EOG, visor NIR pupillometry, mmWave respiration, PPG-HRV, and IMU motion sensing, fused in a particle-Kalman hybrid filter with dynamic trust and outlier veto. A self-supervised, cross-modal encoder establishes driver-specific signatures of drowsiness, while alcohol estimation combines fuel-cell BrAC, lag-aligned transdermal readings, and VOC fingerprints with drift compensation. A neuromorphic TinyML stack performs low-power inference, with decisions augmented and explained by a neuro-symbolic reasoner. Personalized drowsiness classification achieved a mean accuracy of 98.6% (per-subject 97.6-99.0%), AUC of 0.992-0.999, and a median micro-sleep detection latency of 124 ms in controlled and on-road studies. Alcohol estimation achieved an MAE of 0.008-0.024 mg/L within the operating range, with sensitivity and specificity of at least 95%. Telemetry supported 99.7% of mixed-path deliveries with bounded latencies, and energy-harvesting duty-cycling sustained SOS availability of 99.8% in realistic outdoor use cases. Experimental results showed how multimodal sensing, principled fusion, and self-supervised personalization delivered significant robustness over single-sensor baselines while remaining within wearable power budgets.
Road accidents remain among the leading causes of death worldwide, particularly where emergency assistance is delayed or unavailable. In this paper, we present a compact, real-time embedded safety framework capable of early pedestrian detection, classification of accident events, and instant dispatch of alerts using cost-effective hardware. The system uses the YOLOv3 model for real-time pedestrian detection and a custom, manually trained Convolutional Neural Network (CNN) that classifies accident events from annotated video. Physical impact detection is performed with an ADXL335 accelerometer and an MQ2 gas sensor, giving real-time feedback. Alerts are sent directly via Fast2SMS and SMTP services, and the accident location is obtained through Wi-Fi-based geo-location APIs rather than GPS modules. The CNN produced a classification accuracy of 99.2% and issued alerts within 1.5 seconds on Raspberry Pi 4B hardware. Practical experiments carried out in daylight achieved stable performance without GSM or GPS circuits. The low cost and scalability of the proposed system make it applicable to semi-urban areas where Internet-based communication is a viable option.
This paper proposes an AI-driven automated accident response system that integrates computer vision, optical character recognition (OCR), and database analytics to expedite emergency alerts. When an accident is captured via bystander footage or surveillance cameras, the system processes the media using a deep learning model to classify injury severity (High/Medium). Concurrently, vehicle license plates are extracted via OCR and cross-referenced with registration databases to retrieve the owner’s Aadhaar details. For severe cases, the system notifies family members (via ration card linkages), nearby police, and ambulance services using SMS APIs (e.g., Twilio). Preliminary testing demonstrates accurate severity classification (>90% F1-score) and plate recognition (>85% accuracy), reducing emergency response time by ~40% compared to manual reporting. Future work includes IoT integration for real-time detection via smart CCTV cameras.
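The alert-routing step described here can be sketched as a lookup against a registration table keyed by the OCR'd plate. The table, contact fields, and severity policy below are fabricated placeholders; the real system queries Aadhaar/ration-card linkages and dispatches SMS via Twilio:

```python
# Toy registration table standing in for the government databases described
# in the paper; every identifier and contact here is a fabricated example.
REGISTRY = {
    "KA01AB1234": {"owner": "owner-1", "family_contact": "+91-XXXXXXXXXX"},
}

def route_alert(plate: str, severity: str):
    """Decide whom to notify, given the OCR'd plate and classified severity."""
    record = REGISTRY.get(plate)
    recipients = []
    if severity == "High":
        recipients.append("ambulance")   # always dispatched for severe cases
        recipients.append("police")
        if record:                       # family reachable only if plate matched
            recipients.append(record["family_contact"])
    return recipients

print(route_alert("KA01AB1234", "High"))
print(route_alert("UNKNOWN", "Medium"))  # → []
```

Separating classification from routing like this means a misread plate degrades gracefully: emergency services are still notified even when the family contact cannot be resolved.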
This paper presents SakhiSuraksha, an intelligent emergency response system designed to enhance women's safety by integrating Artificial Intelligence (AI), Internet of Things (IoT), and mobile technologies. The system autonomously detects verbal distress signals using a real-time audio analysis pipeline and triggers immediate multi-channel alerts via SMS, WhatsApp, and automated calls. Leveraging a sophisticated stack including Stanford CoreNLP for linguistic parsing, AssemblyAI for speech-to-text conversion, and a LLaMA 2 13B model for contextual distress classification, the solution ensures high accuracy and reliability. The system also integrates GPS tracking and wearable IoT devices for discreet activation and operates effectively even in low-connectivity conditions through an SMS fallback mechanism. Experimental evaluation on a custom multilingual dataset demonstrates over 96% accuracy in distress detection for Hindi and English, with an average end-to-end alert response time of 1.3 seconds. The architecture is validated to be scalable, supporting over 500 concurrent users with 99.97% uptime. This paper details the complete system architecture, the technical methodology employed, and a comprehensive quantitative performance evaluation, establishing SakhiSuraksha as a robust, scalable, and privacy-conscious real-time safety solution.
Road traffic accidents are a leading cause of mortality worldwide, often exacerbated by delayed emergency response and inaccurate incident localization. This paper presents a real-time IoT-enabled vehicle collision detection and emergency response system designed to address these critical issues. The system combines sensors, including an accelerometer, gyroscopes, ultrasonic impact sensors, and GPS and GSM modules, coordinated by a low-power ESP32 microcontroller. When abnormal motion signatures suggestive of a collision are detected, sensor data is processed locally with a Kalman filter and established thresholds, enabling quick and trustworthy decision-making at the edge. After confirming a collision, the system sends key data, including the vehicle's position, impact level, timestamp, and identification number, to predetermined emergency numbers and cloud servers over GSM. Large-scale experiments revealed a mean collision detection accuracy of 99.03%, alert delivery in under 4 seconds over 4G networks, and GPS localization accuracy of 99.79%. The system also preserves data integrity through SD-card logging during network failures. Compared with traditional accident detection systems, the proposed solution is more reliable and offers low-latency processing and autonomous alerting without user intervention. Its lightweight MQTT protocol, embedded architecture, and power-saving design suit real-life deployment in personal and commercial vehicles. The findings confirm that the system can significantly improve post-accident response time and survivability. Future work will combine AI-based scene recognition with vehicle-to-vehicle (V2V) communication to further enhance accident prediction, classification, and autonomous intervention.
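The edge decision step (Kalman smoothing followed by thresholding) can be sketched in one dimension over accelerometer magnitudes. The noise parameters and the 6 g threshold are illustrative assumptions, not the paper's calibrated values; the point is that a single-sample spike is suppressed while a sustained high-g event still crosses the threshold:

```python
def kalman_1d(readings, q=0.1, r=1.0):
    """Smooth noisy accelerometer magnitudes with a scalar Kalman filter.

    q: process-noise variance, r: measurement-noise variance (illustrative).
    """
    x, p = readings[0], 1.0
    out = []
    for z in readings:
        p += q                 # predict: uncertainty grows
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # update toward the measurement
        p *= (1 - k)
        out.append(x)
    return out

def collision_detected(readings, threshold_g=6.0):
    """Declare a collision only when the *filtered* signal crosses the
    threshold, so one-sample spikes (potholes, glitches) are suppressed."""
    return any(v > threshold_g for v in kalman_1d(readings))

calm  = [1.0, 1.1, 0.9, 9.5, 1.0, 1.1]         # single-sample glitch
crash = [1.0, 1.2, 8.5, 9.0, 9.4, 9.2, 9.1]    # sustained high-g event
print(collision_detected(calm), collision_detected(crash))  # → False True
```

Filtering before thresholding trades a little detection latency (a few samples) for far fewer false alerts, which matters when each alert autonomously dials emergency numbers.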
BACKGROUND Delays in admitting high-risk emergency surgery patients to the intensive care unit result in worse outcomes and increased health care costs. We aimed to use interpretable artificial intelligence technology to create a preoperative predictor for postoperative intensive care unit need in emergency surgery patients. METHODS A novel, interpretable artificial intelligence technology called optimal classification trees was leveraged in an 80:20 train:test split of adult emergency surgery patients in the 2007-2017 American College of Surgeons National Surgical Quality Improvement Program database. Demographics, comorbidities, and laboratory values were used to develop, train, and then validate optimal classification tree algorithms to predict the need for postoperative intensive care unit admission. The latter was defined as postoperative death or the development of 1 or more postoperative complications warranting critical care (eg, unplanned intubation, ventilator requirement ≥48 hours, cardiac arrest requiring cardiopulmonary resuscitation, and septic shock). An interactive and user-friendly application was created. C statistics were used to measure performance. RESULTS A total of 464,861 patients were included. The mean age was 55 years, 48% were male, and 11% developed severe postoperative complications warranting critical care. The Predictive OpTimal Trees in Emergency Surgery Risk Intensive Care Unit application was created as the user-friendly interface of the complex optimal classification tree algorithms. The number of questions (ie, tree depths) needed to predict intensive care unit admission ranged from 2 to 11. The Predictive OpTimal Trees in Emergency Surgery Risk Intensive Care Unit application had excellent discrimination for predicting the need for intensive care unit admission (C statistics: 0.89 train, 0.88 test). 
CONCLUSION We recommend the Predictive OpTimal Trees in Emergency Surgery Risk Intensive Care Unit application as an accurate, artificial intelligence-based tool for predicting severe complications warranting intensive care unit admission after emergency surgery. The Predictive OpTimal Trees in Emergency Surgery Risk Intensive Care Unit application can prove useful to triage patients to the intensive care unit and to potentially decrease failure to rescue in emergency surgery patients.
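The appeal of optimal classification trees is that the fitted model reads as a few nested questions. The toy tree below is invented for illustration; its splits, thresholds, and features are not those of the POTTER-ICU model, but it shows the shape of a depth-limited, human-readable predictor:

```python
def icu_risk(age, septic, creatinine, ventilated_preop):
    """A toy depth-3 classification tree in the spirit of interpretable
    optimal trees. All splits and thresholds are invented for illustration
    and carry no clinical validity."""
    if septic:                       # question 1
        return "ICU"
    if creatinine > 2.0:             # question 2
        return "ICU" if age > 65 else "ward, close monitoring"
    if ventilated_preop:             # question 3
        return "ICU"
    return "ward"

print(icu_risk(age=72, septic=False, creatinine=2.4,
               ventilated_preop=False))  # → ICU
```

This readability is why the paper reports tree depths (2 to 11 questions) as a headline property: unlike a black-box score, a clinician can trace exactly which answers drove the triage recommendation.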
Road accidents are a leading cause of death and disability among youth. Contemporary research on accident detection systems is focused on either decreasing the reporting time or improving the accuracy of accident detection. Internet-of-Things (IoT) platforms have been utilized considerably in recent times to reduce the time required for rescue after an accident. This work presents an IoT-based automotive accident detection and classification (ADC) system, which uses the fusion of a smartphone’s built-in and connected sensors not only to detect but also to report the type of accident. This novel technique improves the rescue efficacy of various emergency services, such as emergency medical services (EMSs), fire stations, towing services, etc., as knowledge about the type of accident is extremely valuable in planning and executing rescue and relief operations. The emergency assistance providers can better equip themselves according to the situation after making an inference about the injuries sustained by the victims and the damage to the vehicle. In this work, three machine learning models based on Naïve Bayes (NB), Gaussian mixture model (GMM), and decision tree (DT) techniques are compared to identify the best ADC model. Five physical parameters related to vehicle movement, i.e., speed, absolute linear acceleration (ALA), change-in-altitude, pitch, and roll, have been used to train and test each candidate ADC model to identify the correct class of accident among collision, rollover, falloff, and no accident. The NB-based ADC model is found to be highly accurate, with a mean F1-score of 0.95.
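A Gaussian Naive Bayes classifier of the kind compared here can be sketched from scratch: fit per-class feature means and variances, then pick the class with the highest log-posterior. The five features match the paper's parameter list, but the training samples below are fabricated toy values, not the authors' data:

```python
import math
from collections import defaultdict

def fit_gnb(X, y):
    """Per-class feature means/variances plus log-priors (Gaussian NB)."""
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    model = {}
    for c, rows in by_class.items():
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        vars_ = [max(sum((v - m) ** 2 for v in col) / n, 1e-6)  # floor variance
                 for col, m in zip(zip(*rows), means)]
        model[c] = (math.log(n / len(X)), means, vars_)
    return model

def predict_gnb(model, x):
    """Class with the highest log-posterior under feature independence."""
    def logpost(c):
        prior, means, vars_ = model[c]
        return prior + sum(
            -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
            for xi, m, v in zip(x, means, vars_))
    return max(model, key=logpost)

# Toy samples of (speed km/h, ALA g, change-in-altitude m, pitch deg, roll deg)
X = [[60, 5.5, 0.1, 2, 1],   [55, 6.0, 0.2, 3, 2],     # collision
     [40, 3.0, 0.0, 5, 85],  [45, 3.5, 0.1, 4, 92],    # rollover
     [30, 4.0, -6.0, -30, 5],[35, 4.5, -7.5, -35, 4],  # falloff
     [50, 0.3, 0.0, 1, 1],   [65, 0.4, 0.1, 2, 1]]     # no accident
y = ["collision", "collision", "rollover", "rollover",
     "falloff", "falloff", "none", "none"]
model = fit_gnb(X, y)
print(predict_gnb(model, [58, 5.8, 0.1, 2, 1]))  # → collision
```

The per-feature independence assumption keeps training and inference cheap enough for a smartphone, which is consistent with NB being the best performer in this on-device setting.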
Patient monitoring systems are becoming advanced systems because they operate in the sensitive environment of the intensive care unit (ICU). Critically admitted ICU patients need a high level of attention, and the goal of saving the patient's life is paramount. With the development of Internet of Things (IoT), machine learning (ML), and artificial intelligence (AI) frameworks, patient monitoring systems have grown rapidly in recent years. An IoT-enabled patient monitoring system collects data in a smart way and processes it in the cloud, where medical practitioners can access it within a fraction of a second. Continuous monitoring is in high demand for ICU patients. The proposed smart healthcare management system continuously monitors physiological data such as heartbeat, blood pressure (BP), and respiratory rate (RR) and displays the values in the IoT cloud without interference. The model, built on an artificial intelligence framework, is also capable of detecting the patient's normal condition and, when an emergency abnormality is present, transmits a notification message to authorized persons such as doctors and medical practitioners and to the IoT cloud so that immediate steps can be taken.
In today’s fast-paced world, many elderly individuals struggle to adhere to their medication schedules, especially those with memory-related conditions like Alzheimer’s disease, leading to serious health risks, hospitalizations, and increased healthcare costs. Traditional reminder systems often fail due to a lack of personalization and real-time intervention. To address this critical challenge, we introduce MediServe, an advanced IoT-enabled medication management system that seamlessly integrates deep learning techniques to provide a personalized, secure, and adaptive solution. MediServe features a smart medication box equipped with biometric authentication, such as fingerprint recognition, ensuring authorized access to prescribed medication while preventing misuse. A user-friendly mobile application complements the system, offering real-time notifications, adherence tracking, and emergency alerts for caregivers and healthcare providers. The system employs predictive deep learning models, achieving an impressive classification accuracy of 98%, to analyze user behavior, detect anomalies in medication adherence, and optimize scheduling based on an individual’s habits and health conditions. Furthermore, MediServe enhances accessibility by employing natural language processing (NLP) models for voice-activated interactions and text-to-speech capabilities, making it especially beneficial for visually impaired users and those with cognitive impairments. Cloud-based data analytics and wireless connectivity facilitate remote monitoring, ensuring that caregivers receive instant alerts in case of missed doses or medication mismanagement. Additionally, machine learning-based clustering and anomaly detection refine medication reminders by adapting to users’ changing health patterns. By combining IoT, deep learning, and advanced security protocols, MediServe delivers a comprehensive, intelligent, and inclusive solution for medication adherence.
This innovative approach not only improves the quality of life for elderly individuals but also reduces the burden on caregivers and healthcare systems, ultimately fostering independent and efficient health management.
The Human-Centric Cloud-Based Portable ICU system revolutionizes emergency medical care by integrating real-time ambulance assistance with hospital infrastructure using advanced IoT technologies. Portable medical devices equipped with IoT sensors monitor vital health parameters, such as body temperature, heart rate, pulse rate, and SpO2 levels, enabling first responders to assess patient conditions efficiently. A unique patient ID consolidates medical data, ensuring seamless information flow. Collected data is securely transmitted via MQTT and cellular networks to a centralized cloud database, facilitating real-time communication between ambulances and hospitals. Hospitals receive instant notifications about incoming patients, allowing them to assess their capacity and respond accordingly. Real-time analytics enhance decision-making, directing patients to the most suitable healthcare facility without delays. By leveraging IoT and cloud computing, this system establishes a robust communication loop between emergency services and hospitals, significantly reducing response times and optimizing patient transfers. This innovative approach enhances the efficiency of emergency healthcare delivery, ensuring timely and appropriate medical attention in critical situations.
Accurate monitoring of vital signs in an ICU is integral to understanding the overall physical well-being of patients. Our research employed machine learning techniques to construct a predictive classification model utilizing continuous ICU vital sign measurements. The primary aim was to develop an early warning system capable of forecasting whether vital indicators would reach critical values within one hour; our ultimate aim was to enable healthcare professionals, including nurses and doctors, to intervene proactively, preventing emergency situations which could result in organ dysfunction or mortality. Our comprehensive dataset comprises vital sign measurements, lab test results, procedures, and medications from over 50,000 patients, processed through rigorous preprocessing procedures such as data cleansing, bias correction, and feature extraction and selection to produce an insightful dataset with distinguishing attributes. We selected an algorithmic set that included Decision Trees (DT), Support Vector Machines (SVM), Recurrent Neural Networks (RNN), and Long Short-Term Memory (LSTM) networks to predict critical vital signs in ICU patients one hour in advance, namely heart rate, SpO2, mean arterial pressure (MAP), respiratory rate (RR), and systolic blood pressure (SBP), building a separate prediction model for each of these targets. The results of the study demonstrated the efficacy and accuracy of machine learning methods designed to anticipate imminent changes in vital signs. Utilizing such predictive models, healthcare providers can increase their capacity to address potential complications before they occur, ultimately leading to improved patient outcomes in challenging settings.
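The paper's models are DT/SVM/RNN/LSTM; as a far simpler stand-in, a one-hour early warning can be sketched by extrapolating the recent trend of a vital sign with a least-squares fit and checking whether it is projected to cross a critical threshold. The SpO2 samples and threshold below are invented for illustration:

```python
def slope_intercept(values, dt=1.0):
    """Least-squares slope and intercept for equally spaced samples."""
    n = len(values)
    xs = [i * dt for i in range(n)]
    mx, my = sum(xs) / n, sum(values) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, values)) / sxx
    return slope, my - slope * mx

def will_cross(values, critical, horizon, dt=1.0):
    """Extrapolate the recent trend and flag if the vital is projected to reach
    the critical value within `horizon` time units."""
    slope, intercept = slope_intercept(values, dt)
    t_now = (len(values) - 1) * dt
    projected = slope * (t_now + horizon) + intercept
    rising = critical >= values[-1]
    return projected >= critical if rising else projected <= critical

# SpO2 drifting downward, sampled every 5 minutes; critical threshold 90%.
spo2 = [97, 96.5, 96, 95.2, 94.8, 94.1]
```

A real system would learn this decision from labeled trajectories rather than hard-code linear extrapolation.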
This research examines the efficacy of an IoT-based system using Recurrent Neural Networks (RNNs) for the early identification and short-term prognosis of Respiratory Tract Infections (RTIs). The proposed system uses simulated real-time physiological data (respiratory rate, heart rate, temperature, oxygen saturation, and white blood cell count) from the MIMIC-III dataset to emulate IoT sensor outputs, achieving 92.1% classification accuracy. The findings highlight the efficacy of integrating continuous monitoring principles with advanced temporal modeling for proactive healthcare treatment. The novelty of this work lies in the use of LSTM-based RNNs with simulated multi-parameter IoT data for early RTI identification. This approach outperforms the traditional static models by effectively capturing the temporal dependencies in the physiological signals of Intensive Care Unit (ICU) patients.
The Internet of Things (IoT) can be combined with machine learning to provide intelligent applications at the network nodes, and IoT extends these advantages and technologies to industry. In this paper, we propose a modification of one of the most popular feature selection algorithms, the fast correlation-based filter (FCBF). The key idea is to split the feature space into fragments of the same size. By introducing this division, we can improve the correlation analysis and, therefore, the machine learning applications operating on each node. This kind of industrial IoT application allows us to separate and prioritize sensor data from multimedia-related traffic. With this separation, the sensors are able to detect emergency situations efficiently and avoid both material and human damage. The results show the performance of the three FCBF-based algorithms for different problems and different classifiers, confirming the improvements achieved by our approach in terms of model accuracy and execution time.
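FCBF ranks features by their symmetric uncertainty with the class label. A minimal sketch of that ranking, applied to one fragment of the feature space, might look as follows; this is the standard symmetric-uncertainty computation for discrete features, not the authors' exact modification:

```python
import math
from collections import Counter

def entropy(values):
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def symmetric_uncertainty(x, y):
    """SU(X, Y) = 2 * IG(X; Y) / (H(X) + H(Y)), in [0, 1]."""
    hx, hy = entropy(x), entropy(y)
    if hx + hy == 0:
        return 0.0
    # Conditional entropy H(X | Y), averaged over the label values.
    hxy, n = 0.0, len(y)
    for yv, cnt in Counter(y).items():
        subset = [xv for xv, v in zip(x, y) if v == yv]
        hxy += (cnt / n) * entropy(subset)
    return 2 * (hx - hxy) / (hx + hy)

def rank_fragment(features, labels, threshold=0.1):
    """Rank one fragment's features by SU with the class label, dropping weak ones."""
    scored = [(symmetric_uncertainty(col, labels), idx)
              for idx, col in enumerate(features)]
    return [idx for su, idx in sorted(scored, reverse=True) if su >= threshold]
```

Full FCBF additionally removes features that are redundant with an already-selected predominant feature; the fragment-wise split proposed in the paper would run this ranking independently per fragment.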
The integration of smart technologies in agriculture has led to the creation of intelligent systems that address issues like energy use and real-time monitoring. This article introduces the MAC Scheduling and Traffic Control Prioritization with Object Detection Using Artificial Intelligence in IoT-SDN based Smart Agriculture (MSTCOD) model, improving data routing in smart agriculture using IoT, SDN, advanced MAC scheduling, and AI for traffic classification. It features a system that sorts traffic into regular, emergency, and on-demand classes and uses a Q-learning algorithm for real-time routing adjustments. The SDN-IoT gateway enables secure device authentication and manages data flow rules. A multi-path MAC scheduling mechanism enhances throughput and resource distribution, lowering collisions and interference. The lightweight AI model is designed for low-resource IoT devices, balancing accuracy and performance. Various kernel-based architectures were developed to optimize feature extraction while preventing overfitting. Performance testing showed that MSTCOD outperformed previous models in critical metrics like latency, packet loss, and throughput, making it suitable for applications requiring real-time feedback and monitoring.
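The Q-learning routing adjustment can be illustrated with a minimal tabular example; the two-link topology, state names, and reward values below are hypothetical, not the MSTCOD network:

```python
import random

def train_q(transitions, rewards, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning on a tiny routing graph.
    transitions[state][action] -> next state; terminal states have no actions."""
    q = {s: {a: 0.0 for a in acts} for s, acts in transitions.items()}
    rng = random.Random(0)  # seeded for reproducibility
    for _ in range(episodes):
        s = "gateway"
        while s in transitions and transitions[s]:
            acts = list(transitions[s])
            if rng.random() < eps:          # epsilon-greedy exploration
                a = rng.choice(acts)
            else:
                a = max(acts, key=lambda x: q[s][x])
            s2 = transitions[s][a]
            future = max(q[s2].values()) if s2 in transitions and transitions[s2] else 0.0
            q[s][a] += alpha * (rewards[(s, a)] + gamma * future - q[s][a])
            s = s2
    return q

# Hypothetical topology: the gateway can forward emergency traffic via a
# congested link (negative reward) or a clear link (positive reward).
transitions = {"gateway": {"via_congested": "sink", "via_clear": "sink"},
               "sink": {}}
rewards = {("gateway", "via_congested"): -1.0, ("gateway", "via_clear"): 1.0}
q = train_q(transitions, rewards)
best_action = max(q["gateway"], key=lambda a: q["gateway"][a])
```

After training, the greedy policy routes emergency traffic over the clear link, which is the behavior the paper's real-time routing adjustment aims for at larger scale.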
This paper presents the design and implementation of an IoT-enabled multifunctional glove intended for enhancing women’s safety and enabling sign language translation. The system incorporates flex sensors to capture finger bending, a GPS module for real-time location tracking, and a distress alert system activated by a predefined gesture. The Raspberry Pi serves as the central processor, managing data from the Arduino Nano, executing gesture classification using the k-Nearest Neighbors algorithm (k = 3), and communicating with cloud services and mobile devices. The glove achieves a gesture recognition accuracy of 96.1%, with an emergency alert success rate of 100% across 50 trials and an average response time of 3.14 seconds. For assistive communication, detected gestures are converted to text or synthesized speech using onboard text-to-speech modules. The device remains functional after five 60 °C wash cycles, confirming its durability and washability. The glove supports continuous operation for up to 8.2 hours using a 3.7 V, 60 Ah rechargeable battery. The system offers a dual-purpose, lightweight, and ergonomic solution for real-time safety intervention and accessible communication.
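Gesture classification with k-NN (k = 3) on flex-sensor vectors can be sketched from scratch. The five-finger readings, value scale, and gesture labels below are hypothetical:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify a flex-sensor reading by majority vote among the k nearest
    training samples (squared Euclidean distance)."""
    nearest = sorted(train, key=lambda item: sum((a - b) ** 2
                                                 for a, b in zip(item[0], query)))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Hypothetical five-finger flex readings (0 = straight, 100 = fully bent).
train = [
    ((95, 95, 95, 95, 95), "fist"),        # e.g., the predefined distress gesture
    ((90, 92, 96, 94, 93), "fist"),
    ((5, 4, 6, 5, 4), "open_palm"),
    ((8, 6, 5, 7, 6), "open_palm"),
    ((4, 95, 96, 94, 5), "help"),
    ((6, 92, 93, 96, 8), "help"),
]
```

With k = 3 and well-separated gestures, the two same-class neighbors always outvote the third, which keeps the classifier robust to a single noisy reading.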
Hearing-impaired drivers face significant challenges in detecting critical auditory cues, such as emergency vehicle sirens, essential for safe driving. This article presents an advanced IoT-based sound recognition system designed to enhance situational awareness for these drivers. Audible signals are recognized and transformed into alerts displayed on the dashboard. Our approach involves preprocessing audio data to extract 23 features. We normalize these features and evaluate multiple machine learning and deep learning models for their classification performance. The top five models, selected based on their performance metrics, are then combined into an ensemble model using majority voting to improve accuracy and robustness. Our dataset, comprising 1500 audio samples, enabled us to achieve a final accuracy of 94.2% with the ensemble voting approach. These results demonstrate a significant improvement in sound classification accuracy compared to individual models, indicating the effectiveness of our ensemble approach. This research provides a valuable step towards developing more accessible and safer driving assistance systems for individuals with hearing impairments.
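The majority-voting step can be sketched as follows. The per-model outputs and class labels are invented for illustration, and the tie-breaking rule (earliest-listed model wins) is an assumption, since the paper does not specify one:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine the per-model labels for one sample by majority vote; ties are
    broken in favour of the earliest-listed model (an assumption)."""
    order = {}
    for i, label in enumerate(predictions):
        order.setdefault(label, i)  # remember first position of each label
    counts = Counter(predictions)
    return max(counts, key=lambda lbl: (counts[lbl], -order[lbl]))

# Hypothetical outputs of the five selected models for three audio clips.
per_model = [
    ["siren", "siren", "horn", "siren", "speech"],
    ["horn", "horn", "horn", "siren", "horn"],
    ["speech", "siren", "speech", "speech", "speech"],
]
labels = [majority_vote(p) for p in per_model]
```

Weighted voting (scaling each model's vote by its validation accuracy) is a common refinement of this scheme.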
Background/Objectives: Optimization algorithms are critical in many fields and dynamical systems, since they facilitate identifying the best feasible solutions to complex problems while improving efficiency, cutting costs, and boosting performance. Metaheuristic optimization algorithms, in turn, are inspired by natural phenomena and provide significant benefits for solving complex optimization problems. Because complex optimization problems emerge across various disciplines, successful applications can be observed in classification and feature selection tasks, including bio-inspired diagnostic processes for certain health conditions. Sepsis continues to pose a significant threat to patient survival, particularly among individuals admitted to intensive care units from emergency departments. Traditional scoring systems, including qSOFA, SIRS, and NEWS, often fall short of delivering the precision necessary for timely and effective clinical decision-making. Methods: In this study, we introduce a novel, interpretable machine learning framework designed to predict in-hospital mortality in sepsis patients upon intensive care unit admission. Utilizing a retrospective dataset from a tertiary university hospital encompassing patient records from January 2019 to June 2024, we extracted comprehensive clinical and laboratory features. To address class imbalance and missing data, we employed the Synthetic Minority Oversampling Technique (SMOTE) and systematic imputation methods, respectively. Our hybrid modeling approach integrates ensemble-based ML algorithms with deep learning architectures, optimized through the Red Piranha Optimization algorithm for feature selection and hyperparameter tuning. The proposed model was validated through internal cross-validation and external testing on the MIMIC-III dataset.
Results: The proposed model demonstrates superior predictive performance over conventional scoring systems, achieving an area under the receiver operating characteristic curve of 0.96, a Brier score of 0.118, and a recall of 81. Conclusions: These results underscore the potential of AI-driven tools to enhance clinical decision-making processes in sepsis management, enabling early interventions and potentially reducing mortality rates.
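The oversampling step mentioned in the Methods can be sketched as SMOTE-style interpolation between minority-class neighbours; the two-feature minority samples below are invented, and real SMOTE works per-feature on the full training matrix:

```python
import random

def smote_like(minority, target_count, k=2, seed=0):
    """Generate synthetic minority samples by interpolating between each sample
    and one of its k nearest minority neighbours (SMOTE-style)."""
    rng = random.Random(seed)
    synthetic = list(minority)
    while len(synthetic) < target_count:
        base = rng.choice(minority)
        neighbors = sorted(
            (p for p in minority if p is not base),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(p, base)))[:k]
        nb = rng.choice(neighbors)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + t * (b - a) for a, b in zip(base, nb)))
    return synthetic

# Hypothetical two-feature minority class, with features already scaled to [0, 1].
minority = [(0.8, 0.7), (0.9, 0.75), (0.85, 0.9)]
balanced = smote_like(minority, target_count=9)
```

Because every synthetic point is a convex combination of two real minority samples, the oversampled class stays inside the minority region rather than duplicating exact records.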
The elderly population is particularly vulnerable to falls and sudden health deterioration, which can lead to critical consequences if not addressed promptly. This paper presents a smart, wearable-based system designed for real-time fall detection and vital health monitoring. The system uses an MPU6050 sensor to track body motion and a MAX30100 sensor to monitor heart rate, SpO2 levels, and body temperature. A machine learning model is employed to analyze sensor data and detect anomalies. Among the three classification algorithms applied (KNN, SVM, and random forest), random forest demonstrated 97% accuracy, making it suitable for this application. In case of a fall or abnormal readings, an emergency alert along with GPS coordinates is sent to a designated contact through a GSM module. A companion web interface visualizes real-time data, aiding timely medical intervention. The system demonstrates reliability, accuracy, and practicality for enhancing elderly safety and autonomy.
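A common rule-of-thumb for accelerometer-based fall detection, offered here as a baseline rather than the paper's random-forest pipeline, is a near-free-fall dip in acceleration magnitude followed shortly by an impact spike. The thresholds and signals below are illustrative:

```python
def detect_fall(accel_magnitudes, free_fall_g=0.4, impact_g=2.5, window=10):
    """Flag a fall when a near-free-fall dip (|a| below free_fall_g) is followed
    within `window` samples by an impact spike (|a| above impact_g)."""
    for i, a in enumerate(accel_magnitudes):
        if a < free_fall_g:
            tail = accel_magnitudes[i + 1:i + 1 + window]
            if any(v > impact_g for v in tail):
                return True
    return False

# Simulated |acceleration| in g at ~10 Hz: normal walking, then a fall event.
normal_walk = [1.0, 1.1, 0.9, 1.05, 1.0, 0.95, 1.1, 1.0]
fall_event = [1.0, 1.0, 0.3, 0.2, 0.25, 3.1, 1.2, 1.0]
```

A learned classifier such as the paper's random forest replaces these fixed thresholds with features extracted from the same MPU6050 signal.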
Purpose: As heart failure (HF) progresses into advanced HF, patients experience a poor quality of life, distressing symptoms, intensive care use, social distress, and eventual hospital death. We aimed to investigate the relationship between mortality and potential prognostic factors among inpatient and emergency patients with HF. Patients and Methods: In this case series study, data were collected from in-hospital and emergency care patients from 2014 to 2021, including their International Classification of Diseases codes at admission and laboratory data such as blood count, liver and renal function, lipid profile, and other biochemistry from the hospital’s electronic medical records. After a series of data pre-processing steps in the electronic medical record system, several machine learning models were used to evaluate predictions of HF mortality. The outcomes of the potential risk factors were visualized by different statistical analyses. Results: In total, 3,871 HF patients were enrolled. Logistic regression showed that intensive care unit (ICU) history within 1 week (OR: 9.765, 95% CI: 6.65-14.34; p < 0.001) and prothrombin time (OR: 1.193, 95% CI: 1.098-1.296; p < 0.001) were associated with mortality. Similar results were obtained when we analyzed the data using Cox regression instead of logistic regression. Random forest, support vector machine (SVM), AdaBoost, and logistic regression had better overall performance, with areas under the receiver operating characteristic curve (AUROCs) of >0.87. Naïve Bayes was the best in terms of both specificity and precision. With ensemble learning, age, ICU history within 1 week, and respiratory rate (BF) were the top three compelling risk factors affecting mortality due to HF. To improve the explainability of the AI models, SHapley Additive exPlanations (SHAP) methods were also applied.
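The reported odds ratios follow from the logistic-regression coefficients as OR = exp(beta), with the confidence interval derived from the coefficient's standard error. The standard error below is back-solved from the reported ICU-history interval purely as an illustration; it is not taken from the paper:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% CI from a logistic-regression coefficient and its SE:
    OR = exp(beta), CI = exp(beta -/+ z * se)."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# For the reported OR of 9.765 (ICU history within 1 week), beta = ln(9.765);
# se = 0.196 is back-solved from the reported CI (6.65, 14.34) for illustration.
beta = math.log(9.765)
```

This is why ORs are reported with asymmetric intervals: the interval is symmetric on the log-odds scale, not the odds-ratio scale.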
Conclusion Exploring HF mortality and its patterns related to clinical risk factors by machine learning models can help physicians make appropriate decisions when monitoring HF patients’ health quality in the hospital.
No abstract available
The deep integration of IoT technology and 5G communication is reshaping the public health emergency response system. This paper centers on the construction of an intelligent public health prevention and control platform, focusing on its operational process and expected results. The platform covers three stages: pre-hospital emergency response, in-hospital treatment and post-hospital rehabilitation. It constructs a multi-level collaborative intelligent medical system, integrating core modules such as 5G emergency vehicle dynamic dispatching, real-time transmission of multimodal vital signs, and intelligent allocation of regional resources. The platform realizes precise positioning of pre-hospital patients and ambulance path optimization, dynamic allocation of ICU and emergency beds within the hospital, and post-hospital remote health monitoring and personalized rehabilitation management. The platform not only supports highly effective response to public health emergencies, but also promotes the intelligence and refinement of medical processes by optimizing the allocation of medical resources and improving the level of health management.
Timely ambulance coordination is critical in emergencies, where even minutes can determine survival. Existing systems face challenges such as traffic congestion, delayed hospital communication, and inefficient routing, all of which reduce the effectiveness of care. This paper proposes an AI-enabled ambulance coordination framework that integrates ambulance drivers, hospitals, and traffic control authorities through a secure mobile application. Drivers can log in, locate the nearest hospital with real-time bed availability, and receive AI-assisted shortest path navigation. The ambulance’s live location is transmitted to traffic control rooms, enabling dynamic traffic light adjustment to minimize transit delays. Simultaneously, IoT-based health monitoring devices collect patient vitals and transmit them to the selected hospital, ensuring that ICU or emergency units are prepared before arrival. By combining navigation, hospital readiness, and smart traffic systems in a unified platform, the proposed framework has the potential to reduce emergency response times by up to 30–40%, directly improving survival outcomes.
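The AI-assisted shortest-path navigation step can be sketched with Dijkstra's algorithm over a travel-time graph; the road network, node names, and edge weights below are hypothetical:

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest travel-time path in a weighted road graph.
    graph: {node: [(neighbor, minutes), ...]}"""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            break
        for nb, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nb, float("inf")):
                dist[nb] = nd
                prev[nb] = node
                heapq.heappush(pq, (nd, nb))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[goal]

# Hypothetical road network: edge weights are travel times in minutes.
roads = {
    "ambulance": [("junction_a", 4), ("junction_b", 7)],
    "junction_a": [("hospital_1", 9), ("junction_b", 2)],
    "junction_b": [("hospital_1", 3), ("hospital_2", 6)],
}
```

In the proposed framework, edge weights would be updated from live traffic data, and the hospital choice would additionally be filtered by real-time bed availability.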
Rapid identification, reliable patient data, and swift hospital preparation are essential components of emergency medical care; a delay in any of these variables raises morbidity and mortality. To automate ambulance-to-hospital coordination, we present SETU, an IoT- and artificial-intelligence-driven emergency patient management platform that combines cloud-hosted medical data, portable biometric identification (fingerprint and iris), and a real-time triage-and-ping procedure. Using a robust Internet of Things device, responders scan the patient in the field. The system instantly looks up the patient's identity, obtains relevant medical history (which may include allergies, chronic diseases, and prescriptions), and uses an artificial intelligence triage model to evaluate the patient's urgency and resource requirements. At the same time, it sends route guidance and a projected estimated time of arrival (ETA) to the ambulance, and pre-arrival notifications (staff/OT/ICU reservation) to the receiving hospital. In time-sensitive emergencies, SETU aims to mitigate door-to-treatment delays, optimize resource utilization, and improve patient outcomes through the integration of biometric-enabled record access, ML-based triage, and real-time encounter notification. By incorporating these findings into an end-to-end operational model for ambulances and hospitals, SETU builds on prior studies demonstrating that prehospital notification minimizes in-hospital waits and that IoT and biometric technologies are feasible in clinical settings. Implementation and experimental evaluation (simulation plus limited field rollouts) are recommended to measure the reductions in response and admission delays and to ensure safety, privacy, and clinical effectiveness. The project also takes into account design ethics and data security, ensuring all medical and biometric data are anonymised, encrypted, and only accessible under approved protocols.
The modular structure of SETU allows easy integration with electronic health record (EHR) systems, national healthcare infrastructures, and next-generation IoT-based data networks. Its scalability enables potential expansion to disaster response, larger-scale emergency cooperation, and remote healthcare delivery in rural areas. With advanced analytics and continuous feedback from hospital networks, SETU can be transformed into a predictive healthcare system that anticipates resource limitations and adaptively manages ambulance-hospital workflows. Ideally, SETU will act as the essential link between patient demand and hospital emergency readiness by developing a network among all hospitals in the state. This network will transform emergency response into early, data-driven, life-saving healthcare delivery.
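The pre-arrival reservation that SETU's triage-and-ping implies can be sketched as a priority queue of incoming patients; the triage scale, levels, and patient IDs below are illustrative assumptions:

```python
import heapq
import itertools

class TriageQueue:
    """Priority queue of incoming patients: lower triage level = more urgent;
    ties within a level are served in arrival order."""
    def __init__(self):
        self._heap = []
        self._arrival = itertools.count()  # monotone counter for stable ordering

    def admit(self, patient_id, triage_level):
        heapq.heappush(self._heap, (triage_level, next(self._arrival), patient_id))

    def next_patient(self):
        return heapq.heappop(self._heap)[2]

q = TriageQueue()
q.admit("patient_42", triage_level=3)  # e.g., urgent
q.admit("patient_07", triage_level=1)  # e.g., resuscitation
q.admit("patient_19", triage_level=3)  # same level, arrived later
```

The arrival counter in the heap entry is what guarantees first-come-first-served ordering within a triage level, since tuples compare element by element.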
Emergency response systems are critical for saving lives during crises, yet traditional methods often suffer from delays due to fragmented communication and resource mismanagement. This paper proposes a Rescue Squad Web Project (RSWP), a unified platform connecting individuals in emergencies with nearby rescue teams equipped with appropriate tools. The system integrates geolocation tracking, real-time databases, and AI-driven emergency classification to optimize response times and resource allocation. By leveraging GPS and crowdsourced data, RSWP ensures that the nearest available squad receives instant alerts with contextual details, such as emergency type and required instruments. A literature review highlights advancements in IoT, machine learning, and real-time systems, while identifying gaps in holistic emergency management solutions. The proposed methodology emphasizes modular architecture, AI-based prioritization, and multi-stakeholder coordination. Future enhancements could include drone integration and predictive analytics. This project aims to reduce fatalities by 30–40% in urban emergencies, as evidenced by simulations.
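Selecting the nearest available squad from GPS coordinates can be sketched with the haversine great-circle distance; the squad positions below are hypothetical:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS coordinates."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_squad(incident, squads):
    """Pick the squad whose last-reported position is closest to the incident."""
    return min(squads, key=lambda s: haversine_km(*incident, s["lat"], s["lon"]))

# Hypothetical squads with last-reported GPS positions.
squads = [
    {"id": "alpha", "lat": 28.61, "lon": 77.21},
    {"id": "bravo", "lat": 28.70, "lon": 77.10},
]
chosen = nearest_squad((28.62, 77.20), squads)
```

Straight-line distance is only a proxy; a production dispatcher would rank squads by estimated travel time over the road network instead.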
Soldiers face the risk of death constantly, yet they never shirk their responsibility. They fight in the hardest of places: on peaks and foothills, in savannahs and jungles. Their role in protecting the borders of our uncertain world is exceptional, and they sacrifice their lives for the nation. Many concerns surround soldiers' health. An integrated implementation of a hybrid deep learning approach would be convenient for armed forces engaged in various combat operations. A nano GPS (Global Positioning System) tracker is placed on each soldier, and the soldier health monitoring model is embedded and interfaced with mobile computing, health devices, and healthcare networking facilities. In the control scheme, the soldier's uniform is mounted with smart sensors. Sensor data obtained from the detectors attached to the soldiers is processed by a modified machine learning algorithm combining an autoencoder with a long short-term memory structure (AUTO-LSTM). This research presents a computational intelligence system for medical applications that can recognize both individual events and the transitions between two distinct short-duration activities. In this approach, an autoencoder extracts attributes for classification from the data obtained by the detectors, and the LSTM network then captures longer-term relationships among the datasets to further enhance analysis accuracy. The experimental outcomes indicate that the suggested solution raises classification accuracy to 97% and transition detection accuracy to above 90%, exceeding several current related frameworks.
BACKGROUND Acute kidney injury (AKI) is not only a complication but also a serious threat to patients with cerebral infarction (CI). This study aimed to explore the application of interpretable machine learning algorithms in predicting AKI in patients with cerebral infarction. METHODS The study included 3,920 patients with CI admitted to the Intensive Care Unit and Emergency Medicine departments of the Central Hospital of Lishui City, Zhejiang Province. Nine machine learning techniques, including XGBoost, logistic regression, LightGBM, random forest (RF), AdaBoost, Gaussian Naïve Bayes (GNB), Multi-Layer Perceptron (MLP), support vector machine (SVM), and k-nearest neighbors (KNN) classification, were used to develop a predictive model for AKI in these patients. SHapley Additive exPlanations (SHAP) analysis provided visual explanations for each patient. Finally, model effectiveness was assessed using metrics such as average precision (AP), sensitivity, specificity, accuracy, F1 score, precision-recall (PR) curve, calibration plot, and decision curve analysis (DCA). RESULTS The XGBoost model performed best on both the internal and external validation sets, with AUCs of 0.940 and 0.887, respectively. The five most important variables in the model were, in order, glomerular filtration rate, low-density lipoprotein, total cholesterol, hemiplegia, and serum potassium. CONCLUSION This study demonstrates the potential of interpretable machine learning algorithms in predicting AKI in patients with CI.
The health sector is one of the important foundations of a nation, and a nation stands firm when its people have access to adequate health facilities in both routine and emergency situations. Rising costs of medical treatment, an upsurge in late-stage disease detection, delays in receiving treatment or the right treatment, and a growing population all strain this sector. Every stage of life matters, but a major concern is the elderly population, who are often left helpless without care while facing the tremendous cost of medical facilities. A healthcare Internet of Things framework using artificial intelligence and data analytics attempts to maintain this balance. Sensors and artificial intelligence not only track health data but also provide safety and predictions for upcoming health scenarios. Algorithms such as rule-based classification and recurrent neural networks are used to detect and prevent diseases, with the corresponding dataset used for this purpose.
This report systematically organizes the literature on emergency management and automated classification, dividing the research into five core directions: 1) real-time monitoring and damage assessment for major natural disasters; 2) clinical emergency treatment, triage, and real-time intelligent patient monitoring; 3) safety monitoring and early warning for public transport, industry, and infrastructure; 4) organizational business resilience and post-disaster recovery planning; and 5) collaborative emergency-response mechanisms based on multimodal information extraction, knowledge graphs, and intelligent decision-making. The findings indicate that the field is shifting from traditional single-point sensing toward intelligent, end-to-end automated emergency management systems that integrate IoT, edge computing, deep learning, and large language models.