AI-Based Agricultural Pest and Disease Detection
Classical Deep Learning Architecture Innovation and Multi-Scale Feature Fusion
This group of studies explores the application of CNNs (e.g., ResNet, VGG, MobileNet), Transformers, generative adversarial networks (GANs), and hybrid models to pest and disease recognition. The research focuses on multi-scale feature extraction, attention mechanisms (CBAM, SE), and data augmentation to overcome recognition challenges posed by complex backgrounds, class imbalance, and overlapping features among multiple diseases.
- Implementation of the Convolutional Neural Network (CNN) Algorithm for Pest Detection in Green Mustard Plants(Gilang Wiwaha Soekarno, Agus Suhendar, 2025, G-Tech: Jurnal Teknologi Terapan)
- Paddy Crop Disease Detection using Deep Learning Techniques(Akshitha M, S. M, S. R. M. Sekhar, P. D., 2022, 2022 IEEE 2nd Mysore Sub Section International Conference (MysuruCon))
- AI-Driven Crop Disease Detection and Management in Smart Agriculture(Amanullah Ansari, Shrejal Singh, Dr. Nikhat Akhtar, 2025, International Journal of Scientific Research in Science and Technology)
- An ensembled-deep-learning paradigm trained with a self-improved coyote optimization algorithm (SI-COA) for crop disease detection(Preeti Shukla, A. Chandanan, 2024, Multimedia Tools and Applications)
- Leaf-based disease detection in bell pepper plant using YOLO v5(Midhun P. Mathew, T. Mahesh, 2021, Signal, Image and Video Processing)
- AI-Based Crop Disease Detection(2025, International Research Journal of Modernization in Engineering Technology & Science)
- Identifying Common Pest and Disease of Lettuce Plants Using Convolutional Neural Network(Jomer Allan G. Barcenilla, C. Maderazo, 2023, 2023 2nd International Conference on Futuristic Technologies (INCOFT))
- Pest Detection on Green Mustard Plants Using Convolutional Neural Network Algorithm(Nurhikma Arifin, Siti Aulia Rachmini, Juprianus Rusman, 2025, Indonesian Journal of Artificial Intelligence and Data Mining)
- Crop Disease Detection(S. S, P. P R, R. P, J. J, 2024, International Journal For Multidisciplinary Research)
- Tomato pest recognition using convolutional neural network in Bangladesh(Johora Akter Polin, Nahid Hasan, Md. Tarek Habib, Atiqur Rahman, Zannatun Nayem Vasha, Bidyut Sharma, 2024, Bulletin of Electrical Engineering and Informatics)
- PlantDiseaseNet: convolutional neural network ensemble for plant disease and pest detection(Muammer Turkoglu, B. Yanikoglu, Davut Hanbay, 2021, Signal, Image and Video Processing)
- Tomato pest classification using deep convolutional neural network with transfer learning, fine tuning and scratch learning(Gayatri Pattnaik, V. Shrivastava, K. Parvathi, 2021, Intelligent Decision Technologies)
- Sugarcane Crop Disease Detection(Akshay Chavan, A. Desai, K. Oza, 2025, International Journal on Advanced Computer Theory and Engineering)
- 基于CNN的番茄叶片病虫害识别技术(符丹丹, 冯 晶, 2023, 计算机科学与应用)
- Mechanism and Design of Agriculture Pest and Disease Recognition System Based on Convolutional Neural Network(Runling Wang, 2024, 2024 IEEE 7th Eurasian Conference on Educational Innovation (ECEI))
- Crop Pest Detection using Convolutional Neural Network(D. T, S. N, N. J, A. S K, S. T., S. K., 2024, Journal of Soft Computing Paradigm)
- Exploration of machine learning approaches for automated crop disease detection(Annu Singla, A. Nehra, Kamaldeep Joshi, Ajit Kumar, Narendra Tuteja, R. K. Varshney, S. Gill, R. Gill, 2024, Current Plant Biology)
- Towards robust crop disease detection for complex real field background images(Radhika Bhagwat, Y. Dandawate, 2024, Vietnam Journal of Science and Technology)
- An Efficient Insect Pest Classification Using Multiple Convolutional Neural Network Based Models(Hieu T. Ung, Quang Huy Ung, Binh T. Nguyen, 2021, ArXiv)
- 基于改进MobileNetV2的茶叶病害识别方法(严春雨, 李 飞, 2022, 软件工程与应用)
- 基于根系图像处理的番茄枯萎病检测研究(郑琼洁, 李敬蕊, 2022, 软件工程与应用)
- Crop pest detection by three-scale convolutional neural network with attention(Xuqi Wang, Shanwen Zhang, Xianfeng Wang, Cong Xu, 2023, PLOS ONE)
- 基于卷积神经网络的水稻虫害图像识别(张昕玥, 陈勇明, 郭 俊, 龚净茹, 2025, 人工智能与机器人研究)
- Investigating Generative Neural-Network Models for Building Pest Insect Detectors in Sticky Trap Images for the Peruvian Horticulture(J. Cabrera, Edwin Villanueva, 2021, No journal)
- Rapid density estimation of tiny pests from sticky traps using Qpest RCNN in conjunction with UWB-UAV-based IoT framework(Y. Juan, Ziyi Ke, Ziqiang Chen, Debiao Zhong, Weifeng Chen, Liang Yin, 2023, Neural Computing and Applications)
- Agricultural Pest Image Recognition Algorithm Based on Convolutional Neural Network and Bayesian Method(Ling Zhang, Fahui Wu, Wensen Yu, 2024, IEEE Access)
- Deep Learning model of sequential image classifier for crop disease detection in plantain tree cultivation(M. Nandhini, K. U. Kala, M. Thangadarshini, S. Verma, 2022, Comput. Electron. Agric.)
- Deep learning based plant health disease detection in tomatoes using inception v4 convolutional neural network and YOLO V8(B. Sowmya, S. Guruprasad, 2025, Discover Artificial Intelligence)
- Implementation of Convolutional Neural Network Algorithm to Pest Detection in Caisim(Cendekia Luthfieta Nazalia, Pritasari Palupiningsih, B. Prayitno, Yudhi Purwanto, 2023, 2023 International Conference on Computer Science, Information Technology and Engineering (ICCoSITE))
- Timely Detection of Stem Borer Pest Infestation through Convolutional Neural Network(V. N, Tharani Kumari G D, H. C., 2024, 2024 10th International Conference on Advanced Computing and Communication Systems (ICACCS))
- Plant Disease Detection for Guava and Mango using YOLO and Faster R-CNN(Kruthi U Shetty, Rida Javed Kutty, Khushi Donthi, A. Patil, N. Subramanyam, 2024, 2024 IEEE International Conference on Interdisciplinary Approaches in Technology and Management for Social Innovation (IATMSI))
- 基于YOLO-V5l与ResNet50的农田害虫检测(柳春源, 陈洪建, 曾小辉, 向 滔, 寇喜鹏, 2022, 人工智能与机器人研究)
- A hybrid approach for rice crop disease detection in agricultural IoT system(Yu Wang, Udaya Suriya Rajkumar Dhamodharan, Nadeem Sarwar, Faris. A. Almalki, Qamar H. Naith, Sathiyaraj R, Mohan D, 2024, Discover Sustainability)
- Crop Pest Image Classification Based on Multi-Scale Convolutional Neural Network(Chi Ma, Huikai Li, Hui Hu, Jingyan Li, Jie Wu, 2023, No journal)
- Image segmentation for pest detection of crop leaves by improvement of regional convolutional neural network(Xianchuan Wu, Yuling Liu, Mingjing Xing, Chun Yang, Shaoyong Hong, 2024, Scientific Reports)
- Hunger games search based deep convolutional neural network for crop pest identification and classification with transfer learning(Vishakha B. Sanghavi, H. Bhadka, Vijay Dubey, 2022, Evolving Systems)
- Image-based Black Gram Crop Disease Detection(S. Harika, G. Sandhyarani, D. Sagar, G. Reddy, 2023, 2023 International Conference on Inventive Computation Technologies (ICICT))
- Design and Implementation of FourCropNet: A CNN-Based System for Efficient Multi-Crop Disease Detection and Management(H. P. Khandagale, Sangram Patil, V. S. Gavali, S. V. Chavan, P. P. Halkarnikar, Prateek A. Meshram, D. Patil, 2025, ArXiv)
- Fast detection of rice striped stem borer (Chilo suppressalis) stress based on UAV sensor and multimodal segmentation method(Bingquan Chu, Zhengyang Guo, Bingjian Liu, Bitao Jian, Yujie Zhou, 2025, Plant Growth Regulation)
- Tomato plant disease prediction system with a new framework SSMAN using advanced deep learning techniques(Saravanan Madderi Sivalingam, Lakshmi Devi Badabagni, 2025, International Journal of Electrical and Computer Engineering (IJECE))
- Convolutional Neural Network Modeling for Pest Detection in Corn Crops: Optimization for Monitoring Efficiency(Óscar Samuel Ocampo Bonilla, A. Duke, 2024, 2024 9th International Conference on Control and Robotics Engineering (ICCRE))
- Pest and Disease Video Classification with Convolutional Neural Network and Transfer Learning(Ghodasara Y. R., Parmar R. S., Kamani G. J., S. D. B., P. R. G., 2024, Journal of Experimental Agriculture International)
- Features of pyramid dilation rate with residual connected convolution neural network for pest classification(Naresh Vedhamuru, Malmathanraj Ramanathan, P. Palanisamy, 2023, Signal, Image and Video Processing)
- Enhanced Crop Disease Detection With EfficientNet Convolutional Group-Wise Transformer(Jing Feng, Wen Eng Ong, W. C. Teh, Rui Zhang, 2024, IEEE Access)
- Evaluation of parameters in a neural network for detection of red ring pest in oil palm(O. Fernandez, J. L. Ordoñez-Ávila, I. A. Magomedov, 2021, I INTERNATIONAL CONFERENCE ASE-I - 2021: APPLIED SCIENCE AND ENGINEERING: ASE-I - 2021)
- An Adaptive Features Fusion Convolutional Neural Network for Multi-Class Agriculture Pest Detection(M. Qasim, Syed M. Adnan Shah, Qamas Gul Khan Safi, Danish Mahmood, Adeel Iqbal, Ali Nauman, Sung Won Kim, 2025, Computers, Materials & Continua)
- Early disease detection of black gram plant leaf using cloud computing based YOLO V8 model(V. Motru, Subbarao P. Krishna, Babu A. Sudhir, 2023, i-manager's Journal on Information Technology)
- An optimized machine learning framework for crop disease detection(L. Srinivas, A. M. V. Bharathy, S. K. Ramakuri, A. Sethy, Ravi Kumar, 2023, Multimedia Tools and Applications)
- Optimized recurrent neural network-based early diagnosis of crop pest and diseases in agriculture(Vijesh Kumar Patel, Kumar Abhishek, Shitharth Selvarajan, 2024, Discover Computing)
- Memetic salp swarm optimization algorithm based feature selection approach for crop disease detection system(Sonal Jain, Ramesh Dharavath, 2021, Journal of Ambient Intelligence and Humanized Computing)
- A novel approach for insect-pest identification using multipath convolutional neural network(V. Gupta, M. Padmavati, Ravi R. Saxena, 2023, Agricultural Research Journal)
- 融合多尺度和迁移学习的蝴蝶种类识别(李 飞, 严春雨, 2022, 软件工程与应用)
- Ensembling YOLO and ViT for Plant Disease Detection(Debojyoti Misra, Suryansh Goel, Tushar Sandhan, 2024, No journal)
- CropViT: A light-weight Transformer Model for Crop Disease Detection(G. Chemmalar Selvi, H. J. Charan, Dinesh Kumar, 2024, 2024 3rd International Conference on Artificial Intelligence For Internet of Things (AIIoT))
- Performance Evaluation of Hybrid Deep Learning Architectures for Plant Disease and Severity Classification(Y. Palve, Meesala Sudhir Kumar, 2025, EPJ Web of Conferences)
- A Rice Pest Identification Method Based on a Convolutional Neural Network and Migration Learning(Pingxia Hu, 2023, J. Circuits Syst. Comput.)
- 浅谈CNN在柑橘病虫害识别预警中的应用(李海清, 2026, 计算机科学与应用)
Real-Time Object Detection and Localization Optimization Based on the YOLO Family
This group of studies focuses on improvements to the YOLO family (from v3 to the latest v12), aiming at precise localization of lesions or pests in complex field environments. By optimizing loss functions, introducing dual attention mechanisms (e.g., DA-YOLO), and refining backbone networks, these works substantially improve inference speed and accuracy for tiny-pest detection and multi-object scenes.
- SSD-YOLO: a lightweight network for rice leaf disease detection(Canlin Pan, Sheng Wang, Yahui Wang, Chaoyang Liu, 2025, Frontiers in Plant Science)
- Leveraging YOLO for AI-Powered Image-Based Plant Disease Detection in Sustainable Agriculture(Dhouha Belghith, A. Baâzaoui, W. Barhoumi, 2026, Proceedings of the 18th International Conference on Agents and Artificial Intelligence)
- 基于YOLOv12的智慧化茶叶病害检测系统的研究与应用(许文浩, 林宇涵, 2025, 计算机科学与应用)
- 基于YOLOv5改进的粘蝇纸家蝇识别算法(王亚辉, 2025, 建模与仿真)
- A Lightweight and Efficient Plant Disease Detection Method Integrating Knowledge Distillation and Dual-Scale Weighted Convolutions(Xiong Yang, Hao Wang, Qi Zhou, Lei Lu, Lijuan Zhang, Changming Sun, Guilu Wu, 2025, Algorithms)
- Efficient model for cotton plant health monitoring via YOLO-based disease prediction(A. Pavate, Swetta Kukreja, Surekha Janrao, Sandip Bankar, Rohini Patil, Vijaykumar Bidve, 2025, Indonesian Journal of Electrical Engineering and Computer Science)
- CEFW-YOLO: A High-Precision Model for Plant Leaf Disease Detection in Natural Environments(Jinxian Tao, Xiaoli Li, Yong He, Muhammad Adnan Islam, 2025, Agriculture)
- Deep Learning for Crop Disease Detection using YOLOv8(M. Rakesh Kumar, N. Rengalakshmi, R.P Saghana Shree, R. Akiladevi, P. Kumar, 2024, 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT))
- Improvement Algorithm Analysis and Implementation of Plant Disease and Pest Recognition Based on YOLO(Yongxiang Wei, Xin Pan, 2024, 2024 5th International Conference on Computer Vision, Image and Deep Learning (CVIDL))
- Rice Canopy Disease and Pest Identification Based on Improved YOLOv5 and UAV Images(Gaoyuan Zhao, Yubin Lan, Yali Zhang, Jizhong Deng, 2025, Sensors (Basel, Switzerland))
- BGM-YOLO: An accurate and efficient detector for detecting plant disease(Chenghai Yu, Junhao Xie, Fernandes Jean Adrian Tony, 2025, PLOS One)
- Improving YOLO-Based Plant Disease Detection Using αSILU: A Novel Activation Function for Smart Agriculture(D. T. Nguyen, Thanh Dang Bui, Tien Ngo, Uoc Quang Ngo, 2025, AgriEngineering)
- 改进的YOLOv7茶叶病害识别模型(李治林, 李遇鑫, 2023, 建模与仿真)
- Pest and Disease Identification in Pomelo Leaves Using YOLOv9 Convolutional Neural Network (CNN) Model(Juan Miguel D. Gundran, Edrick Benjamin G. Perez, M. A. Latina, 2025, 2025 IEEE International Conference on Machine Learning and Applied Network Technologies (ICMLANT))
- Implementation of YOLO in Cabbage Plant Disease Detection for Smart and Sustainable Agriculture(Muhammad Andryan Wahyu Saputra, Damar Novtahaning, Narandha Arya Ranggianto, Dwi Wijonarko, 2024, Brilliance: Research of Artificial Intelligence)
- YOLO-Citrus: a lightweight and efficient model for citrus leaf disease detection in complex agricultural environments(Wanmei Feng, Junyu Liu, Zhen Li, Shilei Lyu, 2025, Frontiers in Plant Science)
- Development of an Improved Capsule-Yolo Network for Automatic Tomato Plant Disease Early Detection and Diagnosis(Idris Ochijenu, Monday Abutu Idakwo, Sani Felix, 2025, ArXiv)
- Algorithm for Crop Disease Detection Based on Channel Attention Mechanism and Lightweight Up-Sampling Operator(Wei Chen, Lijuan Zheng, Jiping Xiong, 2024, IEEE Access)
- Performance Evaluation of YOLO Models in Plant Disease Detection(U. Ali, Maizatul Akmar Ismail, Riyaz Ahamed Ariyaluran Habeeb, Syed Roshaan Ali Shah, 2024, Journal of Informatics and Web Engineering)
- YOLO-AgriNet: A Deep Learning-Based Model for Real-Time Plant Disease Detection in Precision Agriculture(Armel Ngomade Nkonjoh, Jean Roger Djamen Kaze, Rostand Verlaine Nwokam, Brondon Ella Njotsa, Alain François Kuate, Alain Serge Mbiada Tchouta, Serge Bertrand Bissiongol Babagniack, 2025, Journal of Computer and Communications)
- DA-YOLO:基于双重注意力的松枯病检测模型(邹素华, 郑秀玲, 杨 鹏, 崔博琰, 刘雪丽, 2024, 软件工程与应用)
- Comparative Analysis of YOLO Models for Plant Disease Instance Segmentation(Agamjot Singh, Aryan Yadav, Anshul Verma, Prashant Singh Rana, 2024, 2024 IEEE International Conference on Computer Vision and Machine Intelligence (CVMI))
- YOLO-ODD: an improved YOLOv8s model for onion foliar disease detection(Anusha Raj, Mukund Dawale, Sagar M. Wayal, K. Khandagale, Indira Bhangare, Susmita Banerjee, Ashwini Gajarushi, R. Velmurugan, M. Baghini, Suresh Gawande, 2025, Frontiers in Plant Science)
- Evaluating the Performance of YOLO Object Detectors for Plant Disease Detection(Youssef Natij, Hajar El Karch, Ayyad Maafiri, Abdelkader Mezouari, 2024, 2024 11th International Conference on Wireless Networks and Mobile Communications (WINCOM))
- 基于深度学习的苹果品质智能检测算法研究(吴岩松, 覃进勇, 曾文俊, 2025, 人工智能与机器人研究)
- TomatoGuard-YOLO: a novel efficient tomato disease detection method(Xuewei Wang, Jun Liu, 2025, Frontiers in Plant Science)
- 基于改进YOLOv8的轻量化农业害虫检测算法(李 奥, 2026, 建模与仿真)
- 基于YOLOv5s的蝴蝶种类检测(覃 林, 谢本亮, 2023, 建模与仿真)
- Pest Detection and Identification in Rice crops using Yolo V3 Convolutional Neural Network(G. Anitha, P. Harini, V. Chandru, S. Abdullah, B. Rahman, 2024, 2024 OPJU International Technology Conference (OTCON) on Smart Computing for Innovation and Advancement in Industry 4.0)
- Comparison Study of Corn Leaf Disease Detection based on Deep Learning YOLO-v5 and YOLO-v8(N. Chitraningrum, L. Banowati, D. Herdiana, Budi Mulyati, Indra Sakti, Ahmad Fudholi, Huzair Saputra, Salman Farishi, K. Muchtar, Agus Andria, 2024, Journal of Engineering and Technological Sciences)
- Multi-Crop Plant Disease Detection using PlantDoc(Akshaj Saini, Parteek Madaan, Priyanka Kumari, Swati Panwar, Sahil Kaushik, 2025, 2025 4th International Conference on Automation, Computing and Renewable Systems (ICACRS))
- In-field Chilli Crop Disease Detection Using YOLOv5 Deep Learning Technique(Mayalekshmi K M, Abhishek Ranjan, R. Machavaram, 2023, 2023 IEEE 8th International Conference for Convergence in Technology (I2CT))
- Comprehensive Analysis of a YOLO-based Deep Learning Model for Cotton Plant Leaf Disease Detection(Sailaja Madhu, V. Ravisankar, 2025, Engineering, Technology & Applied Science Research)
- Research on the Application of Convolutional Neural Network Based on YOLO Algorithm in Pest Small Target Detection(Gangwei Kang, Liang Hou, Zhuo Zhao, Bingbing Lang, 2023, 2023 3rd Asia-Pacific Conference on Communications Technology and Computer Science (ACCTCS))
- Early-Stage Disease Prediction in Chilli Plant Using YOLO Models(M. R, S. Girish, P. B. R., N. Rani, Sangamesha D, 2024, 2024 Second International Conference on Advances in Information Technology (ICAIT))
- Plant Disease Detection Using Yolo Machine Learning Approach(Ariwa, R. N., M. C., Teneke, N. G., A. S, F. K. G., 2024, British Journal of Computer, Networking and Information Technology)
- 基于Faster-RCNN算法的玉米叶面病害识别系统(杨成贺, 刘家硕, 吴亚宁, 刘英翘, 信富俊, 2024, 应用数学进展)
- Leveraging YOLO deep learning models to enhance plant disease identification(Yousef Alhwaiti, Muntazir Khan, Muhammad Asim, Muhammad Hameed Siddiqi, M. Ishaq, Madallah Alruwaili, 2025, Scientific Reports)
- YOLO-LF: application of multi-scale information fusion and small target detection in agricultural disease detection(Xinming Wang, Saihong Tang, M. K. A. Mohd Ariffin, Mohd Idris Shah B. Ismail, Jiazheng Shen, 2025, Frontiers in Plant Science)
Lightweight Model Design and Edge/Mobile Deployment Techniques
Addressing the limited computing resources available on farms, this group of studies investigates model compression via knowledge distillation, pruning, quantization (QAT/PTQ), and lightweight architectures (e.g., GhostNet, WraNet, improved MobileNetV2), enabling efficient inference on FPGAs, NPUs, Raspberry Pi, and mobile devices.
- DYL-Leaf: A Lightweight Distilled YOLO-based Model for Plant Leaf Disease Classification(Touhid Alam, Abir Bokhtiar, Md. Saef Ullah Miah, J. Sulaiman, 2025, 2025 IEEE 9th International Conference on Software Engineering & Computer Systems (ICSECS))
- 基于改进轻量级MobileNetV2的辣椒病虫害图像识别(李艳美, 2025, 应用数学进展)
- Crop Disease Detection using Yolo V5 on Raspberry Pi(Ubio Obu, Yash Ambekar, Harshal Dhote, Sakshi Wadbudhe, Sarika Khandelwal, Snehlata Dongre, 2023, 2023 3rd International Conference on Pervasive Computing and Social Networking (ICPCSN))
- WraNet:一种基于二维离散小波变换的轻量害虫识别网络(李 晖, 胡欣仪, 唐栩燃, 罗 伟, 赵雪如, 赵泽华, 李超然, 谭廷俊, 2024, 图像与信号处理)
- A Hybrid Deep Learning Approach for Robust Plant Disease Detection(Komal Mishra, Deepak Juneja, Divya Thakur, Harpreet Singh Saghra, Subharun Pal, Subodh Bansal, 2025, 2025 2nd Global AI Summit - International Conference on Artificial Intelligence and Emerging Technology (AI Summit))
- 基于FPGA动态视觉的植保无人机精准施药系统设计(吴建军, 赵 波, 王 晴, 2026, 软件工程与应用)
- Effective multi-crop disease detection using pruned complete concatenated deep learning model(R. Arun, S. Umamaheswari, 2022, Expert Syst. Appl.)
- Optimising Embedded Neural Network Inference in Smart Traps for Fruit Pest Detection via Quantization-Aware Training and FPGA Acceleration(Lucas C. Freitas, Isadora V. Dias, V. R. S. Santos, R. Ferreira, Júlio C. B. Mattos, L. Brisolara, 2025, 2025 XV Symposium on Computing Systems Engineering (SBESC))
- 基于YOLOV11-SMALL的轻量化脐橙病虫害检测研究(李奕飞, 刘嘉虎, 2025, 传感器技术与应用)
- 基于改进型轻量化的YOLOv5玉米病害检测(张立伟, 肖国锋, 高东浩, 艾山江·阿卜杜拉, 2025, 计算机科学与应用)
- End-to-End Implementation of Efficient YOLO on NPU for Real-Time Plant Disease Detection(Wenying Zhang, Shih-Pang Tseng, Lei Jiang, 2025, 2025 13th International Conference on Orange Technology (ICOT))
- YOLOv5 Revisited: A Lightweight yet Accurate Framework for Plant Disease Detection in Agricultural Applications(Huria Ali, Muhammad Imran, Saad Irfan Khan, Anees Tariq, H. Dawood, Hussain Dawood, 2025, Applied Fruit Science)
- A lightweight convolutional neural network for tea leaf disease and pest recognition(Xiaojie Wen, Qi Liu, Xuanyuan Tang, Fusheng Yu, Jing Chen, 2025, Plant Methods)
UAV Remote Sensing Monitoring and Large-Area Precision Control
This group of studies examines the combination of AI and UAV technology, using onboard RGB, multispectral, or hyperspectral sensors for wide-area field inspection. Topics include early detection of disease over large areas, severity mapping, automated path planning, and closed-loop integration with intelligent spraying systems, achieving an end-to-end workflow from monitoring to precision pesticide application.
- 基于轻量化AI与边缘计算的精准农业无人机监测系统研究(赵 毅, 关其炎, 于 泉, 邵佳悦, 梁 天, 徐博航, 王家硕, 2025, 计算机科学与应用)
- Development of Agriculture Monitoring System for Eggplant Crop Using Unmanned Aerial Vehicle(Anurag Chauhan, Pushpendra Singh, Subho Upadhyay, Abhijeet Singh, 2025, 2025 IEEE North-East India International Energy Conversion Conference and Exhibition (NE-IECCE))
- Efficient Detection of Cotton Verticillium Wilt by Combining Satellite Time-Series Data and Multiview UAV Images(Jing Nie, Jiachen Jiang, Yang Li, Jingbin Li, Xuewei Chao, S. Ercişli, 2024, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing)
- Deep Learning-Powered UAV Surveillance for Automated Pest Control in Smart Farms(Vinay Bharadwaja, B. Spoorthi, M. Padma, Rajarshi Tarafdar, Dr. S. Santiago, Dr.B.Jegajothi, 2025, 2025 8th International Conference on Computing Methodologies and Communication (ICCMC))
- Deep Learning Structure for Real-time Crop Monitoring Based on Neural Architecture Search and UAV(Hicham Slimani, J. Mhamdi, A. Jilbab, 2024, Brazilian Archives of Biology and Technology)
- A Unified Transformer Model for Simultaneous Cotton Boll Detection, Pest Damage Segmentation, and Phenological Stage Classification from UAV Imagery(Sabina Umirzakova, Shakhnoza Muksimova, Abror Shavkatovich Buriboev, Holida Primova, Andrew Jaeyong Choi, 2025, Drones)
- Detection and Precision Application Path Planning for Cotton Spider Mite Based on UAV Multispectral Remote Sensing(Hua Zhuo, Mei Yang, Bei Wu, Yuqin Xiao, Jungang Ma, Yanhong Chen, Manxian Yang, Yuqing Li, Yikun Zhao, Pengfei Shi, 2026, Agriculture)
- Crop Disease and Pest Management in Agriculture via UAV Remote Sensing and Advanced Machine Learning Models(A. Punitha, S. Jayamangala, P. Joel Josephson, K. Bharathi, S. V, Kamlesh Singh, 2025, 2025 3rd International Conference on Integrated Circuits and Communication Systems (ICICACS))
- AI-Enabled Crop Management Framework for Pest Detection Using Visual Sensor Data(Asma Khan, S. Malebary, L. Dang, Faisal Binzagr, Hyoung-Kyu Song, Hyeonjoon Moon, 2024, Plants)
- Monitoring the leaf damage by the rice leafroller with deep learning and ultra-light UAV.(Lang Xia, Ruirui Zhang, Liping Chen, Longlong Li, Tongchuan Yi, Meixiang Chen, 2024, Pest management science)
- Optimizing Federated Learning for UAV-Based Crop Health Monitoring(Li Chen, Tao Li, Huaiying Sun, Kaiwen Zhi, 2025, 2025 10th International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS))
- Monitoring the Damage of Armyworm as a Pest in Summer Corn by Unmanned Aerial Vehicle Imaging.(Wancheng Tao, Xinsheng Wang, Jing-Hao Xue, W. Su, Mingzheng Zhang, D. Yin, Dehai Zhu, Zixuan Xie, Ying Zhang, 2022, Pest management science)
- Goji Disease and Pest Monitoring Model Based on Unmanned Aerial Vehicle Hyperspectral Images(Ruixin Zhao, Biyun Zhang, Chunmin Zhang, Zeyu Chen, Ning Chang, Baoyu Zhou, Ke Ke, Feng Tang, 2024, Sensors (Basel, Switzerland))
- Application Research of Unmanned Aerial Vehicle Remote Sensing Technology in Agricultural Pest and Disease Monitoring(Hui Liu, Wuping Liu, Yinsheng Cao, 2024, Journal of Engineering System)
- Design and Implementation of Multiattention Convolution Neural Network Architecture to Drones towards Detection and Control of Wheat Disease and Pest(st Vishwetha, Dr.K.Tharageswari, 2025, 2025 4th International Conference on Automation, Computing and Renewable Systems (ICACRS))
- 图像识别技术在农业病虫害监测中的实践与应用(蒋博文, 王建富, 2025, 人工智能与机器人研究)
- UAV hyperspectral remote sensor images for mango plant disease and pest identification using MD-FCM and XCS-RBFNN(D. L. Pansy, M. Murali, 2023, Environmental Monitoring and Assessment)
- Assessing a VTOL UAV-Based Digital Imaging System for Agricultural Monitoring using Low-Cost Digital Camera(Irwansyah Irwansyah, Rizki Agam Syahputra, Farid Jayadi, T. R. T. Wijaya, 2025, Philippine Journal of Agricultural and Biosystems Engineering)
- A Unmanned Aerial Vehicle-Based Image Information Acquisition Technique for the Middle and Lower Sections of Rice Plants and a Predictive Algorithm Model for Pest and Disease Detection(Xiaoyan Guo, Yuanzhen Ou, Konghong Deng, Xiaolong Fan, Ruitao Gao, Zhiyan Zhou, 2025, Agriculture)
- Exploring the Current Status of the Application of Drone Remote Sensing Technology in Agroforestry Pest Control(Chuhan Ao, Chenyang He, 2025, Highlights in Science, Engineering and Technology)
- Monitoring Sitobion avenae Infestations in Winter Wheat Using UAV-Obtained RGB Images and Deep Learning(A. Atanasov, B. Evstatiev, Asparuh I. Atanasov, Plamena D. Nikolova, A. Comparetti, 2026, Agriculture)
IoT Integration, Federated Learning, and Integrated Decision-Management Systems
These studies focus on building complete smart-agriculture ecosystems. By integrating visual AI, environmental sensors (temperature, humidity, pH), federated learning (for data privacy), and large language models, they develop web and mobile platforms offering real-time monitoring, automated trapping, risk early warning, and intelligent pesticide recommendation.
- Crop Disease Detection using Machine Learning(Abin P. Mathew, Sreehari, P. Viswajith, Abdul Rahman, V. Murali, 2023, Journal of Applied Science, Engineering, Technology and Management)
- iCrop: Enabling High-Precision Crop Disease Detection via LoRa Technology(Xu Tao, Jackson Butcher, Simone Silvestri, Flavio Esposito, 2024, 2024 33rd International Conference on Computer Communications and Networks (ICCCN))
- Image-based crop disease detection with federated learning(Denis Mamba Kabala, A. Hafiane, Laurent Bobelin, R. Canals, 2023, Scientific Reports)
- Smart Disease Severity Detection System using YOLO and CNN for Plant Health Monitoring(B. Anitha, S. Harini, M. Amirtha, S. Vijaymanikandan, 2025, 2025 Third International Conference on Emerging Applications of Material Science and Technology (ICEAMST))
- Crop Disease Detection Using AIML(Ayush Barne, Ranveer Rankhamb, Vinambra Pawar, Apurva Deshpande, 2026, International Journal of Advanced Research in Science Communication and Technology)
- AI-driven banana pest and disease management: methods, applications, challenges, and future directions(Jhih‐Rong Liao, 2025, Discover Internet of Things)
- COTTON PEST CONTROL AND COTTON YIELD IMPROVEMENT USING CONVOLUTIONAL NEURAL NETWORK(Nurali Eshonpulatovich Chorshanbiev, 2024, Theoretical & Applied Science)
- Neural Network-Guided Smart Trap for Selective Monitoring of Nocturnal Pest Insects in Agriculture(J. Hinojosa-Dávalos, Miguel Angel Robles-García, Melesio Gutiérrez-Lomelí, Ariadna Berenice Flores Jiménez, Cuauhtémoc Acosta Lúa, 2025, Agriculture)
- 全视觉有机蔬菜害虫智能监测系统(毛鑫玉, 陆吉文, 于蓉蓉, 2025, 软件工程与应用)
- E-Citrus: A Cloud-Based Citrus Pest and Disease Detection, Diagnostic and Prevention using Convolutional Neural Network(J. Anthony, Jenny Lyn V. Abamo, 2024, Journal of Innovative Technology Convergence)
- Cropable - The Crop Disease Detection WebApp(Shashwat Kumar, Archisa Kumar, Disha Goyal, Anannya Chuli, Riddhi Maniktalia, K. Deepa, 2024, E3S Web of Conferences)
- Real Time Cotton Crop Disease Detection using Deep Transfer Learning(K. A. Chavan, M. Shirdhonkar, 2024, 2024 Second International Conference on Advances in Information Technology (ICAIT))
- 基于迁移学习和ResNet34的作物病害图像识别方法(赵雪如, 吴 青, 2025, 计算机科学与应用)
- Smart Pest Monitoring and Management System with Integrated Deep Learning and Unmanned Aerial Vehicle (UAV) Technologies(Ogidi Patient C., Asogwa T.C., 2025, International Journal of Research and Innovation in Applied Science)
- CROPCARE: An Intelligent Real-Time Sustainable IoT System for Crop Disease Detection Using Mobile Vision(Garima Garg, S. Gupta, Preeti Mishra, Ankit Vidyarthi, Aman Singh, A. Ali, 2023, IEEE Internet of Things Journal)
- AI for Crop Disease Detection(Shree Rupnar, 2025, International Journal for Research in Applied Science and Engineering Technology)
- Recommendation of Pesticide for Roof Top Pest Image Using Convolutional Neural Network Model(E. Ramanujam, S. Padmavathi, Nashwa Ahmad Kamal, 2021, Int. J. Sociotechnology Knowl. Dev.)
- A Comprehensive Pest Monitoring System for Brown Marmorated Stink Bug(Lennart Almstedt, Francesco Betti Sorbelli, Bastian J. Boom, R. Calvini, E. Costi, Alexandru Dinca, Veronica Ferrari, D. Giannetti, L. Ichim, Amin Kargar, Cătălin Lazăr, Lara Maistrello, Alfredo Navarra, David Niederprüm, Peter Offermans, Brendan O’Flynn, Lorenzo Palazzetti, Niccolò Patelli, C. Pinotti, Dan Popescu, A. K. Rangarajan, Liviu Serghei, A. Ulrici, Lars C. Wolf, Dimitrios Zorbas, Leonard Zurek, 2025, IEEE Transactions on AgriFood Electronics)
- Crop Disease Detection System(Rupesh Gaikwad, 2025, International Journal of Scientific Research and Engineering Trends)
Research on AI-based agricultural pest and disease detection now spans the full chain from low-level algorithm optimization to high-level system integration. The core research direction has evolved from pure image classification to real-time localization and detection exemplified by the YOLO family. To overcome the constraints of on-farm deployment, the field is developing along two complementary lines: lightweight edge computing and UAV-based wide-area monitoring. Meanwhile, with the introduction of the Internet of Things (IoT), federated learning, and multimodal data fusion, the field is moving toward a new stage of smart agriculture that integrates hardware and software, automated decision-making, and precision control.
A total of 149 related references.
With the acceleration of agricultural modernization, pest and disease monitoring, a key link in safeguarding crop yield and quality, is shifting toward intelligent and precise approaches. Image recognition technology, with its non-contact, efficient, and real-time advantages, shows broad application prospects in agricultural pest and disease monitoring. This paper systematically reviews the core principles and development of the technology, focusing on two representative scenarios: UAV-based monitoring of wheat stripe rust and greenhouse monitoring of tomato late blight. For each scenario it analyzes image acquisition schemes, preprocessing methods, model architectures, evaluation criteria, and deployment outcomes, and compares traditional image processing, shallow machine learning, and deep learning in terms of processing efficiency, recognition accuracy, cost, and environmental adaptability, distilling the applicable boundaries and trade-offs of each technical route. To address current problems such as unstable image quality, significant background interference, weak model generalization, and high application cost, it proposes solutions at four levels: technical optimization, data construction, cost control, and application-model innovation, and builds a technology selection framework based on scenario requirements and resource conditions, providing theoretical reference and practical guidance for the deeper adoption of image recognition in agricultural pest and disease monitoring and supporting green, sustainable agriculture.
As one of the world's major agricultural producers, China has long faced serious disease threats, and accurate identification and control of crop diseases are important for national food security. Accurately identifying disease images is a key prerequisite for judging the type of infestation scientifically and responding effectively. In recent years deep learning has developed rapidly and has shown higher efficiency and accuracy than traditional machine learning methods in recognition tasks, including disease recognition. For crop disease recognition, this paper proposes a method based on an improved ResNet34 with transfer learning; by combining image augmentation, feature-layer freezing, fully connected layer optimization, and layer-wise dynamic learning-rate scheduling, it effectively improves recognition accuracy, and the model is deployed in an online recognition system.
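A minimal PyTorch sketch of the transfer-learning recipe described above (an ImageNet-pretrained ResNet34, frozen early feature layers, a new classification head, and layer-wise learning rates); the number of classes and the learning rates are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10  # assumed number of disease classes, not from the paper

# Load an ImageNet-pretrained ResNet34 backbone.
model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)

# Freeze early feature layers so only deeper layers adapt to disease images.
for name, param in model.named_parameters():
    if not name.startswith(("layer4", "fc")):
        param.requires_grad = False

# Replace the fully connected head for the disease classes.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Layer-wise learning rates: a small rate for the unfrozen backbone block,
# a larger one for the newly initialized head (illustrative values).
optimizer = torch.optim.SGD(
    [
        {"params": model.layer4.parameters(), "lr": 1e-4},
        {"params": model.fc.parameters(), "lr": 1e-3},
    ],
    momentum=0.9,
)
criterion = nn.CrossEntropyLoss()
```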
In recent years, convolutional neural networks (CNNs) have developed rapidly and been widely applied to plant disease detection. Tomato leaf disease is an important class of plant disease, so a model that can accurately identify tomato leaf diseases is needed. The DCNet model primarily uses dilated (atrous) convolution to train the network, applies batch normalization to accelerate convergence, and uses dropout to avoid overfitting; batch normalization and dropout also reduce the number of training iterations and improve the efficiency of leaf-disease classification. Experiments show that, compared with other CNN models on the tomato disease classification task, this model achieves the best results in both parameter count and classification accuracy.
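A small PyTorch sketch of a block combining the three ingredients mentioned above (dilated convolution, batch normalization, dropout); channel sizes, the dilation rate, and the dropout probability are illustrative assumptions rather than the published DCNet configuration.

```python
import torch
import torch.nn as nn

class DilatedConvBlock(nn.Module):
    """Conv block: dilated 3x3 convolution -> batch norm -> ReLU -> dropout."""

    def __init__(self, in_ch: int, out_ch: int, dilation: int = 2, p_drop: float = 0.3):
        super().__init__()
        self.block = nn.Sequential(
            # padding=dilation keeps the spatial size unchanged for a 3x3 kernel.
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch),   # speeds up convergence
            nn.ReLU(inplace=True),
            nn.Dropout2d(p_drop),     # regularization against overfitting
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)

# Example: a batch of 224x224 RGB leaf images passes through one block.
x = torch.randn(4, 3, 224, 224)
y = DilatedConvBlock(3, 32)(x)
print(y.shape)  # torch.Size([4, 32, 224, 224])
```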
Rice is one of the world's most important food crops, and its yield and quality have a major impact on food security and the agricultural economy. To address sample imbalance and feature complexity in rice pest classification, a CNN-based recognition method is proposed. The method integrates classical architectures such as ResNet and VGG and applies data augmentation and transfer learning to improve generalization and classification accuracy. In preprocessing, diverse augmentations such as rotation, scaling, and translation increase robustness to complex field environments. To handle class imbalance, class-weight adjustment is used, which particularly improves performance on small-sample classes, and an ensemble learning strategy further improves classification accuracy and stability. Experimental results show that the optimized CNN performs strongly on the test set, with an overall accuracy of 98.23%; on specific classes such as "rice leaf roller" and "asiatic rice borer" the accuracies are 96.5% and 95.6%, respectively, and the model also recognizes the small-sample "grain spreader thrips" class well. The average precision, recall, and F1-score on the test set are 96.48%, 98.41%, and 97.26%, further verifying the efficiency and robustness of the proposed model.
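A minimal sketch, assuming PyTorch, of the class-weight adjustment used above to counter class imbalance: weights inversely proportional to class frequency are passed to the cross-entropy loss (the class counts here are made-up placeholders).

```python
import torch
import torch.nn as nn

# Hypothetical per-class sample counts for an imbalanced pest dataset.
class_counts = torch.tensor([5000.0, 3200.0, 1800.0, 240.0, 95.0])

# Inverse-frequency weights: rarer classes receive larger weights.
weights = class_counts.sum() / (len(class_counts) * class_counts)
print(weights)

# The weighted loss up-weights errors on under-represented pest classes.
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 5)            # model outputs for a batch of 8
targets = torch.randint(0, 5, (8,))   # ground-truth class indices
loss = criterion(logits, targets)
```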
Driven by the practical need for intelligent pest and disease recognition in navel-orange orchards, this work optimizes a deep-learning object detector. A navel-orange pest and disease image dataset with complex backgrounds is built, and a lightweight improved model, YOLOv11-SMALL, is proposed. Based on YOLOv11n, it introduces the ADown downsampling module to reduce parameters and computation, embeds the HGNetV2 backbone to strengthen multi-scale feature extraction, and applies ASFF adaptive spatial feature fusion to improve detection of small objects against complex backgrounds. Experiments show the improved model reaches 0.975 accuracy, 0.971 mAP@0.5, and 0.912 mAP@0.5:0.95, with only 1728K parameters and a 3.6 MB model size, and achieves 25.1 FPS inference on an embedded chip, outperforming several YOLO versions as well as Faster R-CNN and RT-DETR baselines. The study provides a practical solution for deploying lightweight pest and disease detection algorithms on edge devices.
To address the low recognition accuracy of pepper pests and diseases in real environments and the large parameter counts and memory footprints of deep convolutional networks, this paper proposes a pepper pest and disease image recognition algorithm based on an improved MobileNetV2. Channel attention and spatial attention mechanisms are added to the baseline model to increase its sensitivity to informative features, and L2 regularization is added to the loss function to smooth its gradient and mitigate overfitting. Experimental results show that the improved model reaches 94.43% recognition accuracy; compared with the baseline, precision increases by 4.38%, recall by 3.38%, and F1-score by 3.88%, while the parameter count is only 2.43 M, about 0.2 M more than the baseline. The method achieves high recognition accuracy while keeping the parameter count low.
An improved ResNet34 model that jointly optimizes depthwise separable convolution and the CBAM attention mechanism achieves high accuracy, efficiency, and light weight for intelligent citrus pest and disease recognition. Depthwise separable convolution reduces computational complexity and parameter count, while the channel and spatial attention of the CBAM module adaptively focuses on key lesion regions, significantly improving representation and discrimination. Experiments show the system reaches an average precision of 96% on a self-built citrus pest and disease dataset while maintaining real-time performance and greatly compressing model size, providing a feasible solution for deployment on mobile and edge devices with strong practical value.
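A compact PyTorch sketch of the two building blocks named above, a depthwise separable convolution and a CBAM-style channel-plus-spatial attention module; it is a generic illustration under assumed channel sizes, not the exact configuration of the improved ResNet34.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise convolution followed by 1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class CBAM(nn.Module):
    """Channel attention followed by spatial attention (CBAM-style)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention: shared MLP over average- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: 7x7 conv over channel-wise mean and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

x = torch.randn(2, 64, 56, 56)
y = CBAM(64)(DepthwiseSeparableConv(64, 64)(x))  # same spatial size, attended features
```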
Corn is planted over a large area but is vulnerable to diseases that reduce yield, and farmers find corn leaf diseases difficult to identify. Taking four leaf categories, healthy, northern leaf blight, southern leaf blight, and rust, as research objects, a recognition model is built with Faster R-CNN, and an intelligent corn leaf recognition system is developed on top of it. A dataset of 5,998 images is annotated with the LabelImg tool, with a 9:1 split between training and validation sets; the annotated data are trained with a ResNet50 backbone to obtain the best-weight PTH file; the algorithm is then deployed through an API into a Django back end, the front end calls it for corn leaf detection, and a Neo4j knowledge graph presents detailed information on each disease category and its treatment. Results show a high average recognition accuracy of 96%.
With the rise of organic vegetable cultivation, pest monitoring and control have become key to ensuring vegetable quality and yield, and traditional monitoring methods can no longer meet the demands for precision and efficiency. This paper designs and implements a fully vision-based intelligent pest monitoring system. Built on the Goland platform, it uses computer vision and machine learning to automatically monitor and count pests on organic vegetables, with modules for pest detection, pest protection, data analysis, and pest-control data query. The system effectively improves the accuracy and efficiency of pest monitoring and provides strong informational support for pest control, with broad application prospects.
To address the complex backgrounds, small targets, and frequent missed detections in tea disease images, an improved YOLOv7 tea disease recognition model is proposed. The model first introduces the hybrid attention module ACmix to increase sensitivity to small targets and reduce missed detections. Second, C3 modules replace the ELAN-W modules in the Neck to improve network performance. Finally, the Alpha-IoU loss replaces the CIoU loss of the original YOLOv7 to improve localization of detection targets. Experiments show the improved model reaches 93.3% mAP, 1.8% higher than YOLOv7, with higher FPS and 3.5 M fewer parameters. The work can support intelligent monitoring equipment for tea plantation diseases.
To overcome the reliance on manual inspection, low efficiency, and weak generalization of traditional tea disease detection, this study builds an intelligent system integrating high-precision detection and intelligent analysis. Technically, the system uses a lightweight YOLOv12 model as the core detection algorithm and a Dash-based front-end interaction platform. The intelligent analysis module integrates a locally deployed DeepSeek-R1-14B large language model, which connects a disease database (D1) and a tea knowledge base (D2) to achieve closed-loop management from data acquisition and real-time detection to intelligent diagnosis and decision-making. To ensure data-source quality, images are acquired with a Basler ace2 industrial camera rated IP65/67. Experimental results show that the YOLOv12 model reaches mAP@0.5 of 0.955 on a dataset covering three diseases: tea algal leaf spot, tea brown blight, and tea gray blight. The model has only 2.56 M parameters and 6.3 GFLOPs, with an end-to-end detection speed of 189.30 FPS, achieving high timeliness. The solution delivers a smart-agriculture system that balances data security and detection efficiency.
This paper proposes a corn leaf disease detection method based on an improved lightweight YOLOv5, replacing the backbone with GhostNet modules and optimizing the neck with GSConv modules, which significantly reduces parameters and computational complexity. Experiments show that the improved YOLOv5s-Ghost-GSConv-seg maintains high detection accuracy while reducing parameters by 37.5% and computation by 20.8%, and increasing inference speed by 15.6%. Through multi-scale feature fusion, the model balances accuracy and real-time performance, offering an efficient solution for lightweight object detection in intelligent agriculture.
Pine wilt disease (PWD) is a fast-spreading and highly destructive forest disease that seriously threatens China's forest ecological security and causes large forestry losses. Given China's vast forest area and the difficulty and cost of manual patrols, UAV remote sensing has become an effective way to monitor diseased trees and contain the spread of PWD. Although current PWD detection algorithms perform relatively well, detection still needs improvement given the strong infectivity of the disease. This paper therefore proposes DA-YOLO, a dual-attention hybrid model based on YOLOv5, for more effective detection of diseased-tree regions. The algorithm uses a self-attention-based CoT module to strengthen feature extraction in the backbone and combines the ECA attention mechanism to improve overall localization accuracy. Experiments on a PWD remote-sensing dataset show the model improves AP@0.5:0.95 by 5.2 percentage points over the baseline. DA-YOLO is also compared with Faster R-CNN, RetinaNet, YOLOv5, YOLOv6, YOLOx, and YOLOv7 in terms of model complexity and accuracy for detecting pine-wilt-infected trees, and the results show that DA-YOLO has clear advantages.
Farmland in China is increasingly affected by insect pests, and pest-situation analysis allows control plans tailored to the pest conditions of different regions. Traditional pest analysis relies on manual collection and counting, which is time-consuming and laborious. With the development of deep learning in computer vision, this paper proposes a farmland pest detection model combining YOLO-V5l object detection with a ResNet50 classifier. Insects in image data show diverse postures, missing scales, and detached limbs, which strongly affect detection and classification; therefore 28 pest species are first coarsely grouped into seven classes (A to G) by body shape and color, YOLO-V5l detects and counts them, and the detections are passed to the ResNet50 recognition model to determine the species, greatly reducing the false-detection rate. In addition, a prediction-enhancement algorithm is proposed: the image to be detected is augmented, each version is fed into the recognition model, and the results are combined by weighted averaging to obtain the final result. The single YOLO-V5l model achieves mAP@.5:.95 of 71.4%, average precision of 80.91%, and a missed-detection rate of 5.39%; the proposed pipeline reaches an average precision of 89.56%, improving recognition accuracy for farmland pests. The model addresses the shortcomings of manual counting and advances smart agriculture in China.
To address the large computational overhead, weak recognition of small-scale targets, and severe complex-background interference common in agricultural pest detection, this paper proposes a lightweight detector named C3Ghost-EMA YOLOv8. Built on YOLOv8, it introduces the GhostConv lightweight convolution operator and C3Ghost modules to optimize the network structure, effectively shrinking parameters and computation while preserving feature representation. An efficient multi-scale attention mechanism (EMA) embedded in the neck uses cross-dimensional parallel interaction and multi-scale feature fusion to strengthen perception and localization of small pest targets. Experiments on a self-built IP9 pest dataset show that the method achieves significant lightweighting while maintaining high accuracy: only 1.91 M parameters and 5.7 GFLOPs, about 39.4% and 36.0% less than the YOLOv8 baseline, with mAP@0.5 of 81.3%, 5.9% higher than the original model. The results verify that C3Ghost-EMA YOLOv8 strikes a good balance between detection accuracy and inference efficiency, offering an effective solution for real-time pest detection in resource-constrained settings.
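A minimal PyTorch sketch of the Ghost convolution idea referenced above (a standard convolution produces a few intrinsic feature maps, and cheap depthwise operations generate the remaining "ghost" maps); the ratio and kernel sizes are commonly used defaults, not necessarily the paper's settings.

```python
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    """Ghost convolution: half the output channels from a regular conv,
    the other half generated cheaply by a depthwise conv on those maps."""

    def __init__(self, in_ch, out_ch, ratio=2):
        super().__init__()
        init_ch = out_ch // ratio
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, 1, bias=False),
            nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, out_ch - init_ch, 3, padding=1, groups=init_ch, bias=False),
            nn.BatchNorm2d(out_ch - init_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary(x)                       # intrinsic feature maps
        return torch.cat([y, self.cheap(y)], 1)   # append cheap "ghost" maps

x = torch.randn(1, 64, 80, 80)
print(GhostConv(64, 128)(x).shape)  # torch.Size([1, 128, 80, 80])
```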
In recent years, artificial intelligence has been widely applied to pest recognition, but current deep-network pest recognizers still suffer from heavy computation and poor performance on pests against complex backgrounds. To reduce computation, this paper proposes a new lightweight network, WraNet, which uses a two-dimensional discrete wavelet transform module for feature mixing and lets the model learn strong image priors such as scale invariance, shift invariance, and edge sparsity; a single 2D discrete wavelet transform layer thus approaches the effect of multiple deep layers, reducing computation and model size. The paper also proposes WraNet-m, which ensembles WraNet, ResNet50, and FPN models by soft voting to further improve recognition. WraNet-m reaches 72.44% and 99.52% accuracy on the IP102 and D0 pest datasets, respectively, demonstrating the effectiveness and robustness of the ensemble approach.
To tackle the detection of houseflies on sticky fly paper, where targets are small, vary in shape, and appear against complex backgrounds, this paper proposes an improved YOLOv5-based detection algorithm. To reflect real application scenarios, 500 sticky-paper images were collected and annotated, 400 for training and 100 for testing. The improvements focus on the detection head: a Zoom_cat module aligns and fuses multi-scale features, a ScalSeq module strengthens feature-sequence processing, and an attention mechanism highlights target regions, improving feature extraction and localization for small targets. Experiments show the improved model significantly outperforms the original YOLOv5m on mAP, precision, and recall, verifying its effectiveness and robustness for housefly detection. The study offers an efficient, accurate method for counting houseflies on sticky paper and new ideas for small-object detection.
Butterflies are sensitive to their surroundings and serve as indicator species for ecological conditions, so recognizing them is important for studying ecosystem stability. However, butterflies are finely classified and highly similar, and traditional recognition methods are inefficient. To solve these problems, this paper targets automatic species recognition from wild butterfly images and proposes an improved detection method based on YOLOv5s. To reduce information loss and improve accuracy, a CSandGlass module replaces the residual module in the YOLOv5s backbone, an SE attention mechanism is added, and the loss function is improved. Experiments show the improved model achieves 92.6% mean average precision, 2% higher than the original, with strong robustness and stability, meeting the needs of butterfly species recognition in natural environments.
Butterflies play an important role in ecosystem stability: they help pollinate plants and indicate changes in their habitat. To address the low recognition rate of butterfly species in natural environments, this paper proposes a recognition model combining multi-scale features and transfer learning. First, a focal loss handles the imbalanced class distribution of the dataset; second, transfer learning improves accuracy and speeds up convergence; finally, an atrous spatial pyramid pooling module extracts information at different scales from butterfly images. Experiments show the proposed method reaches an average recognition accuracy of 98.03%, 4.88% higher than the original model and clearly better than other comparison models, providing technical support for butterfly species recognition in natural environments.
To address the core technical bottlenecks of plant-protection UAVs in precision spraying, "seeing the target but missing it" and "having the compute but not the flight time", this study proposes an edge-intelligent spraying system based on FPGA dynamic vision. Built on a dynamic vision processing architecture, it fuses edge computing with an optimized adaptive-threshold algorithm to achieve a fast closed loop of perception, decision, and actuation. Tests show low latency, high recognition accuracy, and low false-detection rates in complex field environments, along with markedly better energy efficiency. Compared with conventional solutions, the system clearly improves spraying precision, operating efficiency, and endurance, offering a feasible path for the intelligent upgrade of plant-protection UAVs.
This design targets the low efficiency and insufficient data precision of traditional agricultural monitoring and builds an intelligent agricultural monitoring solution that combines artificial intelligence with UAV technology. The system carries high-precision multispectral and thermal imaging sensors and performs low-altitude UAV flights to rapidly collect key information such as crop growth status, helping reduce costs and improve efficiency in agricultural production and providing technical support and a practical reference for smart agriculture.
Fusarium wilt is one of the most serious tomato diseases, so its early identification is important. This study takes the roots of wilt-infected tomatoes as experimental objects and applies image processing: root images are first edge-detected with the extended difference of Gaussians (XDoG), and wilt is detected in the HSV color space. For samples whose roots show no color change, disease-related root-shape parameters are extracted and combined with parameters from a root scanner to build a random forest (RF) detection model, achieving a recognition rate of 92.64%. To shorten runtime and improve accuracy, principal component analysis (PCA) is introduced to build a PCA-RF model, which reduces runtime by 62.13% and raises the average recognition rate by 2.62%. Results show that, compared with commonly used recognition algorithms, the PCA-RF model achieves higher detection accuracy, providing an efficient and stable method for identifying tomato fusarium wilt.
Tea disease samples collected in natural scenes have complex backgrounds and imbalanced class counts. Considering the characteristics of tea diseases, a recognition method based on an improved MobileNet V2 is proposed. A coordinate attention mechanism is introduced into the inverted residual structure of MobileNet V2 so that the network focuses on target regions, reduces interference from irrelevant information, and learns tea-disease features effectively. The cross-entropy loss is replaced with a focal loss to cope with the class imbalance that otherwise hurts training. Validation experiments on a tea disease dataset show the improved MobileNet V2 reaches a 96.31% recognition rate with only 2.27 MB of parameters, offering better cost-effectiveness than other models. The improved network efficiently recognizes tea diseases in natural environments and provides a new approach for tea disease recognition.
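A minimal sketch, assuming PyTorch, of the focal loss that this and several other papers above substitute for cross entropy on imbalanced data; gamma and alpha are the commonly cited defaults, not necessarily the values used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    """Multi-class focal loss: down-weights easy examples so training
    concentrates on hard or rare classes (FL = alpha * (1 - p_t)^gamma * CE)."""

    def __init__(self, gamma: float = 2.0, alpha: float = 0.25):
        super().__init__()
        self.gamma, self.alpha = gamma, alpha

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        ce = F.cross_entropy(logits, targets, reduction="none")  # -log p_t
        p_t = torch.exp(-ce)                                     # probability of the true class
        loss = self.alpha * (1.0 - p_t) ** self.gamma * ce
        return loss.mean()

logits = torch.randn(16, 7)             # batch of 16, 7 disease classes
targets = torch.randint(0, 7, (16,))
print(FocalLoss()(logits, targets))
```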
Objective: To address the low efficiency and subjectivity of traditional apple quality inspection, this work studies a deep-learning algorithm for intelligent apple quality detection, automating accurate recognition of surface defects, ripeness, and quality grade. Methods: A large dataset of 15,000 high-resolution apple images covering six major cultivars, including Red Fuji, Gala, and Golden Delicious, is built; based on the YOLOv8 architecture, a multi-scale feature fusion module (MSFM), a convolutional block attention module (CBAM), and an improved Focal Loss are introduced to design a model for apple quality detection, and transfer learning and data augmentation optimize its performance. Results: The improved algorithm achieves 96.8% accuracy, 95.9% precision, 96.2% recall, 96.0% F1-score, 93.7% mAP, and a processing speed of 45.1 frames per second; compared with the baseline YOLOv8, these metrics improve by 4.5%, 4.3%, 5.4%, 4.8%, 5.8%, and 6.9 FPS, respectively; defect detection accuracy exceeds 94% for surface spots, bruises, and discoloration. Conclusion: The improved deep-learning algorithm can efficiently and accurately automate apple quality inspection, providing an effective technical solution for the modern fruit processing industry.
Agriculture is an essential sector that plays a necessary role in the economic improvement of a country. Predicting plant diseases at the earliest stage can lead to better yields and a sustainable food supply for a growing population. The conventional method requires highly skilled inspectors to identify the phenotypic expression of different diseases. Alternatively, biochemical technologies offer more precise means of obtaining crop disease information by analyzing susceptible rice, but these methods are time-consuming, expensive, laboratory-dependent, and require skilled professionals, rendering them unaffordable for most farmers. This paper proposes a solution to prevent infection at the earliest stage for the benefit of farmers. A novel crop disease detection model, named DC-GAN-MDFC-ResNet, combines a deep convolutional generative adversarial network (DC-GAN) with a multidimensional feature compensation residual neural network (MDFC-ResNet) and targets fine-grained disease identification across three diseases: bacterial leaf blight, leaf streak, and panicle blight. The input data first undergo preprocessing, including data improvement, data normalization, and singular value decomposition (SVD), to reduce the negative influence of the dataset on model training. Compared with traditional convolutional models, the proposed DC-GAN-MDFC-ResNet architecture offers higher classification accuracy, a segmentation-free methodology, and more stable training. Experiments on the PlantVillage dataset show that the proposed technique achieves a recognition rate of 95.99% accuracy and generates higher-quality samples than other well-known deep learning models.
Agriculture is an important sector that plays an essential role in the economic development of a country. Each year farmers face numerous challenges in producing good quality crops. One of the major reasons behind the failure of the harvest is the use of unscientific agricultural practices. Moreover, every year enormous crop loss is encountered either by pests, specific diseases, or natural disasters. It raises a strong concern to employ sustainable advanced technologies to address agriculture-related issues. In this article, a sustainable real-time crop disease detection and prevention system, called CROPCARE, is proposed. The system integrates mobile vision, Internet of Things (IoT), and Google Cloud services for sustainable growth of crops. The primary function of the proposed intelligent system is to detect crop diseases through the CROPCARE mobile application. It uses the super-resolution convolutional neural network (SRCNN) and the pretrained model MobileNet-V2 to generate a decision model trained over various diseases. To maintain sustainability, the mobile app is integrated with IoT sensors and Google Cloud services. The proposed system also provides recommendations that help farmers know about current soil conditions, weather conditions, disease prevention methods, etc. It supports both Hindi and English dictionaries for the convenience of the farmers. The proposed approach is validated using the PlantVillage dataset, and the obtained results confirm the performance strength of the proposed system.
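A minimal PyTorch sketch of the SRCNN super-resolution network mentioned above (the classic three-layer patch extraction, non-linear mapping, and reconstruction design), used here only to illustrate how a low-resolution leaf photo could be enhanced before classification; the kernel sizes follow the original SRCNN paper, and the weights are untrained placeholders rather than the CROPCARE model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNN(nn.Module):
    """Three-layer SRCNN: 9x9 feature extraction, 1x1 non-linear mapping,
    5x5 reconstruction, applied to an image already upscaled by interpolation."""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, 64, kernel_size=9, padding=4)
        self.conv2 = nn.Conv2d(64, 32, kernel_size=1)
        self.conv3 = nn.Conv2d(32, channels, kernel_size=5, padding=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        return self.conv3(x)

# Usage: bicubic-upscale a low-resolution leaf image, then refine it with SRCNN
# before passing the result to a MobileNet-V2 classifier (untrained weights here).
low_res = torch.randn(1, 3, 112, 112)
upscaled = F.interpolate(low_res, scale_factor=2, mode="bicubic", align_corners=False)
enhanced = SRCNN()(upscaled)
print(enhanced.shape)  # torch.Size([1, 3, 224, 224])
```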
Plant disease detection is a critical task in agriculture, directly impacting crop yield, food security, and sustainable farming practices. This study proposes FourCropNet, a novel deep learning model designed to detect diseases in multiple crops, including CottonLeaf, Grape, Soybean, and Corn. The model leverages an advanced architecture comprising residual blocks for efficient feature extraction, attention mechanisms to enhance focus on disease-relevant regions, and lightweight layers for computational efficiency. These components collectively enable FourCropNet to achieve superior performance across varying datasets and class complexities, from single-crop datasets to combined datasets with 15 classes. The proposed model was evaluated on diverse datasets, demonstrating high accuracy, specificity, sensitivity, and F1 scores. Notably, FourCropNet achieved the highest accuracy of 99.7% for Grape, 99.5% for Corn, and 95.3% for the combined dataset. Its scalability and ability to generalize across datasets underscore its robustness. Comparative analysis shows that FourCropNet consistently outperforms state-of-the-art models, such as MobileNet, VGG16, and EfficientNet, across various metrics. FourCropNet’s innovative design and consistent performance make it a reliable solution for real-time disease detection in agriculture. This model has the potential to assist farmers in timely disease diagnosis, reducing economic losses and promoting sustainable agricultural practices.
Sugarcane is a vital crop worldwide, and its production is severely impacted by various diseases. Early detection of these diseases is crucial for preventing significant yield losses. This research proposes a deep learning-based approach for detecting sugarcane crop diseases using DenseNet and Sequential models. The proposed models utilize convolutional neural networks (CNNs) to extract features from sugarcane images and classify them into different disease categories. The Sequential model attains a high accuracy of 94%, while the DenseNet model achieves 75%. The results demonstrate that the proposed models can effectively detect sugarcane crop diseases, enabling farmers and agricultural experts to take timely measures to prevent disease spread and reduce yield losses. This research contributes to the development of precision agriculture techniques, promoting sustainable and efficient sugarcane production.
Agriculture is a fundamental component of human civilization: it contributes to the economy while also providing sustenance. Plant foliage and crops are susceptible to many diseases during cultivation, and these diseases impede the development of the affected species. Timely and accurate identification and categorization of diseases can mitigate the risk of further harm to the plants, yet such identification and categorization remain significant challenges. The conventional methods farmers use to anticipate and categorize plant leaf diseases can be tedious and inaccurate, and errors may occur when disease types are predicted manually. Failure to promptly identify and categorize plant diseases can lead to the devastation of crops and a substantial reduction in yield. Farmers who use computerized image processing techniques in their fields can mitigate losses and enhance output. A multitude of strategies has been applied to identify and categorize plant diseases from photographs of diseased leaves or crops. In this research, convolutional neural networks (CNNs) are used for image recognition and classification because of their intrinsic ability to autonomously extract relevant visual characteristics and capture spatial hierarchies. Deep learning, mostly via convolutional neural networks, is therefore favoured for sophisticated image recognition and classification tasks when substantial data and computing resources are available, and it demonstrates effective detection and classification results on the studied datasets. This methodology seeks to enhance productivity, minimize crop losses, and foster sustainable agricultural practices by providing valuable information and automating disease identification. The multilingual solution guarantees inclusion for diverse agricultural communities by automating disease detection and providing actionable information.
Crop disease detection and management is critical to improving productivity, reducing costs, and promoting environmentally friendly crop treatment methods. Modern technologies, such as data mining and machine learning algorithms, have been used to develop automated crop disease detection systems. However, centralized approach to data collection and model training induces challenges in terms of data privacy, availability, and transfer costs. To address these challenges, federated learning appears to be a promising solution. In this paper, we explored the application of federated learning for crop disease classification using image analysis. We developed and studied convolutional neural network (CNN) models and those based on attention mechanisms, in this case vision transformers (ViT), using federated learning, leveraging an open access image dataset from the “PlantVillage” platform. Experiments conducted concluded that the performance of models trained by federated learning is influenced by the number of learners involved, the number of communication rounds, the number of local iterations and the quality of the data. With the objective of highlighting the potential of federated learning in crop disease classification, among the CNN models tested, ResNet50 performed better in several experiments than the other models, and proved to be an optimal choice, but also the most suitable for a federated learning scenario. The ViT_B16 and ViT_B32 Vision Transformers require more computational time, making them less suitable in a federated learning scenario, where computational time and communication costs are key parameters. The paper provides a state-of-the-art analysis, presents our methodology and experimental results, and concludes with ideas and future directions for our research on using federated learning in the context of crop disease classification.
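A minimal sketch, assuming PyTorch, of the FedAvg-style aggregation at the heart of the federated setup described above: each participating site trains locally on its own leaf images, and only the model weights are averaged on a server, weighted by local dataset size. The weighting scheme and round structure are generic, not the paper's exact protocol.

```python
import copy
import torch
import torch.nn as nn

def local_update(model, loader, epochs=1, lr=1e-3):
    """Train a copy of the global model on one client's private images."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    local.train()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss_fn(local(images), labels).backward()
            opt.step()
    return local.state_dict()

def fed_avg(client_states, client_sizes):
    """Server step: average client weights, weighted by local dataset size."""
    total = float(sum(client_sizes))
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(
            state[key].float() * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return avg

# One communication round (client_loaders would be a list of private DataLoaders):
# global_model.load_state_dict(
#     fed_avg([local_update(global_model, dl) for dl in client_loaders],
#             [len(dl.dataset) for dl in client_loaders]))
```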
Crop diseases, as one of the major problems in global agricultural production, lead to crop yield reduction, plant death, and even total crop loss, with serious impacts on farmers and the food supply. Traditionally, crop diseases are identified by visual inspection based on the experience of farmers and agricultural experts, a method that not only consumes human resources but also carries a certain degree of subjectivity and inaccuracy. The development of artificial intelligence technology successfully achieves real-time monitoring, automatic identification, and intelligent decision-making by combining Internet of Things (IoT) and cloud computing technology. Herein, we propose an EfficientNet Convolutional Group-Wise Transformer (EGWT) architecture. The local features of crop leaf images are extracted by EfficientNet convolution and then input into a group-wise transformer architecture. In the group-wise transformer, the input features are divided into multiple groups, and an attention mechanism is used within each group to calculate correlations between features. After the intra-group attention is computed, the output features of each group are concatenated to form the final output features. Our proposed model achieves 99.8% accuracy on the PlantVillage dataset, 86.9% accuracy on the cassava dataset, and 99.4% accuracy on the Tomato leaves dataset, with the smallest parameter count (23.04M) among state-of-the-art convolution-transformer hybrid models. The experimental results indicate that the proposed model offers the best accuracy and model complexity so far compared with other neural networks based on CNNs, transformers, and CNN-transformer hybrids.
No abstract available
Crop disease recognition is a fundamental keystone in enabling disease control, limiting disease spread, and mitigating farmers’ losses. Recently, advanced image processing techniques for crop disease detection, based on deep learning, have gained significant popularity. However, the practical deployment of these models in real farms remains challenging. This is mostly due to the lack of Internet connectivity which prevents the transmission of the acquired images to sufficiently powerful edge/cloud servers to execute such complex models. LoRa has emerged as a promising network solution for rural areas, thanks to its extensive communication range and cost-efficient deployment. However, the low data rate of this technology prevents its effective application for the transmission of large images for crop disease detection. In this paper, we propose a LoRa-based framework called iCrop. iCrop enables high disease classification accuracy while exploiting the cost-effectiveness of LoRa transmission technologies. Specifically, iCrop is based on a LoRa Node, which captures crop leaf images and preprocesses them through image segmentation. The node selects and transmits the most informative segments over LoRa to the LoRa Edge Server. The server, in turn, runs the disease classification using a Convolutional Neural Network (CNN) deep learning model empowered with majority voting among segments. To prevent data losses, typical of LoRa transmission, we develop a reliable transmission protocol on top of LoRa, which takes care of retransmissions and efficient communication. Extensive experiments on a real LoRa testbed show the advantages over two comparison approaches with respect to several performance metrics.
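A small Python sketch of the segment-level majority voting described above: a leaf image is split into segments, a subset of informative segments is assumed to arrive over LoRa, a classifier scores each received segment, and the final label is the majority vote. The segment-selection heuristic and class names are illustrative placeholders, not iCrop's actual protocol.

```python
from collections import Counter
from typing import Callable, List, Sequence

def majority_vote(segment_labels: Sequence[str]) -> str:
    """Return the class predicted most often across the received segments."""
    return Counter(segment_labels).most_common(1)[0][0]

def classify_leaf(segments: List[object],
                  classifier: Callable[[object], str],
                  max_segments: int = 4) -> str:
    """Classify a leaf from its most informative segments.

    `segments` are image crops produced by the on-node segmentation step;
    only `max_segments` of them are assumed to survive the LoRa uplink.
    """
    received = segments[:max_segments]          # stand-in for LoRa transmission
    votes = [classifier(seg) for seg in received]
    return majority_vote(votes)

# Example with a dummy classifier that returns precomputed labels.
fake_predictions = iter(["leaf_blight", "healthy", "leaf_blight", "leaf_blight"])
label = classify_leaf(segments=[object()] * 4,
                      classifier=lambda seg: next(fake_predictions))
print(label)  # leaf_blight
```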
No abstract available
Most of the work done in image processing-based crop disease detection focuses on images with plain backgrounds. This paper presents a technique for crop disease detection in complex real-field background images. A segmentation technique is presented to extract leaf patches from the entire image, and transform-domain cepstral analysis is proposed for obtaining cepstral coefficients to attain two-level classification. The first level classifies the crop species, while the second level classifies each species into healthy leaves or leaves with a specific type of disease. The work is tested on three crops, Banana, Soybean, and Grape, on both plain-background laboratory images and complex real-field images. The suggested technique gives species-level accuracy of 94.33%, 94.11%, and 98.44% and disease-level average accuracy of 97.75%, 96.66%, and 97.95% for Banana, Soybean, and Grape, respectively. Comparison with standard features such as texture and shape indicates that the presented technique gives the best results for both plain and complex background images, suggesting its use in crop disease detection to reduce agricultural and economic losses.
The agricultural industry has grown significantly, bringing sustainable farming practices that improve food quality, enhance agricultural productivity, and strengthen global food security. However, crop yield and quality suffer when crop diseases are not identified and managed promptly, so crop disease detection is a critical aspect of smart agriculture. Several researchers have addressed crop disease detection by developing optimized solutions through machine learning, deep learning, and image processing techniques. However, these techniques face serious challenges when deployed for real-time crop disease identification, such as the need for annotated training images, capturing long-range dependencies, and adapting to changes that evolve over time. The resulting performance degradation motivates more powerful approaches, in particular transformer models. In this paper, a novel crop disease detection model known as CropViT is proposed based on the Vision Transformer. CropViT was developed by fine-tuning the architecture of an existing Vision Transformer, and the PlantVillage dataset was used in the experimental study. Nine crop species were selected and used to train CropViT, whose performance was then compared with a traditional Convolutional Neural Network model. The experimental study showed a mean accuracy of 98.64% for CropViT compared with 95.52% for the traditional CNN, and CropViT also outperformed other existing state-of-the-art models. The paper concludes with a discussion of the experimental results and future work.
Crop disease is a serious challenge to the agricultural production system. At the levels of production, storage, and transportation, it impairs yield quantity and quality. Given that pests and viruses account for 50% of output losses in today's world, crop disease identification is essential; unchecked disease leads to wasted effort, widespread interruption of the food supply, and a significant increase in the number of hungry individuals. Farmers cannot consistently assemble the professional infrastructure and agricultural expertise that are needed. This research uses a deep transfer learning model to develop a real-time mobile application for cotton crop disease detection that can be utilized by any farmer without special knowledge. The Cotton Crop Disease Dataset from Kaggle was used to train, test, and build the model. The dataset contains 2,310 images split into four classes: Diseased Cotton Leaf, Diseased Cotton Plant, Fresh Cotton Leaf, and Fresh Cotton Plant. To design an optimal CNN, three transfer learning approaches, MobileNetV2, InceptionV3, and ResNet50, were built and tested. MobileNetV2 with the Adam optimizer was selected as it gives 98% training and 93% test accuracy.
Detecting diseases in crops is a vital yet labor-intensive task in agriculture, often demanding extensive time and expert knowledge. This paper presents an innovative approach to crop disease detection using advanced computer vision and machine learning techniques. By automating the identification of common crop diseases, this system aims to reduce the reliance on expert intervention, expedite the diagnosis process, and ultimately improve crop management efficiency. The proposed method integrates deep learning models trained on a diverse dataset of diseased and healthy crop images, achieving high accuracy in disease recognition. This approach not only saves time but also provides farmers with a powerful tool to protect their crops from potential threats, thereby contributing to increased agricultural productivity and sustainability.
No abstract available
Crop diseases and pests cause significant economic losses to agriculture every year, making accurate identification crucial. Traditional pest and disease detection relies on farm experts, which is often time-consuming. Computer vision technology and artificial intelligence can provide automated disease detection, enabling real-time precise control of crop diseases and timely prevention measures. To accurately identify plant diseases under complex natural conditions, we developed an improved crop pest and disease recognition model based on the original YOLOv5 network. First, we integrated the Squeeze-and-Excitation (SE) module into YOLOv5, allowing our proposed model to better distinguish leaf features of different crops and accurately identify disease types. Second, to enhance the model’s feature extraction capability for diseased areas and reduce the loss of disease feature information, we replaced the original Up-sample module in YOLOv5 with a lightweight up-sampling operator, the CARAFE module. Third, we improved the original loss function using the EIoU loss function to increase the model’s detection accuracy. Lastly, to reduce model complexity and meet real-time detection requirements, we introduced the Ghost Convolution module into the backbone network. During the experimental phase, to validate the model’s effectiveness, we randomly divided sample images from the constructed crop pest and disease database into training, validation, and test sets. Experimental results showed that the improved YOLOv5 model achieved an accuracy of 90.0%, a recall rate of 91.4%, mAP@.50 of 92.1%, and mAP@.5:.95 of 64%. The parameter count and computational load were reduced by 23.9% and 31.2%, respectively, outperforming popular methods including YOLOv5, YOLOv7, and YOLOv8. The improved model can accurately identify crop pests and diseases under natural conditions and is suitable for deployment in real-world applications, providing a technical reference for crop pest and disease management.
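A minimal PyTorch sketch of the Squeeze-and-Excitation (SE) block integrated into YOLOv5 above: global average pooling squeezes each channel to a scalar, a small bottleneck MLP produces per-channel weights, and the feature map is rescaled by those weights. The reduction ratio is the usual default, not necessarily the paper's setting.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channels using globally pooled statistics."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        scale = self.fc(x.mean(dim=(2, 3)))       # squeeze + excitation -> (b, c)
        return x * scale.view(b, c, 1, 1)         # channel-wise rescaling

# Example: attach SE to a feature map from a detection backbone stage.
features = torch.randn(2, 256, 40, 40)
print(SEBlock(256)(features).shape)  # torch.Size([2, 256, 40, 40])
```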
Agriculture is the primary source of food for the world's population, despite the rapid increase in population. Early detection of plant diseases in the field would be beneficial to improve crop production efficiency. Technology has become increasingly important in agriculture in recent years, as it is used to improve efficiency, reduce costs, and increase yields. The emergence of accurate techniques in the field of leaf-based image classification has shown impressive results. Our proposed work includes various phases of implementation of image classification, namely dataset creation, feature extraction, training the classifier, and classification. The work also included hardware design and implementation, as well as software programming for the microcontroller unit of the detector. The system utilized the microcontroller to receive and send data from the various sensors to an online database.
Agriculture, deeply rooted in India's history, remains the backbone of the country's economy and a primary source of employment. Despite this age-old practice, smallholder farmers in India often face problems, especially in accessing the critical information that would allow them to make informed farming decisions. In an era where knowledge is the foundation of advancement, a lack of widely accessible agricultural knowledge can hurt farmers' productivity and profitability. Crop health is an important factor in agricultural productivity, and crop diseases can have major implications for quality and quantity as well as farmers' livelihoods. This technology aims to assist farmers, increase productivity, and contribute to agricultural sustainability by providing up-to-date knowledge on crop health, disease identification, and optimal farming practices. Crop disease identification used to rely on digital image processing, but recent advances in deep learning have substantially outperformed those earlier techniques. In this paper, we focus on the effectiveness of deep learning for crop disease identification. Our research is based on a database of images of diseased and healthy crop leaves, with a focus on crop diseases prevalent in India. An application has been created with this idea in mind to aid farming practice by detecting plant disease. Two input formats can be given to the application: images and videos of diseased plant leaves. YOLOv8 is the deep learning model used in this application to identify the disease.
Turmeric and ginger are economically vital spice crops cultivated for their underground rhizomes, which are highly susceptible to conditions such as soft rot, rhizome rot, and bacterial wilt. Traditional detection methods rely on visual inspection or post-harvest diagnosis, often resulting in delayed treatment and significant yield loss. This research proposes an AI-driven framework for early rhizome disease detection using a multimodal approach that integrates deep learning, hyperspectral imaging, and IoT-based environmental sensing. Convolutional Neural Networks (CNNs), enhanced through transfer learning, are employed to classify rhizome health from subterranean image data, while sensor fusion techniques correlate soil moisture, temperature, and pH with disease onset. The system also incorporates time-series forecasting and natural language interfaces to deliver real-time alerts and treatment recommendations to growers. By focusing on rhizome-level analysis, an area largely overlooked in the existing literature, this study aims to improve diagnostic accuracy, reduce crop losses, and promote sustainable spice farming through intelligent, accessible technology.
No abstract available
No abstract available
Crop disease detection in the actual field is difficult due to the unstructured environment. Images taken on farms are dominated by green plants and leaves, among which the slightly differing diseased regions are highly challenging to identify, even with the naked eye. This study contributes to detecting chilli leaf disease in images collected under actual field conditions. An RGB camera was used to collect images for the dataset from a chilli field. The research utilized one of the state-of-the-art deep learning object detection models, YOLOv5, which is very efficient in terms of detection speed and accuracy. The model performed well despite the small dataset, with a mean average precision (mAP) of 0.461. The proposed deep learning object detection model is appreciably promising for disease detection in chilli crops.
This research paper presents a novel approach for detecting crop diseases using YOLO v5 and Raspberry Pi. The proposed method employs YOLO v5, a state-of-the-art object detection algorithm, to analyse images of crops and detect infected leaves. The results are then processed by a Raspberry Pi, a low-cost and low-power computer, to make predictions about the presence and type of disease. The experiment was conducted on a dataset of crop images, and the results showed that the proposed method achieved high accuracy in detecting and classifying crop diseases. This work demonstrates the potential of using YOLO v5 and Raspberry Pi for efficient and cost-effective disease detection in agriculture. The paper also outlines the procedures followed in the implementation and the different techniques used to increase its efficiency.
No abstract available
No abstract available
Agricultural productivity plays a key role in economic development, yet the food supply is threatened by crop diseases. With the worldwide spread of technology, it is now possible to use image processing techniques to identify the type of plant disease in a straightforward way. An automatic method for crop disease detection is advantageous because it requires less effort and can identify disease symptoms at an early stage. In the presented work, a deep convolutional network and semi-supervised techniques are trained to distinguish crop species and the disease condition of the leaf. The technique for identifying paddy illness contains two key phases: training the model, and spotting the disease in a provided picture. The proposed work uses the CNN, VGG19, and DenseNet models to classify paddy crop disease, and shows that the DenseNet model achieves the highest accuracy of 99% compared to the other models.
According to estimates, 10% of global production goes to waste every year due to pests and crop pathogens. India, for instance, is a leading producer of many crops, including wheat, rice, lentils, sugarcane, and cotton, but most farmers cannot tell whether a crop is infected simply by looking at it. As crop pathogens develop greater resistance to fungicides and pesticides, there is an urgent need to find new antifungal compounds to combat them, which over time are again rendered useless as the pathogens develop resistance to these compounds as well. Thus, the food security of any country is always at risk due to the vulnerability of current agricultural systems to climate, pests, pathogens, and associated diseases. To address this problem, we developed Cropable, The Crop Protection App. In the proposed work, we used deep convolutional neural network (CNN) models to detect disease and created a web app using Flask. Cropable is an artificially intelligent web application that can help identify whether a crop is infected. We also provide farmers with a treatment for the detected disease, assisting them not only in identifying a disease but also in addressing it.
The agriculture sector faces several significant challenges, including plant diseases, fluctuating market prices, and a lack of proper guidance for selecting crops suitable for different seasons. These issues often lead to reduced productivity and financial instability for farmers. This research proposes an AI-based smart agriculture system that integrates crop disease detection, crop price prediction, and seasonal crop recommendation within a single platform. The crop disease detection module analyzes images of plant leaves using deep learning models to accurately identify diseases and provide possible treatment suggestions. The crop price prediction module utilizes historical agricultural market data to forecast potential future prices, enabling farmers.
No abstract available
Agricultural productivity is a major driver of the Indian economy, which makes plant diseases in agricultural fields important to identify. Vigilance for the detection of plant diseases has risen with the growth of agricultural monitoring across numerous and diverse locations. This study presents an image-based method for the Detection of Black gram Crop Disease (DBCD). The black gram plant is often referred to as "urad" in India and is officially recognized as Vigna mungo. This work considers four diseases, anthracnose, leaf crinkle, powdery mildew, and yellow mosaic, which have a considerable negative influence on the production of black gram. The black gram crop diseases were classified in this study using the BPLD dataset. For a comparative classification analysis, three machine learning algorithms and two deep learning techniques were considered: the artificial neural network and convolutional neural network from deep learning, and the decision tree, random forest, and k-nearest neighbor algorithms from machine learning. Accuracy, precision, and recall were measured in order to compare the various classification models. As per the analysis, CNN outperforms the other classifiers in every aspect, with 89% accuracy.
Green mustard plants are of significant economic importance, making effective pest management essential. This study employed the Convolutional Neural Network (CNN) algorithm to detect pests on green mustard leaf images. The dataset, comprising 96 test images, was divided into two categories: pest-infested and healthy leaves. Using the NasNet Mobile architecture, the model was trained over 10 epochs with the Adam optimizer, achieving a training accuracy of 94.99% and a validation accuracy of 98.00%. Results indicate that CNN combined with NasNet Mobile effectively identifies pests, providing a robust and practical solution to enhance agricultural productivity and mitigate crop losses caused by pests. This study demonstrates the potential of leveraging deep learning for agricultural advancements, particularly in addressing pest-related challenges efficiently.
Insect pests remain a major threat to agricultural productivity, particularly in open-field cropping systems where conventional monitoring methods are labor-intensive and lack scalability. This study presents the design, implementation, and field evaluation of a neural network-guided smart trap specifically developed to monitor and selectively capture nocturnal insect pests under real agricultural conditions. The proposed trap integrates light and rain sensors, servo-controlled mechanical gates, and a single-layer perceptron neural network deployed on an ATmega-2560 microcontroller by Microchip Technology Inc. (Chandler, AZ, USA). The perceptron processes normalized sensor inputs to autonomously decide, in real time, whether to open or close the gate, thereby enhancing the selectivity of insect capture. The system features a removable tray containing a food-based attractant and yellow and green LEDs designed to lure target species such as moths and flies from the orders Lepidoptera and Diptera. Field trials were conducted between June and August 2023 in La Barca, Jalisco, Mexico, under diverse environmental conditions. Captured insects were analyzed and classified using the iNaturalist platform, with the successful identification of key pest species including Tetanolita floridiana, Synchlora spp., Estigmene acrea, Sphingomorpha chlorea, Gymnoscelis rufifasciata, and Musca domestica, while minimizing the capture of non-target organisms such as Carpophilus spp., Hexagenia limbata, and Chrysoperla spp. Statistical analysis using the Kruskal–Wallis test confirmed significant differences in capture rates across environmental conditions. The results highlight the potential of this low-cost device to improve pest monitoring accuracy, and lay the groundwork for the future integration of more advanced AI-based classification and species recognition systems targeting nocturnal Lepidoptera and other pest insects.
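The gate-control logic described above reduces to a single-layer perceptron over normalized sensor readings; the toy sketch below illustrates that idea only, with hypothetical weights and threshold rather than the values deployed on the ATmega-2560.

```python
# Toy perceptron for the trap gate: maps normalized light and rain readings to
# an open/close decision. All parameters here are illustrative placeholders.
def perceptron_gate(light_norm: float, rain_norm: float) -> bool:
    w_light, w_rain, bias = -1.0, -1.5, 0.6   # hypothetical weights and bias
    activation = w_light * light_norm + w_rain * rain_norm + bias
    return activation > 0.0                    # True -> open gate (dark, dry)

print(perceptron_gate(light_norm=0.1, rain_norm=0.0))  # night, no rain -> True
print(perceptron_gate(light_norm=0.9, rain_norm=0.2))  # daylight -> False
```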
The productivity of mustard greens is vulnerable to pests and diseases that can threaten the yield and quality of the harvest. This study aims to detect pests on green mustard plants using the Convolutional Neural Network (CNN) method. The dataset used in this research consists of 450 images, with 225 images of pest-infested mustard greens and 225 images of healthy mustard greens. These 450 images are divided into 400 training data and 50 testing data. The testing was conducted fifteen times using CNN architectures with 2, 3, and 4 convolutional layers, having filter counts of (64, 32), (64, 32, 16), and (64, 32, 16, 8) respectively, and learning rates ranging from 0.1 to 0.00001 with the Adam optimizer. Based on the testing results for the learning rate and the number of layers, it was found that a learning rate of 0.001 provided the best performance with the highest accuracy and the lowest loss, especially in the model with 3 layers (64, 32, 16), which achieved an accuracy of 94% and a loss of 24.92%. A learning rate that is too high (0.1) or too low (0.00001) results in poor performance and instability, with low accuracy and high loss. Therefore, selecting the appropriate learning rate is crucial to achieving optimal results in model training.
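A minimal Keras sketch of the best-performing configuration reported above, three convolutional layers with 64/32/16 filters and Adam at a learning rate of 0.001, is given below; the 128x128 input size and dense head are assumptions not stated in the abstract.

```python
# Three-layer CNN for binary pest-infested vs. healthy classification.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),                       # assumed input size
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),                    # pest vs. healthy
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```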
The tea industry plays a vital role in China’s green economy. Tea trees (Melaleuca alternifolia) are susceptible to numerous diseases and pest threats, making timely pathogen detection and precise pest identification critical requirements for agricultural productivity. Current diagnostic limitations primarily arise from data scarcity and insufficient discriminative feature representation in existing datasets. This study presents a new tea disease and pest dataset (TDPD, 23-class taxonomy). Five lightweight convolutional neural networks (LCNNs) were systematically evaluated through two optimizers, three learning rate configurations and six distinct scheduling strategies. Additionally, an enhanced MnasNet variant was developed through the integration of SimAM attention mechanisms, which improved feature discriminability and increased the accuracy of tea leaf disease and pest classification. Model validation employs both our proprietary TDPD dataset and an open-access dataset, with performance evaluation metrics including average accuracy, F1 score, recall, and parameter size. The experimental results demonstrated the superior classification performance of the model, which achieved accuracies of 98.03% based on TDPD and 84.58% based on the public dataset. This research outlines an effective paradigm for automated tea disease and pest detection, with direct applications in precision agriculture through integration with UAV-mounted imaging systems and mobile diagnostic platforms. This study provides practical implementation pathways for intelligent tea plantation management.
Deploying deep learning models in embedded agricultural systems requires balancing predictive performance with strict hardware constraints such as memory, power, and latency. This work uses quantization techniques, specifically post-training quantization (PTQ) and quantization-aware training (QAT), to optimize convolutional neural networks for real-time pest detection in smart traps. Using the Brevitas framework for quantization and FINN for hardware generation targeting FPGAs, we evaluate a range of weight and activation bit-width configurations. Our results show that QAT significantly outperforms PTQ, particularly in aggressive low-bit scenarios, achieving high accuracy while drastically reducing hardware resource utilization. Among the proposed solutions, one achieves 87.47% accuracy while using less than 10% of the LUTs required by its full-precision counterpart. Comparative analysis with standard models such as ResNet18 and MobileNet further validates the effectiveness of our approach. This study highlights the practicality of QAT-driven quantization for edge Artificial Intelligence applications in agriculture. It paves the way for future work, including power, latency, and throughput profiling, to support large-scale deployment.
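The paper targets FPGAs through Brevitas and FINN; as a hedged illustration of the underlying quantization-aware-training idea only, the sketch below uses PyTorch's built-in eager-mode QAT workflow on a small stand-in classifier.

```python
# Generic QAT sketch (not the paper's Brevitas/FINN flow): fake-quant observers
# are inserted before training so weights adapt to low-precision arithmetic.
import torch
import torch.nn as nn
import torch.ao.quantization as tq

class TinyPestNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.quant, self.dequant = tq.QuantStub(), tq.DeQuantStub()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.quant(x)
        x = self.features(x).flatten(1)
        x = self.classifier(x)
        return self.dequant(x)

model = TinyPestNet().train()
model.qconfig = tq.get_default_qat_qconfig("fbgemm")
tq.prepare_qat(model, inplace=True)   # insert fake-quantization observers
# ... run the usual training loop here so weights adapt to quantization ...
model.eval()
quantized = tq.convert(model)         # produce the int8 model for deployment
```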
Grains are the most important food consumed globally, yet their yield can be severely impacted by pest infestations. Addressing this issue, scientists and researchers strive to enhance the yield-to-seed ratio through effective pest detection methods. Traditional approaches often rely on preprocessed datasets, but there is a growing need for solutions that utilize real-time images of pests in their natural habitat. Our study introduces a novel two-step approach to tackle this challenge. Initially, raw images with complex backgrounds are captured. In the subsequent step, feature extraction is performed using both hand-crafted algorithms (Haralick, LBP, and Color Histogram) and modified deep-learning architectures. We propose two models for this purpose: PestNet-EF and PestNet-LF. PestNet-EF uses an early fusion technique to integrate handcrafted and deep learning features, followed by adaptive feature selection methods such as CFS and Recursive Feature Elimination (RFE). PestNet-LF utilizes a late fusion technique, incorporating three additional layers (fully connected, softmax, and classification) to enhance performance. These models were evaluated across 15 classes of pests, including five classes each for rice, corn, and wheat. The performance of our suggested algorithms was tested against the IP102 dataset. Simulations demonstrate that the PestNet-EF model achieved an accuracy of 96%, and the PestNet-LF model with majority voting achieved the highest accuracy of 94%, while PestNet-LF with the average model attained an accuracy of 92%. Also, the proposed approach was compared with existing methods that rely on hand-crafted and transfer learning techniques, showcasing the effectiveness of our approach in real-time pest detection for improved agricultural yield.
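The early-fusion scheme (PestNet-EF) concatenates handcrafted descriptors with deep features before classification; the sketch below illustrates that idea under stated assumptions (LBP and color-histogram descriptors, a MobileNetV2 feature extractor), since the paper's own feature dimensions and backbone are not given here.

```python
# Early fusion of handcrafted and CNN features for a pest image.
import numpy as np
import tensorflow as tf
from skimage.feature import local_binary_pattern

def handcrafted_features(img_uint8: np.ndarray) -> np.ndarray:
    gray = img_uint8.mean(axis=2).astype(np.uint8)
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    color_hist, _ = np.histogram(img_uint8, bins=24, range=(0, 255), density=True)
    return np.concatenate([lbp_hist, color_hist])

# Pretrained CNN used as a fixed deep-feature extractor (assumed backbone).
cnn = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg",
                                        weights="imagenet")

def fused_features(img_uint8: np.ndarray) -> np.ndarray:
    deep = cnn(tf.keras.applications.mobilenet_v2.preprocess_input(
        img_uint8[np.newaxis].astype("float32")), training=False).numpy()[0]
    return np.concatenate([handcrafted_features(img_uint8), deep])

print(fused_features(np.random.randint(0, 255, (224, 224, 3), np.uint8)).shape)
```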
Pest detection is important for crop cultivation, and crop leaves are the main site of pest invasion. Current technologies to detect crop pests have constraints such as low efficiency, storage demands, and limited precision. Image segmentation is a fast and efficient computer-aided detection technology: high-resolution image capture solidly supports the crucial process of discerning pests in images, and analytical methods help parse the information the images contain. In this paper, a regional convolutional neural network (R-CNN) architecture is designed in combination with the radial bisymmetric divergence (RBD) method to enhance the efficiency of image segmentation. As a special application of RBD, a hierarchical mask (HM) is produced to support detection and classification of leaf-dwelling pests, offering enhanced efficiency and reduced storage requirements. Moreover, to deal with mislabeled data, a threshold variable is introduced to add a fault-tolerant mechanism to HM, generating a novel threshold-based hierarchical mask (TbHM). Consequently, the hierarchical mask R-CNN (HM-R-CNN) model and the threshold-based hierarchical mask R-CNN (TbHM-R-CNN) model are established to detect various types of healthy and pest-invaded crop leaves and to select the regional image features that are rich in pest information. The simple linear iterative clustering (SLIC) method is then incorporated to complete the image segmentation for the classification of pest invasion. The models are tuned, optimized, and then validated. The best results come from the TbHM-R-CNN model, with a classification accuracy of 96.2%, a recall of 97.5%, and an F1 score of 0.982, while the HM-R-CNN model achieved appreciable results second only to the best model. These results indicate that the proposed methodologies are well-suited for training and testing a plant disease dataset, offering heightened accuracy in pest classification, and that the proposed methods significantly outperform existing techniques.
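The SLIC step mentioned above is available in scikit-image; the snippet below is a minimal, standalone illustration of superpixel generation only, using a stock sample image in place of a leaf photo, and leaves the R-CNN and mask stages out of scope.

```python
# SLIC superpixel segmentation, the clustering step named in the abstract.
from skimage.segmentation import slic
from skimage.data import astronaut  # stand-in image; a leaf photo in practice

image = astronaut()
segments = slic(image, n_segments=200, compactness=10, start_label=1)
print("superpixels:", segments.max())   # region labels usable for pest masks
```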
The tomato is one of the most popular and well-liked vegetables in Asia; notably, it is the second most consumed vegetable in Bangladesh. Moreover, tomato is served not only as a vegetable but also as sauce, jam, and an ingredient in many cuisines. However, thousands of tons of tomatoes are damaged every year in Bangladesh by a number of dangerous pests. We develop a solution to recognize pests at an early stage. Five pest types, aphids, red spider mites, whiteflies, looper caterpillars, and thrips, have been studied in this research. To identify tomato pests, we curated image datasets from online and offline repositories and processed them using a convolutional neural network (CNN) model. Features from the CNN layers were then fed to three machine learning algorithms: Random Forest (RF), support vector machine (SVM), and K-Nearest Neighbors (K-NN). This comprehensive approach allowed a thorough comparison of these algorithms for tomato pest recognition. Our methods produce excellent results, with an accuracy of 95.49%, indicating the successful completion of the experiment.
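A plausible sketch of the described pipeline, using a CNN as a fixed feature extractor and comparing RF, SVM, and K-NN on the extracted features, is shown below; the feature dimensionality and classifier hyperparameters are illustrative assumptions, and random arrays stand in for the real features and labels.

```python
# Compare classical classifiers on CNN-layer features for five pest classes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Placeholders: X_feats would hold per-image CNN features, y the pest labels.
X_feats = np.random.rand(200, 1280)
y = np.random.randint(0, 5, size=200)   # aphid, mite, whitefly, looper, thrips

for name, clf in [("RF", RandomForestClassifier(n_estimators=200)),
                  ("SVM", SVC(kernel="rbf", C=10)),
                  ("K-NN", KNeighborsClassifier(n_neighbors=5))]:
    acc = cross_val_score(clf, X_feats, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```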
Pomelos (Citrus maxima) are a key fruit crop in the Philippines, but yields are threatened by foliar diseases such as citrus greening, leaf miner, anthracnose, and sooty mold. To support automated field monitoring, this study developed a curated dataset of 537 raw pomelo leaf images and expanded it via augmentation to 2,257 samples across five classes, then fine-tuned a YOLOv9e model for detection. The model achieved an overall mAP@0.5 of 98.1%, mAP@0.5:0.95 of 96.4%, precision of 96.5%, recall of 96.0%, and a peak F1 score of 0.97 at a confidence threshold of 0.723. Class-wise evaluation showed near-perfect performance for Sooty Mold and Healthy Leaf, with Anthracnose recording the lowest recall (88.5%) and Leaf Miner the lowest precision (89.9%) due to overlapping visual symptoms. These results demonstrate that the proposed YOLOv9e-based approach is highly effective for pomelo leaf disease detection while indicating scope for further improvement on the more visually ambiguous classes.
Automated crop disease detection and control has become significant research in the agriculture domain, as viral, fungal, and bacterial diseases reduce the yield potential of crops and cause economic damage to farmers. Nowadays, drones are deployed in agricultural fields to monitor and interact with crop regions and perform tasks that would otherwise require humans. The drones are trained using artificial intelligence and machine learning architectures to perform specialized complex tasks such as crop disease prediction and disease control through application of the appropriate pesticides and fertilizers. Despite the many advantages of machine learning and deep learning models for predicting, monitoring, and controlling crop diseases, such models face several challenges because crop diseases vary with lighting, climatic conditions, and soil conditions in the field. In this paper, a new deep learning architecture called a multi-attention convolutional neural network is designed and used to train a drone deployed to predict and control wheat crop diseases accurately under different conditions. The multi-attention convolutional neural network uses an attention mechanism to process the monitored wheat crop images with attention coefficients. Initially, image preprocessing techniques are applied to eliminate noise and perform image augmentation. The preprocessed image is processed in the convolution layer to extract disease-specific features related to climate and soil. The attention coefficients of the attention mechanism are incorporated in the pooling layer of the model to detect anthracnose and other fungal diseases in the crops and obtain a saliency map. Finally, the fully connected layer of the model, with a softmax function, classifies the saliency map into crop diseases by severity class and recommends the appropriate fertilizer and pesticide to manage that severity as a control mechanism. Experimental analysis of the model is performed using the training data of the Global Wheat Head Detection (GWHD) dataset in a Python environment, and performance analysis uses the test data of the GWHD dataset. On evaluation, the model predicts and controls wheat crop disease with 98.8% accuracy when compared to conventional deep learning architectures.
Aiming at the limitations of existing agricultural pest image recognition technology, a novel agricultural pest recognition algorithm based on a convolutional neural network and Bayesian methods is proposed. Convolutional neural networks are chosen as the basic model for the image recognition algorithm, and Bayesian methods are used to optimize the neural network structure. At the same time, approximate variational methods are applied to construct corresponding approximate functions. Finally, the Bayesian neural network sampling optimization is completed through reparameterization of the random variables. According to simulation experiments, the accuracy of the proposed method in classifying and recognizing beetles is 92.1%, and the accuracy in classifying and recognizing grasshoppers is 92.4%. The proposed method has an image recognition accuracy of over 90% for agricultural pests, the highest among the five image recognition methods compared; except for cricket recognition, the plain convolutional neural network has the lowest agricultural pest image recognition accuracy among the five methods. The experimental results show that the proposed method can effectively recognize agricultural pest images and has good operational performance. The contribution of the research lies in proposing a novel agricultural pest image recognition algorithm and innovatively optimizing the structure of the convolutional neural network model. The Bayesian method is used to improve the estimation of weights and biases, making the model more accurate in processing agricultural pest images.
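The reparameterization step mentioned above can be illustrated with a variational linear layer in which weights are sampled as mu plus sigma times noise so gradients flow through the draw; the PyTorch sketch below shows only this idea, not the paper's full Bayesian CNN.

```python
# Reparameterization trick for a Bayesian weight layer (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.w_rho = nn.Parameter(torch.full((out_features, in_features), -5.0))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        sigma = F.softplus(self.w_rho)       # keep the standard deviation positive
        eps = torch.randn_like(sigma)
        weight = self.w_mu + sigma * eps     # reparameterized weight sample
        return F.linear(x, weight, self.bias)

layer = BayesianLinear(1280, 10)             # e.g. 10 pest classes
print(layer(torch.randn(4, 1280)).shape)     # torch.Size([4, 10])
```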
Important agricultural field crops are affected by attacks of various pests and diseases, which reduces crop production. Early classification and identification of pests and diseases in plants helps farmers take mitigation steps. To address this issue with computer vision-based techniques, convolutional neural network (CNN) based deep learning models were studied for classification of pest and disease videos. Six different CNN models were developed, using two approaches: learning from scratch and transfer learning. Data augmentation techniques such as reflection, scaling, rotation, and translation were also applied to prevent the networks from overfitting. Classification accuracies of 99.19%, 99.08%, and 98.80% were attained by the VGG19, DenseNet201, and 5-layer CNN models, respectively. The results demonstrate that well-designed CNN models can classify pests and diseases with good performance.
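The augmentation recipe named above (reflection, scaling, rotation, translation) maps directly onto Keras' ImageDataGenerator; the sketch below is illustrative, with placeholder ranges and a hypothetical `frames/train` directory of per-class video frames.

```python
# Augmentation pipeline mirroring the transforms listed in the abstract.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    horizontal_flip=True, vertical_flip=True,        # reflection
    zoom_range=0.2,                                   # scaling
    rotation_range=30,                                # rotation
    width_shift_range=0.1, height_shift_range=0.1,    # translation
    rescale=1.0 / 255,
)
# "frames/train" is a placeholder directory of extracted video frames per class.
train_gen = augmenter.flow_from_directory(
    "frames/train", target_size=(224, 224), batch_size=32)
```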
The Yellow Stem Borer (YSB), Scirpophaga incertulas (Walker), is an important pest of rice throughout tropical South and Southeast Asia. The highest incidence of this pest is primarily observed in tropical lowland rice fields and deep-water rice cultivation. The yield loss caused by the YSB is estimated to be 20% in early-planted rice crops and 80% in late-planted crops. In this paper, we developed a method to detect and classify the forms of YSB using a Convolutional Neural Network (CNN) and then model the infestation migration patterns of YSB in several rice-growing regions by using a CNN learning model. A dedicated CNN architecture is designed, and optimized for its ability to extract features and discern spatial hierarchies indicative of pest presence. Transfer learning techniques, utilizing pre-trained models, enhance the model's capability to recognize subtle patterns associated with pest infestations. The dataset is carefully annotated and augmented to ensure robust model training, with an emphasis on real-world variability. These models can help detect, classify, and model the infestations of other Agricultural pests, improving food security for rice.
No abstract available
Pests in plants can cause significant losses in agricultural production. As a result, various technologies are used nowadays to improve agriculture's efficiency and make it more sustainable. This research highlights the contribution of machine learning algorithms and image recognition technologies for pest identification. Farmers can use the system to recognize pests and take the necessary actions to reduce them. Convolutional Neural Networks (CNN) is used in this study for image recognition tasks, including pest identification in agricultural fields. The algorithm is trained using the Agricultural Pests Dataset acquired from Kaggle. The experiment results showed that the CNN performed better than the other state-of-the-art machine learning models, with a much lower false rejection rate of 0.12% and an accuracy of 99%.
This project aims to develop automated techniques for early detection of pests in maize crops using computer vision and deep learning. Early detection of pests is key to crop success, but is typically done manually, which is time-consuming and resource-intensive. Accurate methods are needed to obtain data quickly and non-invasively. The implemented method consists of analysing images of maize crops using convolutional neural networks (CNN) trained to identify insects. The results show that the developed approach achieves an accuracy of over 95% in detecting the main pests studied, surpassing the accuracy of manual methods. The data generated by computer vision and deep learning techniques can be useful to farmers in making decisions about preventive pest control.
An agricultural pest and disease recognition system was constructed using convolutional neural networks (CNNs). After reviewing relevant theories and technologies, the requirements of the system were determined. Based on these requirements, the system design, including its modules, was created. Features for pest and disease recognition were defined using CNNs as the core component of the system, and input images were classified effectively for automatic recognition and classification of agricultural pests and diseases.
Diseases and insect pests are intimately related to the quantity and quality of agricultural goods. Crop disease and insect pest outbreaks occur on a massive scale and frequently result in catastrophic economic losses, so it is crucial to monitor and manage them. The first and most crucial step in their prevention and management is promptly and precisely detecting the ailments that affect crops. Deep learning approaches for pest identification are now more accurate than traditional methods because of the rapid advancements in machine learning and artificial intelligence technologies, and they are increasingly being used as the main strategy for overcoming the technological barriers to pest recognition. Clear structure and good recognition accuracy are the benefits of deep learning algorithms for image recognition, and accurately identifying pests in crops improves agricultural productivity. In this work, an improved deep learning model, YOLOv3, is used to produce high-performance pest detection in rice crops. The YOLOv3 classifier gives 99% accuracy and outperforms a conventional convolutional neural network (CNN).
Crop pests seriously affect the yield and quality of crops, so timely and accurate pest control is particularly crucial for crop security, quality of life, and a stable agricultural economy, and crop pest detection in the field is an essential step in controlling pests. Existing convolutional neural network (CNN) based pest detection methods are not satisfactory for recognizing and detecting small pests in the field because the pests vary in color, shape, and pose. A three-scale CNN with attention (TSCNNA) model is constructed for crop pest detection by introducing channel attention and spatial attention mechanisms into the CNN. TSCNNA improves the network's sensitivity to pests of different sizes under complicated backgrounds and enlarges the receptive field of the CNN, so as to improve the accuracy of pest detection. Experiments are carried out on an image set of common crop pests, and the precision is 93.16%, which is 5.1% and 3.7% higher than ICNN and VGG16, respectively. The results show that the proposed method can achieve both high speed and high accuracy in crop pest detection, giving it practical significance for real-time crop pest control in the field.
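Unlike the channel-only block sketched earlier, this abstract combines channel and spatial attention; the assumed PyTorch sketch below shows one common way to stack the two (the paper's exact module layout is not given in the abstract).

```python
# Channel attention followed by spatial attention over a feature map.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # channel attention: gate each channel by its global average response
        x = x * self.channel_mlp(x.mean(dim=(2, 3))).view(b, c, 1, 1)
        # spatial attention: gate each location from mean/max channel summaries
        spatial = torch.cat([x.mean(dim=1, keepdim=True),
                             x.amax(dim=1, keepdim=True)], dim=1)
        return x * self.spatial_conv(spatial)

print(ChannelSpatialAttention(128)(torch.randn(1, 128, 32, 32)).shape)
```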
Citrus fruit yields in the Philippines have been fluctuating dramatically in recent years. Diseases, pests, and soil inadequacy have all contributed to the citrus industry's severe decline. More than 15 viruses and virus-like diseases have infected Citrus. Agricultural productivity must improve for a country to be progressive. Resources should be utilized to their full potential, diseases and pests should be controlled efficiently, and technological advancements must be adopted. This application will identify and map common pests and diseases of citrus fruits in Oriental Mindoro, apply image processing techniques to analyze diseases of citrus fruits with corresponding solutions caused by bacteria, and give information about diseases related to citrus fruits and how to cure them. The researchers used the Spiral Model as a Software Development Life Cycle (SDLC) model to develop this application. In this model, researchers can plan the flow of the application. If the application did not meet the desired result, the researchers could revise it again until it met the desired one. The researchers used the convolutional neural network to classify and process the captured images of the citrus fruits’ diseases and pests. The researchers asked the selected Citrus farmers in Oriental Mindoro to evaluate the project using the different ISO 25010 criteria and rated the application as very acceptable overall.
No abstract available
No abstract available
In agricultural science, preventing and controlling diseases and insect pests is a key research topic, and information technology has been applied to pest detection and identification. In crop pest research, neural networks can improve accuracy and efficiency by applying image recognition and classification technology. The small size of the pests in disease pictures makes them difficult to detect. A small-target pest dataset is constructed in this paper to train small-target detection networks, and a convolutional neural network based on the YOLO algorithm is proposed to detect small targets, such as pests, with few features. This paper also proposes a mosaic-based data enhancement method, label smoothing, and a backbone network improvement to raise the detection efficiency of the YOLOv5 model.
Some of the most crucial factors that can influence the quality and quantity of harvested produce are soil fertility, water availability, climate, pests, and diseases. Pests and diseases have been a major problem for farmers, causing losses of up to 40% of total global crop production. The model aims to determine the wellness of lettuce plants by identifying common pests and diseases through image recognition techniques with deep learning, specifically the Convolutional Neural Network (CNN). The model detects pests and diseases of lettuce plants such as anthracnose, leaf drop, powdery mildew, septoria leaf spot, big vein, bottom rot, and downy mildew based on their physical and observable characteristics. The training and testing of the model are done on a dataset created specifically for the model, and a 10-fold cross-validation is performed to verify the model's accuracy. Categorical cross-entropy is utilized to ensure the model has no outlier predictions with huge errors against the observed data, making the system accurate and effective. The CNN model garnered the highest accuracy of 95.72%, with 97.03% precision, 95.12% recall, and a 95.84% F1 score.
No abstract available
Accurate insect pest recognition is significant for protecting crops and starting early treatment of infected yield, and it helps reduce losses for the agricultural economy. Designing an automatic pest recognition system is necessary because manual recognition is slow, time-consuming, and expensive. Image-based pest classifiers using traditional computer vision methods are not efficient due to the complexity of the task. Insect pest classification is difficult because of the variety of kinds, scales, and shapes, complex backgrounds in the field, and high appearance similarity among insect species. With the rapid development of deep learning technology, CNN-based methods are the best way to develop a fast and accurate insect pest classifier. We present different convolutional neural network-based models in this work, including attention, feature pyramid, and fine-grained models. We evaluate our methods on two public datasets, the large-scale IP102 benchmark dataset and a smaller dataset named D0, in terms of macro-average precision (MPre), macro-average recall (MRec), macro-average F1-score (MF1), accuracy (Acc), and geometric mean (GM). The experimental results show that combining these convolutional neural network-based models performs better than the state-of-the-art methods on these two datasets. For instance, the highest accuracy we obtained on IP102 and D0 is 74.13% and 99.78%, respectively, surpassing the corresponding state-of-the-art accuracies of 67.1% (IP102) and 98.8% (D0). We also publish our code to contribute to current research on the insect pest classification problem.
High demand for caisim, one of Indonesia's main export commodities, must be accompanied by a good planting process. The obstacle is that farmers currently apply pesticides only once the caisim plants already have holes from being eaten by pests. Proper control can be a good step to maximize the yield of caisim farming. However, many farmers have not implemented proper pest control, including farmers in Kebon Raya Dempo, South Sumatera, Indonesia, who face obstacles such as not being able to detect pests correctly or apply pesticides with precision. Motivated by CNN's success in image classification, this study uses a learning-based approach to detect the presence of pests in caisim. The experimental results show differences in accuracy across experiments on a dataset of 1000 images, consisting of 500 images with pests and 500 without. Experiment A (CNN from scratch) achieved 48.33% accuracy with precision 1, recall 0.48, and F1-score 0.65; experiment B (CNN from scratch) achieved 73.00% accuracy with precision 1, recall 0.64, and F1-score 0.78; and experiment C (CNN from scratch) achieved 92.00% accuracy with precision 0.88, recall 0.96, and F1-score 0.92. Of the three trials, experiment A suffered from underfitting, experiment B from overfitting, and experiment C can be used for pest detection in caisim.
No abstract available
No abstract available
No abstract available
No abstract available
Pests are a major threat to the economic growth of a country. Applying pesticide is the easiest way to control pest infection; however, excessive use of pesticide is hazardous to the environment. Recent advances in deep learning have paved the way for early detection and improved classification of pests in tomato plants, which will benefit farmers. This paper presents a comprehensive analysis of 11 state-of-the-art deep convolutional neural network (CNN) models under three configurations: transfer learning, fine-tuning, and scratch learning. Training in transfer learning and fine-tuning starts from pre-trained weights, whereas random weights are used in scratch learning. In addition, data augmentation has been explored to improve performance. Our dataset consists of 859 tomato pest images from 10 categories. The results demonstrate that the highest classification accuracy of 94.87% was achieved by the DenseNet201 model in the transfer learning configuration with data augmentation.
Pest insects are a problem in horticulture, so early detection is key for their control. Sticky traps are an inexpensive way to obtain insect samples in crops, but identifying them manually is a time-consuming task. Building computational models to identify insect species in sticky trap images is therefore highly desirable. However, this is a challenging task due to the difficulty in getting sizeable sets of training images. In this paper, we studied the usefulness of three neural network generative models to synthesize pest insect images (DCGAN, WGAN, and VAE) for augmenting the training set and thus facilitate the induction of insect detector models. Experiments with images of seven species of pest insects of the Peruvian horticulture showed that the WGAN and VAE models are able to learn to generate images of such species. It was also found that the synthesized images can help to induce YOLOv5 detectors with significant gains in detection performance compared to not using synthesized data. A demo app that integrates the detector models can be accessed through the URL https://bit.ly/3uXW0Ee. The repository of the project is available at https://github.com/weirdfish23/pest-insects-GAN
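Of the three generative models compared, DCGAN is the simplest to outline; below is a hedged sketch of a DCGAN-style generator producing 64x64 RGB samples, with illustrative layer sizes rather than the configuration used in the paper.

```python
# DCGAN-style generator: maps a latent vector to a synthetic 64x64 insect image.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z.view(z.size(0), -1, 1, 1))

fake = Generator()(torch.randn(8, 100))
print(fake.shape)   # torch.Size([8, 3, 64, 64]) synthetic pest images
```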
Rooftop farming is gaining popularity in urban areas, increasing the cultivation of organic vegetables on the rooftops of houses and buildings with minimal use of water. However, rooftop farming is vulnerable to pest infestations, which reduce plant quality, and urban residents are novices in farming who are often unaware of pest attacks. Various researchers have proposed pest identification systems using image processing techniques and machine learning algorithms specific to particular diseases, which generalize poorly and are not user-friendly. To provide a user-friendly pest identification system, this paper proposes a mobile-based pest identification system using a pre-trained convolutional neural network model, AlexNet. Experimental results have been analyzed for various rooftop pests using different kernel sizes and numbers of convolutional layers. In addition, the best evaluated pre-trained model has been converted to a mobile application using a REST API for pesticide recommendation to the novice user.
No abstract available
Early automation in identifying plant diseases is crucial for the precise protection of crops. Plant diseases pose substantial risks to agriculture-dependent nations, often leading to notable crop losses and financial challenges, particularly in developing countries. Symptoms such as chlorosis, structural deformities, and wilting characterize these diseases, but early identification can be challenging due to symptom similarity. For researchers using artificial intelligence (AI) for plant disease classification, challenges such as data imbalance, symptom variability, real-time performance, and costly annotation hinder accuracy and adoption. This work introduces a novel approach using the You Only Look Once (YOLO) deep learning model, chosen for its exceptional accuracy and speed. The study focuses on analyzing YOLO models, specifically YOLOv3 and YOLOv4, to identify fruit plant diseases. This work examines healthy peach and strawberry leaves, as well as peach leaves affected by bacterial spot and strawberry leaves with scorch disease. These models underwent thorough training using data from the publicly accessible PlantVillage dataset. The simulation results were highly promising: the YOLOv3 model achieved 97% accuracy and a mean average precision (mAP) of 92% within a total detection time of 105 s. In comparison, the YOLOv4 model outperformed it, with 98% accuracy and an impressive mean average precision of 98%, while completing the detection process in just 29 s. YOLOv4 demonstrated lower complexity and significantly faster, more precise performance, especially in detecting multiple items. Serving as an efficient real-time detector, it holds the potential to transform plant disease diagnosis and mitigation strategies, ultimately leading to increased agricultural productivity and enhanced financial outcomes for developing nations.
The precise identification of plant diseases is essential for improving agricultural productivity and reducing reliance on human expertise. Deep learning frameworks, belonging to the YOLO series, have demonstrated significant potential in the real-time detection of plant diseases. There are various factors influencing model performance; activation functions play an important role in improving both accuracy and efficiency. This study proposes αSiLU, a modified activation function developed to optimize the performance of YOLOv11n for plant disease-detection tasks. By integrating a scaling factor α into the standard SiLU function, αSiLU improved the effectiveness of feature extraction. Experiments are conducted on two different plant disease datasets—tomato and cucumber—to demonstrate that YOLOv11n models equipped with αSiLU outperform their counterparts using the conventional SiLU function. Specifically, with α = 1.05, mAP@50 increased by 1.1% for tomato and 0.2% for cucumber, while mAP@50–95 improved by 0.7% and 0.2% each. Additional evaluations across various YOLO versions confirmed consistently superior performance. Furthermore, notable enhancements in precision, recall, and F1-score were observed across multiple configurations. Crucially, αSiLU achieves these performance improvements with minimal effect on inference speed, thereby enhancing its appropriateness for application in practical agricultural contexts, particularly as hardware advancements progress. This study highlights the efficiency of αSiLU in the plant disease-detection task, showing the potential in applying deep learning models in intelligent agriculture.
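One plausible reading of αSiLU, scaling the sigmoid gate of SiLU by a factor α (here 1.05, the best value reported for tomato), is sketched below; the exact formulation should be taken from the paper itself.

```python
# Possible interpretation of the alpha-scaled SiLU activation (assumption).
import torch

def alpha_silu(x: torch.Tensor, alpha: float = 1.05) -> torch.Tensor:
    return x * torch.sigmoid(alpha * x)   # standard SiLU is the alpha = 1 case

x = torch.linspace(-3, 3, 7)
print(alpha_silu(x))
```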
Plant diseases significantly impact global agriculture, leading to substantial production losses and economic consequences. Timely disease detection can enhance crop yield, optimize resource utilization, reduce costs, and mitigate environmental effects, ultimately ensuring high-quality food production. Deep learning, specifically computer vision-based techniques, has proven invaluable in tasks like image classification, segmentation, and object detection. Deep learning techniques such as You Only Look Once (YOLO) models are state-of-the-art neural network algorithms used for accurate object detection. In this study, YOLOv5, YOLOv7 and YOLOv8 models were trained on the CCL'20 dataset for citrus disease detection. Data augmentation techniques such as image translation, image scaling, flip, and mosaic augmentation were implemented to improve the models' performance during the training phase. The model performance was evaluated using metrics such as Mean Average Precision at 50% to 95% Intersection over Union, i.e. mAP@50-95. The results show that the YOLOv8 model performs better than the other variants and offers significant improvements over the benchmark performance from previous studies. The final hyper-parameter tuned model achieved 96.1% mAP@50-95 on testing data for citrus disease detection and mAP@50-95 of 95.3%, 96.0% and 97.0% for detection of Anthracnose, Melanose and Bacterial Brown Spot diseases, respectively. The trained model was able to detect single and multiple instances of the same or different diseases in an image, showing the potential of recent YOLO models. The trained YOLOv8 model is deployed on the Roboflow platform.
With the worsening global food crisis, ensuring food security and reducing food losses have become national strategies for many countries. Traditional methods for detecting plant diseases and pests heavily rely on manual inspection, which is time-consuming, labor-intensive, and prone to human errors. This has prompted scientific researchers to explore innovative and effective approaches to mitigate agricultural losses. Deep learning-based object detection offers a promising solution; however, deploying such models in real-world agricultural environments remains challenging because of the high computational demands and associated hardware costs, particularly when relying on GPU-based systems. This paper presents a lightweight, real-time plant disease detection system based on neural processing units (NPUs) and the YOLOX-Nano object detection model. By leveraging the high energy efficiency, low power consumption, and compact integration capabilities of NPUs, we address the limitations of GPU-based inference, including excessive power requirements, high cost, and limited mobility. The system allows deployment on embedded and mobile platforms such as smart agricultural robots, unmanned ground vehicles, and handheld inspection devices to enable real-time, on-site diagnosis. Furthermore, the framework supports multi-domain and multi-functional applications through algorithmic adaptation, paving the way for scalable and sustainable smart agriculture solutions.
Like many countries, Nigeria is naturally endowed with fertile agricultural soil that supports large-scale tomato production. However, the prevalence of disease-causing pathogens poses a significant threat to tomato health, often leading to reduced yields and, in severe cases, the extinction of certain species. These diseases jeopardise both the quality and quantity of tomato harvests, contributing to food insecurity. Fortunately, tomato diseases can often be visually identified through distinct forms, appearances, or textures, typically first visible on leaves and fruits. This study presents an enhanced Capsule-YOLO network architecture designed to automatically segment overlapping and occluded tomato leaf images from complex backgrounds using the YOLO framework. It identifies disease symptoms with impressive performance metrics: 99.31% accuracy, 98.78% recall, 99.09% precision, and a 98.93% F1-score, representing improvements of 2.91%, 1.84%, 5.64%, and 4.12% over existing state-of-the-art methods. Additionally, a user-friendly interface was developed to allow farmers and users to upload images of affected tomato plants and detect early disease symptoms. The system also provides recommendations for appropriate diagnosis and treatment. The effectiveness of this approach promises significant benefits for the agricultural sector by enhancing crop yields and strengthening food security.
Diagnosis of cotton plant diseases is essential to maintain agricultural sustainability and output. This study proposes a YOLO-based deep learning model for leaf disease detection to maximize cotton plant leaf disease detection accuracy. This method ensures a comprehensive evaluation of cotton plant health by combining various image processing techniques, improving the accuracy of disease identification. This study provides a viable path to improve crop health monitoring and management in cotton farming systems and emphasizes the importance of utilizing cutting-edge image processing techniques in agricultural activities. ROC curve performance and classification metrics were better for YOLOv5 than for VGG16 and ResNet50, as it had the highest F1 score (99.21%), recall, and precision. Consistent performance in classification tests was demonstrated by all models, which showed balanced precision, recall, and F1 scores. ResNet50 marginally outperformed VGG16 in terms of true positive rates, F1 score (98.88% vs. 98.65%), recall, and precision. More sophisticated models, such as YOLOv5 and ResNet50, showed higher efficiency and accuracy than VGG16, which makes them more appropriate for applications demanding low false positive rates and high precision. The proposed YOLO-based method improves the accuracy of disease identification, ensuring a thorough assessment of cotton plant health using image processing techniques. The results show that the proposed approach is quite successful in correctly detecting and classifying a variety of diseases that affect cotton plants.
Protecting plants from diseases involves recognizing the symptoms and identifying practical, safe, and reasonable treatment methods. Holistic approaches based on particular times or seasons can reduce plant resistance and minimize tedious work. Technological advancements have led to microscopic examinations and computational methods using machine learning to detect diseases automatically and quickly from leaf images. This study builds the prediction model using the EfficientNet and YOLO neural network architectures from computer vision. The concept is further applied to develop a model that assists farmers in identifying cotton diseases so that they can use pesticides that may treat them. In the physical world, input is accepted from many different sources, so observing the model's output under those conditions is necessary; this work concentrates on the model's response to inputs from physical devices. A novel convolutional neural network (CNN) approach based on the EfficientNet architectures and variations of the YOLO architectures is used to classify and identify objects in cotton leaves. EfficientNetB4 yielded 100% accuracy for the healthy leaf and powdery mildew leaf classes, and YOLOv4 achieved 96%, 98.3%, 99.2%, and 0.70 for precision, recall, mAP@0.5, and mAP@0.5:0.95, respectively. These results indicate that outcomes vary in real time with environmental parameters such as lighting and the capture device, and the analysis shows that such monitoring affects the results.
Given the complexity of crop growth environments in nature, where leaf backgrounds often include soil, weeds, and other plants, along with variable lighting conditions, and considering the small size of leaf spots and the wide variety of crop diseases with significant scale differences, this paper proposes a new BGM-YOLO model structure aimed at improving accuracy and inference speed. First, the GSBottleneck module is utilized to enhance the C2f module of the YOLOv8n model, leading to the introduction of the GSC2f module, which reduces computational costs and increases inference efficiency. Next, the model incorporates a multiscale bitemporal fusion module (BFM) to increase the effectiveness and robustness of feature fusion across different levels. Finally, we developed a median-enhanced spatial and channel attention block (MECS) that combines both channel and spatial attention mechanisms, effectively improving the capture and fusion of small-scale features. The experimental results demonstrate that the BGM-YOLO model achieves a 3.9% improvement in the mean average precision (mAP) over the original model. In crop disease detection tasks, the BGM-YOLO model has higher detection accuracy and a lower false negative rate, confirming its practical value in complex application scenarios.
The accurate and rapid detection of apple leaf diseases is a critical component of precision management in apple orchards. The existing deep-learning-based detection algorithms for apple leaf diseases typically demand high computational resources, which limits their practical applicability in orchard environments. Furthermore, the detection of apple leaf diseases in natural settings faces significant challenges due to the diversity of disease types, the varied morphology of affected areas, and the influence of factors such as lighting variations, leaf occlusions, and differences in disease severity. To address the above challenges, we constructed an apple leaf disease detection (ALD) dataset, which was collected from real-world scenarios, and we applied data augmentation techniques, resulting in a total of 9808 images. Based on the ALD dataset, we proposed a lightweight YOLO11n-based detection network, named CEFW-YOLO, designed to tackle the current issues in apple leaf disease identification. First, we designed a novel channel-wise squeeze convolution (CWSConv), which employs channel compression and standard convolution to reduce computational resource consumption, enhance the detection of small objects, and improve the model’s adaptability to the morphological diversity of apple leaf diseases and complex backgrounds. Second, we developed an enhanced cross-channel attention (ECCAttention) module and integrated it into the C2PSA_ECCAttention module. By extracting global information, combining horizontal and vertical convolutions, and strengthening cross-channel interactions, this module enables the model to more accurately capture disease features on apple leaves, thereby enhancing detection accuracy and robustness. Additionally, we introduced a new fine-grained multi-level linear attention (FMLAttention) module, which utilizes multi-level asymmetric convolutions and linear attention mechanisms to improve the model’s ability to capture fine-grained features and local details critical for disease detection. Finally, we incorporated the Wise-IoU (WIoU) loss function, which enhances the model’s ability to differentiate overlapping targets across multiple scales. A comprehensive evaluation of CEFW-YOLO was conducted, comparing its performance against state-of-the-art (SOTA) models. CEFW-YOLO achieved a 20.6% reduction in computational complexity. Compared to the original YOLO11n, it improved detection precision by 3.7%, with the mAP@0.5 and mAP@0.5:0.95 increasing by 7.6% and 5.2%, respectively. Notably, CEFW-YOLO outperformed advanced SOTA algorithms in apple leaf disease detection, underscoring its practical application potential in real-world orchard scenarios.
Traditional methods of plant disease detection are cumbersome and error-prone. Plant disease detection can help prevent crop losses and ensure food security. The system employs state-of-the-art deep learning techniques to automatically detect and classify plant diseases in agricultural fields. The proposed system addresses the challenges associated with identifying plant diseases, which often have similar visual characteristics, by using a combination of deep learning and computer vision methods, specifically YOLO (You Only Look Once) and a Faster Region-based Convolutional Neural Network (Faster R-CNN) with a ResNet-152 backbone. The system is trained on a custom dataset of images captured in an uncontrolled environment, focusing on two plant species, namely guava and mango, to achieve high accuracy in identifying diseases in real-world field conditions. The proposed system has the potential to revolutionize plant disease detection and help ensure food security.
No abstract available
Early and accurate detection of plant diseases is crucial for ensuring food security and minimizing crop losses. Deep learning-based object detection models, particularly those based on the You Only Look Once (YOLO) architecture, have shown promise in this area. This study compares the performance of five recent YOLO models - YOLOv5, YOLOv5-p6, YOLOv6, YOLOv8, and YOLOv9 - for plant disease detection using the PlantDoc dataset. The models were evaluated based on their accuracy, speed, and resource requirements. Our results indicate that YOLOv9-C and YOLOv8s demonstrated superior performance in terms of F1-score, recall, and mAP metrics. These models achieved F1-scores of 0.39 and 0.37 respectively, with YOLOv9-C leading in recall at 0.51 and both models tying at a best mAP@50 of 46%. Moreover, the highest F1-score was achieved by YOLOv5s at 0.42, indicating that this model also has potential for further exploration and optimization. The highest mAP50-95 was achieved by YOLOv8s at 36%, suggesting that this model may be particularly effective at detecting a wide range of plant diseases across different thresholds.
Artificial intelligence and deep learning models are utilised in health, IT, animal and plant research, and more. Maize, one of the most widely eaten crops globally, is susceptible to a wide variety of diseases that impede its development and reduce its output. The objective of this research work is to develop a deep learning-based model for the detection of illnesses affecting maize leaves. The constructed model not only predicts the disease but also furnishes illustrative visuals of leaf diseases, facilitating the identification of disease types. To do this, a dataset covering blight, common rust, gray leaf spot, and healthy leaves was obtained from Kaggle, a secondary source (PlantVillage). Data analysis was carried out in the cross-platform Anaconda Navigator environment using Python and Jupyter Notebook. The acquired data was used for both training and evaluating the models. The study presents a novel approach to plant disease detection using the YOLO deep learning model, implemented in Python and associated libraries. The YOLOv8 algorithm was employed to develop a maize leaf detection system, which outperformed algorithms such as CNN (84%), KNN (81%), Random Forest (85%), and SVM (82%), achieving an impressive accuracy of 99.8%. Limitations of the study include the focus on only three maize leaf diseases and the reliance on single-leaf images for detection. Future research should address environmental elements like temperature and humidity, include numerous leaves in a frame for disease identification, and create disease stage detection methods.
Accurate detection and segmentation of plant diseases are essential for sustaining agricultural productivity and global food security. This report conducts a comparative analysis of three state-of-the-art YOLO models - YOLOv5, YOLOv7, and YOLOv8 - emphasizing their efficiency in instance segmentation of plant diseases. The research employs a comprehensive dataset featuring various crops like rice, sugarcane, wheat, bell pepper, potato, and tomato, each suffering from different diseases. The methodology includes fine-tuning YOLO models with pretrained weights and optimizing them with the curated dataset. The assessment of performance is based on crucial metrics such as recall, mean average precision (mAP), and precision. YOLOv8 exhibits superior performance, with over 90% average precision across all disease categories, significantly outperforming YOLOv5 and YOLOv7. This report offers detailed insights into the architectural features, training procedures, and evaluation metrics of each model. It also addresses the implications of the findings for practical agricultural applications, highlighting the role of advanced deep learning techniques in improving crop protection and management strategies. Despite the positive results, the report acknowledges limitations like dataset dependency and challenges in real-world deployment.
Agriculture is vital for global food security, but plant leaf diseases pose escalating threats, causing significant crop losses and economic damage. Traditional diagnostic methods are often time-consuming and resource-intensive, prompting the need for efficient, scalable solutions. This research addresses these challenges by proposing DYL-Leaf, a lightweight, distilled model designed for detecting 13 classes of potato, rice, and tomato leaf diseases from the PlantVillage dataset. Leveraging Knowledge Distillation (KD), a lightweight student model with only 545,005 parameters is trained to emulate a larger, custom YOLO-based teacher model (2.6 M parameters). The methodology optimizes both hard and soft losses using temperature-scaled Kullback-Leibler divergence, ensuring the student model retains the teacher's knowledge while being computationally efficient. Results demonstrate that the student model not only matches but surpasses the teacher's performance, achieving a validation accuracy of 93.8% (vs. 92.9%), along with improved precision (94.00%), recall (93.23%), and F1-score (93.38%). Saliency maps were employed to interpret the model's decision-making process, confirming its ability to focus on disease-specific features. These findings highlight the effectiveness of KD in creating lightweight, high-performing models for plant disease classification. By reducing computational requirements while maintaining accuracy, DYL-Leaf is well-suited for deployment in resource-constrained agricultural environments.
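The hard/soft loss combination with temperature-scaled Kullback-Leibler divergence is a standard knowledge-distillation recipe; the sketch below shows that recipe in PyTorch. The temperature and weighting factor are chosen arbitrarily here, not taken from the paper.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 4.0, alpha: float = 0.5):
    """Hedged sketch of the hard/soft loss combination described above.

    The temperature and alpha weighting are illustrative assumptions,
    not the paper's actual hyperparameters.
    """
    # Hard loss: ordinary cross-entropy against ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    # Soft loss: temperature-scaled KL divergence to the teacher's distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    return alpha * hard + (1.0 - alpha) * soft
```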
Cabbage plants are a commodity needed by the community and an export commodity that must be of good quality and worth selling. There are two approaches to building detection systems, namely rule-based and image-based. Using images allows the system to be retrained on new data, resulting in a flexible system. The image is passed to the model, which then predicts the cabbage plant disease. The data used are images of four classes: Alternaria Spots, Healthy, Black Root, and White Rust. This research tests the YOLO model in building a detection system; the highest precision-confidence result across all labels is 78.5%, while in confusion-matrix testing the highest result is 0.67 for White Rust disease. This indicates that the YOLO model can identify diseases in cabbage plants based on the trained data with good results.
Agriculture plays an important role in feeding the global population; thus, timely and accurate plant disease diagnosis is crucial for yield protection. This research implements a hybrid deep learning system, using YOLOv8 to detect plant diseases and a Convolutional Neural Network (CNN) to classify disease severity on maize leaf samples. The model identifies infected areas in image frames of maize leaves using YOLOv8 detection, and then classifies disease severity as healthy, mild, moderate, or severe using CNN-based architectures such as ResNet50, EfficientNet, and MobileNet. Experimental assessments exhibit strong performance, with a mean Average Precision (mAP) of 90.2% for YOLOv8 detection and 89.8% accuracy for CNN severity classification. The study also applies Grad-CAM for explainability, which improves the model's interpretability and facilitates decision-making, helping farmers respond pre-emptively early in the season. The proposed system offers a scalable, interpretable solution for field use in monitoring disease development in maize, supporting precision agriculture and sustainable crop production.
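Grad-CAM itself is a well-documented technique; the minimal PyTorch sketch below shows how a class-activation heatmap can be obtained from the last convolutional layer of a severity classifier. The model, layer, and class index are placeholders, not the cited system's actual components.

```python
import torch

def grad_cam(model, conv_layer, image, class_idx):
    """Minimal Grad-CAM sketch for a CNN severity classifier.

    `model`, `conv_layer` (a convolutional layer near the output), and
    `class_idx` are placeholders; the paper's exact setup is not specified.
    """
    feats, grads = [], []
    h1 = conv_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = conv_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    try:
        model.zero_grad()
        score = model(image.unsqueeze(0))[0, class_idx]
        score.backward()
        # Weight each feature map by its average gradient, then ReLU.
        weights = grads[0].mean(dim=(2, 3), keepdim=True)
        cam = torch.relu((weights * feats[0]).sum(dim=1)).squeeze(0)
        return cam / (cam.max() + 1e-8)
    finally:
        h1.remove()
        h2.remove()
```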
No abstract available
Agriculture is one of the main economic pillars in many countries and regions, contributing significantly to the stable development of the national economy. However, the growth rate of agriculture is currently declining rapidly, and plant diseases and pests are increasingly prevalent due to various factors such as climate and pollution. In response to the detection and recognition of plant diseases and pests, we propose introducing a multi-channel BiFormer module into the basic YOLOv8 target detection algorithm, which can effectively improve recognition accuracy. In experiments, we tested our method using a cashew dataset, and the results showed that our method outperforms traditional methods in terms of precision (P), recall rate (R), and mean average precision (mAP), indicating its high effectiveness.
Agriculture is an important component of every country's economy, supplying the necessary resources to farmers and their families. The livelihoods of farmers are greatly threatened by crop diseases, so early identification of diseases in chilli plants at the right stage of growth is crucial for timely fertilizer recommendation. This paper presents an intelligent transfer learning technique to detect thrips and viruses in chilli plants in the early stages of development. Stage-by-stage datasets are collected from chilli plant orchards around Mysore district, Karnataka, to carry out the training using the YOLO model. The dataset consists of three growth stages of chilli plants (15, 25, and 35 days) captured in diverse environmental settings with multiple-resolution smartphones. Annotations are created at leaf level from whole plants for the classes healthy, viruses, and thrips across all three stages. Hyperparameters such as learning rate, the learning optimization algorithm, batch size, epochs, and loss functions are employed to fine-tune the YOLO v5, v6, and v7 models. To achieve high generalisation and reduce overfitting, a weight decay (weight_decay) of 0.0005 is employed. The evaluation showed that YOLOv7 outperformed the other YOLO models, achieving a mAP score of 0.80 in stage 1, 0.82 in stage 2, and 0.73 in stage 3 with minimal validation loss and remarkable inference speed.
No abstract available
No abstract available
Tomatoes are highly susceptible to numerous diseases that significantly reduce their yield and quality, posing critical challenges to global food security and sustainable agricultural practices. To address the shortcomings of existing detection methods in accuracy, computational efficiency, and scalability, this study proposes TomatoGuard-YOLO, an advanced, lightweight, and highly efficient detection framework based on an improved YOLOv10 architecture. The framework introduces two key innovations: the Multi-Path Inverted Residual Unit (MPIRU), which enhances multi-scale feature extraction and fusion, and the Dynamic Focusing Attention Framework (DFAF), which adaptively focuses on disease-relevant regions, substantially improving detection robustness. Additionally, the incorporation of the Focal-EIoU loss function refines bounding box matching accuracy and mitigates class imbalance. Experimental evaluations on a dedicated tomato disease detection dataset demonstrate that TomatoGuard-YOLO achieves an outstanding mAP50 of 94.23%, an inference speed of 129.64 FPS, and an ultra-compact model size of just 2.65 MB. These results establish TomatoGuard-YOLO as a transformative solution for intelligent plant disease management systems, offering unprecedented advancements in detection accuracy, speed, and model efficiency.
Rice leaf diseases significantly impact yield and quality. Traditional diagnostic methods rely on manual inspection and empirical knowledge, making them subjective and prone to errors. This study proposes an improved YOLOv8-based rice disease detection method (SSD-YOLO) to enhance diagnostic accuracy and efficiency. We introduce the Squeeze-and-Excitation Network (SENet) attention mechanism to optimize the Bottleneck structure of YOLOv8, improving feature extraction capabilities. Additionally, we employ a Dynamic Sample (DySample) lightweight dynamic upsampling module to address high similarity between rice diseases and backgrounds, enhancing sampling accuracy. Furthermore, Shape-aware Intersection over Union (ShapeIoU) Loss replaces the traditional Complete Intersection over Union (CIoU) loss function, boosting model performance in complex environments. We constructed a dataset of 3000 rice leaf disease images for experimental validation of the SSD-YOLO model. Results indicate that SSD-YOLO achieves average detection accuracies of 87.52%, 99.48%, and 98.99% for rice brown spot, rice blast, and bacterial blight respectively—improving upon original YOLOv8 by 11.11%, 1.73%, and 3.81%. The model remains compact at only 6MB while showing significant enhancements in both detection accuracy and speed, providing robust support for timely identification of rice diseases.
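The SENet attention referenced above is the standard Squeeze-and-Excitation block; the sketch below shows that mechanism on its own, without attempting to reproduce how SSD-YOLO wires it into the YOLOv8 Bottleneck.

```python
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard Squeeze-and-Excitation channel attention, as in SENet.

    How SSD-YOLO integrates this into the YOLOv8 Bottleneck is not detailed
    in the abstract; this only shows the SE mechanism itself.
    """
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global context per channel
        self.fc = nn.Sequential(                        # excitation: learn channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w
```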
No abstract available
Plant diseases pose a major threat to agricultural productivity and economies dependent on it. Monitoring plant growth and phenotypes is vital for early disease detection. In Indian agriculture, black-gram (Vigna mungo) is an important pulse crop afflicted by viral infections like Urdbean Leaf Crinkle Virus (ULCV), causing stunted growth and crinkled leaves. Such viral epidemics lead to massive crop losses and financial distress for farmers. According to the FAO, plant diseases cost countries $220 billion annually. Hence, there is a need for quick and accurate diagnosis of crop diseases like ULCV. Recent advances in computer vision and image processing provide promising techniques for automated non-invasive disease detection using leaf images. The key steps involve image pre-processing, segmentation, informative feature extraction, and training machine learning models for reliable classification. In this work, an automated ULCV detection system is developed using black gram leaf images. The Grey Level Co-occurrence Matrix (GLCM) technique extracts discriminative features from leaves. Subsequently, a deep convolutional neural network called YOLO (You Only Look Once) is leveraged to accurately diagnose ULCV based on the extracted features. Extensive experiments demonstrate the effectiveness of the GLCM-YOLO pipeline in identifying ULCV-infected leaves with high precision. Such automated diagnosis can aid farmers by providing early disease alerts, thereby reducing crop losses due to viral epidemics.
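GLCM texture features of the kind described can be computed with scikit-image; the snippet below is a generic example, and the distances, angles, and property set are illustrative choices rather than the paper's configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19 spelling

def glcm_features(gray_leaf: np.ndarray) -> np.ndarray:
    """Extract a small GLCM texture descriptor from a grayscale leaf image.

    Distances, angles, and the chosen properties are illustrative; the
    paper's exact GLCM configuration is not given in the abstract.
    """
    # Build the co-occurrence matrix from an 8-bit grayscale image.
    glcm = graycomatrix(gray_leaf.astype(np.uint8),
                        distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])
```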
Onion crops are affected by many diseases at different stages of growth, resulting in significant yield loss. The early detection of diseases helps in the timely incorporation of management practices, thereby reducing yield losses. However, the manual identification of plant diseases requires considerable effort and is prone to mistakes. Thus, adopting cutting-edge technologies such as machine learning (ML) and deep learning (DL) can help overcome these difficulties by enabling the early detection of plant diseases. This study presents a cross-layer integration of the YOLOv8 architecture for the detection of onion leaf diseases, viz. anthracnose, Stemphylium blight, purple blotch (PB), and Twister disease. The experimental results demonstrate that the customized YOLOv8 model, YOLO-ODD, integrated with CBAM and DTAH attention modules, outperforms the YOLOv5 and YOLOv8 base models in most disease categories, particularly in detecting anthracnose, purple blotch, and Twister disease. The proposed YOLOv8 model achieved the highest overall accuracy of 77.30%, precision of 81.50%, and recall of 72.10%; this YOLOv8-based deep learning approach can thus detect and classify major onion foliar diseases while optimizing for accuracy, real-time application, and adaptability in diverse field conditions.
Corn is one of the primary carbohydrate-rich food commodities in Southeast Asian countries, including Indonesia. Corn production is highly dependent on the health of the corn plant, and infected plants decrease productivity. Corn farmers usually rely on conventional methods to control diseases, but these methods are not effective or efficient because they require a long time and a lot of human labor. Deep learning-based plant disease detection has recently been used for early disease detection in agriculture. In this work, we used convolutional neural network algorithms, namely YOLO-v5 and YOLO-v8, to detect infected corn leaves in the public 'Corn Leaf Infection Dataset' from the Kaggle repository. We compared the mean average precision (mAP) at mAP 50 and mAP 50-95 between YOLO-v5 and YOLO-v8. YOLO-v8 showed better accuracy, with an mAP 50 of 0.965 and an mAP 50-95 of 0.727. YOLO-v8 also produced more detections (12) than YOLO-v5 (11). Both YOLO algorithms required about 2.49 to 3.75 hours to detect the infected corn leaves. The trained models could be an effective solution for early disease detection in future corn plantations.
Accurate and efficient detection of citrus leaf diseases is crucial for ensuring the quality and yield of global citrus production. However, many existing agricultural disease detection methods face significant challenges, including overlapping leaf occlusion, difficulty in identifying small lesions, and interference from complex backgrounds. These limitations often lead to reduced accuracy and efficiency of object detection. Moreover, current models generally necessitate significant computational resources and possess substantial model sizes, which restrict their practical applicability and operational convenience. To tackle these issues, this study presents a novel model named YOLO-Citrus. It is a lightweight and efficient YOLOv11-based model designed to enhance the precision of detection while simultaneously minimizing computational expenses and the size of the model. This makes it more suitable for practical agricultural applications. The proposed solution incorporates three major innovations: the C3K2-STA module, the ADown module, and the Wise-Inner-MPDIoU loss function. In particular, YOLO-Citrus utilizes Star-Triplet Attention by embedding Triplet Attention into the Star Block to enhance bottleneck performance in C3K2-STA. It also adopts the ADown module as a lightweight and effective downsampling strategy and introduces the Wise-Inner-MPDIoU loss to facilitate optimized bounding box regression and enhanced detection accuracy. These advancements enable high detection accuracy with substantially reduced computational requirements. The experimental results demonstrate that YOLO-Citrus attains 96.6% mAP@0.5, representing an improvement of 1.4 percentage points over the YOLOv11s baseline (95.2%). Furthermore, it reaches 81.6% mAP@0.5:0.95, i.e., an enhancement of 1.3 percentage points compared to the baseline value of 80.3%. The optimized model delivers considerable efficiency gains, with model size reduced by 25.0% from 19.2 MB to 14.4 MB and computational cost decreased by 20.2% from 21.3 to 17.0 GFlops. Comparative analysis has confirmed that YOLO-Citrus performs better than other models in terms of comprehensive detection capability. These performance enhancements validate the model’s effectiveness in real-world orchard conditions, offering practical solutions for early disease detection, precision treatment, and yield protection in citrus cultivation.
With the increasing threat of agricultural diseases to crop production, traditional manual detection methods are inefficient and highly susceptible to environmental factors, making an efficient and automated disease detection method urgently needed. Existing deep learning models still face challenges in detecting small targets and recognizing multi-scale lesions in complex backgrounds, particularly in terms of multi-feature fusion. To address these issues, this paper proposes an improved YOLO-LF model by introducing modules such as CSPPA (Cross-Stage Partial with Pyramid Attention), SEA (SeaFormer Attention), and LGCK (Local Gaussian Convolution Kernel), aiming to improve the accuracy and efficiency of small target disease detection. Specifically, the CSPPA module enhances multi-scale feature fusion, the SEA module strengthens the attention mechanism for contextual and local information to improve detection accuracy, and the LGCK module increases the model's sensitivity to small lesion areas. Experimental results show that the proposed YOLO-LF model achieves significant performance improvements on the Plant Pathology 2020 - FGVC7 and Plant Pathology 2021 - FGVC8 datasets, particularly in mAP@0.5 and mAP@0.5:0.95, outperforming existing mainstream models. These results indicate that the proposed method effectively handles complex backgrounds and small target detection tasks in agricultural disease detection, demonstrating high practical value.
Agriculture plays a pivotal role in India's economy, and the timely detection of plant infections is essential to safeguard crops and prevent the further spread of diseases. The conventional approach involves manual inspection of plant leaves to identify the specific type of disease, a task typically carried out by farmers or plant pathologists. In previous studies, you only look once (YOLO) and faster region-based convolutional neural network (R-CNN) machine learning algorithms were applied to a dataset of 2403 images for detecting objects on tomato leaves and achieved accuracies of 86 and 82 percent. In this paper, a deep convolutional neural network (DCNN) model with a new separate, shift, and merge based AlexNet50 framework (SSMAN) is proposed to predict the disease at an earlier stage with higher accuracy. Among various pre-trained deep models, AlexNet emerges as the top performer, achieving the highest accuracy in disease classification. SSMAN can address anomalies in images by employing a class decomposition approach to scrutinize class boundaries. With the pre-trained new framework, AlexNet exhibits a notable accuracy of 98.30% in identifying tomato leaf diseases from images, superior to the original AlexNet architecture as well as traditional classification methods with other algorithms.
Plant diseases significantly undermine agricultural productivity. This study introduces an improved YOLOv10n model named WD-YOLO (Weighted and Double-scale YOLO), an advanced architecture for efficient plant disease detection. The PlantDoc dataset was initially enhanced using data augmentation techniques. Subsequently, we developed the DSConv module—a novel convolutional structure employing double-scale weighted convolutions that dynamically adjust to different scale perceptions and optimize attention allocation. This module replaces the conventional Conv module in YOLOv10. Furthermore, the WTConcat module was introduced, dynamically merging weighted concatenation with a channel attention mechanism to replace the Concat module in YOLOv10. The training of WD-YOLO incorporated knowledge distillation techniques using YOLOv10l as a teacher model to refine and compress the architectural learning. Empirical results reveal that WD-YOLO achieved an mAP50 of 65.4%, outperforming YOLOv10n by 9.1% without data augmentation and YOLOv10l by 2.3%, despite having significantly fewer parameters (9.3 times less than YOLOv10l), demonstrating substantial gains in detection efficiency and model compactness.
No abstract available
For sustainable agriculture and food security, it is crucial that diseases of crops are correctly identified along with their severity. With the increasing availability of annotated image datasets and computational resources, deep learning has become a promising solution to automate plant health surveillance. This paper provides a thorough performance assessment of well-known hybrid deep learning architectures for plant disease and severity classification. The study covers a variety of models including CNN-based models, CNN-LSTM hybrids, attention mechanisms, and lightweight object detection architectures like YOLO and EfficientNet derivatives. We evaluate these methods using several benchmark datasets as well as field-acquired datasets of rice, cotton, tomato, and sorghum. To evaluate these models across different disease types and severity stages, accuracy and F1-score are used for comparison. Experiments show that hybrid models often outperform their pure CNN counterparts, especially for severity detection of multi-stage diseases. This paper also discusses the shortcomings of current methods and identifies promising research directions to further improve generalization, interpretability, and real-time application of models. The results are expected to help researchers and developers choose the right architecture for precision agriculture applications.
We present a deep-learning approach for detecting diseases across multiple crop types using the PlantDoc dataset. Our method combines a classification network and an object detection network to identify both the disease category and the location of infected regions in an image. We first preprocess each image and feed it into a convolutional neural network (CNN) that outputs the most likely disease class. Simultaneously, a detection pipeline (e.g. a YOLO-based model) localizes diseased leaf areas with bounding boxes. We evaluate our approach on the PlantDoc dataset (which contains 13 plant species and 30 disease classes) and report performance in terms of accuracy, precision, recall, F1-score, and mean Average Precision (mAP). Our experiments show that the proposed multi-crop model achieves high accuracy and robust localization across diverse plant types. These results suggest that our system can serve as a useful tool for scalable plant disease monitoring in real-world agricultural settings.
Plant diseases are an important issue affecting crop yield and food security around the world. Conventional manual detection techniques are cumbersome and inaccurate. The effective implementation of deep learning (DL) has recently enabled automated plant disease detection with high accuracy, particularly through convolutional neural networks (CNNs) and object detection schemes such as YOLO. Nonetheless, many of these models face challenges with interpretability, deployment efficiency, and generalizability to real-life agricultural settings. The approach proposed in this paper is a hybrid DL design that combines YOLOv8 with lightweight GhostNet modules and Coordinate Attention (CA) mechanisms to enhance accuracy and optimize inference. The research is carried out comparatively using public datasets (e.g., PlantVillage, PlantDoc). The proposed technique achieves higher accuracy (95%), a smaller model size (under 10 MB), and faster inference (under 20 ms), allowing it to run on smaller devices (e.g., mobile/edge).
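GhostNet's central idea is to generate part of the feature maps with cheap depthwise convolutions; the module below sketches that idea in PyTorch. How the cited work couples it with YOLOv8 and Coordinate Attention is not specified, and the ratio and kernel size here are assumptions.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Sketch of a GhostNet-style module: a few 'real' feature maps plus
    cheap depthwise-generated 'ghost' maps. The ratio and kernel size are
    illustrative; the cited work's configuration is not given.
    """
    def __init__(self, in_ch: int, out_ch: int, ratio: int = 2, dw_k: int = 3):
        super().__init__()
        init_ch = out_ch // ratio                     # assumes out_ch divisible by ratio
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, 1, bias=False),
            nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(                   # depthwise conv generates ghost features
            nn.Conv2d(init_ch, out_ch - init_ch, dw_k, padding=dw_k // 2,
                      groups=init_ch, bias=False),
            nn.BatchNorm2d(out_ch - init_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)
```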
Pest infestation is one of the leading agricultural problems, causing heavy yield losses and threatening food security. Conventional forms of pest control are usually inefficient, consume large quantities of chemicals, and have long response times. This paper describes a smart pest monitoring and management system combining deep learning, Internet of Things (IoT), and Unmanned Aerial Vehicle (UAV) technologies to detect pests in real time and provide decision support. An annotated mixed dataset, consisting of primary pest images gathered at the Federal College of Agriculture, Ishiagu, and secondary data from the Kaggle repository, was used to train a model based on YOLOv10. The model had a precision of 0.84, a recall of 0.82, and a mean Average Precision (mAP@50) of 0.83, indicating that it was effective in detecting and classifying a variety of pest species at an acceptable level of accuracy. A rule-based recommendation algorithm was implemented to offer specific pesticide recommendations depending on the identified pest type, and an IoT-based email notification module provided real-time alerts to farmers so they could take immediate action. To realise remote sensing and aerial pest surveillance, the UAV was designed and simulated in the Simulink environment to ensure efficient coverage and reliable data capture. The integrated system offers a smart and long-term solution to pest management by eliminating false alarms, reducing pesticide waste, and improving reaction time. The study is limited by the fact that the system was implemented only as a simulation and lacks real-world validation; future studies should therefore pursue real-world implementation for pest identification.
Present-day issues in the cotton-growing industry, namely yield estimation, pest effects, and growth phase diagnostics, call for integrated, scalable monitoring solutions. This paper presents Cotton Multitask Learning (CMTL), a transformer-driven multitask framework that performs three major agronomic tasks from UAV images in one pass: boll detection, pest damage segmentation, and phenological stage classification. Rather than maintaining separate pipelines, CMTL merges these objectives using a Cross-Level Multi-Granular Encoder (CLMGE) and a Multitask Self-Distilled Attention Fusion (MSDAF) module, which together allow mutual learning across tasks while preserving task-specific features. A biologically guided Stage Consistency Loss constrains the network to growth-stage transitions that occur in reality. CMTL was evaluated on a tri-source UAV dataset fusing over 2100 labeled images from public and private collections, representing a variety of crop stages and conditions. The model outperformed state-of-the-art baselines on all tasks, achieving 0.913 mAP for boll detection, 0.832 IoU for pest segmentation, and 0.936 accuracy for growth stage classification. Additionally, it runs efficiently on edge devices such as the NVIDIA Jetson Xavier NX, making it suitable for deployment. These outcomes demonstrate CMTL's promise as a single, productive instrument for aerial crop intelligence in precision cotton agriculture.
The invasive insect brown marmorated stink bug (BMSB) is an emerging pest of global importance, as it destroys fruits and seeds and caused estimated damages of €588 million to crops in 2019 in Northern Italy alone. An open challenge is to improve monitoring of BMSB in order to deploy countermeasures more efficiently and to increase consumer confidence in the end product. The Horizon 2020 Haly.ID project seeks to reduce or eliminate dependence on conventional monitoring tools and practices, such as traps, baits, visual inspections, sweep netting, and tree beating. In their place, the project proposes the use of unmanned aerial vehicle (UAV) and Internet of Things (IoT) solutions for monitoring the insect population and investigates novel methods for enhancing the quality of fruit in the market. In this work, we focus on the novel autonomous IoT insect monitoring system developed in Haly.ID, consisting of multiple innovative solutions for BMSB monitoring and trusted data management. In particular, this article describes the challenges faced when integrating and deploying this multi-part monitoring system and aims at presenting valuable “lessons learned” for the realization of future deployments. We show that massive over-provisioning of power supply and network speed allows the system to be adapted at run-time to reflect changing project requirements, and to conduct experiments remotely. At the same time, over-provisioning introduces new weak points impacting system reliability, such as cables that can be unplugged or damaged.
Combining near-earth remote sensing spectral imaging technology with unmanned aerial vehicle (UAV) remote sensing technology, we measured the Ningqi No. 10 goji variety under conditions of health, infestation by psyllids, and infestation by gall mites in Shizuishan City, Ningxia Hui Autonomous Region. The results indicate that the red and near-infrared spectral bands are particularly sensitive for detecting pest and disease conditions in goji. Using UAV-measured data, a remote sensing monitoring model for goji pests and diseases was developed and validated against near-earth remote sensing hyperspectral data. A fully connected neural network achieved an accuracy of over 96.82% in classifying gall mite infestations, thereby enhancing the precision of pest and disease monitoring in goji and demonstrating the reliability of UAV remote sensing. The pest and disease remote sensing monitoring model was used to visually present predictive results on hyperspectral images of goji, achieving data visualization.
Pests and diseases greatly reduce crop quality and yield; intelligent agriculture therefore relies on effective pest and disease control. UAVs have become a crucial remote sensing (RS) tool for agricultural process monitoring and management. This study examines major advances in this field using bibliometric methodologies, including author co-occurrence and keyword co-contribution analyses. The suggested technique involves preprocessing, feature extraction, and model training. Preprocessing improves data quality, and UAV images are used for feature extraction, focusing on canopy structure and height. The prediction model is trained with PPO. Compared to state-of-the-art GANs and LSTM networks, the proposed model performs best, consistently outperforming competitors with 91.17 percent accuracy. The study suggests employing UAVs in smart farming to reduce pests and diseases; the proposed model's accuracy and reliability improve crop quality and production by addressing agricultural monitoring and management problems.
Pest infestations pose a significant threat to agricultural productivity, necessitating the development of efficient and automated monitoring solutions. This study presents a UAV-based pest detection framework leveraging the YOLOv8 deep learning model for precise pest identification and localization. The dataset comprises 3,150 RGB images across nine pest categories, with 2,700 images allocated for training and 450 for testing. The model's performance is evaluated using multiple metrics, including localization error and inference time. Among the tested models, YOLOv8-L demonstrated superior accuracy with a localization error of 2.47 px, outperforming Faster R-CNN (3.02 px) and SSD (5.21 px). Additionally, YOLOv8-S exhibited the fastest inference time (8.3 ms per image), highlighting its suitability for real-time deployment. A pest density analysis across different farm sections revealed varying infestation levels, with Section C exhibiting the highest density (15.2 detections/m2) and Section D the lowest (7.5 detections/m2). The proposed approach enhances pest monitoring efficiency by integrating UAV-based image acquisition with deep learning-based detection, enabling early intervention and reducing crop damage risks. Furthermore, a heatmap visualization provides an intuitive representation of pest distribution, assisting farmers in targeted pest control. This research demonstrates the effectiveness of UAV-based pest detection in large-scale agricultural applications. Future work will explore advanced imaging techniques, such as hyperspectral analysis, and integrate predictive modeling to anticipate infestation trends. The proposed system offers a scalable, high-precision solution for automated pest monitoring, contributing to sustainable and data-driven precision agriculture.
A vertical take-off and landing unmanned aerial vehicle (VTOL-UAV) was used to assess the feasibility of a digital image-based agricultural surveillance system. The VTOL-UAV system has advantages in terms of efficiency, adaptability, and capacity to collect data across a variety of terrains for agricultural yield estimation, crop health monitoring, and early pest identification. The developed VTOL-UAV was constructed on the Skywalker platform, with a wingspan of 1800 mm and a length of 1300 mm. Agricultural images were collected under various field settings using a digital camera (Canon IXUS 185, 20 MP, 8x zoom). The study took place in an agricultural field at the University Teuku Umar, Aceh, Indonesia. The technical performance, aerodynamics, and stability of the VTOL-UAV system during hover and cruise were examined. Flight plan parameters included speeds between 10 and 20 m/s, a camera angle of 90 degrees looking vertically down, altitudes between 50 m and 200 m, and flight overlap between 60 and 70%, in accordance with the prepared flight plan. The findings showed the viability of using imagery captured by a VTOL-UAV equipped with a low-cost camera for agricultural land mapping. Operational flexibility was increased by the capacity to switch between vertical take-off and horizontal flight, particularly in areas with restricted access. The study revealed that the system has to maintain the altitude and ground reference within the mission planning to ensure stable flight orientation and to reduce vibrations and image distortions. The resulting agricultural ortho-photographs and digital surface models are beneficial for accurate mapping, effective monitoring, and informed decision-making in agricultural applications, particularly for smallholder farm management.
In recent years, the aerial-ground federated learning architecture, which combines unmanned aerial vehicles (UAVs) with federated learning, has shown great potential for crop health monitoring in smart agriculture. However, optimizing federated learning for these UAV-based systems in precision agriculture scenarios faces multiple challenges, such as reducing latency, improving accuracy, and ensuring coverage, necessitating a balance between comprehensiveness and efficiency during model training. Existing strategies mainly focus on reducing training time but often overlook the fairness of model training, which can lead to insufficient classification accuracy or incomplete coverage for certain crop diseases, thereby affecting the accuracy of pest control decisions. To address this issue, this study proposes a federated learning optimization scheme for precision agriculture, introducing a novel fairness metric and multi-criteria client selection mechanism based on the upper confidence bound. Simulation results show that the proposed MCCS method achieves superior performance compared to existing DCS and GCS baselines. Specifically, MCCS reduces the total training latency by 20% and 27% respectively while improving the model accuracy by 6.9% and 12%.
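The abstract mentions client selection based on the upper confidence bound; the sketch below shows a generic UCB-style selection rule. The reward definition, exploration constant, and the `select_clients` helper are hypothetical, since the paper's multi-criteria formulation is not given.

```python
import math
import random

def select_clients(stats, k, round_t, c=2.0):
    """Hedged sketch of upper-confidence-bound client selection for
    federated learning. `stats` maps client_id -> (mean_reward, times_selected);
    the reward definition (e.g. accuracy gain, fairness score) is an assumption.
    """
    scores = {}
    for cid, (mean_reward, n_sel) in stats.items():
        if n_sel == 0:
            scores[cid] = float("inf")            # explore unseen clients first
        else:
            bonus = math.sqrt(c * math.log(round_t + 1) / n_sel)
            scores[cid] = mean_reward + bonus      # exploit + explore
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Example: pick 3 of 5 UAV-connected clients in round 10 (placeholder stats).
stats = {f"client_{i}": (random.random(), random.randint(0, 5)) for i in range(5)}
print(select_clients(stats, k=3, round_t=10))
```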
Traditional monitoring methods rely on manual field surveys, which are subjective, inefficient, and unable to meet the demand for large-scale, rapid monitoring. By using unmanned aerial vehicles (UAVs) to capture high-resolution images of rice canopy diseases and pests, combined with deep learning (DL) techniques, accurate and timely identification of diseases and pests can be achieved. We propose a method for identifying rice canopy diseases and pests using an improved YOLOv5 model (YOLOv5_DWMix). By incorporating depthwise separable convolutions, the MixConv module, attention mechanisms, and optimized loss functions into the YOLOv5 backbone, the model's speed, feature extraction capability, and robustness are significantly enhanced. Additionally, to tackle the challenges posed by complex field environments and small datasets, image augmentation is employed to train the YOLOv5_DWMix model for the recognition of four common rice canopy diseases and pests. Results show that the improved YOLOv5 model achieves 95.6% average precision in detecting these diseases and pests, a 4.8% improvement over the original YOLOv5 model. The YOLOv5_DWMix model is effective and advanced in identifying rice diseases and pests, offering a solid foundation for large-scale, regional monitoring.
Real-time monitoring of crop growth has become indispensable in modern agriculture, facilitating prompt detection of crop stress, diseases, and nutrient deficiencies by farmers. This study investigates the feasibility of leveraging unmanned aerial vehicles (UAVs) and deep learning algorithms for the real-time monitoring of Vicia faba L. crop growth stages, aimed at informing decisions related to irrigation, fertilization, and pest management. The study introduces a cutting-edge deep learning model tailored for accurate real-time monitoring of diverse growth stages based on neural architecture search (NAS). This model is benchmarked against seven other rigorously trained models using a diverse dataset of 2530 UAV-captured images, encompassing varied and complex lighting and background conditions. We meticulously fine-tuned the training parameters, closely examining and comparing the performance of each model. Notably, the NAS-based architecture achieved outstanding results, with a precision rate of 95.80%, a recall rate of 98.80%, and a mAP@0.5:0.95 value of 71.30%. It strikes an optimal balance between precision, speed, and model size compared to alternative neural network models. The mean average precision (mAP) stands at 95.50%, and it maintains a refresh rate of 24.8 frames per second (FPS) within a compact model size of 256 megabytes (MB). The chosen model achieves an inference speed of 40.32 milliseconds per frame during testing with new images, running on an NVIDIA Quadro P1000 GPU.
BACKGROUND: The rice leafroller is a serious threat to rice production, and monitoring the damage it causes is essential for effective pest management. Owing to the difficulty of collecting good-quality images and the lack of high-performing identification methods, studies offering fast and accurate identification of rice leafroller damage are rare. In this study, we employed an ultra-lightweight unmanned aerial vehicle (UAV) to eliminate the influence of the downwash flow field and obtain very high-resolution images of the areas damaged by the rice leafroller. We used deep learning technology and the segmentation model Attention U-Net to recognize the damaged area, and further present a method to count the damaged patches from the segmented area. RESULTS: Attention U-Net achieves high performance, with an F1 score of 0.908. Further analysis indicates that the deep learning model performs better than the traditional image classification method, Random Forest (RF). The traditional RF method produces many false alarms around leaf edges and is sensitive to changes in brightness. Validation based on the ground survey indicates that the UAV and deep learning-based method achieve reasonable accuracy in identifying damage patches, with a coefficient of determination of 0.879. The spatial distribution of the damage is uneven, and the UAV-based image collection method provides a dense and accurate way to recognize the damaged area. CONCLUSION: Overall, this study presents a vision for efficiently monitoring the damage caused by the rice leafroller with an ultra-light UAV, and contributes to effectively controlling and managing this hazardous pest.
No abstract available
The grain aphid (Sitobion avenae) is a major pest of winter wheat, causing significant yield losses through direct feeding and as a vector of barley yellow dwarf virus (BYDV). Populations can increase rapidly under moderate temperatures and low rainfall, potentially leading to severe infestations if not effectively monitored and managed. This study develops and validates a UAV-based RGB imaging methodology, which relies on deep learning for accurate detection and assessment of Sitobion avenae in wheat crops. The RGB images are preliminarily filtered using “histogram equalization”, which allows for highlighting the infested areas. An experimental study was conducted under the specific climatic conditions of Southern Dobruja, Bulgaria, to quantify Sitobion avenae infestations. Three neural network architectures were used (DeepLabv3, U-Net, and PSPNet) in combination with three backbone models: ResNet34, ResNet50, and ResNet101. The optimal combination was determined to be the U-Net + ResNet101 model, which achieved an average F1 score of 0.982 and a Cohen’s Kappa coefficient of 0.966. The results demonstrate that UAV-based detection allows precise mapping of infested areas, enabling targeted insecticide applications and effective pest management while substantially reducing chemical inputs. These findings indicate that the proposed framework provides a reliable and scalable tool for precision pest monitoring and control in winter wheat.
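Histogram equalization as a pre-filtering step can be done with OpenCV; the snippet below equalizes the value channel of an HSV-converted frame, which is one common choice. The paper does not state which channel(s) it equalizes, so this is an illustrative assumption.

```python
import cv2

def equalize_rgb(path: str):
    """Histogram-equalize a UAV RGB frame as a preprocessing sketch.

    Equalizing only the V (value) channel of HSV is one common choice;
    the paper's exact preprocessing configuration is not specified.
    """
    bgr = cv2.imread(path)                                    # OpenCV loads images as BGR
    h, s, v = cv2.split(cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV))
    v = cv2.equalizeHist(v)                                   # stretch the brightness histogram
    return cv2.cvtColor(cv2.merge([h, s, v]), cv2.COLOR_HSV2BGR)
```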
This study aims to explore the application of unmanned aerial vehicle (UAV) remote sensing technology in agricultural pest and disease monitoring to improve monitoring efficiency and accuracy. Through a literature review and empirical research, the current application status, advantages, and challenges of UAV remote sensing technology in agricultural pest and disease monitoring were systematically studied. The study first analyzed and summarized the principles and characteristics of UAV remote sensing technology, and then conducted case studies and field investigations on its specific application in agricultural pest and disease monitoring. The research results show that UAV remote sensing technology has the advantages of high resolution, high spatiotemporal coverage, and high efficiency in agricultural pest and disease monitoring. It can provide multi-source data acquisition and analysis during crop growth, enabling the rapid identification and dynamic monitoring of agricultural pests and diseases. The findings of this study have important implications for guiding the application and promotion of UAV remote sensing technology in agricultural pest and disease monitoring.
Timely, rapid, and accurate near real-time observations are urgently needed to monitor corn armyworm damage, because the rapid expansion of armyworm can lead to severe yield losses. This study therefore explores the potential of machine learning algorithms for identifying armyworm-infested areas automatically and accurately from a multispectral Unmanned Aerial Vehicle (UAV) dataset. The study area is in Beicuizhuang Village, Langfang City, Hebei Province, the main corn-producing area of the North China Plain. Firstly, we identified the optimal combination of image features by Gini importance and compared four machine learning methods: Random Forest (RF), Multilayer Perceptron (MLP), Naive Bayes Classifier (NB), and Support Vector Machine (SVM). RF proved the most promising, with the highest Kappa and OA of 0.9709 and 0.9850, respectively. Secondly, the armyworm-infested and healthy corn areas were predicted by an optimized RF model on the UAV dataset, and armyworm incidence levels were subsequently classified. Thirdly, the relationship between the spectral characteristics of different bands and pest incidence levels in the Sentinel-2 and UAV images was analyzed; band B3 in the UAV images and band B6 in the Sentinel-2 image were less sensitive to armyworm incidence levels, so the Sentinel-2 image was used to monitor armyworm in two towns. The optimized dataset and RF model are effective and reliable, and can be used to identify corn damage by armyworm from UAV images accurately and automatically at field scale.
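A Random Forest classifier with Gini-based feature ranking of the kind used above can be set up in a few lines with scikit-learn; the snippet below uses random placeholder data in place of the study's UAV band features and labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder data: per-pixel (or per-plot) spectral features from UAV bands/indices,
# with labels 0 = healthy corn, 1 = armyworm-infested. Not the study's actual data.
X = np.random.rand(500, 6)
y = np.random.randint(0, 2, 500)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X, y)

# Gini importance per feature, analogous to the feature-selection step described above.
for i, imp in enumerate(rf.feature_importances_):
    print(f"feature {i}: importance {imp:.3f}")
```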
No abstract available
Aiming at the technical bottleneck of monitoring stalk, pest, and grass damage in the middle and lower parts of rice plants, this paper proposes a UAV-based image information acquisition method and disease prediction algorithm model, which provides an efficient and low-cost solution for accurate early monitoring of rice diseases and helps improve the scientific and intelligent level of agricultural disease prevention and control. Firstly, the UAV image acquisition system was designed and equipped with an automatic telescopic rod, a 360° automatic turntable, and high-definition image sensing equipment to achieve multi-angle, high-precision data acquisition in the middle and lower regions of rice plants. A path planning algorithm and an ant colony algorithm were introduced to design the UAV flight path and improve the coverage and stability of image acquisition. For image information processing, this paper proposes a multi-dimensional data fusion scheme that combines RGB, infrared, and hyperspectral data to achieve deep fusion of information in different bands. For disease prediction, the YOLOv8 target detection algorithm and a lightweight Transformer network are adopted to improve detection performance on small targets. The experimental results showed that the average accuracy of the YOLOv8 model (mAP@0.5) in detecting rice curl disease was 90.13%, which was much higher than that of traditional methods such as Faster R-CNN and SSD. In addition, 1496 disease images were collected to build an in-house dataset, and the system showed good stability and practicability in the field environment.
Our research focuses on addressing the challenge of crop diseases and pest infestations in agriculture by utilizing UAV technology for improved crop monitoring through unmanned aerial vehicles (UAVs) and enhancing the detection and classification of agricultural pests. Traditional approaches often require arduous manual feature extraction or computationally demanding deep learning (DL) techniques. To address this, we introduce an optimized model tailored specifically for UAV-based applications. Our alterations to the YOLOv5s model, which include advanced attention modules, expanded cross-stage partial network (CSP) modules, and refined multiscale feature extraction mechanisms, enable precise pest detection and classification. Inspired by the efficiency and versatility of UAVs, our study strives to revolutionize pest management in sustainable agriculture while also detecting and preventing crop diseases. We conducted rigorous testing on a medium-scale dataset, identifying five agricultural pests, namely ants, grasshoppers, palm weevils, shield bugs, and wasps. Our comprehensive experimental analysis showcases superior performance compared to various YOLOv5 model versions. The proposed model obtained higher performance, with an average precision of 96.0%, an average recall of 93.0%, and a mean average precision (mAP) of 95.0%. Furthermore, the inherent capabilities of UAVs, combined with the YOLOv5s model tested here, could offer a reliable solution for real-time pest detection, demonstrating significant potential to optimize and improve agricultural production within a drone-centric ecosystem.
As an essential pillar industry of the national economy, agriculture and forestry are often seriously threatened by diseases and pests such as stem borers and other stem-boring insects, which lead to crop yield reduction and even crop failure. For this reason, establishing a scientific and practical monitoring and early warning mechanism and implementing precise prevention and control measures are important for ensuring the safety of agricultural production and ecological security. The purpose of this paper is to explore the advantages of UAV remote sensing in pest control applications in agriculture and forestry. Meanwhile, this paper focuses on analyzing the current status of applying comprehensive vegetation indices combined with deep learning methods. This paper proposes two possible directions for future research: one is to explore synergistic application modes of satellite remote sensing and UAV remote sensing; the other is to study how the spectral features of different crops change across growth stages in order to optimize the selection of monitoring parameters. The analyses in this paper can provide a valuable scientific basis for advancing the practical application of UAV remote sensing technology in the monitoring of pests and diseases in agriculture and forestry, and thus promote the sustainable development of pest control in agriculture and forestry.
No abstract available
As a crucial economic crop, the health status of cotton directly impacts farmers' income and the national economy. Therefore, timely and accurate detection and identification of cotton diseases and pests are of significant importance, aiding in reducing the adverse effects of diseases and pests on cotton yield and quality. The existing research struggles to address the balance between resource consumption and detection accuracy in cotton disease and pest detection. Moreover, diseases and pests often occur beneath the canopy, and the orthorectification of drone imagery may result in insufficient feature information and prolonged processing time, among other issues. To address the aforementioned issues, this article proposes a precise detection method for cotton Verticillium wilt based on unmanned aerial vehicle multiangle remote sensing guided by a satellite time-series monitoring model. Specifically, first, combining Sentinel-1 microwave and Sentinel-2 optical time-series images, we constructed a cotton Verticillium wilt monitoring model based on extreme gradient boosting algorithm to identify areas affected by the disease invasion. Subsequently, after identifying the blocks affected by the disease, we collected multispectral remote sensing data captured from multiple angles by unmanned aerial vehicles and compared different combinations of vegetation indices and bands. Finally, we constructed a precise classification model for cotton Verticillium wilt based on support vector machine radial basis function classification method. The experimental results indicate that the joint microwave and optical time-series monitoring model achieved overall accuracy (OA) of 81.73% and Kappa coefficient of 0.63, meeting the monitoring requirements of the first stage. Based on the SVM with RBF and the optimal band combination, the OA value of the comprehensive image captured at −58° angle reached 96.74%, with Kappa coefficient of 0.93, meeting the requirements of precise classification detection in the second stage.
Cotton spider mites pose a significant threat to cotton production, while traditional manual investigation and blanket pesticide application are inefficient for precision pest management in large-scale cotton fields. To address this challenge, this study developed an integrated UAV multispectral remote sensing system for spider mite monitoring and precision spraying. Multispectral imagery was acquired from cotton fields in Shaya County, Xinjiang using UAV-mounted cameras, and vegetation indices including RDVI, MSAVI, SAVI, and OSAVI were selected through feature optimization. Comparative evaluation of three machine learning models (Logistic Regression, Random Forest, and Support Vector Machine) and two deep learning models (1D-CNN and MobileNetV2) was conducted. Considering classification performance and computational efficiency for real-time UAV deployment, Random Forest was identified as optimal, achieving 85.47% accuracy, an 85.24% F1-score, and an AUC of 0.912. The model generated centimeter-level spatial distribution maps for precise spray zone delineation. An improved NSGA-III multi-objective path optimization algorithm was proposed, incorporating PCA-based heuristic initialization, differential evolution operators, and co-evolutionary dual population strategies to optimize deadheading distance, energy consumption, operation time, turning frequency, and load balancing. Ablation study validated the effectiveness of each component, with the fully improved algorithm reducing IGD by 59.94% and increasing HV by 5.90% compared to standard NSGA-III. Field validation showed 98.5% coverage of infested areas with only 3.6% path repetition, effectively minimizing pesticide waste and phytotoxicity risks. This study established a complete technical pipeline from monitoring to application, providing a valuable reference for precision pest control in large-scale cotton production systems. The framework demonstrated robust performance across multiple field sites, though its generalization is currently limited to one geographic region and growth stage. Future work will extend its application to additional cotton varieties, growth stages, and geographic regions.
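The vegetation indices listed above (RDVI, MSAVI, SAVI, OSAVI) follow standard formulas; the helper below computes them from NIR and red reflectance arrays. Band scaling and any study-specific adjustments are not reproduced here.

```python
import numpy as np

def vegetation_indices(nir: np.ndarray, red: np.ndarray, L: float = 0.5):
    """Compute the vegetation indices named above from NIR and red reflectance
    (arrays scaled to 0-1). Standard textbook formulas; the paper's exact band
    definitions and preprocessing may differ from this sketch.
    """
    eps = 1e-6  # avoid division by zero on dark pixels
    savi = (1 + L) * (nir - red) / (nir + red + L + eps)
    osavi = (nir - red) / (nir + red + 0.16 + eps)
    rdvi = (nir - red) / np.sqrt(nir + red + eps)
    msavi = (2 * nir + 1 - np.sqrt((2 * nir + 1) ** 2 - 8 * (nir - red))) / 2
    return {"SAVI": savi, "OSAVI": osavi, "RDVI": rdvi, "MSAVI": msavi}
```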
At present, plants are the main source of food for the world's population; however, unfavourable environmental conditions affect crop health and crop production. State-of-the-art methods are therefore required for the early detection of crop diseases to raise production. In the present work, a convolutional neural network-based ResNet-50 classifier is developed for the early prediction of various diseases in the eggplant crop. An image dataset of unhealthy/healthy crop leaves was collected using an Unmanned Aerial Vehicle (UAV). Four diseases of the eggplant crop are considered during the development of the classifiers, i.e. Cercospora melongenae, lace bug, leaf curl, and pest damage. The health condition of the crop is first examined using one classifier, while a second classifier is developed to identify the specific eggplant disease. Finally, the accuracy and precision of the developed classifiers are estimated on the basis of several performance metrics.
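A ResNet-50 classifier fine-tuned for a small number of disease classes is typically built by replacing the final fully connected layer; the torchvision sketch below illustrates this. Freezing the backbone and the four-class head are assumptions for illustration, not the paper's exact training setup.

```python
import torch.nn as nn
from torchvision import models  # assumes torchvision >= 0.13 for the weights API

def build_eggplant_classifier(num_classes: int = 4):
    """Sketch of a ResNet-50 fine-tuned for eggplant disease classification.

    The four-class head mirrors the disease list above; freezing the backbone
    is an illustrative choice, not necessarily what the cited work does.
    """
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    for p in model.parameters():          # freeze the pretrained backbone
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head
    return model
```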
No abstract available
Research on AI-based agricultural pest and disease detection now spans the full pipeline, from low-level algorithm optimization to high-level system integration. The core research direction has evolved from plain image classification to real-time localization and detection, represented by the YOLO family. To overcome the limitations of on-farm deployment, the field is advancing along two tracks: lightweight edge computing and UAV-based wide-area monitoring. At the same time, by incorporating the Internet of Things (IoT), federated learning, and multimodal data fusion, the field is moving toward a new stage of smart agriculture that tightly integrates hardware and software and combines automated decision-making with precision prevention and control.