The Integration of Artificial Intelligence and Pet Healthcare
Computer Vision-Based Medical Imaging and Digital Pathology for Assisted Diagnosis
This group of studies focuses on using deep learning (e.g., CNN, UNet, ResNet) to automatically analyze X-ray, CT, ultrasound, endoscopy, and pathology-slide images from companion animals (dogs, cats, horses, etc.). Topics include lesion segmentation, detection of skeletal deformities (such as hip dysplasia), identification of organ lesions, and image quality control.
- Evaluating Deep Learning Model ResNet50 for Dog Skin Disease Classification: AI-Powered Dog Care Companion(V. Keerthika, Sumathi D, Prakruthi Ganiga, 2025, 2025 8th International Conference on Emerging Technologies in Computer Engineering: Advances in Computing, Healthcare and Smart Systems (ICETCE))
- Deep Learning Based Dog Skin Prediction: A Multi Label Classification Approach using ResNet18(Karanam Madhavi, Cherupally Vishal Kumar, M. Ranjith, Layth Hussein, 2025, 2025 Third International Conference on Industry 4.0 Technology (I4Tech))
- Canine Vertebral Column Segmentation with UNet Models: A Positive Approach to Radiographic Image Analysis(Rajneesh Kumar, S. Samantaray, Kavita Arup Kumar Das, 2023, INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT)
- Computer vision model for the detection of canine pododermatitis and neoplasia of the paw.(Andrew L Smith, Patrick W Carroll, Srikanth R Aravamuthan, E. Walleser, Haley Lin, K. Anklam, Dörte Döpfer, Neoklis Apostolopoulos, 2023, Veterinary dermatology)
- Automated Kidney Disease Detection Using Machine Learning and Computer Vision Based on Radiology Image Analysis(Pardaev Shokhrukh, Danish Ather, Rahul Chauhan, Kireet Joshi, Gurinder Singh, Naina Chaudhary, 2024, 2024 4th International Conference on Technological Advancements in Computational Sciences (ICTACS))
- Digitalization of Veterinary Pathology in the Era of Artificial Intelligence: A Comprehensive Review(Ahmed Fotouh, 2025, Benha Veterinary Medical Journal)
- Computer-aided diagnosis of Canine Hip Dysplasia using deep learning approach in a novel X-ray image dataset(Chaouki Boufenar, Tété Elom Mike Norbert Logovi, Djemai Samir, Imad Eddine Lassakeur, 2023, Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization)
- A Deep Learning Approach for Classification of Dog Skin Diseases(Priti Pal, Munish Saini, 2025, 2025 2nd International Conference on New Frontiers in Communication, Automation, Management and Security (ICCAMS))
- Quantitative analysis of ultrasonographic images and cytology in relation to histopathology of canine and feline liver: An ex-vivo study.(T. Banzato, M. Gelain, L. Aresu, C. Centelleghe, S. Benali, A. Zotti, 2015, Research in veterinary science)
- Evaluation of a Deep Active Learning Model for the Segmentation of Canine Thoracic Radiographs.(Nicole Norena, Peyman Tahghighi, Eran Ukwatta, Fiona M. K. James, Gabrielle Monteith, Amin Komeili, Ryan B. Appleby, 2025, Veterinary radiology & ultrasound : the official journal of the American College of Veterinary Radiology and the International Veterinary Radiology Association)
- Optimizing Image Quality for Dog Skin Disease Diagnosis: Bacterial, Fungal, and Hypersensitivity Cases with MATLAB(Mery Oktaviyanti Puspitaningtyas, Jufriadif Na`am, 2025, Journal Medical Informatics Technology)
- Development of Deep Learning based Automated Detection and Classification of Dog Skin Diseases(Kiara Patel, Vinay Vishwakarma, 2025, 2025 3rd International Conference on Intelligent Cyber Physical Systems and Internet of Things (ICoICI))
- Selection of density standard and X–ray tube settings for computed digital absorptiometry in horses using the k–means clustering algorithm(Bernard Turek, Marek Pawlikowski, Krzysztof Jankowski, Marta T. Borowska, K. Skierbiszewska, T. Jasiński, M. Domino, 2025, BMC Veterinary Research)
- Multimodal Approach of Optical Coherence Tomography and Raman Spectroscopy Can Improve Differentiating Benign and Malignant Skin Tumors in Animal Patients(M. Tamošiūnas, Oskars Čiževskis, Daira Viškere, Mikus Melderis, U. Rubins, B. Cugmas, 2022, Cancers)
- Feasibility study of computed tomography texture analysis for evaluation of canine primary adrenal gland tumors(Kyungsook Lee, Jinhyong Goh, J. Jang, Jeongyeon Hwang, Jungmin Kwak, Jaehwan Kim, K. Eom, 2023, Frontiers in Veterinary Science)
- Next-Generation Computer Vision in Veterinary Medicine: A Study on Canine Ophthalmology(Matija Burić, M. Ivašić-Kos, 2025, IEEE Transactions on Artificial Intelligence)
- Differentiation of canine and feline neoplasms using multi-modal imaging and machine learning(M. Maciulevičius, Greta Rupšytė, R. Raišutis, B. Cugmas, Mindaugas Tamošiūnas, 2025, Scientific Reports)
- Deep learning-based diagnosis of Dirofilaria immitis microfilariae in dog blood(Sepide Banihashem Nejad, Nima Hashemi, Ershad Hasanpour, F. Jalousian, S. Jamshidi, Seyed Hossein Hosseini, Fatemeh Manshori Ghaishghorshagh, H. Soltanian-Zadeh, 2022, 2022 29th National and 7th International Iranian Conference on Biomedical Engineering (ICBME))
- Classification of the quality of canine and feline ventrodorsal and dorsoventral thoracic radiographs through machine learning.(Peyman Tahghighi, Ryan B. Appleby, Nicole Norena, Eran Ukwatta, Amin Komeili, 2024, Veterinary radiology & ultrasound : the official journal of the American College of Veterinary Radiology and the International Veterinary Radiology Association)
- Deep Learning Can be Used to Classify the Disease Status of the Canine Middle Ear From Computed Tomographic Images(Zhixuan Zhao, Oisin Mac Aodha, C. Daniel, N. Israeliantz, Anna Orekhova, T. Schwarz, Richard Mellanby, Christopher J. Banks, 2025, Veterinary Radiology & Ultrasound)
- Artificial Intelligence in Chest Radiography—A Comparative Review of Human and Veterinary Medicine(Andrea Rubini, Roberto Di Via, Vito Paolo Pastore, Francesca Del Signore, Martina Rosto, Andrea De Bonis, Francesca Odone, Massimo Vignoli, 2025, Veterinary Sciences)
- Evaluation of a Novel Veterinary Dental Radiography Artificial Intelligence Software Program(Markay L Nyquist, Lisa A. Fink, G. Mauldin, C. Coffman, 2024, Journal of Veterinary Dentistry)
- Generative Active Learning with Variational Autoencoder for Radiology Data Generation in Veterinary Medicine(In-Gyu Lee, Jun-Young Oh, Hee-Jung Yu, Jae-Hwan Kim, Ki-Dong Eom, Ji-Hoon Jeong, 2024, 2024 IEEE Conference on Artificial Intelligence (CAI))
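Several of the imaging papers above (the ResNet and UNet studies in particular) rest on convolutional feature extraction. As a purely illustrative sketch of that building block, not any cited model, here is a single 2D convolution followed by ReLU in plain Python; the toy "image" and kernel values are invented:

```python
# Minimal sketch of the core CNN operation underlying the imaging studies
# above: one 2D convolution plus ReLU. The toy radiograph-like image and
# the edge-detection kernel are illustrative, not taken from any cited model.

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation of a 2D list `image` with `kernel`."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

def relu(feature_map):
    """Zero out negative responses, as done after each conv layer."""
    return [[max(0, v) for v in row] for row in feature_map]

# Toy 4x4 "image" with a bright vertical edge, and a simple edge kernel.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]
feature = relu(conv2d(image, kernel))  # responds strongly along the edge
```

Real models such as ResNet50 or UNet stack hundreds of such filters with learned weights; this sketch only shows the primitive they share.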
Clinical Electronic Health Record (EHR) Mining, NLP, and Epidemiological Surveillance
These studies apply natural language processing to unstructured veterinary clinical text. Key themes include automated diagnostic coding (SNOMED-CT), analysis of prescribing behavior (especially antimicrobial stewardship), epidemiological disease surveillance, and extraction of prognostic indicators from large volumes of medical records.
- Fine-tuning foundational models to code diagnoses from veterinary health records(Mayla Boguslav, Adam Kiehl, David Kott, G. Strecker, Tracy L. Webb, Nadia Saklou, T. Ward, Michael Kirby, 2024, PLOS digital health)
- Processing Medical Reports to Automatically Populate Ontologies(Luís Borrego, P. Quaresma, 2013, Studies in health technology and informatics)
- Domain Adaptation and Instance Selection for Disease Syndrome Classification over Veterinary Clinical Notes(Brian Hur, Timothy Baldwin, Karin M. Verspoor, L. Hardefeldt, J. Gilkerson, 2020, No journal)
- Corrigendum to “Natural language processing in veterinary pathology: A review”(2025, Veterinary Pathology)
- Using natural language processing and VetCompass to understand antimicrobial usage patterns in Australia.(Brian Hur, L. Hardefeldt, K. Verspoor, Timothy Baldwin, J. Gilkerson, 2019, Australian veterinary journal)
- Is That the Right Dose? Investigating Generative Language Model Performance on Veterinary Prescription Text Analysis(Brian Hur, L. Wang, L. Hardefeldt, Meliha Yetisgen-Yildiz, 2024, No journal)
- Describing the antimicrobial usage patterns of companion animal veterinary practices; free text analysis of more than 4.4 million consultation records(Brian Hur, L. Hardefeldt, Karin M. Verspoor, Timothy Baldwin, J. Gilkerson, 2020, PLoS ONE)
- Assessing the Accuracy of Open-Source Named Entity Recognition in Veterinary Oncology(Sachin Kumar, P. Mishra, Kuntal Pramanik, Kshtij Tyagi, Vinti Gupta, 2025, 2025 International Conference on Artificial intelligence and Emerging Technologies (ICAIET))
- Using natural language processing and patient journey clustering for temporal phenotyping of antimicrobial therapies for cat bite abscesses.(Brian Hur, K. Verspoor, Timothy Baldwin, L. Hardefeldt, C. Pfeiffer, C. Mansfield, R. Scarborough, J. Gilkerson, 2023, Preventive veterinary medicine)
- Overcoming challenges in extracting prescribing habits from veterinary clinics using big data and deep learning.(Brian Hur, L. Hardefeldt, K. Verspoor, T. Baldwin, J. Gilkerson, 2022, Australian veterinary journal)
- Natural language processing in veterinary pathology: A commentary on opportunities, challenges, and future directions(L. Stimmer, Raoul V. Kuiper, Laura Polledo, Lorenzo Ressel, Josep M Monné Rodriguez, Inês B Veiga, Jonathan Williams, V. Herder, 2025, Veterinary Pathology)
- Natural language processing in veterinary pathology: A review(L. Stimmer, Raoul V. Kuiper, Laura Polledo, Lorenzo Ressel, Josep M Monné Rodriguez, Inês B Veiga, Jonathan Williams, V. Herder, 2025, Veterinary Pathology)
- Generative artificial intelligence provides accurate case selection in veterinary retrospective studies.(Armen M Brus, Thomas H. Edwards, G. Atiee, Vanna M Dickerson, Ryan F. Ortiz, Shakayla V Mosely, Sofia I. Hernandez Torres, Eric J. Snider, 2025, American journal of veterinary research)
- Using Comprehend, Medical Comprehend, Bedrock and other AI APIs/Services in Analysing Medical Records for Horses(Oleksii Fonin, 2025, Universal Library of Engineering Technology)
- Validation of text-mining and content analysis techniques using data collected from veterinary practice management software systems in the UK.(J. Jones-Diette, R. Dean, M. Cobb, M. Brennan, 2019, Preventive veterinary medicine)
- Classifying Message Board Posts with an Extracted Lexicon of Patient Attributes(Ruihong Huang, E. Riloff, 2013, No journal)
- SNOMED CT: A Clinical Terminology but Also a Formal Ontology(C. Koné, Michel Babri, J. Rodrigues, 2023, Journal of Biosciences and Medicines)
- Text mining for disease surveillance in veterinary clinical data: part one, the language of veterinary clinical records and searching for words(H. Davies, G. Nenadic, G. Alfattni, Mercedes Arguello Casteleiro, N. Al Moubayed, S. Farrell, Alan D. Radford, P. Noble, 2024, Frontiers in Veterinary Science)
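Many of the free-text studies listed above (the VetCompass antimicrobial-usage work especially) begin with lexicon- and pattern-based search over consultation notes before any machine learning is applied. A minimal sketch of that first step, with an invented note set and a toy drug lexicon, not the actual VetCompass pipeline:

```python
import re

# Hypothetical free-text consultation notes (invented for illustration).
notes = [
    "Presented with pruritus and alopecia, dispensed amoxicillin-clavulanate 250 mg",
    "Annual vaccination, no abnormalities detected",
    "Cat bite abscess on left forelimb, started on clavulox",
]

# Toy lexicon mapping surface forms (including a trade name) to one
# normalized antimicrobial name, in the spirit of the cited free-text studies.
ANTIMICROBIALS = {
    r"amoxicillin[- ]?clavulanate": "amoxicillin-clavulanate",
    r"clavulox": "amoxicillin-clavulanate",  # common trade name
}

def find_antimicrobials(note):
    """Return the normalized antimicrobials mentioned in one note."""
    found = set()
    for pattern, normalized in ANTIMICROBIALS.items():
        if re.search(pattern, note, flags=re.IGNORECASE):
            found.add(normalized)
    return sorted(found)

# Case ascertainment: indices of notes that mention any antimicrobial.
flagged = [i for i, n in enumerate(notes) if find_antimicrobials(n)]
```

Surface-form normalization like this is what lets millions of heterogeneous records be aggregated; the cited papers then layer classifiers on top to handle negation, dosage, and context that regexes miss.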
Disease Risk Prediction, Knowledge Graphs, and Clinical Decision Support Systems
These works focus on converting multidimensional clinical data (biochemistry, vital signs, history) into structured knowledge or predictive models, including building veterinary knowledge graphs, developing emergency triage models, predicting postoperative survival, and automating radiation treatment planning.
- Companion Animal Disease Diagnostics Based on Literal-Aware Medical Knowledge Graph Representation Learning(Van Thuy Hoang, S. T. Nguyen, Sangmyeong Lee, Jooho Lee, Luong Vuong Nguyen, O-Joun Lee, 2023, IEEE Access)
- Computerised decision support in veterinary medicine, exemplified in a canine idiopathic epilepsy care pathway.(K. Fox, J. Fox, N. Bexfield, P. Freeman, 2021, The Journal of small animal practice)
- A Modern AI Framework Integrating Deep Imputation, Synthetic Data Balancing, and Explainable Modeling for Survival Prediction in Horse Colic.(Zeynep Banu Ozger, Pınar Cihan, Isa Ozaydin, 2025, Annals of anatomy = Anatomischer Anzeiger : official organ of the Anatomische Gesellschaft)
- Constructing a Companion Animal Disease Knowledge Graph by Utilizing LLMs in Data Preprocessing and Pseudo Annotation(Thien Nguyen, Eun-Soon You, Hyun Woo Kim, Van Thuy Hoang, Luong Vuong Nguyen, O-Joun Lee, 2025, Proceedings of the International Conference on Research in Adaptive and Convergent Systems)
- Performance of large language models versus clinicians and novices in veterinary theriogenology decision support.(D. T. Okur, Mehmet Cengiz, İbrahim Küçükaslan, C. Peker, A. Y. Çiplak, V. Tohumcu, Ş. Aydin, 2026, Journal of the American Veterinary Medical Association)
- When used for veterinary triage, artificial intelligence models recognise emergencies but are more likely than veterinary staff to flag non-urgent cases as urgent.(Arlene Wong, Madeleine L Roberts, Marie J A V P Pantangco, Ashleigh Arnold, Alexander Philp, J. Šlapeta, Samantha Livingstone, 2025, The Veterinary record)
- Automated Knowledge-Based Radiation Treatment Planning in Canine and Feline Nasal Tumors.(Waraporn Aumarm, W. Theerapan, Sawanee Suntiwong, Kittipol Dechaworakul, Winutpuksinee Wibulchan, S. Thongsawad, 2025, Veterinary radiology & ultrasound : the official journal of the American College of Veterinary Radiology and the International Veterinary Radiology Association)
- Machine Learning-Based Risk Prediction for Feline Mammary Tumours: A Comprehensive Epidemiological Analysis Using Multi-Model Ensemble Approach.(Kübra Nur Çalı Özçelik, S. Özçelik, S. Timurkaan, 2025, Veterinary and comparative oncology)
- Predicting Equine Health Outcomes Using Machine Learning Models Trained on Clinical Indicators and Limited Behavioral Data(K. Zhang, Zijie Niu, Kevin Zhang, 2025, 2025 25th International Conference on Software Quality, Reliability, and Security Companion (QRS-C))
- Making Sense of Pharmacovigilance and Drug Adverse Event Reporting: Comparative Similarity Association Analysis Using AI Machine Learning Algorithms in Dogs and Cats.(Xuan Xu, R. Mazloom, Arash Goligerdian, Joshua Staley, M. Amini, Gerald J. Wyckoff, Jim E. Riviere, Majid Jaberi-Douraki, 2019, Topics in companion animal medicine)
- Development and validation of a multivariable model and online decision-support calculator to aid in preoperative discrimination of benign from malignant splenic masses in dogs.(K. Burgess, L. Price, R. King, Manlik Kwong, E. Grant, K. A. Olson, J. Lyons, Nicholas A. Robinson, K. Wendelburg, J. Berg, 2021, Journal of the American Veterinary Medical Association)
- Lectin Microarray-based Glycomics and Machine Learning Identify Shared Osteoarthritis Biomarkers in Humans, Dogs, and Horses(Angelo G. Peralta, Parisa Raeisimakiani, Kei Hayashi, Lara K. Mahal, Heidi L. Reesink, 2025, bioRxiv)
- Explainable text-tabular models for predicting mortality risk in companion animals(James Burton, S. Farrell, Peter-John Mäntylä Noble, N. Al Moubayed, 2024, Scientific Reports)
- Retrospective cohort study on the development of keratoconjunctivitis sicca in dogs treated with trimethoprim sulfonamide: a VetCompass Australia study.(L. Hardefeldt, R. Scarborough, Brian Hur, 2026, Journal of veterinary internal medicine)
- Using a gradient boosted model for case ascertainment from free-text veterinary records.(U. Kennedy, Mandy B A Paterson, N. Clark, 2023, Preventive veterinary medicine)
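The risk-prediction studies above share one general shape: map clinical indicators to an outcome probability, then act on a threshold. As a hedged illustration of that shape only, here is a hand-set logistic risk score; the indicator names, weights, and threshold are invented, not fitted to any of the cited datasets:

```python
import math

# Illustrative only: a hand-set logistic risk model over three hypothetical
# clinical indicators. Weights and bias are invented for the sketch.
WEIGHTS = {"heart_rate": 0.03, "lactate": 0.8, "age_years": 0.05}
BIAS = -6.0

def mortality_risk(patient):
    """Map a dict of indicators to a probability via the logistic function."""
    z = BIAS + sum(WEIGHTS[k] * patient[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def triage(patient, threshold=0.5):
    """Turn the continuous risk into an actionable category."""
    return "urgent" if mortality_risk(patient) >= threshold else "routine"

stable = {"heart_rate": 90, "lactate": 1.5, "age_years": 4}
critical = {"heart_rate": 180, "lactate": 6.0, "age_years": 12}
```

The cited papers replace the hand-set weights with models learned from records (gradient boosting, ensembles, text-tabular networks) and calibrate the threshold against clinical cost, but the deployment interface is the same: indicators in, probability and category out.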
Automated Laboratory Testing and Intelligent Interactive Tools
This group covers pathogen identification in laboratory settings (ticks, parasites), automated urine sediment analysis, and AI chatbots for pet owners (interactive applications for preliminary diagnostic advice, nutrition guidance, and more).
- Comparison of the performance of the IDEXX SediVue Dx® with manual microscopy for the detection of cells and 2 crystal types in canine and feline urine(Annalisa M Hernandez, G. Bilbrough, D. Denicola, Celine Myrick, Suzanne Edwards, Jeremy M Hammond, Alexandra N. Myers, J. Heseltine, K. Russell, M. Giraldi, M. Nabity, 2018, Journal of Veterinary Internal Medicine)
- A Computer Vision-Based Approach for Tick Identification Using Deep Learning Models(Chu-Yuan Luo, P. Pearson, Guang Xu, S. Rich, 2022, Insects)
- Evaluation of Parasight All-in-One system for the automated enumeration of helminth ova in canine and feline feces(Timothy Graham Castle, L. Britton, B. Ripley, Elizabeth Ubelhor, P. Slusarewicz, 2024, Parasites & Vectors)
- Evaluation of the VETSCAN IMAGYST: an in-clinic canine and feline fecal parasite detection system integrated with a deep learning algorithm(Y. Nagamori, Ruth Hall Sedlak, A. DeRosa, Aleah Pullins, Travis Cree, Michael Loenser, Benjamin S. Larson, Richard Boyd Smith, Richard Goldstein, 2020, Parasites & Vectors)
- AI-Powered Veterinary Chatbot for Automated Diagnosis(Prashanna J, Vanita Jaitly, 2025, 2025 International Conference on Inventive Computation Technologies (ICICT))
- DEVELOPMENT OF AN ARTIFICIAL INTELLIGENCE-BASED CHATBOT SYSTEM FOR PET DOG CARE CONSULTATION IN VIETNAM(Than Gia Bao, Nguyen Duc Hai, Dong Huy Gioi, La Viet Hong, Le Thi Ngoc Quynh, Chu Duc Ha, Le Huy Ham, 2025, Tạp chí Khoa học và Công nghệ Trường Đại học Hùng Vương)
- AI-Driven Disease Detection and Intelligent CHATBOT for Pet Healthcare Management(Megha Dhotay, Srushti Kirve, Anuja Walke, Pratham Mulmule, Mohit Polisetty, 2025, Journal of Scientific Advances)
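The pet-care chatbot papers above typically start from intent recognition over the owner's message. A minimal keyword-overlap router, with invented intents and canned replies, sketches that first step; real systems use trained classifiers or large language models rather than keyword matching:

```python
# Minimal sketch of intent routing for a pet-care chatbot.
# Intents, keywords, and replies are invented for illustration.

INTENTS = {
    "nutrition": {"diet", "food", "feed", "nutrition"},
    "symptom_check": {"vomiting", "itching", "limping", "lethargic"},
}

REPLIES = {
    "nutrition": "Here is general feeding guidance; consult your vet for a tailored plan.",
    "symptom_check": "These signs can have many causes; please see a veterinarian promptly.",
    "fallback": "I can help with nutrition questions or basic symptom guidance.",
}

def route(message):
    """Pick the intent whose keyword set overlaps the message the most."""
    words = set(message.lower().split())
    best, best_hits = "fallback", 0
    for intent, keywords in INTENTS.items():
        hits = len(words & keywords)
        if hits > best_hits:
            best, best_hits = intent, hits
    return best

def reply(message):
    return REPLIES[route(message)]
```

Note the design choice visible even in this toy: the symptom path defers to a veterinarian rather than diagnosing, which matches the "preliminary advice only" framing of the cited chatbot systems.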
Industry Reviews, Ethics and Regulation, and Human-AI Collaboration Frameworks
These papers examine the macro environment for AI adoption in the veterinary profession, including the positions of professional bodies (e.g., the ACVR), manuscript reporting guidelines, ethical and legal challenges, human-AI collaboration models (who diagnoses first), and outlooks on future precision medicine.
- What's in the box? A toolbox for safe deployment of artificial intelligence in veterinary medicine.(P. Basran, Ryan B. Appleby, 2024, Journal of the American Veterinary Medical Association)
- From bark to bytes: artificial intelligence transforming veterinary medicine.(Casey L Cazer, P. Basran, Renata Ivanek-Miojevic, 2025, American journal of veterinary research)
- Ethical concerns about the deployment of artificial intelligence applications in veterinary practice(Michail Zavlaris, 2025, In Practice)
- AI Integration in Veterinary Practice: Improving Care with Technology and Data(S. Shandilya, Manisha Gaur, Anju Gautam, Shalini Singhal, Priya Tanwar, Harleeen Kaur, 2025, 2025 International Conference on Digital Innovations for Sustainable Solutions (ICDISS))
- Role of Artificial Intelligence in Advancing Veterinary Medicine(Rana A. Jawad, Fatema Ali, Al Kafhage, T. S. Rahi, I. AL-Shemmari, Afrah Kamil Zabeel, Zainab Ali Jabur, Athra Abass Muteab, 2025, Kerbala Journal of Veterinary Medical Sciences)
- Artificial intelligence uses in clinical and laboratory diagnosis(M. M. El, 2025, Egyptian Journal of Animal Health)
- Reporting guidelines for manuscripts that use artificial intelligence–based automated image analysis in Veterinary Pathology(Christof A. Bertram, M. Schutten, Lorenzo Ressel, Katharina Breininger, Joshua D. Webster, M. Aubreville, 2025, Veterinary Pathology)
- Artificial intelligence and veterinary practice.(2025, The Veterinary record)
- Who Goes First? Influences of Human-AI Workflow on Decision Making in Clinical Imaging(Riccardo Fogliato, S. Chappidi, M. Lungren, Michael Fitzke, Mark Parkinson, Diane U Wilson, Paul Fisher, E. Horvitz, K. Inkpen, Besmira Nushi, 2022, Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency)
- American College of Veterinary Radiology and European College of Veterinary Diagnostic Imaging position statement on artificial intelligence.(Ryan B. Appleby, Matthew R DiFazio, Nicolette Cassel, Ryan Hennessey, P. Basran, 2025, Journal of the American Veterinary Medical Association)
- Role of Artificial Intelligence in Veterinary Anatomical Diagnostics and Zoonotic Disease Monitoring.(Ehsanullah, Bakhtawar Maqbool, Muhammad Imran Arshad, Nagah M. Abourashed, Shfaia Tehseen Gul, 2025, Annals of anatomy = Anatomischer Anzeiger : official organ of the Anatomische Gesellschaft)
- Harnessing artificial intelligence for enhanced veterinary diagnostics: A look to quality assurance, Part II External validation(Christina Pacholec, B. Flatland, Hehuang Xie, Kurt Zimmerman, 2025, Veterinary Clinical Pathology)
- Veterinary oncology data management in the era of artificial intelligence(S. Pu, M. Thompson, S. Ross, P. Basran, 2025, Veterinary Oncology)
- The potential application of artificial intelligence in veterinary clinical practice and biomedical research(O. C. Akinsulie, Ibrahim Idris, Victor Ayodele Aliyu, Sammuel Shahzad, O. Banwo, S. C. Ogunleye, M. Olorunshola, Deborah O. Okedoyin, C. Ugwu, I. Oladapo, J. Gbadegoye, Qudus Afolabi Akande, Pius I. Babawale, Sahar Rostami, K. Soetan, 2024, Frontiers in Veterinary Science)
- The Prospective Of Artificial Intelligence In Veterinary Care(Prakruthi H V, Dr. H. N. Prakash, 2025, INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT)
- Exploring the Automatisation of Animal Health Surveillance Through Natural Language Processing(Mercedes Argüello Casteleiro, Phil H. Jones, S. Robertson, R. Irvine, Fin Twomey, G. Nenadic, 2019, No journal)
- Smart Diagnosis: Artificial Intelligence Application in Veterinary medicine and Infectious Disease Control(R. Ali, A. Aljawad, Karar Sajad A Alkhanger, A. Kadhim, 2025, Kerbala Journal of Veterinary Medical Sciences)
Public Health Surveillance, Surgical Research, and Occupational Health
These studies address AI applications outside direct clinical care, including machine learning for population-level animal disease surveillance, advanced media insights during surgery, and assessment of ergonomic risks in veterinary work.
- Applications of machine learning in animal and veterinary public health surveillance.(J. Guitian, E. Snary, M. Arnold, Y. Chang, 2023, Revue scientifique et technique)
- Design of Computer-vision Based System for Multi-person Ergonomics Assessment in Veterinary Practice – A Pilot Study(Jing Yang, S. Kim, Denny Yu, 2022, Proceedings of the Human Factors and Ergonomics Society Annual Meeting)
- Dog Bite Detection and Hybrid Recovery Mechanism to Ensure Human Safety using Deep Learning(Dr. S. Balaji, D. B. Kumar, Mr.D. Prabhu, T. S. Associate Professor, M. Shree, V. Kiruthiga, 2025, Journal of Information Systems Engineering and Management)
- Artificial Intelligence in Veterinary Surgical Research: Current Applications and Prospects(T. K. Almalki, Mohamed A. Marzok, Zakriya Al Mohamad, Abdelrahman M. Hereba, Mohamed W. El-Sherif, Mahmoud A. Hassan, Mahmoud S. Saber, 2025, Egyptian Journal of Veterinary Sciences)
- Media Insights Engine for Advanced Media Analysis: A Case Study of a Computer Vision Innovation for Pet Health Diagnosis(Anjanava Biswas, 2024, ArXiv)
- Precision in Parsing: Evaluation of an Open‐Source Named Entity Recognizer (NER) in Veterinary Oncology(C. Pinard, Andrew C. Poon, A. Lagree, Kuan-Chuen Wu, Jiaxu Li, William T. Tran, 2024, Veterinary and Comparative Oncology)
The final grouping comprehensively covers the full chain of AI applications in pet healthcare. The research matrix rests on two core technical wings, computer-vision imaging diagnosis and clinical-text NLP mining, which empower clinical practice through decision support systems. Research has also extended beyond pure algorithm development into concrete settings such as laboratory automation and end-user interaction. Finally, the profession's in-depth discussion of ethics and regulation, human-AI collaboration models, and occupational health signals that AI in pet healthcare is moving from a phase of rapid technical growth toward steady, standardized implementation.
A total of 86 relevant publications.
The American College of Veterinary Radiology (ACVR) and the European College of Veterinary Diagnostic Imaging (ECVDI) recognize the transformative potential of AI in veterinary diagnostic imaging and radiation oncology. This position statement outlines the guiding principles for the ethical development and integration of AI technologies to ensure patient safety and clinical effectiveness. Artificial intelligence systems must adhere to good machine learning practices, emphasizing transparency, error reporting, and the involvement of clinical experts throughout development. These tools should also include robust mechanisms for secure patient data handling and postimplementation monitoring. The position highlights the critical importance of maintaining a veterinarian in the loop, preferably a board-certified radiologist or radiation oncologist, to interpret AI outputs and safeguard diagnostic quality. Currently, no commercially available AI products for veterinary diagnostic imaging meet the required standards for transparency, validation, or safety. The ACVR and ECVDI advocate for rigorous peer-reviewed research, unbiased third-party evaluations, and interdisciplinary collaboration to establish evidence-based benchmarks for AI applications. Additionally, the statement calls for enhanced education on AI for veterinary professionals, from foundational training in curricula to continuing education for practitioners. Veterinarians are encouraged to disclose AI usage to pet owners and provide alternative diagnostic options as needed. Regulatory bodies should establish guidelines to prevent misuse and protect the profession and patients. The ACVR and ECVDI stress the need for a cautious, informed approach to AI adoption, ensuring these technologies augment, rather than compromise, veterinary care.
Artificial intelligence (AI) is advancing rapidly across many fields of science and technology. In veterinary clinical practice in particular, AI has the potential to play many roles, enhancing the way veterinary care is delivered and improving outcomes for animals and, ultimately, humans. In recent years, the emergence of AI has also opened a new direction in biomedical research, especially translational research, with great potential to revolutionize science. AI is applicable to antimicrobial resistance (AMR) research, cancer research, drug design and vaccine development, epidemiology, disease surveillance, and genomics. Here, we highlight and discuss the potential impact of various aspects of AI in veterinary clinical practice and biomedical research, proposing this technology as a key tool for addressing pressing global health challenges across various domains.
Abstract Artificial intelligence (AI) is emerging as a valuable diagnostic tool in veterinary medicine, offering affordable and accessible tests that can match or even exceed the performance of medical professionals in similar tasks. Despite the promising outcomes of using AI systems (AIS) as highly accurate diagnostic tools, the field of quality assurance in AIS is still in its early stages. Our Part I manuscript focused on the development and technical validation of an AIS. In Part II, we explore the next step in development: external validation (i.e., in silico testing). This phase is a critical quality assurance component for any AIS intended for medical use, ensuring that high‐quality diagnostics remain the standard in veterinary medicine. The quality assurance process for evaluating an AIS involves rigorous: (1) investigation of sources of bias, (2) application of calibration methods and prediction of uncertainty, (3) implementation of safety monitoring systems, and (4) assessment of repeatability and robustness. Testing with unseen data is an essential part of in silico testing, as it ensures the accuracy and precision of the AIS output.
Simple Summary Artificial intelligence (AI) could enhance the field of radiology in both human and veterinary medicine by making diagnoses faster and more accurate. In human healthcare, AI assists in detecting diseases such as pneumonia and COVID-19, supporting physicians in pattern recognition and outcome prediction. However, human oversight remains essential due to data limitations and ethical concerns. In veterinary medicine, the use of AI is still limited due to several factors, including the lack of large databases, anatomical differences between animal breeds, and limited research in this field. Focusing on species with less anatomical variability, such as cats, and encouraging interdisciplinary collaboration could foster its development. Despite its potential, the radiologist’s expertise remains crucial. In this context, AI can be seen as a valuable support tool in the daily practice of radiology.
BACKGROUND Artificial intelligence (AI) is playing an increasingly significant role in veterinary medicine as disease patterns shift with climate change and treatment protocols advance. About 60% of emerging human diseases are zoonotic, originating mainly in animals, and conventional diagnostic tools and traceability protocols are too slow, insufficiently precise, and unable to handle large caseloads. AI tools can make a substantial difference: diagnosing diseases from medical images, predicting outbreaks from historical records, and ultimately improving zoonotic disease monitoring through early-warning systems and multisectoral collaboration for human, animal, and environmental health. In diagnosis, AI has shown strong effectiveness, for example detecting more than 90% of bone and joint abnormalities on X-rays, predicting illness in farm animals two to three days before symptoms appear, and forecasting animal diseases transmissible to humans weeks in advance from data on environmental change and animal movement. Adoption of these AI systems nevertheless remains uncommon for many reasons, including scattered data, limited understanding of algorithms, ethical issues, and unequal access to technology. CONCLUSIONS: As climate change accelerates the spread of diseases from animals to humans, AI is becoming a crucial tool for reaching health goals that affect both people and animals, but only if it is used fairly and responsibly. This summary shows that collaboration across fields is essential to combine new technology with veterinary expertise. The goal is for AI to support, not replace, what clinicians do, and to make advanced care available to everyone around the world.
Through improved diagnosis, real-time monitoring, and predictive modeling, artificial intelligence (AI) is revolutionizing contemporary veterinary services. This assessment focuses on the use of AI technology in veterinary settings to improve clinical decision-making, facilitate disease diagnosis, and maximize animal health outcomes. As machine learning and big-data integration advance, the veterinary field is set to experience a radical change in how it approaches treatment, monitoring, and research.
Abstract—In veterinary medicine, artificial intelligence (AI) is making its mark, improving the efficiency and accuracy of diagnosis, clinical operations, and animal health. Its use in diagnostic imaging, disease prediction, behavior monitoring, and decision support systems is widespread. Deep learning models, particularly complex neural networks, are used in diagnostic imaging to detect anomalies in radiographs and other scans of dogs, cats, livestock, and other animals, helping veterinarians interpret images faster and make more accurate diagnoses. AI models trained on epidemiological datasets can predict disease outbreaks and, in turn, support disease management. Another important area for AI is monitoring animal behavior and welfare: AI-powered mobile apps and computer vision techniques can detect facial expressions and movement patterns indicative of pain, distress, or illness, which is especially important for improving the welfare and clinical outcomes of farm and companion animals. AI clinical decision support systems (CDSS) offer real-time diagnostic and treatment advice, improving efficiency and reducing human error; today's veterinary clinics increasingly find these systems indispensable, especially in emergencies or complex cases. Still, various challenges remain, such as data quality, algorithm transparency, and the ethical use of AI, and the veterinary profession, like the rest of society, must be trained in these new technologies. Keywords—artificial intelligence, veterinary care, machine learning, disease prediction, animal behavior monitoring, livestock health management, real-time monitoring, smart veterinary tools
The integration of artificial intelligence into veterinary medicine has emerged as a transformative approach for boosting diagnostic accuracy, enhancing disease surveillance, and supporting evidence-based decision-making. This review investigates current applications, methodologies, and challenges of AI-driven smart diagnosis in veterinary practice, with particular focus on infectious disease detection and epidemiological monitoring. By synthesizing recent advances in machine learning, computer vision, and data analytics, the paper highlights how AI models contribute to early disease identification, pattern recognition, and predictive analytics for outbreak control. The review also examines the shortcomings of current systems, including problems with data quality, model generalizability, and ethical issues in animal health research. According to the research, AI-enabled diagnostic technologies hold great promise for strengthening veterinary public health systems, enhancing animal welfare, and shortening diagnostic wait times. To advance AI applications in veterinary medicine and open the door to more robust, data-driven management of infectious diseases, the paper ultimately emphasizes the necessity of interdisciplinary cooperation and standardized frameworks.
Artificial intelligence (AI) has brought great change to human surgical research. Opportunities remain for its application in veterinary surgical sciences, with potentially transformative implications for animal health and comparative medicine research. The objective of this review was to investigate what AI can currently do, and might do in the future, in veterinary surgical research across the whole procedural process, spanning pre-, intra-, and postoperative contributions, limitations, and future directions. A thorough literature search was conducted in PubMed, Scopus, CAB Abstracts, and Web of Science, and articles from 2018 to 2025 were evaluated. Studies were categorized by pre-, intra-, and postoperative application, and a cross-analysis of data management, ethics, and translational applicability was carried out. AI applications in veterinary surgical research are most advanced in preoperative diagnostic imaging and postoperative histopathologic analysis. Machine learning models can accurately select surgical cases, predict results, and assess outcomes with precision above 85.0%. However, intraoperative use of AI tools in veterinary practice is less developed than in human surgery, mainly owing to the scarcity of veterinary specialists and problems of technological adaptation. AI is expected to revolutionize veterinary surgical research, but it requires cooperation, uniform data-gathering rules, and algorithms developed for different animal species.
BACKGROUND This study assesses the capability of ChatGPT and nurses in accurately triaging emergency patients compared to veterinarians. METHODS Retrospective observational study of canine patients that presented at a private veterinary specialist and emergency hospital. Given clinical signs and history, patients were assigned to a triage category (waiting times of 0, 15, 30‒60, 120 and 240 minutes). Triages were performed by three veterinarians, two nurses, ChatGPT-3.5 and ChatGPT-4.0. Statistical analysis was used to assess how often triage by ChatGPT and nurses agreed with veterinarians. RESULTS ChatGPT has high sensitivity in identifying severe emergencies, correctly prioritising 80%‒90% of critical cases, but over-triaged around 60% of non-urgent cases as requiring immediate attention. ChatGPT's triage performance was comparable to that of nurses. When ChatGPT was used as a tool to flag severe cases ('0 minutes') in concert with nurses, triage sensitivity rose to 95%. LIMITATIONS The small sample of nurses limits the ability to assess how performance relative to artificial intelligence (AI) may vary with nurses' triage experience. CONCLUSIONS AI models can be an effective tool for flagging severe cases and complementing nurse triages. However, the tendency to flag non-urgent cases as requiring immediate attention may lead to increased pressure on emergency clinics.
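The triage figures reported above (80%‒90% sensitivity on critical cases, roughly 60% over-triage of non-urgent cases) reduce to simple proportions over a confusion table. The sketch below, on invented example pairs rather than the study's data, shows how those two rates would be computed:

```python
# Hypothetical triage outcomes: each pair is (veterinarian category, model category),
# where category "0" denotes an immediate ("0 minutes") emergency.
pairs = [
    ("0", "0"), ("0", "0"), ("0", "0"), ("0", "0"), ("0", "15"),          # 5 true emergencies
    ("240", "0"), ("120", "0"), ("240", "0"), ("120", "120"), ("240", "240"),  # 5 non-urgent
]

def triage_metrics(pairs):
    """Sensitivity on true emergencies and over-triage rate on non-urgent cases."""
    emergencies = [(v, m) for v, m in pairs if v == "0"]
    non_urgent = [(v, m) for v, m in pairs if v != "0"]
    sensitivity = sum(1 for v, m in emergencies if m == "0") / len(emergencies)
    over_triage = sum(1 for v, m in non_urgent if m == "0") / len(non_urgent)
    return sensitivity, over_triage

sens, over = triage_metrics(pairs)
print(f"sensitivity={sens:.2f}, over-triage={over:.2f}")  # sensitivity=0.80, over-triage=0.60
```

The example numbers are chosen to mirror the magnitudes the study reports; the study's own categories and counts differ.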
This report describes a comprehensive framework for applying artificial intelligence (AI) in veterinary medicine. Our framework draws on existing research on AI implementation in human medicine and addresses the challenges of limited technology expertise and the need for scalability. The critical components of this framework include assembling a diverse team of experts in AI, promoting a foundational understanding of AI among veterinary professionals, identifying relevant use cases and objectives, ensuring data quality and availability, creating an effective implementation plan, providing team training, fostering collaboration, considering ethical and legal obligations, integrating AI into existing workflows, monitoring and evaluating performance, managing change effectively, and staying up-to-date with technological advancements. Incorporating AI into veterinary medicine requires addressing unique ethical and legal considerations, including data privacy, owner consent, and the impact of AI outputs on decision-making. Effective change management principles aid in avoiding disruptions and building trust in AI technology. Furthermore, continuous evaluation of AI's relevance in veterinary practice ensures that the benefits of AI translate into meaningful improvements in patient care.
No abstract available
No abstract available
Digitalization of Veterinary Pathology in the Era of Artificial Intelligence: A Comprehensive Review
Objective To evaluate the agreement of automation tools with expert evaluators in identifying cases meeting inclusion and exclusion criteria for retrospective veterinary studies. Methods The review of medical records took place from December 16, 2024, through July 2, 2025. Medical records from 3 study populations (100 trauma dogs, 86 stent patients, and 100 cholecystectomy dogs) were assessed by 3 expert reviewers and were compared with automation tools, including AI applications (Gemini 2.5 Pro and NotebookLM) and a keyword search algorithm using Python, using standardized prompts for each study's criteria. Processing time and agreement with experts were compared. Results Gemini 2.5 Pro most closely matched expert selections across all initial studies, with high case detection accuracy (99% to 100%) and fast processing times (90 to 390 seconds). NotebookLM was comparable for the stent dataset but less accurate for the others. Python tools had variable performance throughout the different studies. Conclusions The study provides early evidence that AI is an effective tool for identifying cases using inclusion and exclusion criteria, which can accelerate the development of large retrospective studies. This approach has a multitude of other potential applications in both research and clinical practice. Clinical Relevance Generative AI models, particularly Gemini 2.5 Pro, can enhance the speed and scalability of veterinary retrospective studies. While promising, AI-generated selections should be verified by investigators to ensure the appropriate application of inclusion criteria before final data enrollment.
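The "keyword search algorithm using Python" the study compares against can be sketched as a simple inclusion/exclusion filter over record text. The keyword lists below are hypothetical, not the study's actual criteria or prompts:

```python
# Hypothetical inclusion/exclusion keywords for a retrospective cholecystectomy study;
# the real study's criteria and standardized prompts are not reproduced here.
INCLUDE = ["cholecystectomy", "gallbladder mucocele"]
EXCLUDE = ["necropsy only", "feline"]

def matches_criteria(record_text, include=INCLUDE, exclude=EXCLUDE):
    """True if the record mentions any inclusion term and no exclusion term."""
    text = record_text.lower()
    if any(term in text for term in exclude):
        return False
    return any(term in text for term in include)

records = [
    "Canine patient underwent cholecystectomy for gallbladder mucocele.",
    "Feline patient, cholecystectomy performed.",
    "Routine dental prophylaxis, no surgery.",
]
selected = [r for r in records if matches_criteria(r)]
print(len(selected))  # 1
```

A filter this crude explains the "variable performance" noted for the Python tools: unlike a generative model, it cannot handle negation, synonyms, or context.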
No abstract available
There is a growing trend of artificial intelligence (AI) applications in veterinary medicine, with the potential to assist veterinarians in clinical decisions. A commercially available, AI-based software program (AISP) for detecting common radiographic dental pathologies in dogs and cats was assessed for agreement with two human evaluators. Furcation bone loss, periapical lucency, resorptive lesion, retained tooth root, attachment (alveolar bone) loss and tooth fracture were assessed. The AISP does not attempt to diagnose or provide treatment recommendations, nor has it been trained to identify other types of radiographic pathology. Inter-rater reliability for detecting pathologies was measured by absolute percent agreement and Gwet's agreement coefficient. There was good to excellent inter-rater reliability among all raters, suggesting the AISP performs similarly at detecting the specified pathologies compared to human evaluators. Sensitivity and specificity for the AISP were assessed using human evaluators as the reference standard. The results revealed a trend of low sensitivity and high specificity, suggesting the AISP may produce a high rate of false negatives and may not be a good tool for initial screening. However, the low rate of false positives produced by the AISP suggests it may be beneficial as a “second set of eyes” because if it detects the specific pathology, there is a high likelihood that the pathology is present. With an understanding of the AISP, as an aid and not a substitute for veterinarians, the technology may increase dental radiography utilization and diagnostic potential.
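Absolute percent agreement and the sensitivity/specificity pattern described above (human evaluators as reference standard) are straightforward to compute for a single binary pathology. All presence/absence calls below are invented for illustration:

```python
def agreement_and_accuracy(reference, rater):
    """Absolute percent agreement, plus sensitivity/specificity for a binary
    finding, treating `reference` (human evaluators) as the standard."""
    n = len(reference)
    agree = sum(1 for r, a in zip(reference, rater) if r == a) / n
    tp = sum(1 for r, a in zip(reference, rater) if r and a)
    tn = sum(1 for r, a in zip(reference, rater) if not r and not a)
    sens = tp / sum(reference)
    spec = tn / (n - sum(reference))
    return agree, sens, spec

# Hypothetical presence/absence calls for one pathology (e.g. periapical lucency)
human = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
aisp  = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]  # misses some cases: low sensitivity, high specificity
agree, sens, spec = agreement_and_accuracy(human, aisp)
print(agree, sens, spec)  # 0.8 0.5 1.0
```

The example deliberately reproduces the paper's qualitative finding: false negatives (low sensitivity) with almost no false positives (high specificity), which is what makes the tool a "second set of eyes" rather than a screening tool. Gwet's agreement coefficient, which the study also reports, involves a chance-agreement correction not shown here.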
Ethical concerns about the deployment of artificial intelligence applications in veterinary practice
No abstract available
The capacity to perceive and anticipate the health status of horses is a critical aspect of equine veterinary care. Recent studies have shown that machine learning algorithms can accurately diagnose and classify animal diseases based on physiological signs. Using properties like heart rate, temperature, and other clinical parameters, the study offers a classification model developed from a publicly available dataset of more than 2,000 equine health records to predict health outcomes. Among the algorithms tested, including k-Nearest Neighbors (KNN), Decision Tree, and Light Gradient Boosting Machine (LightGBM), LightGBM achieved the highest validation accuracy at approximately 76%. Exploratory data analysis was conducted to visualize feature distributions and identify correlations, followed by preprocessing steps such as handling missing values and encoding categorical variables. The model was trained using five-fold cross-validation and fine-tuned for optimal performance. Among the factors contributing to the success of LightGBM were its ability to handle categorical features and its leaf-wise tree growth strategy, which improved learning efficiency on a moderately sized dataset. In addition to structured data, limited behavioral descriptors were incorporated using a language model to provide additional context regarding stress and discomfort. While these features had a smaller role, they created new opportunities for interpreting subtle health cues not included in clinical data alone. This study demonstrates the potential for predictive modeling to assist veterinarians in early diagnosis and treatment planning. Future work may focus on expanding the dataset and implementing more detailed behavioral and physiological data to improve model generalization.
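The five-fold cross-validation workflow described above can be sketched without any ML library by pairing a plain k-fold split with a trivial classifier. A 1-nearest-neighbour rule stands in for the KNN/Decision Tree/LightGBM models the study actually compares, and the equine records are synthetic:

```python
import random

def one_nn_predict(train, x):
    """1-nearest-neighbour on (features, label) pairs; a stand-in for the
    KNN/LightGBM models compared in the study."""
    return min(train, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], x)))[1]

def five_fold_accuracy(data, folds=5):
    random.Random(0).shuffle(data)  # fixed seed for reproducibility
    scores = []
    for k in range(folds):
        test = data[k::folds]
        train = [d for i, d in enumerate(data) if i % folds != k]
        correct = sum(one_nn_predict(train, x) == y for x, y in test)
        scores.append(correct / len(test))
    return sum(scores) / folds

# Synthetic equine records: (heart rate, rectal temperature) -> outcome label
healthy = [((36 + i % 5, 37.5 + 0.1 * (i % 3)), "healthy") for i in range(50)]
sick = [((70 + i % 8, 39.5 + 0.1 * (i % 4)), "needs care") for i in range(50)]
print(f"mean CV accuracy: {five_fold_accuracy(healthy + sick):.2f}")  # mean CV accuracy: 1.00
```

The synthetic classes are perfectly separable, so accuracy is 1.0 here; the study's ~76% on real records reflects the much noisier clinical data.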
Post-traumatic osteoarthritis (PTOA) is a common sequela to joint injury in both humans and companion animal species such as horses and dogs. Despite the increasing prevalence of osteoarthritis (OA) in humans, investigation of glycosylation changes associated with OA remains in its infancy. Recent advances, such as lectin microarray analysis, now enable detailed glycan profiling in complex biofluids such as synovial fluid. Using lectin microarray technology, this study characterized glycosylation patterns in synovial fluid samples from healthy and OA-affected joints in horses, dogs, and humans. Comparative glycan-binding profiles within and between species revealed conserved and distinct glycomic signatures associated with OA. Machine learning models, including classification algorithms, effectively distinguished OA from healthy joints, identifying key lectins and glycan epitopes crucial to these predictions. The identified lectin markers reflect specific glycosylation pathways and potential inflammatory mechanisms, demonstrating their value in differentiating between healthy and OA phenotypes. Our findings underscore the promise of integrated glycomic profiling and machine learning to enhance our understanding of glycan involvement in the pathogenesis of OA and to facilitate the development of diagnostic and therapeutic strategies applicable to both veterinary and human medicine. In Brief Osteoarthritis affects humans and companion animals; however, its molecular features remain unclear. Using lectin microarrays and machine learning, we identified conserved and species-specific glycan signatures in synovial fluid that differentiate between control and osteoarthritic joints. This One Health approach highlights shared molecular mechanisms of joint degeneration and establishes data-driven glycomic profiling as a framework for understanding osteoarthritis across species.
Thoracic radiographs are an essential diagnostic tool in companion animal medicine and are frequently used as a part of routine workups in patients presenting for coughing, respiratory distress, cardiovascular diseases, and for staging of neoplasia. Quality control is a critical aspect of radiology practice in preventing misdiagnosis and ensuring consistent, accurate, and reliable diagnostic imaging. Implementing an effective quality control procedure in radiology can impact patient outcomes, facilitate clinical decision-making, and decrease healthcare costs. In this study, a machine learning-based quality classification model is suggested for canine and feline thoracic radiographs captured in both ventrodorsal and dorsoventral positions. The problem of quality classification was divided into collimation, positioning, and exposure, and then an automatic classification method was proposed for each based on deep learning and machine learning. We utilized a dataset of 899 radiographs of dogs and cats. Evaluations using fivefold cross-validation resulted in an F1 score and AUC score of 91.33 (95% CI: 88.37-94.29) and 91.10 (95% CI: 88.16-94.03), respectively. Results indicated that the proposed automatic quality classification has the potential to be implemented in radiology clinics to improve radiograph quality and reduce nondiagnostic images.
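The F1 score reported above is the harmonic mean of precision and recall. A minimal stdlib computation, on invented quality labels rather than the study's 899 radiographs, looks like this:

```python
def f1_score(y_true, y_pred, positive="diagnostic"):
    """F1 = harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical quality labels for 8 radiographs
truth = ["diagnostic"] * 5 + ["nondiagnostic"] * 3
preds = ["diagnostic"] * 4 + ["nondiagnostic"] * 4
print(round(f1_score(truth, preds), 3))  # 0.889
```

In the study this would be computed per quality axis (collimation, positioning, exposure) and averaged across the five cross-validation folds to produce the reported 91.33 with its confidence interval.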
Drug-associated adverse events cause approximately 30 billion dollars a year in added health care expense, along with negative health outcomes including patient death, constituting a major public health concern. The US Food and Drug Administration (FDA) requires drug labeling to include potential adverse effects for each newly developed drug product. Published studies have mainly drawn potential adverse drug events (ADEs) from labeling documents obtained from the FDA's preapproval clinical trials, and very few have analyzed ADEs reported after widespread use of a drug in animal subjects. This practice of deriving guidance from preapproval labeling may misrepresent or understate the incidence and prevalence of specific ADEs. In this study, we make the most of the ADE data recently disseminated by the FDA for animal drugs and devices used in animals to address this public health and welfare concern. For this purpose, we implemented 5 different methods (Pearson distance, Spearman distance, cosine distance, Yule distance, and Euclidean distance) to determine the most efficient and robust approach for discovering highly associated ADEs from the reported data and accurately excluding noise-induced reports, while maintaining a high level of correlation precision. Our comparative analysis of ADEs based on an artificial intelligence (AI) approach for the 5 similarity methods revealed high ADE associations for 2 drugs used in dogs and cats. In addition, the described distance methods systematically analyzed and compared ADEs from the drug labeling sections, with a specific emphasis on serious ADEs. Our findings showed that the cosine method significantly outperformed all the other methods by correctly detecting and validating ADEs, based on comparative similarity association analysis against ADEs reported in preapproval clinical trials, premarket testing, or postapproval complication experience of FDA-approved animal drugs.
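The cosine method that performed best here measures the angle between ADE report-count vectors, ignoring total report volume. A stdlib sketch, with invented counts for three hypothetical drugs:

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity between two ADE count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1 - dot / norm

# Hypothetical report counts per adverse event (vomiting, lethargy, anorexia, pruritus)
drug_a = [120, 80, 60, 5]
drug_b = [100, 70, 55, 4]   # similar ADE profile, fewer total reports
drug_c = [2, 5, 3, 150]     # dominated by a different event
print(cosine_distance(drug_a, drug_b) < cosine_distance(drug_a, drug_c))  # True
```

Because the measure is scale-invariant, a widely used drug and a rarely used one with the same ADE *profile* still score as highly associated, which suits postapproval report data with very uneven reporting volumes.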
As interest in using machine learning models to support clinical decision-making increases, explainability is an unequivocal priority for clinicians, researchers and regulators to comprehend and trust their results. With many clinical datasets containing a range of modalities, from the free-text of clinician notes to structured tabular data entries, there is a need for frameworks capable of providing comprehensive explanation values across diverse modalities. Here, we present a multimodal masking framework to extend the reach of SHapley Additive exPlanations (SHAP) to text and tabular datasets to identify risk factors for companion animal mortality in first-opinion veterinary electronic health records (EHRs) from across the United Kingdom. The framework is designed to treat each modality consistently, ensuring uniform and consistent treatment of features and thereby fostering predictability in unimodal and multimodal contexts. We present five multimodality approaches, with the best-performing method utilising PetBERT, a language model pre-trained on a veterinary dataset. Utilising our framework, we shed light for the first time on the reasons each model makes its decision and identify the inclination of PetBERT towards a more pronounced engagement with free-text narratives compared to BERT-base’s predominant emphasis on tabular data. The investigation also explores the important features on a more granular level, identifying distinct words and phrases that substantially influenced an animal’s life status prediction. PetBERT showcased a heightened ability to grasp phrases associated with veterinary clinical nomenclature, signalling the productivity of additional pre-training of language models.
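The masking idea underlying the framework above can be illustrated in miniature: mask one feature at a time to a baseline value and record how the model's output moves. This is a crude occlusion attribution, not SHAP itself (which averages over feature coalitions), and the risk model and field names below are entirely hypothetical:

```python
def occlusion_attribution(model, features, baseline):
    """Marginal effect of masking each feature to its baseline value; a crude,
    single-feature analogue of the masking idea behind SHAP."""
    full = model(features)
    return {k: full - model({**features, k: baseline[k]}) for k in features}

def toy_risk_model(x):
    # Hypothetical linear mortality-risk score from tabular EHR fields
    return 0.05 * x["age_years"] + 0.3 * x["icu_visit"] + 0.1 * x["weight_loss"]

features = {"age_years": 12, "icu_visit": 1, "weight_loss": 1}
baseline = {"age_years": 0, "icu_visit": 0, "weight_loss": 0}
attr = occlusion_attribution(toy_risk_model, features, baseline)
print({k: round(v, 3) for k, v in attr.items()})
# {'age_years': 0.6, 'icu_visit': 0.3, 'weight_loss': 0.1}
```

The paper's contribution is extending this masking treatment uniformly across modalities, so that free-text tokens in clinician notes and tabular fields receive comparable attribution values.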
Simple Summary Skin and subcutaneous tumors are among the most frequent neoplasms in dogs and cats. We studied 51 samples of canine and feline skin, lipomas, soft tissue sarcomas, and mast cell tumors using a multimodal approach based on optical coherence tomography and Raman spectroscopy. A supervised machine learning algorithm detected malignant tumors with the sensitivity and specificity of 94% and 98%, respectively. The proposed multimodal algorithm is a novel approach in veterinary oncology that can outperform the existing clinical methods such as the fine-needle aspiration method. Abstract As in humans, cancer is one of the leading causes of companion animal mortality. Up to 30% of all canine and feline neoplasms appear on the skin or directly under it. There are only a few available studies that have investigated pet tumors by biophotonics techniques. In this study, we acquired 1115 optical coherence tomography (OCT) images of canine and feline skin, lipomas, soft tissue sarcomas, and mast cell tumors ex vivo, which were subsequently used for automated machine vision analysis. The OCT images were analyzed using a scanning window with a size of 53 × 53 μm. The distributions of the standard deviation, mean, range, and coefficient of variation values were acquired for each image. These distributions were characterized by their mean, standard deviation, and median values, resulting in 12 parameters in total. Additionally, 1002 Raman spectral measurements were made on the same samples, and features were generated by integrating the intensity of the most prominent peaks. Linear discriminant analysis (LDA) was used for sample classification, and sensitivities/specificities were acquired by leave-one-out cross-validation. Three datasets were analyzed—OCT, Raman, and combined. The combined OCT and Raman data enabled the best sample differentiation with the sensitivities of 0.968, 1, and 0.939 and specificities of 0.956, 1, and 0.977 for skin, lipomas, and malignant tumors, respectively. Based on these results, we concluded that the proposed multimodal approach, combining Raman and OCT data, can accurately distinguish between malignant and benign tissues.
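The leave-one-out validation used above trains on all samples but one and tests on the held-out sample, cycling through the whole dataset. The sketch below uses a nearest-centroid rule as a simple stand-in for LDA, on synthetic texture features (all values invented):

```python
def nearest_centroid_label(train, x):
    """Classify x by the closest class mean; a simple stand-in for the LDA
    classifier used in the study."""
    centroids = {}
    for features, label in train:
        centroids.setdefault(label, []).append(features)
    def dist(label):
        pts = centroids[label]
        mean = [sum(col) / len(pts) for col in zip(*pts)]
        return sum((a - b) ** 2 for a, b in zip(mean, x))
    return min(centroids, key=dist)

def leave_one_out_accuracy(data):
    hits = sum(
        nearest_centroid_label(data[:i] + data[i + 1:], x) == y
        for i, (x, y) in enumerate(data)
    )
    return hits / len(data)

# Synthetic OCT-texture features (mean, std of the scanning window) per sample
benign = [((0.30 + 0.01 * i, 0.05), "benign") for i in range(10)]
malignant = [((0.70 + 0.01 * i, 0.20), "malignant") for i in range(10)]
print(leave_one_out_accuracy(benign + malignant))  # 1.0
```

Leave-one-out is a sensible choice for a 51-sample study, since it wastes no data on a fixed test split; the cost is retraining once per sample.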
Understanding antimicrobial usage patterns and encouraging appropriate antimicrobial usage is a critical component of antimicrobial stewardship. Studies using VetCompass Australia and Natural Language Processing (NLP) have demonstrated antimicrobial usage patterns in companion animal practices across Australia. Doing so has highlighted the many obstacles and barriers to the task of converting raw clinical notes into a format that can be readily queried and analysed. We developed NLP systems using rules-based algorithms and machine learning to automate the extraction of data describing the key elements to assess appropriate antimicrobial use. These included the clinical indication, antimicrobial agent selection, dose and duration of therapy. Our methods were applied to over 4.4 million companion animal clinical records across Australia on all consultations with antimicrobial use to help us understand what antibiotics are being given and why on a population level. Of these, only approximately 40% recorded the reason why antimicrobials were prescribed, along with the dose and duration of treatment. NLP and deep learning might be able to overcome the difficulties of harvesting free-text data from clinical records, but when the essential data are not recorded in the clinical records, this becomes an insurmountable obstacle.
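A rules-based extractor of the kind described above typically starts from regular expressions over the free text. The note format and patterns below are hypothetical simplifications; real clinical notes are far messier, which is exactly the obstacle the study reports:

```python
import re

# Hypothetical note; real records use inconsistent abbreviations and spellings.
NOTE = "UTI suspected. Rx amoxicillin 12.5 mg/kg PO BID for 7 days."

DOSE_RE = re.compile(r"(?P<agent>[a-z]+)\s+(?P<dose>\d+(?:\.\d+)?)\s*mg/kg", re.I)
DURATION_RE = re.compile(r"for\s+(?P<days>\d+)\s+days?", re.I)

def extract_antimicrobial(note):
    """Pull agent, dose and duration from free text; returns None fields when
    the record simply does not contain the information."""
    dose = DOSE_RE.search(note)
    duration = DURATION_RE.search(note)
    return {
        "agent": dose.group("agent") if dose else None,
        "mg_per_kg": float(dose.group("dose")) if dose else None,
        "days": int(duration.group("days")) if duration else None,
    }

print(extract_antimicrobial(NOTE))
# {'agent': 'amoxicillin', 'mg_per_kg': 12.5, 'days': 7}
```

The `None` fields make the study's central point concrete: when roughly 60% of records never state the indication, dose, or duration, no extractor, rules-based or learned, can recover them.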
Knowledge graph (KG) embedding has been used to benefit the diagnosis of animal diseases by analyzing electronic medical records (EMRs), such as notes and veterinary records. However, learning representations to capture entities and relations with literal information in KGs is challenging as the KGs show heterogeneous properties and various types of literal information. Meanwhile, the existing methods mostly aim to preserve graph structures surrounding target nodes without considering different types of literals, which could also carry significant information. In this paper, we propose a knowledge graph embedding model for the efficient diagnosis of animal diseases, which could learn various types of literal information and graph structure and fuse them into unified representations, namely LiteralKG. Specifically, we construct a knowledge graph that is built from EMRs along with literal information collected from various animal hospitals. We then fuse different types of entities and node feature information into unified vector representations through gate networks. Finally, we propose a self-supervised learning task to learn graph structure in pretext tasks and then towards various downstream tasks. Experimental results on link prediction tasks demonstrate that our model outperforms the baselines that consist of state-of-the-art models.
Machine learning (ML) is an approach to artificial intelligence characterised by the use of algorithms that improve their own performance at a given task (e.g. classification or prediction) based on data and without being explicitly and fully instructed on how to achieve this. Surveillance systems for animal and zoonotic diseases depend upon effective completion of a broad range of tasks, some of them amenable to ML algorithms. As in other fields, the use of ML in animal and veterinary public health surveillance has greatly expanded in recent years. Machine learning algorithms are being used to accomplish tasks that have become attainable only with the advent of large data sets, new methods for their analysis and increased computing capacity. Examples include the identification of an underlying structure in large volumes of data from an ongoing stream of abattoir condemnation records, the use of deep learning to identify lesions in digital images obtained during slaughtering, and the mining of free text in electronic health records from veterinary practices for the purpose of sentinel surveillance. However, ML is also being applied to tasks that previously relied on traditional statistical data analysis. Statistical models have been used extensively to infer relationships between predictors and disease to inform risk-based surveillance, and increasingly, ML algorithms are being used for prediction and forecasting of animal diseases in support of more targeted and efficient surveillance. While ML and inferential statistics can accomplish similar tasks, they have different strengths, making one or the other more or less appropriate in a given context.
Companion animal medical data are abundant but fragmented, highlighting the need for knowledge graphs (KGs) to improve interoperability, analytics, and applications in veterinary medicine. However, recent veterinary KGs are narrow and often rely on manual or semi-structured data, limiting scalability, integration with standards, and handling of unstructured texts. To address this, we propose an LLM-driven KG covering 54 major companion animal diseases, integrating clinical data across species and aligned with SNOMED-CT and VeNOM. The proposed KG schema supports entity, relation, and attribute types, enabling efficient querying via Cypher, Gremlin, or SPARQL. Our pipeline combines LLM-based corpus preprocessing, zero-shot pseudo-annotation with confidence scoring, expert refinement, and semi-supervised learning. Detailed attributes are embedded to balance schema simplicity with Named Entity Recognition (NER) and Relation Extraction (RE) performance. As a result, our proposed KG can mitigate hallucinations of LLMs through structured representations and GraphRAG, enabling robust applications such as veterinary chatbots, symptom-based diagnosis, and cross-species analysis, establishing a scalable foundation for precision veterinary medicine.
Objective To compare the clinical decision-support performance of 2 large language models (LLMs), ChatGPT-5 and ChatGPT-5 Thinking, with that of experienced clinicians and novices in veterinary theriogenology. Methods 15 standardized obstetric and gynecologic scenarios were independently evaluated by 2 expert clinicians, 2 novice veterinarians, and both LLMs under matched, cold-start conditions. Responses were assessed with a 5-point global quality score by a blinded expert panel. Results ChatGPT-5 Thinking achieved the highest overall quality ratings, followed by ChatGPT-5 and the expert clinicians. Novice veterinarians received the lowest scores. Responses generated by the LLMs were generally more consistent and complete than those of the human readers. Conclusions Within the constraints of a simulated scenario design, LLMs, particularly ChatGPT-5 Thinking, provided clinically appropriate guidance that exceeded novice performance and approached that of expert clinicians. These findings support the potential role of LLMs as adjunct decision-support tools in time-sensitive obstetric and gynecologic cases. Clinical Relevance LLMs may assist clinicians and trainees in managing reproductive emergencies by offering rapid, structured, guideline-aligned recommendations. Further evaluation in real clinical settings is warranted.
Computerised decision support is of emerging and increasing importance in human medicine, but as yet has not been thoroughly applied or evaluated in veterinary medicine. In this essay, the authors report on the first example of a veterinary care pathway, a specific form of computerised decision support, which guides clinicians through a clinical workflow and incorporates individual patient data to inform patient-specific decision recommendations. The veterinary care pathway was designed using consensus statements and specialist neurologist opinion to create a decision support tool concerning canine idiopathic epilepsy. The authors evaluated the care pathway by comparing 35 clinical decisions made by referral clinicians in historical cases of idiopathic epilepsy to decisions recommended by the care pathway when presented with the same clinical case. Their results show that in 77.1% (95% confidence interval [59.9, 89.6]) of cases the care pathway recommended a decision that was the same or similar to a specialist neurologist's decision. Whilst further studies are needed to explore the potential use of such technology in clinical practice, the authors believe this first application provides great promise of a new and alternative method of clinical decision support.
OBJECTIVE To develop a multivariable model and online decision-support calculator to aid in preoperative discrimination of benign from malignant splenic masses in dogs. ANIMALS 522 dogs that underwent splenectomy because of splenic masses. PROCEDURES A multivariable model was developed with preoperative clinical data obtained retrospectively from the records of 422 dogs that underwent splenectomy. Inclusion criteria were the availability of complete abdominal ultrasonographic examination images and splenic histologic slides or histology reports for review. Variables considered potentially predictive of splenic malignancy were analyzed. A receiver operating characteristic curve was created for the final multivariable model, and area under the curve was calculated. The model was externally validated with data from 100 dogs that underwent splenectomy subsequent to model development and was used to create an online calculator to estimate probability of splenic malignancy in individual dogs. RESULTS The final multivariable model contained 8 clinical variables used to estimate splenic malignancy probability: serum total protein concentration, presence (vs absence) of ≥ 2 nRBCs/100 WBCs, ultrasonographically assessed splenic mass diameter, number of liver nodules (0, 1, or ≥ 2), presence (vs absence) of multiple splenic masses or nodules, moderate to marked splenic mass inhomogeneity, moderate to marked abdominal effusion, and mesenteric, omental, or peritoneal nodules. Areas under the receiver operating characteristic curves for the development and validation populations were 0.80 and 0.78, respectively. CONCLUSIONS AND CLINICAL RELEVANCE The online calculator (T-STAT.net or T-STAT.org) developed in this study can be used as an aid to estimate the probability of malignancy in dogs with splenic masses and has potential to facilitate owners' decisions regarding splenectomy.
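An online calculator of this kind typically evaluates a fitted logistic regression over the listed predictors. The sketch below shows that shape with the 8 variables named above, but the coefficients are purely illustrative, NOT the published model's fitted values (which are not given in the abstract):

```python
import math

# Illustrative coefficients only -- NOT the published T-STAT model's fitted values.
WEIGHTS = {
    "total_protein_g_dl": -0.4,
    "nrbc_per_100_wbc_ge2": 1.2,
    "mass_diameter_cm": 0.15,
    "liver_nodules": 0.8,
    "multiple_splenic_masses": -0.9,
    "marked_mass_inhomogeneity": 0.7,
    "marked_abdominal_effusion": 1.0,
    "mesenteric_omental_peritoneal_nodules": 1.1,
}
INTERCEPT = 0.5

def malignancy_probability(case):
    """Logistic combination of the 8 predictors, as such a calculator would
    compute it: p = 1 / (1 + exp(-(b0 + sum(bi * xi))))."""
    z = INTERCEPT + sum(WEIGHTS[k] * case[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

case = {
    "total_protein_g_dl": 5.8,
    "nrbc_per_100_wbc_ge2": 1,      # binary: >= 2 nRBCs/100 WBCs present
    "mass_diameter_cm": 6.0,
    "liver_nodules": 2,             # coded 0, 1, or >= 2
    "multiple_splenic_masses": 0,
    "marked_mass_inhomogeneity": 1,
    "marked_abdominal_effusion": 1,
    "mesenteric_omental_peritoneal_nodules": 0,
}
print(f"estimated P(malignant) = {malignancy_probability(case):.2f}")
```

For the real estimate, the T-STAT.net / T-STAT.org calculator named in the abstract should be used; this sketch only illustrates the functional form behind such tools.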
Details of the designs and mechanisms in support of human-AI collaboration must be considered in the real-world fielding of AI technologies. A critical aspect of interaction design for AI-assisted human decision making is the set of policies about the display and sequencing of AI inferences within larger decision-making workflows. We have a poor understanding of the influences of making AI inferences available before versus after human review of a diagnostic task at hand. We explore the effects of providing AI assistance at the start of a diagnostic session in radiology versus after the radiologist has made a provisional decision. We conducted a user study where 19 veterinary radiologists identified radiographic findings present in patients’ X-ray images, with the aid of an AI tool. We employed two workflow configurations to analyze (i) anchoring effects, (ii) human-AI team diagnostic performance and agreement, (iii) time spent and confidence in decision making, and (iv) perceived usefulness of the AI. We found that participants who are asked to register provisional responses in advance of reviewing AI inferences are less likely to agree with the AI regardless of whether the advice is accurate and, in instances of disagreement with the AI, are less likely to seek the second opinion of a colleague. These participants also reported the AI advice to be less useful. Surprisingly, requiring provisional decisions on cases in advance of the display of AI inferences did not lengthen the time participants spent on the task. The study provides generalizable and actionable insights for the deployment of clinical AI tools in human-in-the-loop systems and introduces a methodology for studying alternative designs for human-AI collaboration. We make our experimental platform available as open source to facilitate future research on the influence of alternate designs on human-AI workflows.
Context and Objective: Over the past few decades, terminologies developed for clinical descriptions have been increasingly used as key resources for knowledge management, data integration, and decision support to the extent that today they have become essential in the biomedical and health field. Among these clinical terminologies, some may possess the characteristics of one or several types of representation. This is the case for the Systematized Nomenclature of Human and Veterinary Medicine—Clinical Terms (SNOMED CT), which is both a clinical medical terminology and a formal ontology based on the principles of the semantic web. Methods: We present and discuss, on one hand, the compliance of SNOMED CT with the requirements of a reference clinical terminology and, on the other hand, the specifications of the features and constructs of the description logic of SNOMED CT. Results: We demonstrate the consistency of the reference clinical terminology SNOMED CT with the principles stated in James J. Cimino’s desiderata and we also show that SNOMED CT contains an ontology based on the EL profile of OWL2 with some simplifications. Conclusions: The duality of SNOMED CT shown is crucial for understanding its versatility, depth, and scope in the health field.
Electronic patient records from practice management software systems have been used extensively in medicine for the investigation of clinical problems leading to the creation of decision support frameworks. To date, technologies that have been utilised for this purpose such as text mining and content analysis have not been employed significantly in veterinary medicine. The aim of this research was to pilot the use of content analysis and text-mining software for the synthesis and analysis of information extracted from veterinary electronic patient records. The purpose of the work was to be able to validate this approach for future employment across a number of practices for the purposes of practice based research. The approach utilised content analysis (Prosuite) and text mining (WordStat) software to aggregate the extracted text. Text mining tools such as Keyword in Context (KWIC) and Keyword Retrieval (KR) were employed to identify specific occurrences of data across the records. Two different datasets were interrogated, a bespoke test dataset that had been set up specifically for the purpose of the research, and a functioning veterinary clinic dataset that had been extracted from one veterinary practice. Across both datasets, the KWIC analysis was found to have a high level of accuracy with the search resulting in a sensitivity of between 85.3-100%, a specificity of between 99.1-99.7%, a positive predictive value between 93.5-95.8% and a negative predictive value between 97.7-100%. The KR search, based on machine learning, was utilised for the clinic-based dataset and was found to perform slightly better than the KWIC analysis. This study is the first to demonstrate the application of content analysis and text mining software for validation purposes across a number of different datasets for the purpose of search and recall of specific information across electronic patient records. This has not been demonstrated previously for small animal veterinary epidemiological research for the purposes of large scale analysis for practice-based research. Extension of this work to investigate more complex diseases across larger populations is required to fully explore the use of this approach in veterinary practice.
In veterinary medicine, conventional radiography is the first–choice method for most diagnostic imaging applications in both small animal and equine practice. One direction in its development is the integration of bone density evaluation and artificial intelligence–assisted clinical decision–making, which is expected to enhance and streamline veterinarians' daily practices. One such decision–support method is k–means clustering, a machine learning and data mining technique that can be used clinically to classify radiographic signs into healthy or affected clusters. The study aims to investigate whether the k–means clustering algorithm can differentiate cortical and trabecular bone in both healthy and affected horse limbs. To that end, identifying the optimal computed digital absorptiometry parameters was necessary. Five metal density standards, made of pure aluminum, aluminum alloy (duralumin), copper alloy, iron–nickel alloy, and iron–silicon alloy, and ten X–ray tube settings were evaluated for the radiographic imaging of equine distal limbs, including six healthy limbs and six with radiographic signs of osteoarthritis. Density standards were imaged using ten combinations of X–ray tube settings, ranging from 50 to 90 kV and 1.2 to 4.0 mAs. The relative density in Hounsfield units was first obtained for both bone types and the density standards, then compared, and finally used for clustering. In both healthy and osteoarthritis–affected limbs, the relative density of the long pastern bone (the proximal phalanx) differed between bone types, allowing the k–means clustering algorithm to successfully differentiate cortical and trabecular bone. The density standard made of duralumin, together with the 60 kV, 4.0 mAs X–ray tube settings, yielded the highest clustering metric values and was therefore considered optimal for further research.
We believe that the identified optimal computed digital absorptiometry parameters may be recommended for further research on the relative quantification of conventional radiographs and for distal limb examination in equine veterinary practice.
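The clustering step described above reduces, in the simplest case, to one-dimensional k-means over relative-density values. A self-contained sketch with invented Hounsfield-unit values (not the study's data or code):

```python
import random

def kmeans_1d(values, k=2, iters=100, seed=0):
    """Plain k-means on scalar relative-density values (illustrative only)."""
    rng = random.Random(seed)
    centroids = rng.sample(values, k)
    labels = [0] * len(values)
    for _ in range(iters):
        # Assign each value to its nearest centroid.
        labels = [min(range(k), key=lambda c: abs(v - centroids[c])) for v in values]
        # Recompute centroids as cluster means.
        new = []
        for c in range(k):
            members = [v for v, l in zip(values, labels) if l == c]
            new.append(sum(members) / len(members) if members else centroids[c])
        if new == centroids:
            break
        centroids = new
    return centroids, labels

# Invented relative densities (HU): trabecular bone lower, cortical bone higher.
densities = [310, 295, 330, 305, 980, 1010, 965, 990]
centroids, labels = kmeans_1d(densities, k=2)
```

With well-separated cortical and trabecular values, the two centroids converge to the group means and the labels recover the two bone types, which mirrors the healthy/affected cluster assignment used in the study.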
BACKGROUND Artificial intelligence (AI) has emerged as one of the most transformative tools for developing clinical decision-support systems in veterinary medicine. Despite its growing use, its full potential remains underutilized in equine medicine, an area of both high economic and clinical importance. Accurate survival prediction in horses with colic is crucial for timely intervention and improved clinical outcomes. METHODS This study aimed to predict survival outcomes in horse colic cases by developing models that combine traditional machine-learning algorithms (XGBoost, Light Gradient Boosting Machine [LightGBM], and Categorical Boosting [CatBoost]) with advanced deep-learning architectures (TabNet, Feature Tokenizer Transformer [FT_Transformer], and Neural Oblivious Decision Ensemble [NODE]). Missing clinical data were imputed using deep-learning-based approaches: Generative Adversarial Imputation Networks (GAIN-OneHot, GAIN-Emb) and Missing Data Imputation via Denoising Autoencoder (MIDAS). Class imbalance was addressed through Conditional Tabular Generative Adversarial Network (CTGAN) and Tabular Variational Autoencoder (TVAE). Model interpretability was assessed using the SHapley Additive exPlanations (SHAP)-based Explainable Artificial Intelligence (XAI) framework to identify the most influential features contributing to survival prediction. RESULTS Among the tested combinations, the TVAE-GAIN-OneHot-LightGBM pipeline achieved the highest classification performance, with an area under the curve (AUC) value of 0.928, outperforming conventional statistical and machine-learning baselines. SHAP analysis revealed that total_protein, abdomo_appearance, mucous_membrane, packed_cell_volume, and temp_of_extremities were the most decisive clinical variables influencing the model's predictions.
CONCLUSIONS The findings demonstrate that ensuring data integrity, optimizing model complexity, and integrating XAI-based interpretability substantially enhance the reliability and clinical applicability of AI-driven models in veterinary medicine. The proposed framework provides a pioneering and explainable approach for developing accurate prognostic systems in equine colic, paving the way for broader AI adoption in clinical veterinary practice.
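The reported AUC has a direct rank interpretation: it is the probability that a randomly chosen survivor receives a higher predicted survival score than a randomly chosen non-survivor. A minimal computation with invented scores:

```python
def auc(scores_pos, scores_neg):
    """AUC = P(score of a random positive > score of a random negative); ties count 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Invented predicted survival probabilities for four survivors and three non-survivors.
survived = [0.9, 0.8, 0.75, 0.6]
died = [0.55, 0.4, 0.3]
print(auc(survived, died))  # 1.0: every survivor outranks every non-survivor here
```

An AUC of 0.928, as in the abstract, means roughly 93% of such survivor/non-survivor pairs are ranked correctly by the model.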
Feline mammary tumours represent the third most common malignancy in cats, with limited evidence-based tools available for risk assessment and screening guidance. Traditional veterinary approaches rely on subjective clinical judgement, lacking quantitative risk stratification methods that could optimise preventive care delivery. To develop and validate the first comprehensive machine learning-based risk prediction system for feline mammary tumours, providing evidence-based clinical decision support for veterinary practice. We developed a comprehensive synthetic dataset of 4399 feline cases spanning 2002-2022, systematically calibrated against real-world epidemiological data from published literature. The synthetic data incorporated demographic, clinical, reproductive, and environmental variables that precisely replicated actual epidemiological relationships. Five machine learning algorithms (Random Forest, XGBoost, Neural Network, SVM, Logistic Regression) were trained and combined using soft voting ensemble methodology. Model performance was evaluated using area under the curve (AUC), calibration metrics, and clinical utility measures. The ensemble model achieved excellent discrimination capability (AUC = 0.888, 95% CI: 0.873-0.903) with 80.5% accuracy, 85.7% sensitivity, and 76.0% specificity. Risk stratification demonstrated clear clinical utility: low-risk cats (< 30% probability) had 12.4% tumour prevalence, while very high-risk cats (> 80% probability) showed 89.5% prevalence. The machine learning approach substantially outperformed traditional assessment methods, showing 64.8% improvement in discriminative ability and a 163% increase in net clinical benefit. This study establishes the first validated machine learning-based clinical decision support system for feline mammary tumour risk assessment. 
The risk stratification approach enables personalised screening recommendations while optimising resource allocation, potentially transforming preventive veterinary oncology practice.
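Soft voting, as used to combine the five algorithms above, averages the per-model predicted probabilities before applying risk thresholds. A sketch with invented model outputs; the band cut-offs follow the <30% and >80% probabilities quoted in the abstract:

```python
def soft_vote(prob_lists):
    """Average the predicted probabilities of several models (soft voting)."""
    n = len(prob_lists)
    return [sum(p) / n for p in zip(*prob_lists)]

def risk_band(p):
    """Map an ensemble probability to the risk bands described in the study."""
    if p < 0.30:
        return "low"
    if p > 0.80:
        return "very high"
    return "intermediate"

# Invented tumour probabilities from five models for three cats.
model_probs = [
    [0.10, 0.55, 0.90],
    [0.20, 0.60, 0.85],
    [0.15, 0.50, 0.95],
    [0.05, 0.65, 0.88],
    [0.25, 0.45, 0.92],
]
ensemble = soft_vote(model_probs)
bands = [risk_band(p) for p in ensemble]
```

The averaging smooths out disagreement between models, which is why soft voting often calibrates better than taking a majority of hard class labels.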
Skin diseases in dogs, such as hypersensitive dermatitis, fungal infections, and bacterial dermatoses, present diverse clinical signs that complicate diagnosis in veterinary practice. This study employs MATLAB as an image-processing tool to enhance diagnostic accuracy through a structured pipeline. A dataset of 500 canine skin images obtained from Kaggle was processed using enlargement, histogram equalization, Gaussian filtering, and Sobel convolution. These methods improved image quality by enhancing contrast, reducing noise, and clarifying lesion boundaries. The experimental results demonstrate that the processed images allow veterinarians to more easily detect key diagnostic features, including changes in lesion texture, color, and shape. Enhanced visual clarity supports faster identification of disease patterns and reduces diagnostic ambiguity in clinical settings. This study highlights the potential of MATLAB-based image processing as an effective decision-support tool for veterinary dermatology, enabling quicker and more reliable treatment planning. Future work may integrate deep learning classification to further automate disease recognition.
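Histogram equalization, one step of the pipeline above, can be illustrated in plain Python (a stdlib sketch assuming an 8-bit grayscale image, standing in for the study's MATLAB toolchain; the tiny image is invented):

```python
def equalize(image, levels=256):
    """Histogram equalization of a grayscale image given as a list of pixel rows."""
    flat = [p for row in image for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution, then map occupied levels onto the full output range.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1)) for c in cdf]
    return [[lut[p] for p in row] for row in image]

# A flat, low-contrast 2x3 "image"; equalization spreads its values over 0-255.
img = [[100, 101, 102], [100, 101, 103]]
print(equalize(img))
```

Stretching a narrow intensity band across the full range is what makes subtle lesion texture and boundary differences easier to see.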
Taking into account the achievements of state-of-the-art computer vision methods in recent years, the aim of this research was to examine the extent to which their application can help detect symptoms of eye diseases in dogs and diagnose ophthalmological conditions, in order to provide owners with preliminary information about their pets' disease and speed up diagnosis for veterinarians. Clinical data on canine eye diseases including at least one of four disease symptoms were collected to form a training set for the segmentation model, which was expanded with synthesized data generated using a LoRA Stable Diffusion model and verified by an ophthalmologist. A segmentation model based on the U-Net architecture with a ResNet34 backbone was fine-tuned on the prepared set and compared to zero-shot GPT-4o and Grounding SAM. The results show that the fine-tuned U-Net model gives the best segmentation of eye disease symptoms, with 97% pixel accuracy, significantly outperforming the other tested methods. The segmentation masks are used as part of the prompts for GPT-4 and GPT-4o to generate diagnoses of diseases with the specified symptoms. The generated diagnoses were evaluated using text evaluation metrics; the most accurate diagnosis, with a BERTScore of 84%, is achieved using GPT-4o in combination with the U-Net segmentation mask. The article proposes the best-performing pipeline and solutions to be considered for other diagnostic procedures in ophthalmology and veterinary medicine.
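The pixel accuracy metric quoted above is simply the fraction of pixels on which the predicted and ground-truth segmentation masks agree. A minimal sketch with invented binary masks:

```python
def pixel_accuracy(pred_mask, true_mask):
    """Fraction of pixels where the predicted mask matches the ground truth."""
    total = correct = 0
    for prow, trow in zip(pred_mask, true_mask):
        for p, t in zip(prow, trow):
            total += 1
            correct += (p == t)
    return correct / total

# Invented 2x4 masks: 1 marks a symptom pixel, 0 is background.
pred = [[1, 1, 0, 0], [0, 1, 0, 0]]
true = [[1, 1, 0, 0], [0, 0, 0, 0]]
print(pixel_accuracy(pred, true))  # 7 of 8 pixels agree -> 0.875
```

Note that on images dominated by background, pixel accuracy can look high even for mediocre masks, which is why segmentation work often reports IoU or Dice alongside it.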
This article provides a detailed method for detecting kidney disease using both radiology images and data from urine sample analysis. A Convolutional Neural Network (CNN) was used to categorize the radiology images into four classes: normal, cyst, tumor, and stone. The CNN model was effective, with an impressive validation accuracy of 92%, differentiating the different kidney conditions efficiently. In addition, urine analysis data was utilized to predict the likelihood of kidney disease using conventional machine learning models such as Support Vector Machines (SVM) and Random Forest. The Random Forest model scored 87% accuracy, 85% precision, 88% recall, and an 86% F1-score, outperforming the SVM model. Hence the findings indicate that it is possible to achieve early diagnosis of kidney-associated disorders through a mix of imaging and non-imaging modalities. The system shows great potential in enhancing diagnostic accuracy and may be widely adopted in healthcare settings. A further direction of improvement would be the implementation of more complex deep learning models and segmentation techniques to increase the performance of the system.
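The precision, recall, and F1 figures quoted for the Random Forest model all derive from confusion-matrix counts in the standard way. A sketch with invented counts for a single class:

```python
def prf(tp, fp, fn):
    """Precision, recall, and F1 from true positives, false positives, false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Invented counts for one class (e.g. "stone") out of the four-class problem.
p, r, f1 = prf(tp=88, fp=16, fn=12)
```

F1 is the harmonic mean of precision and recall, so it penalizes a model that trades one heavily for the other.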
Compared with other occupations, veterinary medicine and animal care (VMAC) has the 2nd highest incidence rate for nonfatal occupational injuries and illnesses [1]. Specifically, survey studies have shown that an increasing number of veterinarians are experiencing musculoskeletal disorders (MSD) [2,3]. These injuries have created a tremendous burden on workers, their families, companies, and the health care system. Many approaches have been employed to measure ergonomic injury risks. Traditionally, ergonomic risk is assessed through human observation, where the researcher observes and rates subjects' ergonomic risk using tools such as Rapid Upper Limb Assessment (RULA) [5]. However, observation-based techniques may be limited by the subjectivity of the grader. Direct measurement using wearable devices such as inertial measurement units to collect body posture has also been proposed [4]. Although it offers a more reliable assessment of ergonomic risk, this approach can be intrusive and thus interfere with the natural motions of worker activities. Conversely, recent computer vision approaches can capture human posture objectively and continuously by taking advantage of deep learning techniques and advanced cameras. However, computer vision approaches have not yet addressed the complexities associated with ergonomic risk among veterinary surgeons. Hence, our pilot study aims to evaluate veterinarians' ergonomics through a novel computer vision program and compare these findings to traditional grading tools. A total of five participants (two males, three females) from the Purdue small animal hospital were recruited. Data was collected in an actual operating room on typical surgical days, where surgeons perform surgery on an animal. During each case, a RealSense 3D camera was placed at each corner of the room.
A 15 frames/second sampling rate and an RGB resolution of 384 x 480 were used to record the participants' body postures and movements (Fig.1). Depending on the type of surgery, task duration ranged from 30 min to 120 min. For data analysis, the open-source OpenPose [6] package was used, which generated a vector indicating the location of each identified body point in the given image. The acquisition of 3D posture data from a group of subjects, individually or collectively, was achieved using a dynamic region of interest (ROI) (Fig.2). Each participant's RULA score was calculated from the identified body points every 15 s. Additionally, body position was graded using the RULA tool by one independent grader, who reviewed the videos and assigned scores every 15 s. Given the limited sample size, the association between human and computer scores was assessed using nonparametric statistics (the Kruskal-Wallis test). Statistical significance was considered at a p-value of .05 or less. The computer-graded scores correlated with the human-graded scores (p = 0.04). This study provided evidence regarding the feasibility of body point detection in VMAC practice using computer vision techniques. There are several limitations to the study. As a pilot study, the sample size was small; the next step would be to collect more data to increase the power of the statistical analysis. In addition, while the software utilized in the computer program is well validated for detecting postures, continued investigation of human body occlusion and missing data is desired for ongoing improvements in its accuracy.
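RULA scoring starts from joint angles recovered from the detected body points. The basic angle computation from three 2-D keypoints can be sketched as follows (the coordinates are invented, not OpenPose output):

```python
import math

def joint_angle(a, b, c):
    """Angle at keypoint b (e.g. the elbow) formed by keypoints a-b-c, in degrees."""
    ang1 = math.atan2(a[1] - b[1], a[0] - b[0])
    ang2 = math.atan2(c[1] - b[1], c[0] - b[0])
    deg = abs(math.degrees(ang1 - ang2))
    return 360 - deg if deg > 180 else deg  # report the interior angle

# Invented shoulder-elbow-wrist coordinates from one pose-estimation frame;
# the arm is bent at a right angle here.
print(joint_angle((0, 0), (1, 0), (1, 1)))
```

In a RULA pipeline each such angle is then mapped to a posture score band (e.g. upper-arm flexion ranges), and the per-frame scores are aggregated over the 15 s sampling windows described above.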
This paper presents a case study of how Petco, a leading pet retailer, innovated their pet health analysis processes using the Media Insights Engine to reduce the time to first diagnosis. The company leveraged this framework to build custom applications for advanced computer vision tasks, such as identifying potential health issues in pet videos and images, and validating AI outcomes with pre-built veterinary diagnoses. The Media Insights Engine provides a modular and extensible solution that enabled Petco to quickly build machine learning applications for media workloads. By utilizing this framework, Petco was able to accelerate their project development, improve the efficiency of their pet health analysis, and ultimately reduce the time to first diagnosis for pet health issues. This paper discusses the challenges of pet health analysis using media, the benefits of using the Media Insights Engine, and the architecture of Petco's custom applications built using this framework.
Recently, with increasing interest in pet healthcare, the demand for computer-aided diagnosis (CAD) systems in veterinary medicine has increased. The development of veterinary CAD has stagnated due to a lack of sufficient radiology data. To overcome this challenge, we propose a generative active learning framework based on a variational autoencoder. This approach aims to alleviate the scarcity of reliable data for CAD systems in veterinary medicine. This study utilizes datasets comprising cardiomegaly radiographic images and chronic kidney disease ultrasound images. After removing annotations and standardizing images, we employed a framework for data augmentation, which consists of a data generation phase and a query phase for filtering the generated data. The experimental results revealed that as the data generated through this framework was added to the training data of the generative model, the Fréchet inception distance (FID) decreased from 84.14 to 50.75 for the radiographic images and from 127.98 to 35.16 for the ultrasound images. Subsequently, when the generated data were incorporated into the training of the classification model, the true-negative rate of the confusion matrix also improved from 0.16 to 0.66 on the radiographs and from 0.44 to 0.64 on the ultrasound images. The proposed framework has the potential to address the challenges of data scarcity in medical CAD, contributing to its advancement.
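The Fréchet distance underlying FID has a closed form for Gaussians; in the one-dimensional case it reduces to (mu1 - mu2)^2 + (sigma1 - sigma2)^2. A toy sketch with invented feature statistics, showing the distance shrinking as generated data approach the real distribution (real FID fits multivariate Gaussians to Inception features, which this scalar version only gestures at):

```python
def frechet_1d(mu1, sigma1, mu2, sigma2):
    """Squared Frechet distance between two 1-D Gaussians.
    FID applies the analogous formula to multivariate Gaussians
    fitted to Inception-network features."""
    return (mu1 - mu2) ** 2 + (sigma1 - sigma2) ** 2

# Invented statistics: generated-image features move toward the real distribution.
real = (0.0, 1.0)
before = frechet_1d(5.0, 3.0, *real)  # early generator, far from real
after = frechet_1d(1.0, 1.5, *real)   # after adding filtered generated data
print(before, after)
```

A lower value means the generated-feature distribution sits closer to the real one, which is exactly the trend the abstract reports (84.14 → 50.75 and 127.98 → 35.16).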
BACKGROUND Artificial intelligence (AI) has been used successfully in human dermatology. AI utilises convolutional neural networks (CNN) to accomplish tasks such as image classification, object detection and segmentation, facilitating early diagnosis. Computer vision (CV), a field of AI, has shown great results in detecting signs of human skin diseases. Canine paw skin diseases are a common problem in general veterinary practice, and computer vision tools could facilitate the detection and monitoring of disease processes. Currently, no such tool is available in veterinary dermatology. ANIMALS Digital images of paws from healthy dogs and paws with pododermatitis or neoplasia were used. OBJECTIVES We tested the novel object detection model Pawgnosis, a Tiny YOLOv4 image analysis model deployed on a microcomputer with a camera for the rapid detection of canine pododermatitis and neoplasia. MATERIALS AND METHODS The prediction performance metrics used to evaluate the models included mean average precision (mAP), precision, recall, average precision (AP) for accuracy and frames per second (FPS) for speed. RESULTS A large dataset labelled by a single individual (Dataset A) used to train a Tiny YOLOv4 model provided the best results with a mean mAP of 0.95, precision of 0.86, recall of 0.93 and 20 FPS. CONCLUSIONS AND CLINICAL RELEVANCE This novel object detection model has the potential for application in the field of veterinary dermatology.
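The mAP, precision, and recall figures quoted above rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes; the IoU computation itself is short (boxes invented):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Invented predicted vs ground-truth lesion boxes with 50% horizontal overlap.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))
```

A detection typically counts as a true positive only when its IoU with a ground-truth box exceeds a threshold (commonly 0.5), and average precision is then computed from the resulting precision-recall curve.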
Simple Summary Ticks are ectoparasites of humans, livestock, and wild animals and, as such, they are a nuisance, as well as vectors for disease transmission. Since the risk of tick-borne disease varies with the tick species, tick identification is vitally important in assessing threats. Standard taxonomic approaches are time-consuming and require skilled microscopy. Computer vision may provide a tenable solution to this problem. The emerging field of computer vision has many practical applications already, such as medical image analyses, facial recognition, and object detection. This tool may also help with the identification of ticks. To train a computer vision model, a substantial number of images are required. In the present study, tick images were obtained from a tick passive surveillance program that receives ticks from public individuals, partnering agencies, or veterinary clinics. We developed a computer vision method to identify common tick species and our results indicate that this tool could provide accurate, affordable, and real-time solutions for discriminating tick species. It provides an alternative to the present tick identification strategies. Abstract A wide range of pathogens, such as bacteria, viruses, and parasites can be transmitted by ticks and can cause diseases, such as Lyme disease, anaplasmosis, or Rocky Mountain spotted fever. Landscape and climate changes are driving the geographic range expansion of important tick species. The morphological identification of ticks is critical for the assessment of disease risk; however, this process is time-consuming, costly, and requires qualified taxonomic specialists. To address this issue, we constructed a tick identification tool that can differentiate the most encountered human-biting ticks, Amblyomma americanum, Dermacentor variabilis, and Ixodes scapularis, by implementing artificial intelligence methods with deep learning algorithms. 
Many convolutional neural network (CNN) models (such as VGG, ResNet, or Inception) have been used for image recognition purposes, but their application to tick identification remains limited. Here, we describe modified CNN-based models which were trained using a large-scale, molecularly verified dataset to identify tick species. The best CNN model achieved 99.5% accuracy on the test set. These results demonstrate that a computer vision system is a potential alternative tool to help in prescreening ticks for identification and an earlier diagnosis of disease risk and, as such, could be a valuable resource for health professionals.
The diagnosis and treatment of skin diseases of dogs, such as fungal infection, bacterial dermatosis and hypersensitivity dermatitis, are challenging tasks in veterinary dermatology. Traditional methods of diagnosis are laborious and subjective, which can lead to human error and delay in finding the true cause of a problem. The method proposed in this paper is based on deep learning, with a Convolutional Neural Network (CNN) structure, with the goal of automated detection and classification of dog skin disease cases into four categories: fungal infections, bacterial dermatosis, hypersensitivity dermatitis, and healthy skin. The presented methodology trains and validates the model on a high-resolution labeled image database. Data augmentation techniques such as rotation, scaling, and flipping are also applied for better generalization of the model. The results showed that the proposed algorithm gives high performance as measured by sensitivity, specificity, and accuracy against traditional diagnostic measures. This work can provide a path toward more efficient veterinary dermatology, enabling timely interventions with better health outcomes for dogs. Besides, the model could be integrated into an intuitive application to ensure greater outreach for implementation in a clinical-facing setting. This study presents a lightweight convolutional neural network (CNN)-based approach for the classification of common canine skin diseases, including bacterial, fungal, hypersensitivity-related, and healthy conditions. The proposed model was trained on a publicly available dermatological dataset of labeled dog images. Key innovations include a streamlined preprocessing pipeline, application of data augmentation to improve generalization, and optimization for deployment in resource-constrained environments. The model achieved an accuracy of 95% and demonstrated robust performance across all classes.
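The flip augmentation mentioned above can be sketched in a few lines (a toy 2x2 image rather than the study's pipeline; real pipelines apply the same idea to pixel arrays via a library):

```python
def hflip(image):
    """Horizontal flip of an image given as a list of pixel rows."""
    return [list(reversed(row)) for row in image]

def augment(images):
    """Double a dataset by adding the horizontal mirror of every image."""
    return images + [hflip(img) for img in images]

# One invented 2x2 "image"; augmentation yields the original plus its mirror.
imgs = [[[1, 2], [3, 4]]]
print(augment(imgs))
```

Because a lesion's class does not depend on left/right orientation, such label-preserving transforms enlarge the training set and improve generalization, as the abstract notes.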
In this study, a deep learning-based web application for classifying canine skin conditions from dermal images is presented. The paper elaborates on how the ResNet-50 architecture is used to train the model to recognize six basic skin conditions in dogs: dermatitis, fungal infections, healthy skin, hypersensitivity, demodicosis, and ringworm. AI-based technologies such as CNNs are incorporated into the web app to avoid human error, increase diagnostic accuracy, and make the diagnosis and treatment process faster and more precise. Issues such as the web server returning the expected response codes, client-side components executing as they should, and all visual elements rendering correctly were resolved. A self-trained CNN, a reinforcement-type chatbot, and Gemini-based diagnostic data storage solutions run side by side. The system evaluates the image against all the disease probabilities to select the most likely answer, and a conversational interface can present the most probable issue and the corresponding therapy to non-technical clinical staff. As shown in a preliminary study, high classification performance without confusion among different inputs is a good sign of model stability. David Marquardt, Priscilla Rizal, and Anifah Lestari provide a compelling demonstration of how CNN-based architectures like ResNet-50 can benefit veterinary diagnostics. Their findings indicate that these models could serve as the foundation for future studies involving more extensive datasets, cross-breed generalization, and clinical integration.
Skin disease in dogs is among the most common health problems affecting pet animals, often leading to discomfort and severe infections if not diagnosed early. Conventional veterinary diagnosis by visual inspection is susceptible to human error and can be subjective. In the proposed study, deep learning-based methods employing Convolutional Neural Networks (CNNs) and transfer learning are used to classify skin conditions in dogs as either healthy or infected. The dog skin disease image datasets are categorized into four types: fungal infections, bacterial dermatosis, hypersensitivity (allergic dermatosis), and healthy skin. Using transfer learning approaches, pre-trained deep learning models such as MobileNetV2, EfficientNetB0, EfficientNetB3, DenseNet121, and VGG19 were optimized to enhance model performance. DenseNet121 achieved the highest validation accuracy of 76.40% compared to the other state-of-the-art methods. Moreover, this study will help veterinarians by providing an automated system for early disease detection. It can be further integrated into real-time mobile or web-based applications to support practical veterinary diagnosis.
Dog skin ailments such as flea allergy, mange, ringworm, and hot spots are common, highly contagious, and in most cases go unnoticed until late stages, causing affected dogs to suffer incessantly. Late detection is mostly due either to infrequent vet visits or to inattentiveness of owners. In this implementation, a full-stack, real-time web application is developed: dog owners upload pictures of their dogs so that the system can diagnose the skin diseases the dogs may have. The system is built around a ResNet-18 classifier, implemented using the Roboflow service, which demonstrated an accuracy of 94.9 percent on the test set. It connects a Python Flask microservice for inference, a Spring Boot backend for responses and database control, and a user-friendly ReactJS interface. A feedback system is also in place to make the model more reliable through corrective feedback from real-life situations. The platform, being modular as well as scalable, is a cost-effective, responsive diagnostic tool with which pet owners can act in time in response to a problem and thereby avoid unnecessary trips to the vet. This arrangement, therefore, presents one of the ways deep learning and microservice architectures can be utilized in veterinary telemedicine to provide an excellent means of managing pet health.
Dirofilaria immitis (D. immitis), or heartworm, is the most pathogenic filaria in dogs and also occasionally infects humans. Dirofilariasis has been found all over the world, and in Iran, on average, 11.5% of dogs are infected. Microscopic examination, the modified Knott method, is a definitive and very common diagnostic method for detecting microfilariae in peripheral blood. It is inexpensive, relatively quick, and does not require advanced and expensive laboratory equipment. However, identification and differentiation of microfilariae from artifacts depends on the abilities and expertise of technicians. The aim of this study was to remove this limitation by developing an artificial intelligence, deep learning-based system that detects microfilariae in blood slides and differentiates microfilariae from thread-like artifacts automatically. To this end, blood samples (n=300) were obtained from stray dogs in Guilan province. The existence of microfilariae was assessed by modified Knott's test under microscopic examination, which identified 29 cases infected with microfilariae. These positive results were confirmed with conventional PCR. The microfilariae measured 295.13±14.9 µm in length and 5.8±0.43 µm in width. The images captured of microfilariae and artifacts were used to train and test the proposed deep learning-based system. The developed system diagnoses D. immitis with an accuracy of greater than 95% and can thus be widely used for epidemiological studies. Since microfilariae can be misdiagnosed as thread-shaped artifacts, the proposed system plays an effective role in accurate and reliable diagnosis of D. immitis and can be used in field studies.
ABSTRACT Canine Hip Dysplasia (CHD) is a congenital disease with a polygenic hereditary component, characterised by abnormal development of the coxo-femoral joint which results in poor coaptation of the femoral head in the acetabulum; the disease rapidly progresses to osteoarthritis of the hip. While dysplasia has been recognised in practically all canine breeds, it is much more common and of concern in medium and large dog breeds with rapid development. Dysplasia in predisposed breeds, particularly the German Shepherd, is the object of screening based on systematic radiological control in some countries. Our collected dataset comprises 507 X-ray images of dogs affected by hip dysplasia (HD). These images were meticulously evaluated using six Deep Convolutional Neural Network (CNN) models. Following an extensive analysis of the top-performing models, VGG16 emerged as the leader, achieving remarkable accuracy, recall, and precision scores of 98.32%, 98.35%, and 98.44%, respectively. Leveraging deep learning (DL) techniques, this approach excels in diagnosing CHD from hip X-rays with a high degree of accuracy.
Introduction: Dog bites are a serious public health issue that cause medical complications and require immediate interventions. Human decisions regarding the severity of the bites are subjective and unreliable, thereby leading to uneven treatment outcomes. An automatic dog bite detector and a novel approach to severity classification enhance the speed and efficiency of response. By pre-processing input data and executing machine learning methods, the severity of dog bite wounds is classified into minor, moderate, and severe. Personalized medical treatments are facilitated by the system, enhancing patient care and public safety. Interfacing with veterinary clinics, healthcare centers, and emergency response teams ensures immediate assessment and treatment. Clinical effectiveness and workability will be tested in a pilot study to develop a more systematic and evidence-based approach towards handling dog bite cases. Objectives: Detection and classification of the severity of dog bites enhance efficacy and response rate. Machine learning categorization of the bite as minor, moderate, or severe supports timely and proper medical intervention and improves treatment results. Integration of the system in veterinary clinics and health centers is directed towards quick assessment and response. Clinical efficacy of the planned intervention will be assessed through pilot studies, aiming to apply systematic and evidence-based methods to provide effective treatment of dog bites and community safety. Methods: Dog bite detection and severity classification employ a systematic process of preprocessing, feature extraction, and classification for precise estimation of dog bite severity. Preprocessing normalizes the datasets through resizing, pixel normalization, and data augmentation. Feature extraction employs a CNN for wound features and ResNet-50 with residual connections for improved accuracy.
The hybrid CNN-ResNet50 architecture classifies wounds, detects severity, and suggests treatments. Through this integration, it facilitates feature learning, reduces diagnostic error, and increases the speed of medical response, thereby facilitating timely and uniform treatment of dog bites. Results: The Dog Bite Diagnosis and Classification System gives real-time evaluation, accurate diagnosis, and sufficient treatment. It reduces human error, prioritizes emergency cases, and optimizes healthcare resources. It allows for remote AI-facilitated consultations, improving accessibility, especially in rural regions. It is available to the whole population, offering protection, timely response, and medical reliability. Conclusions: The Dog Bite Detection and Classification System uses deep learning, integrating CNN and ResNet-50, to effectively detect and classify dog bites in veterinary images. Through improved diagnostic precision, less work for veterinarians, and faster decision-making on treatment, it guarantees effective medical intervention. This AI-led innovation transforms veterinary diagnosis, establishing a new benchmark in animal care by improving accuracy, speed, and quality of treatment.
Natural language processing (NLP), a branch of artificial intelligence that focuses on the interaction between computers and human language, has potential in advancing veterinary pathology through its ability to source knowledge efficiently from vast data sets, generate high-quality text rapidly, and enhance data searchability. This review explores the applications of NLP in veterinary pathology, emphasizing its potential role in diagnostics, training pathologists, and research. NLP might offer many advantages, such as accuracy, speed, and cost reduction, especially for routine tasks including text summarization and report generation. These benefits make NLP a promising technology for achieving precision, adding value, and driving innovation in health care. However, caution is warranted, as NLP models may introduce biases and errors due to the quality of the data they are trained on, have limitations in interpreting nuanced or context-specific information, and lead to private data leakage. Furthermore, the multifaceted nature of veterinary pathology data may require specifically trained and expert-validated algorithms for accurate interpretation. To ensure the credibility and validity of research findings, pathologists must critically evaluate and complement obtained outputs with human expertise and judgment. This article highlights the transformative potential of NLP in veterinary pathology, underscores the importance of integrating this technology into the field for enhanced diagnostic accuracy and research advancements, and gives real-life examples from pathologists for pathologists, which illustrate how NLP can be applied in veterinary pathology.
No abstract available
No abstract available
BACKGROUND Temporal phenotyping of patient journeys, which captures the common sequence patterns of interventions in the treatment of a specific condition, is useful to support understanding of antimicrobial usage in veterinary patients. Identifying and describing these phenotypes can inform antimicrobial stewardship programs designed to fight antimicrobial resistance, a major health crisis affecting both humans and animals, in which veterinarians have an important role to play. OBJECTIVE This research proposes a framework for extracting temporal phenotypes of patient journeys from clinical practice data through the application of natural language processing (NLP) and unsupervised machine learning (ML) techniques, using cat bite abscesses as a model condition. By constructing temporal phenotypes from key events, the relationship between antimicrobial administration and surgical interventions can be described, and similar treatment patterns can be grouped together to describe outcomes associated with specific antimicrobial selection. METHODS Cases identified as having a cat bite abscess as a diagnosis were extracted from VetCompass Australia, a database of veterinary clinical records. A classifier was trained and used to label the most clinically relevant event features in each record as chosen by a group of veterinarians. The labeled records were processed into coded character strings, where each letter represents a summary of specific types of treatments performed at a given visit. The sequences of letters representing the cases were clustered based on weighted Levenshtein edit distances with KMeans++ to identify the main variations of the patient treatment journeys, including the antimicrobials used and their duration of administration. RESULTS A total of 13,744 records that met the selection criteria were extracted and grouped into 8436 cases.
Nine clinically distinct event sequence patterns (temporal phenotypes) of patient journeys were identified, representing the main sequences in which surgery and antimicrobial interventions are performed. Patients receiving amoxicillin and surgery had the shortest duration of antimicrobial administration (median of 3.4 days) and patients receiving cefovecin with no surgical intervention had the longest antimicrobial treatment duration (median of 27 days). CONCLUSION Our study demonstrates methods to extract and provide an overview of temporal phenotypes of patient journeys, which can be applied to text-based clinical records for multiple species or clinical conditions. We demonstrate the effectiveness of this approach to derive real-world evidence of treatment impacts, using cat bite abscesses as a model condition to describe patterns of antimicrobial therapy prescriptions and their outcomes.
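The sequence-comparison step in this framework — coding each case as a character string (one letter per visit-event) and comparing strings with a weighted Levenshtein edit distance before clustering — can be sketched directly. The letters and costs below are invented for illustration; the study's actual coding scheme and weights are not reproduced here.

```python
# Weighted Levenshtein edit distance between coded treatment strings, as a
# minimal sketch of the distance used before KMeans++ clustering.

def weighted_levenshtein(a, b, sub_cost=1.0, indel_cost=1.0):
    """Classic dynamic-programming edit distance with tunable costs."""
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * indel_cost
    for j in range(1, n + 1):
        d[0][j] = j * indel_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0.0 if a[i - 1] == b[j - 1] else sub_cost
            d[i][j] = min(d[i - 1][j] + indel_cost,   # deletion
                          d[i][j - 1] + indel_cost,   # insertion
                          d[i - 1][j - 1] + sub)      # substitution
    return d[m][n]

# Hypothetical coding: "S" = surgery visit, "A" = antimicrobial day.
print(weighted_levenshtein("SAAA", "SAA"))   # one deletion -> 1.0
```

Pairwise distances computed this way form the matrix that a KMeans++-style clustering can then group into treatment-journey phenotypes.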
BACKGROUND Currently there is an incomplete understanding of antimicrobial usage patterns in veterinary clinics in Australia, but such knowledge is critical for the successful implementation and monitoring of antimicrobial stewardship programs. METHODS VetCompass Australia collects medical records from 181 clinics in Australia (as of May 2018). These records contain detailed information from individual consultations regarding the medications dispensed. One unique aspect of VetCompass Australia is its focus on applying natural language processing (NLP) and machine learning techniques to analyse the records, similar to efforts conducted in other medical studies. RESULTS The free text fields of 4,394,493 veterinary consultation records of dogs and cats between 2013 and 2018 were collated by VetCompass Australia and NLP techniques applied to enable the querying of the antimicrobial usage within these consultations. CONCLUSION The NLP algorithms developed matched antimicrobial mentions in clinical records with 96.7% accuracy and an F1 score of 0.85, as evaluated relative to expert annotations. This dataset can be readily queried to demonstrate the antimicrobial usage patterns of companion animal practices throughout Australia.
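The F1 score reported against expert annotations is the standard harmonic mean of precision and recall; the computation can be sketched as below. The counts are illustrative only — the abstract does not publish its confusion matrix.

```python
# Standard precision/recall/F1 from true positives, false positives, and
# false negatives; the input counts here are invented for illustration.

def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=85, fp=15, fn=15)
print(round(f1, 2))  # -> 0.85
```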
No abstract available
The development of natural language processing techniques for deriving useful information from unstructured clinical narratives is a fast-paced and rapidly evolving area of machine learning research. Large volumes of veterinary clinical narratives now exist curated by projects such as the Small Animal Veterinary Surveillance Network (SAVSNET) and VetCompass, and the application of such techniques to these datasets is already (and will continue to) improve our understanding of disease and disease patterns within veterinary medicine. In part one of this two part article series, we discuss the importance of understanding the lexical structure of clinical records and discuss the use of basic tools for filtering records based on key words and more complex rule based pattern matching approaches. We discuss the strengths and weaknesses of these approaches highlighting the on-going potential value in using these “traditional” approaches but ultimately recognizing that these approaches constrain how effectively information retrieval can be automated. This sets the scene for the introduction of machine-learning methodologies and the plethora of opportunities for automation of information extraction these present which is discussed in part two of the series.
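The "traditional" approaches this article surveys — keyword filtering and rule-based pattern matching over free-text records — can be sketched in a few lines. The clinical terms, the negation rule, and the example notes below are invented for illustration; real systems use curated term lists and far richer rules.

```python
import re

# Sketch of keyword filtering plus a slightly richer rule-based pattern that
# skips simple negations ("no vomiting"). Terms and notes are illustrative.

KEYWORDS = {"vomiting", "diarrhoea", "diarrhea"}

def keyword_hit(note):
    """Plain keyword filter: does any target term appear in the note?"""
    tokens = set(re.findall(r"[a-z]+", note.lower()))
    return bool(tokens & KEYWORDS)

NEGATED = re.compile(r"\b(?:no|not|without)\s+(?:vomiting|diarrho?ea)\b", re.I)
POSITIVE = re.compile(r"\b(?:vomiting|diarrho?ea)\b", re.I)

def positive_mention(note):
    """Rule-based variant: a mention counts only if it is not negated."""
    return bool(POSITIVE.search(note)) and not NEGATED.search(note)

print(keyword_hit("Presented with vomiting overnight"))   # True
print(positive_mention("Owner reports no vomiting"))      # False
```

The second function illustrates the article's point: rules can be refined indefinitely, but each refinement is hand-written, which is exactly the constraint on automation that motivates the machine-learning methods in part two.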
Optimizing antibiotic dosing recommendations is a vital aspect of antimicrobial stewardship (AMS) programs aimed at combating antimicrobial resistance (AMR), a significant public health concern, where inappropriate dosing contributes to the selection of AMR pathogens. A key challenge is the extraction of dosing information, which is embedded in free-text clinical records and necessitates numerical transformations. This paper assesses the utility of Large Language Models (LLMs) in extracting essential prescription attributes such as dose, duration, active ingredient, and indication. We evaluate methods to optimize LLMs on this task against a baseline BERT-based ensemble model. Our findings reveal that LLMs can achieve exceptional accuracy by combining probabilistic predictions with deterministic calculations, enforced through functional prompting, to ensure data types and execute necessary arithmetic. This research demonstrates new prospects for automating aspects of AMS when no training data is available.
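The paper's key idea is pairing probabilistic extraction with deterministic calculation: once the dose attributes are extracted from free text, the numerical transformation is plain arithmetic rather than generation. This sketch shows only that deterministic half; the regex, frequency table, and example prescription are illustrative assumptions, not the paper's extraction model.

```python
import re

# Deterministic dose arithmetic on extracted prescription attributes.
# Hypothetical frequency codes: SID/BID/TID/QID = 1/2/3/4 doses per day.
DOSES_PER_DAY = {"SID": 1, "BID": 2, "TID": 3, "QID": 4}

def mg_per_kg_per_day(text, weight_kg):
    """Parse '<dose> mg <frequency>' and convert to mg/kg/day."""
    m = re.search(r"(\d+(?:\.\d+)?)\s*mg\s+(SID|BID|TID|QID)", text, re.I)
    if not m:
        return None
    dose_mg = float(m.group(1))
    freq = DOSES_PER_DAY[m.group(2).upper()]
    return dose_mg * freq / weight_kg

print(mg_per_kg_per_day("amoxicillin 250 mg BID for 7 days", weight_kg=20.0))
# -> 25.0
```

In the paper's setting, an LLM would supply the extracted fields and a function-calling layer would enforce that this arithmetic (not the model) produces the final number.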
Artificial intelligence is altering how veterinarians work by making it easier to make quick, accurate, and evidence-based treatment decisions. Its integration improves disease diagnosis, radiography, advanced analytics, and medical management across many different species. This study surveys the newest developments in AI applications in veterinary medicine, concentrating on four areas: electronic health records (EHR), radiology, natural language processing (NLP), and advanced analytics. Deep learning models, especially CNNs, have performed well in medical imaging, and ML algorithms applied to EHRs show strong predictive power. NLP frameworks are used to make sense of unstructured medical records, and predictive models are helping clinicians identify diseases earlier. Despite these advancements, implementation remains challenging because of fragmented data systems, lack of structure, regulatory issues, and limited resources. This paper reviews research published between 2022 and 2025 and discusses methods, data models, and performance measures. It argues that AI should not replace veterinarians' expertise but work alongside it; for AI to be trusted with animal health, ethics, and medicine, it must perform better, manage data better, and integrate better across departments.
Natural Language Processing (NLP), a branch of AI, can enhance how veterinary oncology clinical records are analyzed. Named Entity Recognition (NER) is crucial for this, automating data extraction and labeling for research and clinical use. This study tested Bio-Epidemiology-NER (BioEN), an NER tool trained on human data, on veterinary oncology records, comparing its output to annotations by a veterinary oncologist and an intern. Using metrics such as precision, recall, F1, Jaccard similarity, intra-rater reliability, and ROUGE, the evaluation showed that direct application of BioEN to veterinary text was ineffective and requires improvement; it performed marginally better against the oncologist's annotations. These findings, though showing limitations, support the development of veterinary-specific AI tools and stress the need for models suited to the unique demands of veterinary medicine. The primary goal of this research was to evaluate the effectiveness of the BioEN tool specifically for use with veterinary medical oncology records; the manuscript assessed the tool's performance by comparing its named entity recognition output against manual annotations.
Today's busy lifestyles often lead to healthcare being neglected for lack of time and convenience. Healthcare is essential for pets, but visiting the veterinarian costs considerable time and money. This paper proposes an AI-driven veterinary chatbot for disease diagnosis and for providing preliminary health information before a veterinarian consultation. The system operates on machine learning (ML) and natural language processing (NLP) algorithms to emulate veterinary consultations and help users address their pets' health issues efficiently. The chatbot can be compared to a doctor's aide, saving on the costs of veterinary medicine and increasing availability. The system is trained on a large database to optimize diagnostic accuracy, with a simple interface for real-time interaction. It also facilitates the purchase of medicines and the integration of payments, making healthcare more efficient. By employing AI, this chatbot delivers animal healthcare services to pet owners and vets effectively and at a reasonable price.
Veterinary medical records represent a large data resource for application to veterinary and One Health clinical research efforts. Use of the data is limited by interoperability challenges including inconsistent data formats and data siloing. Clinical coding using standardized medical terminologies enhances the quality of medical records and facilitates their interoperability with veterinary and human health records from other sites. Previous studies, such as DeepTag and VetTag, evaluated the application of Natural Language Processing (NLP) to automate veterinary diagnosis coding, employing long short-term memory (LSTM) and transformer models to infer a subset of Systemized Nomenclature of Medicine - Clinical Terms (SNOMED-CT) diagnosis codes from free-text clinical notes. This study expands on these efforts by incorporating all 7,739 distinct SNOMED-CT diagnosis codes recognized by the Colorado State University (CSU) Veterinary Teaching Hospital (VTH) and by leveraging the increasing availability of pre-trained language models (LMs). 13 freely available pre-trained LMs (GatorTron, MedicalAI ClinicalBERT, medAlpaca, VetBERT, PetBERT, BERT, BERT Large, RoBERTa, GPT-2, GPT-2 XL, DeBERTa V3, ModernBERT, and Clinical ModernBERT) were fine-tuned on the free-text notes from 246,473 manually-coded veterinary patient visits included in the CSU VTH's electronic health records (EHRs), which resulted in superior performance relative to previous efforts. The most accurate results were obtained when expansive labeled data were used to fine-tune relatively large clinical LMs, but the study also showed that comparable results can be obtained using more limited resources and non-clinical LMs. 
The results of this study contribute to the improvement of the quality of veterinary EHRs by investigating accessible methods for automated coding and support both animal and human health research by paving the way for more integrated and comprehensive health databases that span species and institutions.
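After fine-tuning, a coding model of the kind described above emits one probability per candidate diagnosis code, and codes are assigned by thresholding those probabilities (the task is multi-label, since a visit can carry several diagnoses). The sketch below mimics only that final assignment step, with an invented toy code set rather than real SNOMED-CT identifiers or model outputs.

```python
# Final multi-label assignment step: keep every code whose predicted
# probability clears the threshold. Codes and scores are illustrative.

def assign_codes(probabilities, threshold=0.5):
    """Return the codes whose predicted probability clears the threshold."""
    return sorted(code for code, p in probabilities.items() if p >= threshold)

preds = {"otitis-externa": 0.91, "dental-disease": 0.12, "dermatitis": 0.55}
print(assign_codes(preds))  # -> ['dermatitis', 'otitis-externa']
```

The threshold trades precision against recall; with 7,739 candidate codes, per-code calibration of this cutoff is itself a tuning decision.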
ABSTRACT Integrating Artificial Intelligence (AI) through Natural Language Processing (NLP) can improve veterinary medical oncology clinical record analytics. Named Entity Recognition (NER), a critical component of NLP, can facilitate efficient data extraction and automated labelling for research and clinical decision‐making. This study assesses the efficacy of the Bio‐Epidemiology‐NER (BioEN), an open‐source NER developed using human epidemiological and medical data, on veterinary medical oncology records. The NER's performance was compared with manual annotations by a veterinary medical oncologist and a veterinary intern. Evaluation metrics included Jaccard similarity, intra‐rater reliability, ROUGE scores, and standard NER performance metrics (precision, recall, F1‐score). Results indicate poor direct translatability to veterinary medical oncology record text and room for improvement in the NER's performance, with precision, recall, and F1‐score suggesting a marginally better alignment with the oncologist than the intern. While challenges remain, these insights contribute to the ongoing development of AI tools tailored for veterinary healthcare and highlight the need for veterinary‐specific models.
Case ascertainment for prevalence and incidence studies from veterinary clinical data poses a major challenge because medical notes are not consistently structured or complete. Using natural language processing (NLP) and machine learning, this study aimed to obtain accurate case recognition for feline upper respiratory tract infections (primarily caused by viruses such as feline herpes virus (FHV-1) and feline calicivirus (FCV), and bacteria such as Chlamydophila felis, Mycoplasma felis and Bordetella bronchiseptica) using retrospective electronic veterinary records from the Royal Society for the Prevention of Cruelty to Animals, Queensland (RSPCA Qld). Data cleaning and NLP on eight years of free-text veterinary records from RSPCA Queensland were carried out to derive text-based predictors. The NLP steps included sorting records by length of stay, vectorising, tokenising and spell checking against a bespoke veterinary database. A gradient boosted model (GBM) was trained to predict the probability of each animal having a diagnosis of upper respiratory infection. A manually annotated dataset was used to train the algorithm to learn dominant patterns between predictors (frequencies of n-grams) and responses (manual binary case classification). The GBM's performance was tested against an out-of-sample validation dataset, and model agnostics were used to interrogate the model's learning process. The GBM used patient-level frequencies of 1250 unique n-grams as predictor variables and was able to predict the probability of cases in the validation dataset with an accuracy of 0.95 (95% CI 0.92, 0.97) and an F1 score of 0.96. Predictors that exerted the highest influence on the model included frequencies of "doxycycline", "flu", "sneezing", "doxybrom" and "ocular". The trained GBM was deployed on the full dataset spanning eight years, comprising 60,258 clinical entries.
The prevalence in the full dataset was predicted to be 23.59%, which is in line with domain expertise from practicing veterinarians at the shelter. Case ascertainment is a crucial step for further epidemiological study of cat flu. Ultimately, this tool can be extended to other clinical procedures, conditions, and diseases such as intensive care treatment due to snake bites and tick paralysis, physical injuries such as orthopaedic fractures or chest injuries and labour-intensive infectious diseases like parvovirus, canine cough, and ringworm, all of which require prolonged quarantine and care.
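The predictor construction in this study — patient-level frequencies of n-grams from free-text notes — can be sketched for a single record. The example note and terms below are invented; no gradient-boosted model is fitted here, only the feature extraction.

```python
from collections import Counter

# Build 1-gram and 2-gram frequency features for one free-text record,
# mirroring the n-gram predictors fed to the GBM. Terms are illustrative.

def ngram_features(text, max_n=2):
    tokens = text.lower().split()
    feats = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            feats[" ".join(tokens[i:i + n])] += 1
    return feats

note = "sneezing and ocular discharge started doxycycline"
feats = ngram_features(note)
print(feats["sneezing"], feats["ocular discharge"])  # -> 1 1
```

Stacking such Counters across patients (with spell-checking and tokenisation as described above) yields the sparse frequency matrix on which the classifier is trained.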
Antimicrobial Resistance is a global crisis that veterinarians contribute to through their use of antimicrobials in animals. Antimicrobial stewardship has been shown to be an effective means to reduce antimicrobial resistance in hospital environments. Effective monitoring of antimicrobial usage patterns is an essential part of antimicrobial stewardship and is critical in reducing the development of antimicrobial resistance. The aim of this study is to describe how frequently antimicrobials were used in veterinary consultations and identify the most frequently used antimicrobials. Using VetCompass Australia, Natural Language Processing techniques, and the Australian Strategic Technical Advisory Group’s (ASTAG) Rating system to classify the importance of antimicrobials, descriptive analysis was performed on the antimicrobials prescribed in consultations from 137 companion animal veterinary clinics in Australia between 2013 and 2017 (inclusive). Of the 4,400,519 consultations downloaded there were 595,089 consultations where antimicrobials were prescribed to dogs or cats. Antimicrobials were dispensed in 145 of every 1000 canine consultations; and 38 per 1000 consultations involved high importance rated antimicrobials. Similarly with cats, 108 per 1000 consultations had antimicrobials dispensed, and in 47 per 1000 consultations an antimicrobial of high importance rating was administered. The most common antimicrobials given to cats and dogs were cefovecin and amoxycillin clavulanate, respectively. The most common topical antimicrobial and high-rated topical antimicrobial given to dogs and cats was polymyxin B. This study provides a descriptive analysis of the antimicrobial usage patterns in Australia using methods that can be automated to inform antimicrobial use surveillance programs and promote antimicrobial stewardship.
Identifying the reasons for antibiotic administration in veterinary records is a critical component of understanding antimicrobial usage patterns. This informs antimicrobial stewardship programs designed to fight antimicrobial resistance, a major health crisis affecting both humans and animals in which veterinarians have an important role to play. We propose a document classification approach to determine the reason for administration of a given drug, with particular focus on domain adaptation from one drug to another, and instance selection to minimize annotation effort.
The article presents a broad-based analysis of the opportunities and limitations of applying artificial intelligence services to the processing of medical records of horses. The study is conducted within an interdisciplinary paradigm that combines methods of text data analysis, a comparative review of machine learning algorithms, and a systematization of the experience of applying natural language processing technologies in veterinary medicine. Special attention is paid to the problems of insufficiently representative datasets, the risks of false-positive classifications, and the need to adapt existing solutions to the specifics of sports medicine. The strengths and weaknesses of various approaches are analyzed, ranging from regular expressions and coding algorithms to deep learning models, including their application for predicting outcomes in horses with abdominal pathologies. It is noted that the greatest effectiveness is demonstrated by hybrid systems that combine automated data extraction with expert validation. From a comparative perspective, it is shown that foreign studies predominantly focus on dogs and cats, whereas the area related to equine sports remains underexplored. It is established that further development requires the creation of specialized horse databases, the standardization of clinical records, and the integration of multi-level analytical models. Promising directions include the automation of condition monitoring, early detection of injuries, and the development of personalized support programs, which makes it possible to shift from reactive treatment to preventive control. The article will be useful for artificial intelligence researchers and equestrian specialists interested in improving diagnostic efficiency, reducing injuries, and implementing innovative technologies in sports medicine.
The growing demand for veterinary services in Vietnam, especially for dog nutrition, vaccination, and preventive health, has placed pressure on clinics due to limited staff and repetitive consultations. This study developed and evaluated an artificial intelligence-based chatbot system to provide accessible and reliable dog care consultation in Vietnamese. System requirements were identified through consultations with veterinarians and pet owners, and the chatbot was built on the Chatfuel platform with a modular architecture that included natural language processing, dialogue management, and a validated veterinary knowledge base. Deployed on Facebook Messenger, the chatbot delivered automated responses across four domains, including nutrition, vaccination, symptom recognition, and emergency first aid, supported by more than 150 structured templates. Technical evaluation showed stable performance, with an average response time of 1.2 seconds, intent recognition accuracy of 87%, and an automation rate of 82%. Pilot testing with 30 dog owners over two weeks recorded 426 queries, of which vaccination (41%) and nutrition (33%) were the most frequent topics. User satisfaction was high, with 86% of participants reporting positive experiences, while veterinary staff confirmed a reduction in repetitive consultations that enabled them to focus on specialized cases. These findings demonstrate the feasibility and value of chatbots for dog care consultation in Vietnam and highlight future opportunities to enhance system performance through advanced natural language models, multi-platform deployment, and expansion to other companion animals.
The clinical applications of medical artificial intelligence (AI) aid diagnosis and play an important role in treatment protocols and decision-making, accelerating the healthcare system by drawing more information from available health data. Artificial intelligence technologies include a variety of machine learning approaches for structured data, such as traditional support vector machines and neural networks, as well as advanced deep learning techniques; natural language processing handles unstructured data. Recently, AI applications have taken on an important role in veterinary medicine and have become a valuable tool for the broader industry. AI has demonstrated its value in improving the accuracy and efficiency of detecting, diagnosing, and treating animal diseases. The technology serves as a strong decision-support system in collaboration with veterinarians and accelerates innovation in healthcare, entering everyday life and medical practice through advances such as health monitors and diagnostic algorithms.
The growing demand for smart pet healthcare systems has revealed the absence of AI-based platforms for preventive care. We present PAWMATE, an AI-integrated web platform that offers two essential functionalities: (1) image classification-based automatic detection of diseases, and (2) context-aware conversational chatbot answering veterinary-related questions. A Convolutional Neural Network (ResNet-50) is trained on pet datasets to categorize visible symptoms from pet images with an average accuracy of 89%. For real-time interaction, a Natural Language Processing (NLP) model—domain-specific intent fine-tuned—achieves 92.5% intent recognition accuracy and multi-turn dialogue support. Compared to conventional rule-based systems, our architecture improves responsiveness and decision support in pet care. Apart from application, the architecture can be generalized to other fields in which AI-based health diagnostic and advisory services are required. This paper presents the architecture, implementation, evaluation, and future generalizability of PAWMATE as a scientific contribution towards AI-based health systems.
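PAWMATE's intent-recognition step can be caricatured with a keyword-overlap scorer. The platform's actual NLP model is a fine-tuned neural classifier (92.5% intent accuracy); the intents, keyword sets, and fallback behaviour below are invented for illustration only.

```python
# Toy intent recognizer: score each intent by keyword overlap with the query
# and fall back when nothing matches. All intents/keywords are hypothetical.

INTENT_KEYWORDS = {
    "vaccination": {"vaccine", "vaccination", "shot", "booster"},
    "nutrition": {"food", "diet", "feeding", "nutrition"},
    "emergency": {"bleeding", "poison", "collapse", "seizure"},
}

def recognize_intent(query):
    tokens = set(query.lower().split())
    scores = {intent: len(tokens & kws) for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"

print(recognize_intent("When is the next booster shot due?"))  # -> vaccination
```

A learned classifier replaces the overlap count with embedding-based scores, but the surrounding dialogue logic (pick the best intent, route low-confidence queries to a fallback) is the same shape.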
BACKGROUND Keratoconjunctivitis sicca (KCS) is a common and important eye disease of dogs and has been associated with the administration of trimethoprim sulfonamide (TMS). HYPOTHESIS/OBJECTIVES Determine the prevalence of KCS after TMS treatment at a population level and describe risk factors for KCS development. ANIMALS Dogs evaluated in general veterinary practice in Australia with records in VetCompass Australia between 2012 and 2022. METHODS Natural language processing was used to detect dogs treated with TMS and to detect dogs that subsequently developed KCS. Cox proportional hazards modeling was performed to investigate risk factors such as drug dose, duration of treatment, and patient level characteristics (breed, age, sex). RESULTS A total of 2243 dogs were treated with TMS during the study period. Four definitive cases of KCS and an additional 35 cases of possible KCS were detected (prevalence 1.8%; 95% confidence interval [CI], 1.3-2.5%). Median duration of TMS treatment was 10 days for both cases (interquartile range [IQR], 7-17 days) and non-cases (IQR, 7-15 days). Median doses were 32 and 33 mg/kg/day for cases and non-cases, respectively. Trimethoprim sulfonamide dose and duration of treatment were not associated with KCS. Some breeds were over-represented and older dogs were more likely to be affected (hazard ratio [HR], 1.076; 95% CI, 1.005-1.152; P = .04). CONCLUSIONS AND CLINICAL IMPORTANCE Keratoconjunctivitis sicca is rare in dogs treated with TMS.
No abstract available
The goal of our research is to distinguish veterinary message board posts that describe a case involving a specific patient from posts that ask a general question. We create a text classifier that incorporates automatically generated attribute lists for veterinary patients to tackle this problem. Using a small amount of annotated data, we train an information extraction (IE) system to identify veterinary patient attributes. We then apply the IE system to a large collection of unannotated texts to produce a lexicon of veterinary patient attribute terms. Our experimental results show that using the learned attribute lists to encode patient information in the text classifier yields improved performance on this task.
Background Digital imaging combined with deep-learning-based computational image analysis is a growing area in medical diagnostics, including parasitology, where a number of automated analytical devices have been developed and are available for use in clinical practice. Methods The performance of Parasight All-in-One (AIO), a second-generation device, was evaluated by comparing it to a well-accepted research method (mini-FLOTAC) and to another commercially available test (Imagyst). Fifty-nine canine and feline infected fecal specimens were quantitatively analyzed by all three methods. Since some samples were positive for more than one parasite, the dataset consisted of 48 specimens positive for Ancylostoma spp., 13 for Toxocara spp. and 23 for Trichuris spp. Results The magnitude of Parasight AIO counts correlated well with those of mini-FLOTAC but not with those of Imagyst. Parasight AIO counted approximately 3.5-fold more ova of Ancylostoma spp. and Trichuris spp. and 4.6-fold more ova of Toxocara spp. than the mini-FLOTAC, and counted 27.9-, 17.1- and 10.2-fold more of these same ova than Imagyst, respectively. These differences translated into differences between the test sensitivities at low egg count levels (< 50 eggs/g), with Parasight AIO > mini-FLOTAC > Imagyst. At higher egg counts Parasight AIO and mini-FLOTAC performed with comparable precision (which was significantly higher than that of Imagyst), whereas at lower counts (< 30 eggs/g) Parasight AIO was more precise than both mini-FLOTAC and Imagyst, while the latter two methods did not significantly differ from each other. Conclusions In general, Parasight AIO analyses were both more precise and sensitive than mini-FLOTAC and Imagyst and quantitatively correlated well with mini-FLOTAC. While Parasight AIO produced lower raw counts in eggs-per-gram than mini-FLOTAC, these could be corrected using the data generated from these correlations.
Degenerative diseases of the vertebral column, such as spondylosis, are not uncommon in dogs. Detecting these diseases can be challenging, and they are often overlooked due to a lack of effective treatments. The intricacy of medical images poses difficulties in disease identification. Our approach focuses on segmenting the vertebral column from X-ray images of dogs, facilitating targeted disease detection and improving accuracy. Conventional segmentation methods are ineffective for medical images due to the presence of occlusions, and pixel classification methods require the laborious creation of masks. By training our UNet models, we successfully achieved precise segmentation of the vertebral column from radiographic images, significantly reducing the time required for generating ground truth masks. Our UNet models effectively enhanced the isolation of the vertebral column from X-ray images, thereby optimizing the accuracy of degenerative disease detection methods for the vertebral column. Key Words: segmentation, medical imaging, UNet, radiographs, computer vision, canine
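Segmentation quality for work like this is commonly scored by overlap between predicted and ground-truth masks, e.g. the Dice coefficient; the abstract does not state its metric, so the sketch below is a generic illustration with toy 0/1 masks, not the paper's evaluation.

```python
# Dice overlap between two binary masks (flattened 0/1 lists): a standard
# segmentation score, shown here on tiny invented masks.

def dice(mask_a, mask_b):
    """Dice = 2*|A ∩ B| / (|A| + |B|) for binary masks."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    return 2 * inter / (sum(mask_a) + sum(mask_b))

pred  = [1, 1, 0, 0, 1]   # predicted vertebral-column pixels
truth = [1, 0, 0, 0, 1]   # ground-truth mask
print(round(dice(pred, truth), 2))  # -> 0.8
```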
Canine/feline (sub-)cutaneous tumors, which include lipomas, mastocytomas and soft tissue sarcomas, introduce diagnostic challenges due to inherent tissue heterogeneity, accompanied by diverse clinical pathogenesis. The current study integrates conventional optical imaging techniques (white light and autofluorescence) as well as high-frequency ultrasound imaging to train machine learning classifiers: linear discriminant analysis, support vector machine and random forest. The study achieved ~100% classification efficiency between benign lipoma and combined mastocytoma and sarcoma tissues for all classifiers. For the differentiation between mastocytoma and sarcoma tumors, both support vector machine and random forest outperformed the conventional linear discriminant analysis classifier. Support vector machine displayed the highest classification efficiency for the bimodal groups: (i) ultrasound + fluorescence, (ii) ultrasound + white light and (iii) fluorescence + white light. However, it failed for the trimodal ultrasound + optics combination, indicating a possible upper limit for adding imaging modes. The multimodal effect was obtained using both a statistically significant set of features and an optimal set of features determined using sequential feature addition. The resulting classification efficiency for the combined ultrasound + fluorescence approach was > 85%, and even higher for the ultrasound + white light and ultrasound + optics multimodal approaches, reaching ~95%. In the classification of mastocytoma and sarcoma, the support vector machine classifier was able to detect a significant (p < 0.05) multimodal effect for the bimodal groups of: (i) fluorescence + white light, (ii) ultrasound + fluorescence and (iii) ultrasound + white light. On the contrary, random forest demonstrated a relevant increment only for the combination of fluorescence and white light.
Inferior features of ultrasound or fluorescence have been evaluated to be competitive with the features of highly-efficient white light as they were automatically selected during the process of feature optimization. In addition, another phenomenon of manifestation of multimodality has been observed: in multimodal groups, ultrasound features tended to substitute the features of white light, not just simply be added to them. Multimodal approach was determined to be highly-required for the classification of heterogeneous mastocytoma and sarcoma tumors, which display more similar morphological characteristics. However, when differentiating very distinct lipomas from mastocytomas or sarcomas, the multimodal approach was not a requisite.
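Comparing LDA, SVM, and random forest on a fused feature matrix can be sketched with scikit-learn (assumed available here). The feature vectors below are synthetic stand-ins: two columns mimic ultrasound features and two mimic optical ones; none of this reproduces the study's data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-lesion feature vectors: columns 0-1 mimic
# ultrasound features, columns 2-3 mimic optical (fluorescence/white-light) ones.
n = 60
benign = rng.normal(loc=0.0, scale=1.0, size=(n, 4))
malignant = rng.normal(loc=2.0, scale=1.0, size=(n, 4))
X = np.vstack([benign, malignant])
y = np.array([0] * n + [1] * n)

# Cross-validated accuracy for the three classifier families used in the study
for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC()),
                  ("RF", RandomForestClassifier(random_state=0))]:
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {score:.2f}")
```

Sequential feature addition, as used in the study, corresponds to scikit-learn's forward `SequentialFeatureSelector` wrapped around any of these estimators.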
This study aims to develop an automated knowledge-based planning (KBP) system for small animal radiation treatment, specifically targeting nasal tumors. The potential impact of this system on veterinary medicine and radiation therapy is significant, as it can streamline the planning process, making it more efficient and less resource-intensive. A total of twenty previous radiation treatment plans were collected to generate an averaged dose-volume histogram (DVH), which was then used to set the optimization parameters for the automated KBP process. Application programming interface (API) scripting was used to automate the planning process, including adding beams, optimization, and dose calculation on CT images. To validate the efficiency of the automated system, another set of twenty prior treatment plans was used for comparison. The study evaluated both the time required to generate the plans and the quality of the plans produced by the automated KBP system against the original manual plans. Most dose statistics from the automated plans, such as tumor dose coverage and dose at organs at risk (OARs), were similar to the original plans. The only exception was the dose at the right eye (D2cc), which was lower in the automated plans than in the original plans. The automated plans generated by KBP using API scripting demonstrated plan quality comparable to the original plans, especially in terms of tumor coverage and OAR sparing, and significantly reduced planning time from 33 min to just 5 min compared with manual optimization. This capability is particularly beneficial for high-workload departments with limited medical physicist resources, enabling them to consistently generate high-quality treatment plans.
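The DVH-averaging step that seeds the KBP optimization can be illustrated with a minimal numpy sketch. The per-voxel dose arrays, dose grid, and number of plans below are hypothetical stand-ins, not values from the study:

```python
import numpy as np

def cumulative_dvh(dose_voxels: np.ndarray, bins: np.ndarray) -> np.ndarray:
    """Fraction of the structure volume receiving at least each dose level."""
    return np.array([(dose_voxels >= d).mean() for d in bins])

# Hypothetical per-voxel dose arrays (Gy) from three prior plans
rng = np.random.default_rng(1)
plans = [rng.normal(loc=48, scale=2, size=1000) for _ in range(3)]

bins = np.linspace(0, 60, 61)  # 0-60 Gy in 1 Gy steps
avg_dvh = np.mean([cumulative_dvh(p, bins) for p in plans], axis=0)
# avg_dvh[0] is 1.0 (every voxel receives >= 0 Gy) and the curve is
# non-increasing; this averaged curve would drive the KBP objectives.
```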
Middle ear disease occurs frequently in dogs. CT has proven to be an excellent diagnostic tool for imaging middle ear structures, helping to achieve rapid and accurate diagnoses. Deep learning techniques are now widely used in CT-based human medical image analysis, providing decision support and diagnostics. However, such techniques are currently underutilized in veterinary radiology. The focus of this study was to develop a deep learning model capable of diagnosing middle ear disease in dogs from CT images. To achieve this with a relatively small dataset, transfer learning and data augmentation techniques were applied. During the experimental phase of the study, we tested 10 binary classification models based on the ResNet architecture, combined with data augmentation and transfer learning, on a dataset of 535 canine CT images, achieving a classification accuracy of up to 84.7%. The developed classifier, trained on relatively few CT images, can distinguish normal middle ears from middle ear disease in dogs with over 80% accuracy.
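The augmentation step that makes such small datasets workable can be sketched in plain numpy. This is a simplified stand-in for the study's pipeline (which combined augmentation with ResNet transfer learning): a few geometric transforms multiply the effective number of training slices. The tiny array here is a hypothetical CT slice:

```python
import numpy as np

def augment(image: np.ndarray) -> list[np.ndarray]:
    """Simple geometric augmentations: original, horizontal flip, and
    90/270-degree rotations, a common recipe for small CT datasets."""
    return [image,
            np.fliplr(image),
            np.rot90(image, k=1),
            np.rot90(image, k=3)]

slice_2d = np.arange(16, dtype=np.float32).reshape(4, 4)  # stand-in CT slice
augmented = augment(slice_2d)
print(len(augmented))  # 4 variants per source image
```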
With increasing interest in artificial intelligence (AI) for veterinary medical imaging, there will be an increasing need for the segmentation of medical images. Image segmentation, the process of delineating anatomical structures in medical images, is a critical step for enabling analysis and decision support in veterinary radiology. Manual segmentation of medical images is a time-consuming and tedious task subject to user variation, and many segmentation tasks require a radiologist's expertise. To date, there have been limited evaluations of segmentation methods in veterinary medicine, and it is unknown whether novice evaluators can segment radiographs with accuracy similar to experts. The present study aimed to evaluate the performance of an AI segmentation tool in improving the accuracy and reducing the time of canine radiograph segmentation by novice, intermediate, and expert users, using internally developed software that allows both AI-assisted semiautomated and manual segmentation. The AI model was trained using 50 thoracic radiographs from patients referred to the Ontario Veterinary College between January 2020 and July 2021. The intersection over union (IoU) scores for the abdomen, heart, and spinous process labels were higher when all cohorts used the semiautomated method (0.98, 0.98, and >0.74, respectively) versus the manual method (>0.93, >0.94, and >0.42, respectively). The Hausdorff distance for the structure labels was significantly lower when participants used the semiautomated method than the manual method (p < .0001). The intraobserver intraclass correlation coefficients (ICC) for the semiautomatic and manual methods were 0.81 and 0.36, respectively. In conclusion, the semiautomated tool effectively assisted users with segmenting canine thoracic radiographs.
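The two agreement metrics reported here, IoU on mask overlap and the Hausdorff distance on mask boundaries, can both be written in a few lines of numpy. The 8x8 masks below are toy stand-ins for a manual and a semiautomated segmentation:

```python
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection over union of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def hausdorff(pts_a: np.ndarray, pts_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two point sets (N x 2)."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

manual = np.zeros((8, 8), dtype=bool); manual[2:6, 2:6] = True
assisted = np.zeros((8, 8), dtype=bool); assisted[2:6, 2:7] = True
print(round(iou(manual, assisted), 3))                              # 0.8
print(hausdorff(np.argwhere(manual), np.argwhere(assisted)))        # 1.0
```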
Objective This study aimed to investigate the feasibility of computed tomography (CT) texture analysis for distinguishing canine adrenal gland tumors and its usefulness in clinical decision-making. Materials and methods The medical records of 25 dogs with primary adrenal masses who underwent contrast CT and histopathological examination were retrospectively reviewed; 12 had adenomas (AAs), 7 had adenocarcinomas (ACCs), and 6 had pheochromocytomas (PHEOs). Conventional CT evaluation of each adrenal gland tumor included the mean, maximum, and minimum attenuation values in Hounsfield units (HU), heterogeneity of the tumor parenchyma, and contrast enhancement (type, pattern, and degree) in each phase. For CT texture analysis, precontrast and delayed-phase images of the 18 adrenal gland tumors eligible for ComBat harmonization were used, and 93 radiomic features (18 first-order and 75 second-order statistics) were extracted. ComBat harmonization was then applied to compensate for the batch effect created by the different CT protocols. The area under the receiver operating characteristic curve (AUC) for each significant feature was used to evaluate the diagnostic performance of CT texture analysis. Results Among the conventional features, PHEO showed significantly higher mean and maximum precontrast HU values than ACC (p < 0.05). Eight second-order features on the precontrast images showed significant differences between the adrenal gland tumors (p < 0.05). However, none of them differed significantly between AA and PHEO, or between precontrast and delayed-phase images. This result indicates that ACC exhibited more heterogeneous and complex textures and more variable intensities with lower gray-level values than AA and PHEO. The correlation, maximal correlation coefficient, and gray level non-uniformity normalized were significantly different between AA and ACC, and between ACC and PHEO.
These features showed high AUCs in discriminating ACC from PHEO, comparable to or higher than the precontrast mean and maximum HU (AUC = 0.865 and 0.860, respectively). Conclusion Differentiation of canine primary adrenal gland tumors can be achieved with CT texture analysis on precontrast images, which may have a potential role in clinical decision-making. Further prospective studies with larger populations and cross-validation are warranted.
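The AUC values quoted for single features can be computed directly from the two groups' feature values via the Mann-Whitney formulation: the AUC is the probability that a randomly chosen positive case scores higher than a negative one, with ties counting half. The HU values below are hypothetical, chosen only to mirror the reported pattern of PHEO precontrast HU exceeding ACC:

```python
def auc_from_scores(scores_pos, scores_neg):
    """Mann-Whitney form of the ROC AUC: probability that a positive
    case scores higher than a negative one (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical precontrast HU values: PHEO (higher) vs. ACC (lower)
pheo_hu = [38, 42, 45, 40, 41, 39]
acc_hu = [25, 30, 33, 28, 40, 27, 31]
print(round(auc_from_scores(pheo_hu, acc_hu), 3))  # 0.94
```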
Fecal examination is an important component of routine companion animal wellness exams. The sensitivity and specificity of fecal examinations, however, are influenced by sample preparation methodology and by the training and experience of the personnel who read fecal slides. The VETSCAN IMAGYST system consists of three components: a sample preparation device, a commercially available scanner, and analysis software. The VETSCAN IMAGYST automated scanner and cloud-based deep learning algorithm locate, classify, and identify parasite eggs found on fecal microscopic slides. The main study objectives were (i) to qualitatively evaluate the capabilities of the VETSCAN IMAGYST screening system and (ii) to assess and compare the performance of the VETSCAN IMAGYST fecal preparation methods against conventional fecal flotation techniques. To assess the capabilities of the VETSCAN IMAGYST screening components, fecal slides were prepared by the VETSCAN IMAGYST centrifugal and passive flotation techniques from 100 pre-screened fecal samples collected from dogs and cats and examined by both the algorithm and parasitologists. To determine the diagnostic sensitivity and specificity of the VETSCAN IMAGYST sample preparation techniques, fecal flotation slides were prepared by four different techniques (VETSCAN IMAGYST centrifugal and passive flotations, conventional centrifugal flotation, and passive flotation using OVASSAY® Plus) and examined by parasitologists. Additionally, required sample preparation and scanning times were estimated on a subset of samples to evaluate the VETSCAN IMAGYST's ease of use. The algorithm's performance closely matched that of the parasitologists, with Pearson's correlation coefficient (r) ranging from 0.83 to 0.99 across four taxa of parasites: Ancylostoma, Toxocara, Trichuris, and Taeniidae.
Both VETSCAN IMAGYST centrifugal and passive flotation methods correlated well with conventional preparation methods on all targeted parasites (diagnostic sensitivity of 75.8–100%, specificity of 91.8–100%, qualitative agreement between methods of 93.8–94.5%). Sample preparation, slide scan and image analysis were completed within 10–14 min by VETSCAN IMAGYST centrifugal and passive flotations, respectively. The VETSCAN IMAGYST scanning system with the VETSCAN IMAGYST sample preparation methods demonstrated a qualitative match in comparison to the results of parasitologists’ examinations with conventional fecal flotation techniques. The VETSCAN IMAGYST is an easy-to-use, next generation qualitative and possibly quantitative diagnostic platform that brings expert clinical results into the hands of veterinary clinics.
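The algorithm-versus-parasitologist agreement reported above is Pearson's r between paired egg counts. A self-contained sketch of that comparison; the per-slide counts here are hypothetical, not study data:

```python
import math

def pearson_r(x, y):
    """Pearson's correlation coefficient between two paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical egg counts per slide: algorithm vs. parasitologist
algorithm = [0, 2, 5, 9, 14, 21]
expert = [0, 3, 5, 8, 15, 20]
print(round(pearson_r(algorithm, expert), 3))  # 0.994
```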
No abstract available
Background Microscopic evaluation of urine is inconsistently performed in veterinary clinics. The IDEXX SediVue Dx® Urine Sediment Analyzer (SediVue) recently was introduced for automated analysis of canine and feline urine and may facilitate performance of urinalyses in practice. Objective Compare the performance of the SediVue with manual microscopy for detecting clinically relevant numbers of cells and 2 crystal types. Samples Five hundred thirty urine samples (82% canine, 18% feline). Methods For SediVue analysis (software versions [SW] 1.0.0.0 and 1.0.1.3), uncentrifuged urine was pipetted into a cartridge. Images were captured and processed using a convolutional neural network algorithm. For manual microscopy, urine was centrifuged to obtain sediment. To determine sensitivity and specificity of the SediVue compared with manual microscopy, thresholds were set at ≥5/high power field (hpf) for red blood cells (RBC) and white blood cells (WBC) and ≥1/hpf for squamous epithelial cells (sqEPI), non-squamous epithelial cells (nsEPI), struvite crystals (STR), and calcium oxalate dihydrate crystals (CaOx Di). Results The sensitivity of the SediVue (SW1.0.1.3) was 85%-90% for the detection of RBC, WBC, and STR; 75% for CaOx Di; 71% for nsEPI; and 33% for sqEPI. Specificity was 99% for sqEPI and CaOx Di; 87%-90% for RBC, WBC, and nsEPI; and 84% for STR. Compared to SW1.0.0.0, SW1.0.1.3 had increased sensitivity but decreased specificity. Performance was similar for canine versus feline and fresh versus stored urine samples. Conclusions and Clinical Importance The SediVue exhibits good agreement with manual microscopy for the detection of most formed elements evaluated, but improvement is needed for epithelial cells.
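The sensitivity/specificity figures above come from dichotomizing per-hpf counts at a threshold (e.g., ≥5/hpf for RBC) with manual microscopy as the reference. A minimal sketch of that calculation; the eight paired counts are hypothetical:

```python
def sens_spec(analyzer_counts, manual_counts, threshold):
    """Sensitivity/specificity of an analyzer against manual microscopy,
    calling a sample positive when the per-hpf count meets the threshold."""
    tp = fn = tn = fp = 0
    for a, m in zip(analyzer_counts, manual_counts):
        if m >= threshold:           # manual microscopy is the reference
            tp += a >= threshold
            fn += a < threshold
        else:
            tn += a < threshold
            fp += a >= threshold
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical RBC counts per hpf for eight samples, threshold >= 5/hpf
analyzer = [0, 2, 6, 7, 12, 4, 0, 9]
manual = [0, 3, 5, 8, 10, 6, 1, 9]
sens, spec = sens_spec(analyzer, manual, threshold=5)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```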
The final grouping comprehensively covers the full-chain application of artificial intelligence in companion animal healthcare. The research matrix rests on two core technical wings, computer-vision imaging diagnostics and clinical-text NLP mining, with decision support systems delivering clinical value. Research has also extended beyond pure algorithm development into concrete scenarios such as laboratory automation and end-user interaction. Finally, the field's in-depth discussion of ethical regulation, human-machine collaboration models, and occupational health signals that AI in companion animal healthcare is transitioning from a period of rapid technological growth toward a stage of steady, standardized implementation.