A YOLO26-Based Human Detection Framework for High-Entropy Disaster Environments

Abdulrashid Sani, Bashar Aliyu Yauri, Muhammad Garba, Anas Tukur Balarabe

Abstract


The rapid and accurate identification of human presence is a critical prerequisite for effective disaster management and security monitoring, particularly in crowded environments. Because traditional object detection frameworks often suffer from high latency and reduced sensitivity in cluttered scenes, this study evaluates the efficacy of the YOLO26 architecture, an up-to-date, end-to-end, NMS-free model released in January 2026, for the specific task of human detection in crowded environments. The evaluation employs the C2A (Combination to Application) dataset of 10,215 images reflecting diverse catastrophic domains, including structural building collapses, floods, densely vegetated areas, and fire hazards. Experimental results demonstrate that the YOLO26 model achieved a peak mean Average Precision (mAP@0.5) of 0.89 and a harmonic reliability (F1-score) of 0.87. Analysis of the F1-confidence curves indicates a robust operational plateau, with the F1-score remaining above 0.80 across a broad threshold range (0.2–0.6). A functional prototype was successfully deployed via a Gradio-based interface, providing real-time inference and automated human counting with an operational recall of 0.84 at a 0.50 confidence threshold. These findings suggest that the YOLO26 framework offers a favourable balance of localization precision and deployment efficiency, making it suitable for the demanding requirements of search-and-rescue operations. Future work will involve a comparative performance evaluation between the convolutional YOLO26 and transformer-based architectures (Detection Transformers) to further investigate global context modelling in extreme disaster zones.
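The F1-confidence analysis described above can be sketched in a few lines: F1 is the harmonic mean of precision and recall, and the operational range is the set of confidence thresholds where F1 stays above 0.80. The precision/recall values in the sweep below are illustrative placeholders, not the paper's measurements; only the recall of 0.84 at a 0.50 threshold is taken from the abstract.

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical sweep: confidence threshold -> (precision, recall).
# These numbers are made up for illustration, except recall at 0.50,
# which the abstract reports as 0.84.
sweep = {
    0.10: (0.70, 0.92),
    0.20: (0.78, 0.90),
    0.50: (0.90, 0.84),
    0.60: (0.93, 0.74),
    0.80: (0.96, 0.55),
}

# Operational range: thresholds where F1 exceeds 0.80.
operating = [t for t, (p, r) in sweep.items() if f1(p, r) > 0.80]

# Threshold with the highest F1 in this (hypothetical) sweep.
best_t = max(sweep, key=lambda t: f1(*sweep[t]))
```

With these placeholder numbers, `operating` covers the 0.20–0.60 range, mirroring the broad plateau reported in the abstract; in practice, the sweep would be populated from the model's validation-set precision/recall curves.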




