Title

YOLO-BASED MARINE SEARCH AND RESCUE USING UAV MULTI-DATASET IN CHALLENGING WEATHER CONDITIONS

Date of Defense

21-11-2025 3:00 PM

Location

F1-1164

Document Type

Thesis Defense

Degree Name

Master of Science in Electrical Engineering (MSEE)

College

COE

Department

Electrical and Communication Engineering

First Advisor

Dr. Qurban Ali Andal Memon

Abstract

Object detection models, powered by deep learning and computer vision, are revolutionizing marine search and rescue (SAR). By analyzing aerial imagery and live drone footage, these systems automatically identify critical targets like survivors, life rafts, and debris across vast and treacherous ocean areas. This capability enhances operational efficiency by reducing human workload and accelerating response times, even in challenging conditions such as poor light, high seas, or cluttered backgrounds. The result is continuous monitoring, faster decision-making, and a significantly improved probability of successful rescue.

Departing from prior methodologies, YOLO introduced a paradigm shift through its single-shot architecture, which concurrently predicts bounding boxes and class probabilities in a single forward pass. This foundational design is responsible for YOLO's defining trait: remarkable inference speed that made real-time object detection feasible. The ongoing evolution of the YOLO lineage reflects persistent architectural enhancements, yielding models that strike a robust balance between speed, precision, and size, thereby cementing their effectiveness across a wide spectrum of tasks.
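To make the single-pass design concrete, the sketch below runs one forward pass of a pretrained detector and reads out bounding boxes, class labels, and confidences from that same pass. It assumes the Ultralytics Python API; the checkpoint name and image path are illustrative placeholders rather than the configuration used in the thesis.

    # Minimal sketch of single-shot detection, assuming the Ultralytics package;
    # the checkpoint and image path below are illustrative placeholders.
    from ultralytics import YOLO

    model = YOLO("yolo11n.pt")          # load a small pretrained detector
    results = model("drone_frame.jpg")  # one forward pass: boxes and class scores together

    for result in results:
        for box in result.boxes:
            label = model.names[int(box.cls)]       # predicted class name
            confidence = float(box.conf)            # detection confidence
            x1, y1, x2, y2 = box.xyxy[0].tolist()   # bounding-box corners (pixels)
            print(f"{label} ({confidence:.2f}) at [{x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}]")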

This study benchmarks the performance of YOLO models for maritime SAR under challenging weather conditions using the SeaDronesSee and AFO datasets. The results show that while YOLOv7 achieved the highest mAP@50, it struggled to detect small objects, whereas YOLOv10 and YOLOv11 delivered faster inference at a slight cost in precision. The study also examines augmenting the existing dataset with synthetic data, which yielded slightly higher performance in experiments than the standalone dataset. Key challenges discussed include environmental variability, sensor limitations, and scarce annotated data, which can be addressed by techniques such as attention modules and multimodal data fusion. Overall, the results provide practical guidance for deploying efficient deep learning models in marine SAR, emphasizing specialized datasets and lightweight architectures for edge devices.
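A hedged sketch of how such a model comparison might be scripted is shown below. It assumes the Ultralytics validation API and a YOLO-format dataset configuration; the dataset YAML name, checkpoint filenames, and image size are placeholders rather than the thesis' actual experimental setup (YOLOv7, which uses a separate codebase, is omitted here).

    # Hypothetical benchmarking sketch: evaluate several fine-tuned checkpoints on a
    # maritime validation split and report mAP@50, mAP@50-95, and inference speed.
    # "seadronessee.yaml" and the checkpoint filenames are placeholder names.
    from ultralytics import YOLO

    checkpoints = ["yolov8n_sar.pt", "yolov10n_sar.pt", "yolo11n_sar.pt"]

    for ckpt in checkpoints:
        model = YOLO(ckpt)
        metrics = model.val(data="seadronessee.yaml", imgsz=640)
        print(f"{ckpt}: mAP@50={metrics.box.map50:.3f}  "
              f"mAP@50-95={metrics.box.map:.3f}  "
              f"inference={metrics.speed['inference']:.1f} ms/image")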
