Date of Defense

6-6-2024 10:00 AM

Location

Room F3-136

Document Type

Thesis Defense

College

College of Information Technology

First Advisor

Dr. Munkhjargal Gochoo

Keywords

YOLO, Deep SORT, Graphical User Interface (GUI), Deep Learning Models

Abstract

Fisheye cameras are widely used in traffic monitoring for their broad field of view, yet their distortion challenges deep-learning models in pedestrian detection and tracking. Although datasets such as FishEye8K, collected in Hsinchu, Taiwan, are available and several studies have investigated pedestrian prediction systems, a notable gap remains: the absence of datasets designed specifically for the cultural context of the UAE. This study addresses this gap by introducing an in-house fisheye dataset tailored to improve the prediction and tracking of pedestrians in fisheye footage captured in the UAE environment. We developed a Graphical User Interface (GUI) that combines YOLOv8 and Deep SORT in a tracking-by-detection framework for real-time pedestrian detection and tracking in fisheye footage. The system employs color-coded bounding boxes for safety monitoring: green marks pedestrians outside the crossing area, yellow those near its boundary, and red those inside the risk area, simplifying assessment at a glance. Our model was manually evaluated on 20 videos, with tracking quality rated Excellent, Good, or Fair. When trained on the UAE Fisheye dataset with both the YOLOv8 Nano and YOLOv8 Medium variants, it achieved Excellent and Good ratings, whereas training on the FishEye8K dataset resulted in only Fair tracking performance. After 150 epochs, our YOLOv8m model achieved an F1-score of 0.836 and an mAP of 0.651, demonstrating its ability to recognize distorted images and validating our training approach.
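To illustrate how a tracking-by-detection pipeline with zone-based color coding of this kind can be wired together, the sketch below combines the Ultralytics YOLOv8 API, the deep-sort-realtime tracker, and OpenCV. It is a minimal illustration under stated assumptions, not the thesis implementation: the weights file, the risk-zone polygon, the near-boundary distance threshold, and the video path are placeholders.

# Minimal sketch of a tracking-by-detection pipeline with zone-based
# color coding (green / yellow / red), in the spirit of the system
# described above. Weights, polygon, and thresholds are placeholder
# assumptions, not values from the thesis.
import cv2
import numpy as np
from ultralytics import YOLO                      # YOLOv8 detector
from deep_sort_realtime.deepsort_tracker import DeepSort

model = YOLO("yolov8m.pt")                        # assumed weights file
tracker = DeepSort(max_age=30)

# Assumed risk-zone polygon (pixel coordinates of the crossing area).
RISK_ZONE = np.array([[400, 300], [900, 300], [950, 700], [350, 700]],
                     dtype=np.int32).reshape(-1, 1, 2)
NEAR_BOUNDARY_PX = 50                             # assumed "near boundary" distance

def zone_color(cx, cy):
    """Green outside, yellow near the boundary, red inside the risk zone."""
    # Signed distance to the polygon: >= 0 inside, < 0 outside.
    dist = cv2.pointPolygonTest(RISK_ZONE, (float(cx), float(cy)), True)
    if dist >= 0:
        return (0, 0, 255)        # red: inside risk area
    if dist > -NEAR_BOUNDARY_PX:
        return (0, 255, 255)      # yellow: near the boundary
    return (0, 255, 0)            # green: outside the crossing area

cap = cv2.VideoCapture("fisheye_clip.mp4")        # assumed input video
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # 1) Detect pedestrians (COCO class 0 = person) with YOLOv8.
    result = model(frame, classes=[0], verbose=False)[0]
    detections = []
    for box in result.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        detections.append(([x1, y1, x2 - x1, y2 - y1], float(box.conf[0]), "person"))

    # 2) Associate detections across frames with Deep SORT.
    tracks = tracker.update_tracks(detections, frame=frame)

    # 3) Draw color-coded boxes based on where each track's foot point lies.
    for track in tracks:
        if not track.is_confirmed():
            continue
        l, t, r, b = map(int, track.to_ltrb())
        color = zone_color((l + r) / 2, b)        # bottom-center as foot point
        cv2.rectangle(frame, (l, t), (r, b), color, 2)
        cv2.putText(frame, f"ID {track.track_id}", (l, t - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 1)

    cv2.imshow("pedestrian tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()

Using the bottom-center of each bounding box as the foot point is a common heuristic for ground-plane zone tests; the thesis GUI may apply a different criterion.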

Title

FISH-EYE CAMERA-BASED REAL-TIME PEDESTRIAN CROSSING INTENTION PREDICTING SYSTEM
