Date of Defense

March 5, 2024, 2:00 PM

Location

H4-1005

Document Type

Dissertation Defense

Degree Name

Doctor of Philosophy (PhD)

College

CIT

First Advisor

Prof. Hesham El-Sayed

Keywords

Autonomous Vehicles (AVs); Data Fusion; Situation Awareness; Data Pre-processing; Machine Learning (ML)

Abstract

Autonomous driving has the potential to bring significant changes and benefits to many aspects of transportation. Autonomous vehicles (AVs) use a combination of advanced sensors, cameras, radar, lidar, GPS, maps, and AI algorithms to perceive their environment, make decisions, and control their movements. Although the utility of AVs has increased significantly, several challenges remain, among which ensuring safe, secure, and reliable driving is still unresolved. The majority of accidents involving AVs result from faulty decision-making, which can lead to fatal incidents. Multiple elements contribute to flawed decision-making in AVs, with inaccurate context creation highlighted as one of the pivotal factors. For comprehensive safety across diverse driving environments, an AV must adeptly and dependably interpret its surroundings. Inadequate data acquisition from diverse sources and insufficient data pre-processing are the primary factors contributing to this inaccurate formation of context.

To enhance environmental awareness and improve decision-making precision in AVs, this research proposes a versatile framework. The framework incorporates multiple modules that oversee vital tasks: collecting and organizing sensory data in diverse formats, extracting pertinent features, fusing them effectively, establishing precise context, and framing rapid decision rules for timely decision-making. The research introduces novel mechanisms for sensory data classification, versatile machine learning (ML) models for feature extraction and data fusion, and a novel mechanism for instant rule framing and decision-making based on the fused data.

The work begins with an outline of the proposed framework's functionality, focusing on image and video data formats, which are the most prominent forms of sensory data. Efficient models are proposed for extracting vital image attributes, including edges, color, height, and width. Furthermore, a mathematical model is introduced that uses matrix transformations and progressive modes to convert two-dimensional image data into three-dimensional form. This mathematical model serves as the fundamental kernel function of the proposed Convolutional Neural Network (CNN) model, enabling the fusion of different image data formats.

Additional concepts and mechanisms are introduced to enhance the performance of the proposed models. The edge detection model is extended to detect edges in all directions within the input image, surpassing the earlier limitation of identifying only horizontal and vertical edges; advanced mathematical models incorporating genetic mutation techniques are proposed to accomplish this.
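The dissertation's genetic-mutation-based formulation is not reproduced in this abstract. As a rough, self-contained illustration of detecting edges in all directions rather than only horizontally and vertically, the sketch below uses the classical Kirsch compass kernels; the function names and the choice of kernel are illustrative assumptions, not the proposed model.

```python
import numpy as np
from scipy.ndimage import convolve

# Outer-ring positions of a 3x3 kernel, in clockwise order.
RING = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]

def kirsch_kernels():
    """Build the 8 Kirsch compass kernels by rotating one template's ring."""
    ring_vals = np.array([5, 5, 5, -3, -3, -3, -3, -3])  # north-facing template
    kernels = []
    for shift in range(8):
        k = np.zeros((3, 3))
        for (r, c), v in zip(RING, np.roll(ring_vals, shift)):
            k[r, c] = v
        kernels.append(k)
    return kernels

def omnidirectional_edge_map(image):
    """Per-pixel maximum response over all eight compass directions."""
    responses = [convolve(image.astype(float), k, mode="reflect")
                 for k in kirsch_kernels()]
    return np.max(np.stack(responses), axis=0)
```

Replacing the per-pixel maximum with an argmax over the direction index would additionally recover the dominant edge orientation at each pixel.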
Versatile kernel functions are developed to process 3D point-cloud sensory data; these are integrated into the proposed Generative Adversarial Network (GAN) model for classifying and fusing different image data formats. The extended research completes the functionalities of the remaining modules of the framework: novel models are introduced to efficiently combine textual and audio data, and versatile models for object detection and classification improve the accurate recognition and categorization of objects. Techniques such as ensembling, gating, and filtering are incorporated to select the most suitable object detection and classification model, and further methodologies are proposed to establish accurate context and decision rules.

The proposed models are evaluated on widely recognized datasets, namely KITTI, nuScenes, RADIATE, OSU, BPEM, and GeoTiles. The proposed image fusion model achieved an accuracy of 98% with a faster execution time (0.98 s) than other well-known image fusion models, and the proposed object detection model achieved a sensitivity of 0.65 and an average precision of 0.85, confirming improved performance over other widely acknowledged object detection models. Further results, the outcome of extensive investigations across several parameters, are discussed in the Experimental Analysis section of Chapter 3.
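For context on these headline figures, the sketch below shows how sensitivity and average precision are conventionally computed for a detector. It follows the textbook definitions rather than the dissertation's evaluation code, and omits the benchmark-specific interpolation applied by suites such as KITTI and nuScenes.

```python
import numpy as np

def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Sensitivity (recall): fraction of ground-truth objects that were detected."""
    total = true_positives + false_negatives
    return true_positives / total if total else 0.0

def average_precision(is_tp, n_ground_truth: int) -> float:
    """Average precision over detections sorted by descending confidence.

    is_tp[i] is 1 when the i-th highest-confidence detection matches an
    unclaimed ground-truth object (e.g. IoU above a threshold), else 0.
    """
    is_tp = np.asarray(is_tp, dtype=float)
    cum_tp = np.cumsum(is_tp)
    precision = cum_tp / np.arange(1, len(is_tp) + 1)
    # Each true positive adds a recall increment of 1 / n_ground_truth,
    # weighted by the precision at that rank.
    return float(np.sum(precision * is_tp) / n_ground_truth)

# Example: 10 confidence-ranked detections against 8 ground-truth objects.
hits = np.array([1, 1, 0, 1, 1, 0, 1, 0, 0, 0])
print(sensitivity(int(hits.sum()), 8 - int(hits.sum())))  # 0.625
print(average_precision(hits, 8))                         # ~0.53
```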

Title
A NOVEL DATA FUSION FRAMEWORK TO ENHANCE CONTEXTUAL AWARENESS OF THE AUTONOMOUS VEHICLES FOR ACCURATE DECISION MAKING
