ENHANCING CLOUD-BASED THREAT DETECTION THROUGH EXPLAINABLE AI: A COMPARATIVE STUDY OF MACHINE LEARNING AND XAI-INTEGRATED MODELS

Date of Defense

11-1-2025 2:45 PM

Location

E1-1036

Document Type

Thesis Defense

Degree Name

Master of Science in Information Security

College

CIT

Department

Information Systems and Security

First Advisor

Dr. Norziana Jamil

Keywords

Explainable AI (XAI), Machine Learning (ML), Threat Detection, Cloud Computing, SHAP, LIME, Cybersecurity, Interpretability.

Abstract

The rapid adoption of cloud computing has brought serious security concerns, as cloud infrastructures are constantly exposed to cybersecurity threats such as malware and Distributed Denial of Service attacks, while current security methodologies have limitations in accurately identifying new threats. Although machine learning (ML) models are highly effective at detecting attacks, as 'black boxes' they lack interpretability, which undermines trust and adoption in critical cloud environments. This research addresses this issue by integrating Explainable Artificial Intelligence (XAI) techniques to enhance both the accuracy and the interpretability of AI systems intended to detect threats in cloud environments. The central goal is to develop and evaluate models that can identify threats and explain their decision-making process so that the resulting insights are understandable to security analysts. Using the CICIDS2017 dataset, a publicly available benchmark of cloud network traffic, five ML models, Linear SVM, XGBoost, Logistic Regression, Neural Network, and Random Forest, were trained and tested. Two XAI techniques, SHAP and LIME, were applied to interpret the model predictions and identify the most influential features in detecting threats. The experimental results indicate that the investigated models, when enhanced with XAI, were highly accurate while providing transparent, feature-level explanations for their decisions. Among the studied models, the combination of XGBoost and SHAP demonstrated the best trade-off between accuracy and interpretability. The findings imply that integrating XAI with conventional ML approaches in cloud threat detection models can help address interpretability and reliability concerns, and the methodology may aid in building more robust and responsible AI-based cybersecurity solutions for modern cloud environments.
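
As a concrete illustration of the workflow the abstract describes, the following is a minimal sketch of training an XGBoost classifier and explaining its predictions with SHAP. It assumes the shap, xgboost, and scikit-learn packages are installed; synthetic data and placeholder feature indices stand in for the actual CICIDS2017 dataset, which must be obtained and preprocessed separately, and the hyperparameters shown are illustrative rather than those used in the thesis.

# Minimal sketch: train an XGBoost classifier and explain its predictions
# with SHAP. Synthetic data stands in for CICIDS2017; feature indices are
# placeholders, not the actual CICIDS2017 flow features.
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Placeholder data: 20 flow-level features, binary label (benign vs. attack).
X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Train the gradient-boosted tree model.
model = xgb.XGBClassifier(n_estimators=200, max_depth=6, eval_metric="logloss")
model.fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))

# Explain predictions with SHAP's tree explainer and report the most
# influential features by mean absolute SHAP value.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
mean_abs = np.abs(shap_values).mean(axis=0)
for i in np.argsort(mean_abs)[::-1][:5]:
    print(f"feature_{i}: mean |SHAP| = {mean_abs[i]:.4f}")

In the study itself, the same pattern would be repeated for each of the five models (with a model-agnostic explainer such as LIME or SHAP's KernelExplainer for the non-tree models), and the ranked feature attributions would be reviewed by security analysts.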
