ENHANCED PRIVACY PRESERVING HEALTHCARE DATA MANAGEMENT WITH FEDERATED LEARNING USING HOMOMORPHIC ENCRYPTION

Date of Defense

November 1, 2025, 1:00 PM

Location

H4-1006

Document Type

Thesis Defense

Degree Name

Master of Science in Information Security

College

College of Information Technology

Department

Information Security

First Advisor

Prof. Mohammad Mehedy Masud

Keywords

Federated Learning (FL), Homomorphic Encryption (HE), Privacy-Preserving Machine Learning, Efficiency, Model Updates Encryption

Abstract

Federated Learning (FL) is a decentralized machine learning approach in which multiple clients jointly train a model without sharing their raw data, which substantially improves privacy and strengthens protection against security breaches. However, a privacy risk remains when clients send their model updates to the central server: if a model update is intercepted or analyzed by a malicious entity, it can be used to recover sensitive data through inference attacks. To address this issue, Homomorphic Encryption (HE) can be applied, since it keeps the model updates encrypted during both transmission and aggregation. This is possible because HE allows computations to be performed directly on encrypted data without requiring decryption. In this research we propose an efficient framework that integrates FL with HE. Clients encrypt their model updates using HE before sending them to the central server. The server aggregates the encrypted updates using HE operations and sends the encrypted aggregated model back to the clients. The clients then decrypt the aggregated model and refine it further using their local data, after which another cycle of encryption, transmission to the central server, and aggregation begins. Using this method, we intend to reduce the risk of leaking sensitive data while keeping the efficiency of federated learning at an optimal level. We expect this study to result in (i) a robust privacy-preserving FL framework that aligns with IT management best practices for efficient and secure data handling, (ii) a significant reduction in potential privacy vulnerabilities, and (iii) an assessment of the trade-offs between model performance and computational overhead on the one hand and privacy preservation on the other.
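The round structure described in the abstract can be illustrated concretely. Below is a minimal sketch of one FL round with additively homomorphic aggregation, using the open-source python-paillier (phe) library. The client count, model size, shared keypair, and helper names are illustrative assumptions rather than part of the proposed framework, and a full implementation would likely use a lattice-based scheme (e.g., CKKS) for efficient vectorized updates.

```python
# Minimal sketch of one federated round with homomorphic aggregation.
# Assumptions (not from the thesis): the `phe` (python-paillier) library,
# 3 clients, 4-parameter "models", and NumPy arrays as update vectors.
import numpy as np
from phe import paillier

NUM_CLIENTS = 3
MODEL_SIZE = 4

# In this sketch the clients share one keypair; the server sees only the
# public key, so it can aggregate ciphertexts but never read an update.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

def client_update(rng):
    """Stand-in for local training: return a local model-update vector."""
    return rng.normal(size=MODEL_SIZE)

def encrypt_update(update):
    """Client side: encrypt each parameter of the update with HE."""
    return [public_key.encrypt(float(w)) for w in update]

def aggregate_encrypted(encrypted_updates):
    """Server side: average the updates WITHOUT decrypting them.
    Paillier is additively homomorphic, so ciphertexts can be summed
    and scaled by a plaintext constant (1 / num_clients)."""
    n = len(encrypted_updates)
    summed = encrypted_updates[0]
    for upd in encrypted_updates[1:]:
        summed = [acc + c for acc, c in zip(summed, upd)]
    return [c * (1.0 / n) for c in summed]

rng = np.random.default_rng(0)
updates = [client_update(rng) for _ in range(NUM_CLIENTS)]

# Clients encrypt before transmission; the server aggregates ciphertexts.
encrypted = [encrypt_update(u) for u in updates]
encrypted_avg = aggregate_encrypted(encrypted)

# Back on the clients: decrypt the aggregated model, check it matches the
# plaintext average, then continue with the next local-training round.
decrypted_avg = np.array([private_key.decrypt(c) for c in encrypted_avg])
assert np.allclose(decrypted_avg, np.mean(updates, axis=0), atol=1e-6)
print("aggregated update:", decrypted_avg)
```

In the proposed framework this pattern repeats every round: only ciphertexts cross the network, so an eavesdropper or an honest-but-curious server never observes a plaintext update. The cost of encrypting and aggregating large model vectors, together with key management, is precisely the efficiency-versus-privacy trade-off that outcome (iii) sets out to assess.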
