Towards Privacy-Aware Federated Learning for User-Sensitive Data

Author First name, Last name, Institution

Muhammad Asad, Zayed University
Safa Otoum, Zayed University

Document Type

Conference Proceeding

Source of Publication

2023 Fifth International Conference on Blockchain Computing and Applications (BCCA)

Publication Date

10-26-2023

Abstract

Federated Learning (FL) has been envisioned as a promising approach for collaboratively training learning models while preserving individuals' private data. In the FL training procedure, participants train a global model by exchanging model parameters while keeping the raw data private. Nevertheless, exchanging those model parameters creates insecure interactions among participants that might disclose an individual's identity or private information. To address this, several approaches have adopted secure multiparty computation (SMC) and differential privacy. However, those approaches suffer from drawbacks such as limited accuracy, computational capacity, or functionality, and they cannot guarantee participants' identities during the learning process. In this paper, we therefore propose a novel Threshold Signature-based Authentication (TSA) scheme for secure FL. The TSA scheme secures participants' identities against chosen-ciphertext attacks and prevents external adversaries from mounting malicious attacks. Moreover, the TSA scheme successfully defends against identity leaks from the trained models under property and membership inference attacks. The experimental results show that TSA achieves 91% training accuracy, which is superior to existing methods.
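The paper's actual TSA construction is not reproduced on this page. Purely as a rough illustration of the workflow the abstract describes (clients authenticate their model updates, and the server aggregates only updates that pass verification, requiring a minimum number of authenticated participants), the sketch below uses standard-library HMAC tags as a stand-in for the partial threshold signatures; the names client_keys, THRESHOLD, sign_update, verify_update, and local_training are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: one federated-averaging round in which the server
# accepts a client's model update only if its authentication tag verifies,
# and aggregates only when at least THRESHOLD clients pass.
# HMAC tags below are a placeholder for the paper's threshold signatures.

import hashlib
import hmac
import secrets

import numpy as np

MODEL_DIM = 4
NUM_CLIENTS = 5
THRESHOLD = 3  # hypothetical: minimum number of authenticated updates required

# Hypothetical pre-shared keys standing in for threshold-signature key shares.
client_keys = {cid: secrets.token_bytes(32) for cid in range(NUM_CLIENTS)}


def sign_update(key: bytes, update: np.ndarray) -> bytes:
    """Tag a serialized model update (placeholder for a partial signature)."""
    return hmac.new(key, update.tobytes(), hashlib.sha256).digest()


def verify_update(key: bytes, update: np.ndarray, tag: bytes) -> bool:
    """Verify the tag before the server accepts the update."""
    return hmac.compare_digest(sign_update(key, update), tag)


def local_training(global_model: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Placeholder for a client's local training; returns a perturbed model."""
    return global_model + 0.1 * rng.standard_normal(global_model.shape)


def federated_round(global_model: np.ndarray) -> np.ndarray:
    rng = np.random.default_rng(0)
    accepted = []
    for cid, key in client_keys.items():
        update = local_training(global_model, rng)
        tag = sign_update(key, update)                     # client authenticates its update
        if verify_update(client_keys[cid], update, tag):   # server-side verification
            accepted.append(update)
    if len(accepted) < THRESHOLD:
        raise RuntimeError("not enough authenticated updates to aggregate")
    # Federated averaging over the authenticated updates only.
    return np.mean(accepted, axis=0)


if __name__ == "__main__":
    model = np.zeros(MODEL_DIM)
    model = federated_round(model)
    print("aggregated global model:", model)
```

In a real threshold-signature setup, no single key would verify an update on its own; verification would instead succeed only when enough key shares contribute, which is the property the TSA scheme relies on to keep participant identities protected.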

ISBN

979-8-3503-3923-9

Publisher

IEEE

Volume

00

First Page

343

Last Page

350

Disciplines

Computer Sciences

Keywords

Training, Privacy, Differential privacy, Costs, Federated learning, Computational modeling, Authentication

Indexed in Scopus

no

Open Access

no
