MS-EE-L Archives

February 2021

MS-EE-L@LISTSERV.GMU.EDU

From: Jammie Chang <[log in to unmask]>
Date: Fri, 19 Feb 2021 16:57:32 +0000
ECE Department Seminar

Scalable Algorithms for Preserving Privacy and Security in
Federated Learning

Swanand Kadhe, Ph.D.
Postdoctoral Researcher
EECS Department
University of California Berkeley

Monday, February 22, 2021
11:00 am – 12:00 pm
Zoom Meeting Link:
https://gmu.zoom.us/j/93501992524


Abstract: In modern large-scale machine learning, federated and distributed learning have emerged as important paradigms, where the training data remains distributed over a large number of clients (e.g., mobile phones, smart devices, server machines). In these paradigms, each client trains a neural network model locally using its own data, and the central server aggregates these local models to obtain an improved global model. However, the locally trained model at a client has been shown to leak significant amounts of information about the client’s training data. Moreover, some clients may behave adversarially during the training process by sending maliciously computed models. My research focuses on tackling these challenges from both theoretical and practical perspectives.
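The server-side aggregation step described above can be sketched as a weighted average of the locally trained models. This is a minimal illustrative sketch in the spirit of federated averaging; the function and parameter names are hypothetical and not from the talk:

```python
import numpy as np

def fed_avg(client_models, client_sizes):
    """Server-side aggregation: weighted average of locally trained
    client models (illustrative FedAvg-style sketch, not the talk's
    specific scheme).
    client_models: list of 1-D parameter vectors, one per client.
    client_sizes:  number of local training examples per client."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()  # normalize so weights sum to 1
    return sum(w * m for w, m in zip(weights, client_models))

# Three clients with toy 4-parameter "models"
models = [np.array([1.0, 2.0, 3.0, 4.0]),
          np.array([2.0, 2.0, 2.0, 2.0]),
          np.array([0.0, 4.0, 1.0, 3.0])]
global_model = fed_avg(models, client_sizes=[10, 30, 60])
```

Note that in this plain form the server sees each client's model in the clear, which is exactly the privacy leak the abstract points out.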

In this talk, I will focus on two instances of this work: (i) I will present a cryptographic framework, FastSecAgg, that enables the central server to average local models in a privacy-preserving manner. The core component of FastSecAgg is a novel class of fast Fourier transform (FFT)-based secret sharing schemes, which integrates techniques from signal processing, information and coding theory, and cryptography. I will show that FastSecAgg provides strong provable privacy guarantees and achieves orders-of-magnitude improvement in computation cost at the server compared to state-of-the-art schemes. (ii) I will present robust gradient aggregation schemes for tackling adversarial clients, which may abruptly fail or send potentially malicious local models. I will demonstrate that the proposed schemes provide provable convergence guarantees, cut down uplink communication costs for the clients, and significantly reduce average training time in practice.
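To make the two ideas concrete, here is a toy sketch of (i) secure aggregation via pairwise additive masking and (ii) robust aggregation via a coordinate-wise median. Both are standard textbook constructions chosen for illustration only: FastSecAgg's actual FFT-based secret sharing scheme and the talk's specific robust aggregation rules are not reproduced here, and all names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
P = 2**31 - 1  # modulus: all masked arithmetic is done mod P

def mask_updates(updates):
    """Pairwise additive masking (illustrative only; FastSecAgg itself
    uses an FFT-based secret-sharing scheme not shown here).
    Each client pair (i, j) shares a random mask vector; client i adds
    it and client j subtracts it, so every mask cancels in the server's
    sum and the server learns only the aggregate."""
    n, d = len(updates), updates[0].shape[0]
    masked = [u.copy() % P for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.integers(0, P, size=d)
            masked[i] = (masked[i] + m) % P
            masked[j] = (masked[j] - m) % P
    return masked

def robust_aggregate(updates):
    """Coordinate-wise median: one standard robust aggregation rule
    that tolerates a minority of malicious or failed clients."""
    return np.median(np.stack(updates), axis=0)

updates = [rng.integers(0, 100, size=5) for _ in range(4)]
masked = mask_updates(updates)
# Server sums the masked updates; the pairwise masks cancel,
# so only the aggregate of the true updates is revealed.
aggregate = sum(masked) % P
```

In this toy version each client must exchange a mask with every other client, which is the quadratic cost that schemes like FastSecAgg aim to reduce; the median rule likewise trades some statistical efficiency for robustness to outliers.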

Bio: Swanand Kadhe is a postdoctoral researcher in the EECS Department at the University of California Berkeley. He earned his Ph.D. degree in Electrical and Computer Engineering from Texas A&M University in 2017. He is a recipient of the 2016 Graduate Teaching Fellowship from the College of Engineering at Texas A&M University. He has been a visiting researcher at Nokia Bell Labs, Duke University, and The Chinese University of Hong Kong. From 2009 to 2012, he was a researcher at the TCS Innovation Labs, Bangalore. His research interests lie broadly in Federated and Distributed Machine Learning, Information and Coding Theory, Signal Processing, Privacy and Security, and Blockchains.



Jammie Chang

Academic Program Manager

Department of Electrical and Computer Engineering

George Mason University

4400 University Drive, MSN 1G5

Fairfax, VA 22030

Phone: 703-993-1570

Fax: 703-993-1601

