MS-CPE-L Archives

April 2022

MS-CPE-L@LISTSERV.GMU.EDU

Date: Tue, 5 Apr 2022 17:03:19 +0000
From: Jammie Chang <[log in to unmask]>
Sender: MS-CPE-L <[log in to unmask]>
Notice and Invitation

Oral Defense of Doctoral Dissertation

The College of Engineering and Computing, George Mason University



Ping Xu

Bachelor of Science, Northwestern Polytechnical University, Xi’an, China, 2015

Master of Science, George Mason University, Fairfax, VA, 2018

Communication-Efficient Optimization and Learning for Distributed Multi-Agent Systems
Monday, April 18, 2022
10:30 AM – 12:00 PM
ENGR 2901
All are invited to attend.

Committee
Dr. Zhi (Gerry) Tian, Chair
Dr. Yariv Ephraim
Dr. Kathleen E. Wage
Dr. Kuo-Chu Chang

Abstract

Distributed learning has attracted extensive interest in recent years, owing to the explosion of data generated by mobile sensors, social media services, and other networked multi-agent applications. In many of these applications, the observed data are kept private at local sites rather than aggregated at a fusion center, due either to the prohibitively high cost of transmitting raw data or to privacy concerns. Meanwhile, each agent in the network communicates only with its one-hop neighbors in order to save transmission power. Moreover, distributed learning is typically implemented iteratively for computational feasibility and efficiency, which incurs frequent communication among agents to exchange their locally computed updates of the shared learning model and can create tremendous communication overhead in terms of both link bandwidth and transmission power. Under these circumstances, this dissertation focuses on developing communication-efficient distributed learning algorithms for multi-agent systems under communication and privacy constraints.
Specifically, we use the random feature (RF) mapping method to circumvent the curse of dimensionality that afflicts traditional kernel methods and to avoid transmitting raw data in distributed kernel learning. This approach recasts decentralized kernel learning as a decentralized consensus optimization problem in the RF space, which is then solved distributedly and iteratively via the alternating direction method of multipliers (ADMM), with a communication-censoring strategy incorporated to evaluate whether an update is informative enough to be transmitted. For online streaming data with possibly unknown dynamics, this dissertation develops corresponding adaptive and dynamic decentralized learning approaches that learn the optimal function "on the fly" via linearized ADMM and conventional decentralized ADMM, respectively; communication-censoring and quantization strategies are employed in both approaches to conserve communication resources. Finally, the dissertation offers a unified framework for learning nonlinear input-output maps that bridges the gap between kernel methods and neural networks, and develops an RF-based deep kernel learning network with multiple learning paths.
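As background on the RF mapping step, the following is a minimal sketch of the random Fourier feature construction of Rahimi and Recht for a Gaussian kernel, a standard instance of this technique; the function names and parameter values are illustrative, not the dissertation's implementation. Because every agent can compute the same features locally from a shared random seed, only finite-dimensional RF-space iterates, never raw data, need to be exchanged.

    import numpy as np

    def random_fourier_features(X, num_features, gamma, seed=0):
        """Map X (n x d) to a feature space where z(x) @ z(y) approximates
        the Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        # Sample frequencies from the kernel's spectral density: W ~ N(0, 2*gamma*I)
        W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, num_features))
        b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
        return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

    # With a shared seed (hence the same W and b at every agent), kernel
    # learning reduces to fitting a linear model on z(X), e.g. ridge regression:
    X = np.random.randn(100, 3)
    y = np.sin(X[:, 0]) + 0.1 * np.random.randn(100)
    Z = random_fourier_features(X, num_features=200, gamma=0.5)
    theta = np.linalg.solve(Z.T @ Z + 1e-2 * np.eye(Z.shape[1]), Z.T @ y)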
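The communication-censoring idea admits an equally short sketch: an agent broadcasts its new local iterate only when it deviates enough from the last value its neighbors received; otherwise the transmission is skipped and the neighbors reuse the stale copy. The decaying threshold schedule below is a hypothetical illustration of the general idea, not the rule analyzed in the dissertation.

    import numpy as np

    def censored_broadcast(theta_new, theta_last_sent, k, tau0=1.0, rho=0.9):
        """Transmit theta_new only if it is informative enough, i.e. it
        deviates from the last transmitted value by more than a threshold
        that decays with the iteration index k (illustrative schedule)."""
        if np.linalg.norm(theta_new - theta_last_sent) >= tau0 * rho**k:
            return theta_new, True     # broadcast; neighbors update their copy
        return theta_last_sent, False  # censored; neighbors keep the stale copy

Quantizing the transmitted iterate (or its difference from the last sent value) before broadcast composes naturally with this rule, which is the spirit of the censoring-plus-quantization strategies described above.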

