From: Jammie Chang <[log in to unmask]>
Date: Fri, 19 Feb 2021 17:10:14 +0000
ECE Department Seminar

Bayesian-based Multi-Objective Neural Architecture Search for Accurate, Fast, and Efficient Algorithm-Hardware Co-design

Maryam Parsa, Ph.D.
Postdoctoral Research Associate
Oak Ridge National Laboratory

Monday, March 1, 2021
10:00 am – 11:00 am
Zoom Meeting Link:
https://gmu.zoom.us/j/91203249183


Abstract: Neuromorphic systems have the potential to achieve far greater energy efficiency than traditional deep learning systems. In recent years, several algorithms have been developed for spiking neural networks (SNNs) suitable for event-driven neuromorphic hardware. However, SNNs often provide lower accuracy than their artificial neural network (ANN) counterparts, or require computationally expensive and slow training/inference methods. To close this gap, designers typically rely on reconfiguring SNNs through adjustments to the neuron/synapse model or to the training algorithm itself. Nevertheless, these steps incur significant design time while still falling short of the desired improvement in training/inference times (latency). Designing SNNs that can match the accuracy of ANNs with reasonable training times is an exigent challenge in neuromorphic computing.

In this work, a versatile multi-objective Bayesian-based neural architecture search (NAS) is developed for automatically tuning the hyperparameters of several state-of-the-art SNN training algorithms and their underlying hardware. These include evolutionary-based, conversion-based, and backpropagation-based training methods on CMOS and Beyond-CMOS accelerators. Without redesigning or inventing new training algorithms or architectures, the approach not only yields significant accuracy improvements but also drastically reduces energy requirements and the latency of training and inference. This talk will also cover my follow-up research plans in the areas of computational neuroscience and brain-inspired computing.
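
The abstract describes the approach only at a high level. As a rough sketch of the general technique, and not the speaker's actual framework, the following Python example scalarizes three competing objectives (error, energy, latency) for two toy SNN hyperparameters and searches the space with a Gaussian-process surrogate and a lower-confidence-bound acquisition function. All objective functions, hyperparameter ranges, and weights below are invented stand-ins for real SNN training runs.

# Minimal, hypothetical sketch of multi-objective Bayesian
# hyperparameter search; not the speaker's implementation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def sample(n):
    # Candidates as (log10 learning rate, neuron firing threshold).
    return np.column_stack([rng.uniform(-4, -1, n), rng.uniform(0.5, 2.0, n)])

def evaluate_snn(z):
    # Hypothetical stand-in for one SNN training/inference run;
    # returns (classification error, energy proxy, latency proxy).
    lr, thr = 10.0 ** z[0], z[1]
    error = (z[0] + 2.5) ** 2 + 0.1 * (thr - 1.0) ** 2
    energy = 0.5 * thr + 0.05 / lr
    latency = 1.0 / thr + 10.0 * lr
    return error, energy, latency

def scalarize(objs, weights=(1.0, 0.3, 0.3)):
    # Weighted-sum scalarization, one common multi-objective strategy.
    return float(np.dot(weights, objs))

X = sample(5)                                   # initial random design
y = np.array([scalarize(evaluate_snn(z)) for z in X])
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(20):                             # Bayesian optimization loop
    gp.fit(X, y)
    cand = sample(256)
    mu, sigma = gp.predict(cand, return_std=True)
    z_next = cand[np.argmin(mu - 1.5 * sigma)]  # lower confidence bound
    X = np.vstack([X, z_next])
    y = np.append(y, scalarize(evaluate_snn(z_next)))

best = X[np.argmin(y)]
print(f"best found: lr={10 ** best[0]:.4g}, threshold={best[1]:.3g}")

The weighted sum is only the simplest way to combine objectives; the work described in the talk reportedly treats accuracy, energy, and latency as genuinely separate objectives and tunes both algorithm and hardware parameters, which a single scalarized cost does not capture.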

Bio: Maryam Parsa is a postdoctoral research associate in the Beyond Moore Computing group of the Computer Science and Mathematics Division (CSMD) at Oak Ridge National Laboratory (ORNL). She received her PhD from the Department of Electrical and Computer Engineering at Purdue University in 2020, supported by a prestigious four-year Intel/SRC PhD fellowship at the Center for Brain-Inspired Computing (C-BRIC) under the supervision of Prof. Kaushik Roy, followed by one year at ORNL as an ASTRO Fellow. During her graduate studies, she interned as a graduate researcher at Intel Corporation and at ORNL. She is also a recipient of the student presenter award at TECHCON 2018, the Ross Fellowship at Purdue University, and a first-place IEEE award at the University of Ottawa. Her primary research interests are brain-inspired algorithm-hardware co-design, neuromorphic computing, neural architecture search (NAS), energy-efficient AI, and Bayesian optimization.


