CEC-ANNOUNCE-L Archives

February 2024

CEC-ANNOUNCE-L@LISTSERV.GMU.EDU

Subject: Unsupervised Learning of Depth Perception and Beyond
From: Duoduo Liao <[log in to unmask]>
Reply To: Duoduo Liao <[log in to unmask]>
Date: Mon, 5 Feb 2024 17:17:55 +0000
Unsupervised Learning of Depth Perception and Beyond
IST Invited Talks <https://cs.gmu.edu/events/list/cs-seminar/>

  *   Date and Time: Friday, Feb. 9, 2024 from 10:00 AM to 11:00 AM
  *   Speaker: Prof. Alex Wong, PhD, Department of Computer Science at Yale University
  *   Location: ENGR 4201 (CS Conference Room) (note: room changed)
     *   GMU faculty and students are expected to attend in person
     *   Zoom: https://gmu.zoom.us/j/92803100414?pwd=VEFEeFB0dDNBdlF1WDhIamZybW1TQT09

Abstract:

Deep neural networks have seen empirical success across computer vision tasks, but training them requires tens of thousands to millions of examples, which typically come in the form of one or more images and human-annotated ground truth. Curating vision datasets generally amounts to numerous person-hours, and tasks like depth estimation require an even greater effort. I will introduce an alternative form of supervision that leverages multi-sensor validation as an unsupervised (or self-supervised) training objective for depth estimation. I will demonstrate how one can leverage synthetic data and the abundance of publicly available pretrained models, which have largely relied on expensive manual labeling, to learn or distill the regularities of our visual world. In doing so, I show that one can design smaller and faster models that operate in real time with state-of-the-art performance. Moreover, these models can be adapted online to the novel environments in which they are deployed. Additionally, I will discuss the current limitations of the data augmentation procedures used during unsupervised training, which involves reconstructing the inputs as the supervision signal, and detail a method that makes it possible to scale up and introduce previously inviable augmentations to boost performance. Finally, I will show that unsupervised depth training can serve as a feasible form of large-scale pretraining to produce backbones suitable for semantic tasks.

Bio:

Alex Wong is an Assistant Professor in the Department of Computer Science and the director of the Vision Laboratory at Yale University. He received his Ph.D. in Computer Science from the University of California, Los Angeles (UCLA) in 2019, co-advised by Stefano Soatto and Alan Yuille, and was previously a postdoctoral research scholar at UCLA under the guidance of Stefano Soatto. His research lies at the intersection of machine learning, computer vision, and robotics, focusing largely on multi-sensor fusion for 3D reconstruction, robust vision under adverse conditions, unsupervised learning, and medical image analysis. His work received the outstanding student paper award at the Conference on Neural Information Processing Systems (NeurIPS) 2011 and the best paper award in robot vision at the International Conference on Robotics and Automation (ICRA) 2019.


Organizer:
Prof. Lindi Liao, PhD
Department of Information Sciences & Technology
School of Computing | George Mason University
http://mason.gmu.edu/~dliao2/