MS-CPE-L Archives

Date: Fri, 28 Jan 2022 17:01:53 +0000
From: Jammie Chang <[log in to unmask]>
Sender: MS-CPE-L <[log in to unmask]>

Spring 2022 ECE Distinguished Seminar Series

Watermarking for Protecting Deep Image Classifiers against Adversarial Attacks: A Framework and Algorithms

Dr. En-hui Yang
Director, Leitch-University of Waterloo Multimedia Communications Lab
Co-founder, SlipStream Data Inc.
Founder, BicDroid Inc.

Friday, February 11, 2022, 11:00 am
Zoom Meeting Link: https://gmu.zoom.us/j/91322937979


Abstract: Deep neural networks (DNNs) are vulnerable to adversarial examples: inputs with imperceptible but deliberately crafted perturbations that lead DNNs to produce incorrect outputs. The existence and easy construction of adversarial examples pose significant security risks to DNNs, especially in safety-critical applications such as visual object recognition and autonomous driving. In this talk, after reviewing attack strategies and failed defense strategies, we will present a novel watermarking-based framework that protects deep image classifiers by detecting adversarial examples. The proposed framework consists of a watermark encoder, a possible adversary, and a detector, followed by the deep image classifier to be protected. Specific watermarking and detection methods will also be discussed. Experiments on a subset of the ImageNet validation dataset show that the framework, together with the presented watermarking and detection methods, is effective against a wide range of advanced attacks, both static and adaptive, achieving a near-zero (effective) false negative rate for static and adaptive FGSM and PGD attacks with a guaranteed zero false positive rate. In addition, for all tested deep image classifiers (ResNet50V2, MobileNetV2, InceptionV3), the impact of watermarking on classification accuracy is insignificant: on average, 0.63% and 0.49% degradation in top-1 and top-5 accuracy, respectively.
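
For context on the two attacks named above: FGSM perturbs an input by a single step in the sign of the loss gradient, and PGD iterates that step while projecting back into an epsilon-ball around the original input. Below is a minimal Keras sketch for illustration only (not the speaker's code; the model choice, epsilon, step size, and iteration count are assumptions):

import tensorflow as tf

# Pretrained ImageNet classifier (one of the models named in the abstract).
model = tf.keras.applications.ResNet50V2(weights="imagenet")
loss_fn = tf.keras.losses.CategoricalCrossentropy()

def fgsm(x, y, eps=2.0 / 255):
    # One-step FGSM: move x (preprocessed inputs) in the sign of the loss
    # gradient so the classification loss on one-hot labels y increases.
    # eps is assumed to be in the model's input scale.
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x))
    grad = tape.gradient(loss, x)
    return x + eps * tf.sign(grad)

def pgd(x, y, eps=2.0 / 255, alpha=0.5 / 255, steps=10):
    # PGD: repeated small FGSM steps, each followed by projection back
    # into the eps-ball around the original input x.
    x_adv = x
    for _ in range(steps):
        x_adv = fgsm(x_adv, y, alpha)
        x_adv = tf.clip_by_value(x_adv, x - eps, x + eps)
    return x_adv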

