Spring 2022 ECE Distinguished Seminar Series
Watermarking for Protecting Deep Image Classifiers against Adversarial Attacks: A Framework and Algorithms
Dr. En-hui Yang
Director, Leitch-University of Waterloo Multimedia Communications Lab
Co-founder, SlipStream Data Inc.
Founder, BicDroid Inc.
Friday, February 11, 2022, 11:00 am
Zoom Meeting Link: https://gmu.zoom.us/j/91322937979
Abstract: Deep neural networks (DNNs) are vulnerable to adversarial examples: inputs with imperceptible but carefully crafted perturbations that lead DNNs to produce incorrect outputs. The existence and easy construction of adversarial examples pose significant security risks to DNNs, especially in safety-critical applications such as visual object recognition and autonomous driving. In this talk, after reviewing attack strategies and failed defense strategies, we will present a novel watermarking-based framework for protecting deep image classifiers by detecting adversarial examples. The proposed framework consists of a watermark encoder, a possible adversary, and a detector followed by the deep image classifier to be protected. Specific methods of watermarking and detection will also be discussed. Experiments on a subset of the ImageNet validation dataset show that our framework, together with the presented methods of watermarking and detection, is effective against a wide range of advanced attacks (static and adaptive), achieving a near-zero (effective) false negative rate for FGSM and PGD attacks with a guaranteed zero false positive rate. In addition, for all tested deep image classifiers (ResNet50V2, MobileNetV2, InceptionV3), the impact of watermarking on classification accuracy is insignificant: on average, 0.63% and 0.49% degradation in top-1 and top-5 accuracy, respectively.
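To make the pipeline in the abstract (encoder -> possible adversary -> detector -> protected classifier) concrete, the Python sketch below walks through it with a deliberately simple fragile LSB watermark. The watermarking scheme, key value, and threshold here are hypothetical stand-ins chosen only for illustration; the speaker's actual watermarking and detection methods are the subject of the talk and are not reproduced here.

    # Toy sketch of the abstract's pipeline. NOT the presented methods:
    # the fragile LSB watermark below is an illustrative stand-in.
    import numpy as np

    def embed_watermark(image_u8, key):
        """Hypothetical encoder: force each pixel's least-significant
        bit to a key-seeded pseudo-random bit."""
        rng = np.random.default_rng(key)
        bits = rng.integers(0, 2, size=image_u8.shape, dtype=np.uint8)
        return (image_u8 & 0xFE) | bits

    def is_intact(image_u8, key, threshold=0.99):
        """Hypothetical detector: recompute the expected bits and check
        the match rate. An odd-valued pixel change (e.g., an FGSM-style
        +/-1 perturbation) flips the stored bit, so the match rate
        collapses toward 0.5 on attacked inputs."""
        rng = np.random.default_rng(key)
        bits = rng.integers(0, 2, size=image_u8.shape, dtype=np.uint8)
        return np.mean((image_u8 & 1) == bits) >= threshold

    key = 2022  # hypothetical secret key
    clean = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
    marked = embed_watermark(clean, key)
    print(is_intact(marked, key))    # True: forward to the classifier

    # Simulated adversary: epsilon * sign(.) perturbation with epsilon = 1
    perturb = np.random.choice([-1, 1], size=marked.shape)
    attacked = np.clip(marked.astype(np.int16) + perturb, 0, 255).astype(np.uint8)
    print(is_intact(attacked, key))  # False: flag as adversarial

Note that an untouched watermarked image matches its expected bits exactly, which is the fragile-watermarking intuition behind a zero false positive rate; the framework's actual guarantees rest on the methods presented in the talk, not on this toy scheme.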