Lifeng Huang, Wenzi Zhuang, Chengying Gao, Ning Liu
Recently, adversarial attacks have posed a challenge to the security of deep neural networks,
motivating researchers to develop various defense methods.
However, do current defenses actually achieve sufficient security?
To answer this question, we propose the self-augmentation method (SA),
which circumvents defenders via transferable adversarial examples.
Concretely, self-augmentation includes two strategies:
(1) self-ensemble, which applies additional convolution layers to an existing model
to build diverse virtual models that are fused to achieve an ensemble-model effect
and prevent overfitting; and
(2) deviation-augmentation, which is based on the observation that,
for defense models, the input data is surrounded by highly curved loss surfaces,
inspiring us to apply deviation vectors to the input data to escape this vicinity.
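The deviation-augmentation idea can be sketched as follows. This is an illustrative toy example, not the authors' implementation: the function names, the quadratic stand-in loss, and all hyperparameters (number of deviation samples, deviation radius, step size) are assumptions made for clarity. Gradients are averaged over randomly deviated copies of the input before taking a sign-based ascent step, so the attack is less dominated by the curved loss region immediately around the clean input.

```python
import numpy as np

def toy_loss_grad(x):
    """Gradient of a toy quadratic loss ||x||^2, standing in for a model's loss."""
    return 2.0 * x

def deviation_augmented_grad(x, n_samples=8, radius=0.1, rng=None):
    """Average the loss gradient over randomly deviated copies of the input."""
    rng = np.random.default_rng(rng)
    grads = []
    for _ in range(n_samples):
        # Deviation vector: a small random offset pushing x out of its vicinity.
        delta = rng.uniform(-radius, radius, size=x.shape)
        grads.append(toy_loss_grad(x + delta))
    return np.mean(grads, axis=0)

x = np.ones(4)                       # toy "input data"
g = deviation_augmented_grad(x, rng=0)
x_adv = x + 0.01 * np.sign(g)        # one FGSM-style ascent step on the averaged gradient
```

In a real attack the toy gradient would be replaced by backpropagation through the (self-ensembled) model, and the step would be iterated under an L-infinity budget.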
Notably, we can naturally combine self-augmentation with existing methods
to establish more transferable adversarial attacks.
Extensive experiments conducted on four vanilla models and ten defenses suggest the superiority of our method
over state-of-the-art transferable attacks.
International Conference on Multimedia & Expo (ICME), 2021 (*oral)
Lifeng Huang, Chengying Gao, Yuyin Zhou, Changqing Zou, Cihang Xie, Alan Yuille, Ning Liu
In this paper, we study physical adversarial attacks on object detectors in the wild.
Previous works on this matter mostly craft instance-dependent perturbations,
and only for rigid and planar objects.
In contrast, we propose to learn an adversarial pattern that effectively
attacks all instances belonging to the same object category (e.g., person, car),
which we refer to as the Universal Physical Camouflage Attack (UPC).
Concretely, UPC crafts camouflage by jointly fooling the region proposal network,
as well as misleading the classifier and the regressor to output errors.
In order to make UPC effective for articulated non-rigid or non-planar objects,
we introduce a set of transformations for the generated camouflage patterns to
mimic their deformable properties.
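Such a transformation set can be sketched in miniature as below. This is a hedged illustration, not the paper's transformation set: the specific operations (horizontal flip, small translation, brightness jitter) and their ranges are assumptions chosen to show the sampling pattern, in which each draw perturbs the camouflage pattern before it would be rendered onto the target object.

```python
import numpy as np

def sample_transform(pattern, rng):
    """Apply one randomly sampled transformation to a 2-D camouflage pattern."""
    p = pattern.copy()
    if rng.random() < 0.5:
        p = p[:, ::-1]                     # random horizontal flip
    shift = int(rng.integers(-2, 3))
    p = np.roll(p, shift, axis=1)          # small random translation
    p = np.clip(p + rng.uniform(-0.1, 0.1), 0.0, 1.0)  # brightness jitter
    return p

rng = np.random.default_rng(0)
pattern = np.full((8, 8), 0.5)             # toy camouflage pattern in [0, 1]
batch = np.stack([sample_transform(pattern, rng) for _ in range(4)])
```

During optimization, the attack loss would be averaged over such a batch of transformed patterns so that the learned camouflage stays effective under deformation.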
We additionally impose an optimization constraint to make the generated patterns look
natural to human observers. To fairly evaluate the effectiveness of different
physical-world attacks on object detectors, we present the first standardized
virtual database, AttackScenes, which simulates the real 3D world in a controllable
and reproducible environment. Extensive experiments suggest the superiority of
our proposed UPC over existing physical adversarial attacks, not only
in virtual environments (AttackScenes) but also in real-world physical environments.
Computer Vision and Pattern Recognition (CVPR), 2020