L1-norm double backpropagation adversarial defense
Conference Paper, 2019


Abstract

Adversarial examples are a challenging open problem for deep neural networks. In this paper we propose to add a penalization term that forces the decision function to be flat in some regions of the input space, so that it becomes, at least locally, less sensitive to attacks. Our proposition is theoretically motivated, and a first set of carefully conducted experiments shows that it behaves as expected when used alone and seems promising when coupled with adversarial training.
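The penalty described in the abstract can be sketched as a gradient-regularization training step: a first backward pass computes the gradient of the loss with respect to the input, its L1 norm is added to the loss, and a second backward pass (the "double backpropagation") updates the weights. The snippet below is a minimal PyTorch illustration under these assumptions, not the authors' code; the function name, the model/optimizer objects, and the weight `lambda_grad` are hypothetical.

```python
# Minimal sketch of an L1 double-backpropagation training step (assumed setup,
# not the paper's reference implementation).
import torch
import torch.nn.functional as F

def training_step(model, x, y, optimizer, lambda_grad=0.1):
    x = x.clone().requires_grad_(True)          # track gradients w.r.t. the input
    logits = model(x)
    loss = F.cross_entropy(logits, y)

    # First backward pass: gradient of the loss w.r.t. the input.
    (input_grad,) = torch.autograd.grad(loss, x, create_graph=True)

    # L1 penalty on that gradient; create_graph=True lets autograd
    # differentiate through it (the "double backpropagation").
    penalty = input_grad.abs().sum(dim=tuple(range(1, input_grad.dim()))).mean()

    total = loss + lambda_grad * penalty
    optimizer.zero_grad()
    total.backward()                             # second backward pass updates the weights
    optimizer.step()
    return loss.item(), penalty.item()
```

Penalizing the L1 norm of the input gradient pushes the loss surface to be flat around training points, which is the local-insensitivity effect the abstract refers to.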

Dates and versions

hal-02049020, version 1 (05-03-2019)

Cite

Ismaïla Seck, Gaëlle Loosli, Stéphane Canu. L1-norm double backpropagation adversarial defense. ESANN - European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, Apr 2019, Bruges, Belgium. ⟨hal-02049020⟩