Defenses against Adversarial Attacks

By LI Haoyang 2020.10.20 | 2020.11.15

Content

- Regularization
- Adversarial Training
- Robust Structure
- Defense at Inference
- Ensemble
- Breach

Regularization

There are a number of methods that try to increase the robustness of a model through regularization. The idea of regularization germinated in the very first paper that identified the problem of adversarial examples, i.e. Intriguing properties of neural networks.

Adversarial Defense by Regularization
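As a toy illustration (not taken from any specific paper in these notes), one regularization-style defense penalizes the norm of the loss gradient with respect to the *input*, so that small input perturbations change the loss less. The sketch below applies such a penalty to a plain logistic-regression model on synthetic two-cluster data; the model, data, and all constants are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w, b, x, y, lam):
    """Cross-entropy plus lam * mean ||dCE/dx_i||^2 (input-gradient penalty).

    For logistic regression the input gradient is (p_i - y_i) * w, so the
    penalty has the closed form mean((p - y)^2) * ||w||^2.
    """
    n = len(y)
    p = sigmoid(x @ w + b)
    r = p - y                          # dCE/dz_i
    s = p * (1 - p)                    # dp_i/dz_i
    ce = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    pen = np.mean(r ** 2) * (w @ w)    # mean ||r_i * w||^2
    loss = ce + lam * pen
    # analytic gradients of both terms
    gw = x.T @ r / n + lam * (2 * (x.T @ (r * s)) / n * (w @ w)
                              + 2 * np.mean(r ** 2) * w)
    gb = np.mean(r) + lam * 2 * np.mean(r * s) * (w @ w)
    return loss, gw, gb

def train(x, y, lam=0.5, lr=0.5, epochs=200):
    """Full-batch gradient descent on the regularized objective."""
    w = np.zeros(x.shape[1])
    b = 0.0
    for _ in range(epochs):
        _, gw, gb = loss_and_grad(w, b, x, y, lam)
        w -= lr * gw
        b -= lr * gb
    return w, b

# toy two-cluster data (illustrative)
rng = np.random.default_rng(1)
x = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(1, 0.3, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])
w, b = train(x, y)
acc = np.mean((sigmoid(x @ w + b) > 0.5) == y)
```

The penalty discourages the model from having a large gradient with respect to its input, which is exactly the quantity gradient-based attacks exploit.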

Adversarial Training

The prevailing method to defend against adversarial attacks is adversarial training: adversarial examples generated online during training are fed back to the model to produce a more robust version. It was proposed along with the Fast Gradient Sign Method (FGSM).

Adversarial Defense by Adversarial Training
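The loop can be sketched as follows on a toy logistic-regression model: at each step, craft FGSM examples from the current parameters and train on the clean and adversarial batches together. Everything here (the model, data, eps, learning rate) is an illustrative assumption, not a prescription from the papers these notes summarize.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method: step each input along sign(dLoss/dx)."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y)[:, None] * w        # dCE/dx for logistic regression
    return x + eps * np.sign(grad_x)

def adversarial_train(x, y, eps=0.1, lr=0.5, epochs=200):
    """Train on clean + adversarial examples generated online each step."""
    w = np.zeros(x.shape[1])
    b = 0.0
    for _ in range(epochs):
        x_adv = fgsm(x, y, w, b, eps)    # attack the *current* model
        x_mix = np.vstack([x, x_adv])
        y_mix = np.concatenate([y, y])
        p = sigmoid(x_mix @ w + b)
        w -= lr * x_mix.T @ (p - y_mix) / len(y_mix)
        b -= lr * np.mean(p - y_mix)
    return w, b

# toy two-cluster data (illustrative)
rng = np.random.default_rng(1)
x = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(1, 0.3, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])
w, b = adversarial_train(x, y)
clean_acc = np.mean((sigmoid(x @ w + b) > 0.5) == y)
adv_acc = np.mean((sigmoid(fgsm(x, y, w, b, 0.1) @ w + b) > 0.5) == y)
```

Because the adversarial examples are regenerated from the current parameters at every step, the model is always trained against the attack it is currently most vulnerable to.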

Robust Structure

Robust Structure

Defense at Inference

Adversarial Defense at Inference

Ensemble

Adversarial Defense with Ensemble

Breach

Breach Adversarial Defenses