Reading Notes

6 May 2019

Excessive Invariance Causes Adversarial Vulnerability

Jacobsen et al., ICLR 2019, [link]
tags: adversarial examples - reversible networks - iclr - 2019

The authors introduce the notion of invariance-based adversarial examples, which can be seen as a generalization of adversarial examples at the feature level: Given an image `x` and a pretrained classifier, it is possible to generate an image that is both semantically plausible and distinct from `x`, and yet yields the exact same output logits under the classifier. The authors study this scenario in the context of invertible networks, e.g., i-RevNet [1].
  • Pros (+): Novel category of adversarial examples, nice use of reversible networks.
  • Cons (-): The analysis holds for fully reversible networks, which are not commonly used in practice (outside of invertible flow generative models such as GLOW [3]). It is not clear if this study provides insights about standard ConvNets.

Definition: Invariance-based adversarial examples

Let `f` be a classification neural network of `L` layers, where `f_L` corresponds to the output logits layer, such that `f = f_L ∘ … ∘ f_1`, and `D(x) = argmax_y f(x)_y` is the classifier's decision. Let `o` be an oracle assigning a ground-truth label to every input; formally, `o: X → Y`. With these notations, we define:

  • Perturbation-based adversarial example: We say `x̄` is an `ε`-perturbation-based adversarial example for `x` if
    • (i) `x̄` is generated by an adversary and stays close to `x`, i.e., `‖x̄ − x‖ ≤ ε`, and
    • (ii) The perturbation modifies the output of the classifier with respect to the oracle: `D(x̄) ≠ o(x̄)`.
  • Invariance-based adversarial example: We say `x̄` is an invariance-based adversarial example for `x` at level `i` if
    • (i) `x̄` and `x` have the same representations at level `i`, i.e., `z_i(x̄) = z_i(x)`. In particular, this implies `D(x̄) = D(x)` for a deterministic neural network.
    • (ii) The samples have different classes under the oracle: `o(x̄) ≠ o(x)`.

For instance, if we consider the oracle to be a human observer, then perturbation-based adversarial examples correspond to cases where the classifier fails to correctly classify `x̄`, even though, to the human eye, `x̄` appears identical to `x`. Orthogonal to this, invariance-based adversarial examples correspond to cases where `x̄` is clearly distinct from `x` and yet has the same activation responses under the network `f`: In other words, there is a mismatch between the invariances learned by the model (in the feature representations) and the actual semantic content of the input domain (defined by the oracle).
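The two definitions can be made concrete as simple predicates. Below is a minimal sketch (my own illustration, not from the paper): `f` maps an input to its logits, `oracle` plays the role of `o`, and the toy classifier at the bottom is deliberately invariant to its second input coordinate, so flipping that coordinate produces an invariance-based adversarial example.

```python
import numpy as np

def decision(f, x):
    """Classifier decision D(x) = argmax over the logits f(x)."""
    return int(np.argmax(f(x)))

def is_perturbation_adversarial(f, oracle, x, x_adv, eps):
    """Perturbation-based: x_adv stays eps-close to x, the oracle's label
    is unchanged, yet the classifier disagrees with the oracle on x_adv."""
    close = np.linalg.norm(x_adv - x) <= eps
    same_oracle = oracle(x_adv) == oracle(x)
    return bool(close and same_oracle and decision(f, x_adv) != oracle(x_adv))

def is_invariance_adversarial(f, oracle, x, x_adv, atol=1e-8):
    """Invariance-based (at the logit level): identical logits, hence an
    identical decision, but the oracle assigns a different class."""
    same_logits = np.allclose(f(x), f(x_adv), atol=atol)
    return bool(same_logits and oracle(x_adv) != oracle(x))

# Toy classifier that only looks at x[0]; the oracle also uses x[1].
f = lambda x: np.array([x[0], -x[0]])   # logits depend on x[0] only
oracle = lambda x: int(x[1] > 0)        # "true" class depends on x[1]
x, x_adv = np.array([1.0, 1.0]), np.array([1.0, -1.0])
assert is_invariance_adversarial(f, oracle, x, x_adv)
```

The final assertion holds because `x` and `x_adv` produce identical logits while the oracle labels them differently, which is exactly the mismatch the paper studies.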


Analyzing Excessive Invariance with Reversible Networks

The authors consider a slightly modified version of the i-RevNet architecture [1]: the feature extractor `f` is thus a fully reversible function, hence all the input information is guaranteed to be available at every layer, and no “nuisance feature” gets discarded, as is the case in standard architectures.

The classifier is built such that it only sees the first `K` components of the logits output by `f` and predicts a class by applying the softmax function to them. These components are called the semantic variables `z_s`, while the remaining ones are the nuisance variables `z_n`. To analyze the presence of invariance-based adversarial examples, the authors propose the process of metameric sampling: Given semantic variables `z_s` extracted from a given input `x`, the authors sample nuisance variables `z̃_n` and observe the reconstructed input obtained by reversing the feature extractor: `x̃ = f⁻¹(z_s ‖ z̃_n)`, where `‖` denotes the concatenation operator. While the generated samples are not guaranteed to be from the original distribution (as `z_s` and `z_n` are not necessarily independent variables), in practice the resulting samples do look like plausible natural images.
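Metameric sampling is easy to sketch once `f` is invertible. The snippet below uses a random orthogonal linear map as a toy stand-in for the invertible feature extractor (an assumption for illustration; the paper uses a fully invertible RevNet), keeps `z_s` from one input, takes `z_n` from another, and inverts the concatenation:

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 8, 2  # input dimension, number of semantic logits (toy sizes)

# Orthogonal linear map as a trivially invertible "network":
# the inverse is just the transpose.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
f = lambda x: Q @ x         # z = f(x); z[:k] semantic, z[k:] nuisance
f_inv = lambda z: Q.T @ z   # exact inverse of f

def metamer(x_semantic_src, x_nuisance_src):
    """Metameric sampling: keep z_s from one input, take z_n from
    another, and invert the concatenation: x~ = f^{-1}(z_s || z_n)."""
    z_s = f(x_semantic_src)[:k]
    z_n = f(x_nuisance_src)[k:]
    return f_inv(np.concatenate([z_s, z_n]))

x1, x2 = rng.normal(size=d), rng.normal(size=d)
x_tilde = metamer(x1, x2)
# The metamer shares x1's semantic logits exactly, despite being a
# different point in input space.
assert np.allclose(f(x_tilde)[:k], f(x1)[:k])
```

Because `f` here is exactly invertible, the semantic logits of the metamer match those of `x1` up to floating-point error, mirroring the "exactly for fully invertible RevNets" claim in Figure 2.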

Experiment 1: Synthetic Spheres Dataset

This is a binary classification task in `R^d`, where a point should be classified as belonging to one of two spheres. The authors perform two experiments: (i) take two random points `x_1` and `x_2` and observe the decision boundary on the 2D subspace spanned by these two points, and (ii) the same experiment but taking a sample `x` and an associated metameric sample `x̃`. This is applied to the standard classifier's decisions, predicted from the semantic variables, and to another classifier trained to predict from the nuisance variables only. Results are depicted in Figure 1.

In particular, the third picture from the left shows that one gets a significant “adversarial direction” in the form of a trajectory that moves from one sphere to the other, but along which the semantic classifier's decisions remain the same. Note that this problem does not appear for the nuisance classifier (last picture), which confirms that it captures additional task-relevant information that is discarded by the semantic classifier.

Figure 1: Decision boundaries in the 2D subspace spanned by two random data points (left) and in the subspace spanned by a random data point and an associated metameric sample (right). The color of the samples represents the classifier's prediction.
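The "decision boundary on a 2D subspace" visualization can be reproduced with a few lines of NumPy. The sketch below is my own illustration: the dimension, the radii (1 and 1.3), and the norm-based stand-in classifier are illustrative assumptions, not the trained network from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 100  # ambient dimension (illustrative choice)

def sample_sphere(radius):
    """Uniform random point on a sphere of the given radius in R^d."""
    v = rng.normal(size=d)
    return radius * v / np.linalg.norm(v)

# Stand-in classifier: the two spheres are separable by their norm.
clf = lambda x: int(np.linalg.norm(x) > 1.15)

def boundary_on_plane(clf, x1, x2, n=5, scale=1.5):
    """Evaluate clf on a grid over the 2D subspace spanned by x1, x2."""
    e1 = x1 / np.linalg.norm(x1)
    u = x2 - (x2 @ e1) * e1           # Gram-Schmidt orthogonalization
    e2 = u / np.linalg.norm(u)
    ts = np.linspace(-scale, scale, n)
    return np.array([[clf(a * e1 + b * e2) for a in ts] for b in ts])

grid = boundary_on_plane(clf, sample_sphere(1.0), sample_sphere(1.3))
```

Plotting `grid` (e.g., with `matplotlib.pyplot.imshow`) gives exactly the kind of 2D slice shown in Figure 1; swapping `x2` for a metameric sample of `x1` reproduces experiment (ii).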

Experiment 2: Natural Images

The experiments are done on the MNIST and ImageNet datasets. Visualizing metameric samples shows that sampling random nuisance variables can significantly alter the image content in a semantically plausible way, while leaving the actual semantic variables `z_s` unchanged.

Figure 2: Top row: source images from which we sample the semantic variables `z_s`; middle row: metameric samples; bottom row: images from which we sample the nuisances `z_n`. The top and middle rows have the same (approximately for ResNets, exactly for fully invertible RevNets) logit activations. Thus, it is possible to change the image content completely without changing the output logits. This highlights a striking failure of classifiers to capture all task-dependent variability.


Information-Theoretic Analysis and How to Defend Against Invariance-based Adversarial Examples

Let `x` be the input with label `y`. An ideal semantic classifier would be such that the mutual information between semantic features and ground-truth labels, `I(z_s; y)`, is maximized. Additionally, in the current scenario, we can incorporate two additional objectives aiming to disentangle the nuisance and semantic variables:

  • (i) Decreasing the mutual information `I(z_n; y)` to reduce the dependency between the ground-truth labels and the nuisance features. The authors derive such an objective by using a variational lower bound on the mutual information described in [2].

  • (ii) Reducing the mutual information `I(z_s; z_n)`, which can be seen as a form of disentanglement between semantic and nuisance variables. This is incorporated in the training objective as a maximum likelihood objective under a factorial prior, i.e., a prior which assumes independence and can be factorized as `p(z) = p(z_s) p(z_n)`.
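To make objective (ii) concrete, here is a minimal sketch of what the factorial-prior log-likelihood term looks like, assuming standard-normal factors for both blocks (an assumption for illustration; in the paper this term is combined with the change-of-variables log-determinant of the invertible network):

```python
import numpy as np

def factorial_gaussian_log_prior(z_s, z_n):
    """Log-density of z = (z_s, z_n) under a factorial standard-normal
    prior p(z) = p(z_s) p(z_n): independence makes it a simple sum."""
    log_p = lambda z: -0.5 * (z @ z + z.size * np.log(2.0 * np.pi))
    return log_p(z_s) + log_p(z_n)

# Sanity check: a standard normal over the full vector already
# factorizes, so the factorial prior agrees with the joint density.
z_s, z_n = np.array([0.5, -1.0]), np.array([2.0])
z = np.concatenate([z_s, z_n])
joint = -0.5 * (z @ z + z.size * np.log(2.0 * np.pi))
assert np.isclose(factorial_gaussian_log_prior(z_s, z_n), joint)
```

Maximizing this term pushes the latent code toward a distribution in which `z_s` and `z_n` are independent, which is precisely the disentanglement that objective (ii) asks for.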

Note on Theorem 8 (?): The two conditions stated in the theorem seem to either contradict the non-negativity of mutual information or impose that one of the mutual information terms vanishes, which is a strong constraint.

Finally, experiments show that the newly proposed loss does encourage independence between the semantic and nuisance variables, although results are only available for the MNIST dataset.


References

  • [1] i-RevNet: Deep Invertible Networks, Jacobsen et al., ICLR 2018
  • [2] The IM algorithm: A variational approach to information maximization, Barber and Agakov, NeurIPS 2003
  • [3] Glow: Generative Flow with Invertible 1x1 Convolutions, Kingma and Dhariwal, NeurIPS 2018