By LI Haoyang 2020.11.4 - 2020.11.14 (intermittent reading)
PAC-learning with adversary

Content:
- PAC-Learning
  - Definition
  - VC-dimension
- PAC-learning in the presence of evasion adversaries - NIPS 2018
  - Adversarial agnostic PAC-learning
    - Adversarial Expected Risk (Definition 1)
    - Adversarial Empirical Risk Minimization (ERM) (Definition 2)
    - Lemma 1
    - Learnability (Definition 3)
  - Adversarial VC-dimension and sample complexity
    - Corrupted hypothesis
    - Lemma 2
    - Loss classes
    - Lemma 3
    - Adversarial VC-dimension
      - Equivalent shattering coefficient definitions
      - Adversarial VC-dimension
      - Theorem 1 (Sample complexity upper bound with an evasion adversary)
    - The adversarial VC-dimension of halfspace classifiers
      - Convex constraint on binary adversarial relation
      - Definition 7
      - Theorem 2
    - Adversarial VC dimension can be larger
      - Theorem 3
  - Concluding remarks
  - Inspirations
A concept class $C$ is said to be PAC-learnable if there exists an algorithm $\mathcal{A}$ and a polynomial function $poly(\cdot,\cdot,\cdot,\cdot)$ such that for any $\epsilon > 0$ and $\delta > 0$, for all distributions $D$ on $X$ and for any target concept $c \in C$, the following holds for any sample size $m \ge poly(1/\epsilon, 1/\delta, n, size(c))$:

$$\Pr_{S \sim D^m}\left[R(h_S) \le \epsilon\right] \ge 1 - \delta$$

If $\mathcal{A}$ further runs in $poly(1/\epsilon, 1/\delta, n, size(c))$, then $C$ is said to be efficiently PAC-learnable. When such an algorithm $\mathcal{A}$ exists, it is called a PAC-learning algorithm for $C$.
This definition states that, as long as the sample size is large enough (polynomial in $1/\epsilon$ and $1/\delta$), a PAC-learning algorithm returns a hypothesis whose generalization error is smaller than $\epsilon$ with probability at least $1 - \delta$.
From VC dimension - Wikipedia:
The VC dimension of a model $f$ is the maximum number of points that can be arranged so that $f$ shatters them. More formally, it is the maximum cardinal $D$ such that some data point set of cardinality $D$ can be shattered by $f$.
If a set of points can be shattered by some model, it means that the model can always classify these points assigned with arbitrary labels from $\{0, 1\}$. For a set with $n$ data points, the number of possible label assignments is $2^n$.
==The VC dimension of a hypothesis (model) is the largest number (cardinality) of elements in a set that can be shattered by this hypothesis.==
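To make the shattering check concrete, here is a small brute-force sketch in Python (my own toy example, not taken from any of the sources above): it enumerates the label patterns that 1-D threshold classifiers realize on a point set and checks whether all $2^n$ labelings are achieved.

```python
from itertools import product

# Toy hypothesis class: 1-D threshold classifiers h_t(x) = 1 if x >= t else 0.
# On a fixed point set, only thresholds between consecutive points matter,
# so the infinite class is represented by finitely many behaviours.
def threshold_hypotheses(points):
    ts = sorted(points)
    thresholds = [ts[0] - 1] + [(a + b) / 2 for a, b in zip(ts, ts[1:])] + [ts[-1] + 1]
    return [lambda x, t=t: 1 if x >= t else 0 for t in thresholds]

def is_shattered(points, hypotheses):
    """True iff every labeling in {0,1}^n is realized by some hypothesis."""
    realized = {tuple(h(x) for x in points) for h in hypotheses}
    return all(lbl in realized for lbl in product([0, 1], repeat=len(points)))

points = [0.0, 1.0]
print(is_shattered(points[:1], threshold_hypotheses(points)))  # True: a single point is shattered
print(is_shattered(points, threshold_hypotheses(points)))      # False: the labeling (1, 0) is unreachable
# Hence the VC dimension of 1-D thresholds is 1.
```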
Paper: https://dl.acm.org/doi/10.5555/3326943.3326965
Daniel Cullina, Arjun Nitin Bhagoji, Prateek Mittal. PAC-learning in the presence of evasion adversaries. NIPS 2018. arXiv:1806.01471
In this paper, we step away from the attack-defense arms race and seek to understand the limits of what can be learned in the presence of an evasion adversary.
They are interested in the effect of an adversary on sample complexity, i.e.
If we add an adversary to the learning setting defined in Definition 3, what happens to the gap in performance between the optimal classifier and the learned classifier?
We explicitly compute the adversarial VC-dimension for halfspace classifiers with adversaries with standard $\ell_p$-distance constraints on adversarial perturbations, and show that it matches the standard VC-dimension.
This implies that the sample complexity of PAC-learning does not increase in the presence of an adversary.
We also show that this is not always the case by constructing hypothesis classes where the adversarial VC-dimension is arbitrarily larger or smaller than the standard one.
They extend PAC-learning to the setting where an evasion adversary is present.
Symbol definition:
The learning problem is as follows.
There is an unknown distribution $P \in \mathcal{P}(X \times Y)$. The learner receives $n$ labeled training examples $(x_i, y_i) \sim P$ and must select a hypothesis $h \in \mathcal{H}$.
The power set $2^{X \times Y}$ denotes the set of relations between $X$ and $Y$. A labeled training dataset is then a sampled relation.
The adversary receives a labeled natural example $(x, y)$ and selects some $\tilde{x} \in N(x)$, i.e. a point from the set of adversarial examples in the neighborhood of $x$.

The adversary gives $\tilde{x}$ to the learner and the learner must estimate $y$.
$\mathcal{P}(X \times Y)$ is the set of probability measures on $X \times Y$; $\sigma$ is the associated set of events (a $\sigma$-algebra).
$R \subseteq X \times X$ is the binary nearness relation that generates the neighborhood of possible adversarial samples. The neighborhood $N(x) = \{\tilde{x} : (x, \tilde{x}) \in R\}$ is required to be nonempty so that some choice of $\tilde{x}$ is always available.

$I$ is the identity relation, i.e. $I = \{(x, x) : x \in X\}$.

Under $I$, the neighborhood is $N(x) = \{x\}$ and $\tilde{x} = x$. The problem boils down to the standard problem of learning without an adversary.
If $R$, $R'$ are two nearness relations and $R \subseteq R'$, then $R'$ represents a stronger adversary.
One way to produce a relation is from a distance $d$ on $X$ and an adversarial budget constraint $\epsilon$, i.e.

$$R = \{(x, \tilde{x}) \in X \times X : d(x, \tilde{x}) \le \epsilon\}.$$
The learner's risk under the true distribution $P$ in the presence of an adversary constrained by the relation $R$ is

$$L_P(h, R) = \mathbb{E}_{(x, y) \sim P}\left[\sup_{\tilde{x} : (x, \tilde{x}) \in R} \mathbb{1}\left[h(\tilde{x}) \ne y\right]\right].$$
The risk is minimized when even the worst-case adversarial example in each neighborhood can still be handled correctly by the hypothesis $h$.
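As a concrete illustration, here is a minimal sketch of my own (assuming a finite, grid-sampled $\ell_\infty$ neighborhood rather than the exact supremum): the adversarial risk on an empirical sample is the average of the worst-case 0-1 loss inside each neighborhood.

```python
import numpy as np

# A minimal sketch (not the paper's code): empirical adversarial risk of a
# classifier h when each example x has a finite neighborhood N(x) of allowed
# perturbations, here an l_inf ball of radius eps sampled on a small grid.
def neighborhood(x, eps, steps=5):
    """Finite approximation of {x~ : |x~ - x| <= eps} for 1-D x."""
    return x + np.linspace(-eps, eps, steps)

def adversarial_empirical_risk(h, xs, ys, eps):
    """Mean over examples of the worst-case 0-1 loss inside the neighborhood."""
    worst = []
    for x, y in zip(xs, ys):
        losses = [1 if h(x_tilde) != y else 0 for x_tilde in neighborhood(x, eps)]
        worst.append(max(losses))        # sup over the (finite) neighborhood
    return float(np.mean(worst))

# Example: threshold classifier on the line, labels flip at 0.
h = lambda x: 1 if x >= 0.0 else 0
xs = np.array([-2.0, -0.1, 0.1, 2.0])
ys = np.array([0, 0, 1, 1])
print(adversarial_empirical_risk(h, xs, ys, eps=0.0))   # 0.0: no adversary
print(adversarial_empirical_risk(h, xs, ys, eps=0.5))   # 0.5: the two near-boundary points can be flipped
```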
Let $L^*_{P,R} = \inf_{h \in \mathcal{H}} L_P(h, R)$. Then, learning is possible if there is an algorithm that, with high probability, gives us $h$ such that $L_P(h, R) - L^*_{P,R} \le \epsilon$.
The inaccessible true distribution $P$ is approximated by the empirical distribution of the training samples.
The adversarial empirical risk minimizer is defined as

$$\hat{h} = \operatorname*{arg\,min}_{h \in \mathcal{H}} L_{\hat{P}_n}(h, R),$$

where $L_{\hat{P}_n}(h, R)$ is the expected loss under the empirical distribution.

$\hat{P}_n$ is the empirical distribution induced by the $n$ samples drawn from $P$.
The AERM output is the hypothesis that is best under the empirical distribution in the presence of the adversary.
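A minimal AERM sketch (illustrative only, not the paper's algorithm), assuming a finite grid of candidate 1-D threshold classifiers and an $\ell_\infty$ budget `eps`; for a threshold classifier the worst case inside an interval is attained at one of its endpoints, which keeps the inner maximization exact.

```python
import numpy as np

# A minimal AERM sketch: search a finite class of 1-D threshold classifiers and
# keep the one with the lowest empirical adversarial risk under an l_inf budget eps.
def adv_risk(h, xs, ys, eps):
    # Worst-case 0-1 loss over the interval [x-eps, x+eps]: for a threshold
    # classifier it suffices to test the two endpoints.
    return np.mean([max(1 if h(x - eps) != y else 0,
                        1 if h(x + eps) != y else 0) for x, y in zip(xs, ys)])

def aerm_threshold(xs, ys, eps):
    candidates = [lambda x, t=t: 1 if x >= t else 0 for t in np.linspace(min(xs), max(xs), 101)]
    return min(candidates, key=lambda h: adv_risk(h, xs, ys, eps))

xs = np.array([-2.0, -1.0, 1.0, 2.0])
ys = np.array([0, 0, 1, 1])
h_hat = aerm_threshold(xs, ys, eps=0.5)
print(adv_risk(h_hat, xs, ys, eps=0.5))  # 0.0: a threshold near 0 keeps a margin > eps on every point
```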
Let $\mathcal{A}$ be a learning algorithm for a hypothesis class $\mathcal{H}$. Suppose $R$, $R'$ are nearness relations and $R \subseteq R'$. For all $P$,

$$\inf_{h \in \mathcal{H}} L_P(h, R) \le \inf_{h \in \mathcal{H}} L_P(h, R').$$

For all $h \in \mathcal{H}$ and all $P$,

$$L_P(h, R) \le L_P(h, R').$$
In other words, if we design a learning algorithm for an adversary constrained by $R'$, its performance against a weaker adversary $R$ is at least as good.
This is natural by definition: a weaker adversary has fewer perturbations to choose from, so the supremum inside the risk can only decrease.
A hypothesis class $\mathcal{H}$ is learnable by empirical risk minimization in the presence of an evasion adversary constrained by $R$ if there is a function $m_{\mathcal{H},R}(\epsilon, \delta)$ (the sample complexity) with the following property.

For all $\epsilon > 0$ and $\delta > 0$, all distributions $P$, and all $n \ge m_{\mathcal{H},R}(\epsilon, \delta)$,

$$\Pr_{S \sim P^n}\left[L_P(\hat{h}_S, R) - \inf_{h \in \mathcal{H}} L_P(h, R) \le \epsilon\right] \ge 1 - \delta,$$

where $\hat{h}_S$ is the AERM output on the sample $S$.

When the sample size exceeds $m_{\mathcal{H},R}(\epsilon, \delta)$, with probability at least $1 - \delta$ the adversarial risk of the learned hypothesis is within $\epsilon$ of the best achievable adversarial risk in the class.
The presence of the adversary forces us to learn using a corrupted set of hypotheses.
The so-called corrupted hypothesis introduces a new output $\bot$ that means "always wrong".
With the new label set $Y^\bot = Y \cup \{\bot\}$, the information in $h$ and $R$ can be combined into a single function by defining a mapping $\varphi_R : Y^X \to (Y^\bot)^X$:

$$(\varphi_R(h))(x) = \begin{cases} \hat{y}, & \text{if } h(\tilde{x}) = \hat{y} \text{ for all } \tilde{x} \in N(x) \\ \bot, & \text{if } \exists\, \tilde{x}, \tilde{x}' \in N(x) \text{ such that } h(\tilde{x}) \ne h(\tilde{x}') \end{cases}$$

This mapping maps the original hypothesis to a hypothesis with a corrupted label: if two points in the same neighborhood $N(x)$ are classified into different labels, then $x$ is labeled as corrupted.

The corrupted set of hypotheses is then $\varphi_R(\mathcal{H}) = \{\varphi_R(h) : h \in \mathcal{H}\}$.
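A minimal sketch of the corruption mapping (my own notation; `BOTTOM` stands for the $\bot$ output, and the $\ell_\infty$ ball is approximated by a finite grid):

```python
import numpy as np

# Corrupted-hypothesis mapping phi_R: if h is constant on the neighborhood
# N(x), output that label; otherwise output BOTTOM ("always wrong").
BOTTOM = "⊥"

def corrupt(h, neighborhood):
    """Return phi_R(h) given a function neighborhood(x) -> iterable of points."""
    def h_corrupted(x):
        labels = {h(x_tilde) for x_tilde in neighborhood(x)}
        return labels.pop() if len(labels) == 1 else BOTTOM
    return h_corrupted

# Example: threshold classifier on the line, l_inf budget 0.5 (finite grid).
h = lambda x: 1 if x >= 0.0 else 0
nbhd = lambda x: np.linspace(x - 0.5, x + 0.5, 11)
h_c = corrupt(h, nbhd)
print(h_c(-2.0), h_c(0.1), h_c(2.0))  # 0 ⊥ 1  -- points within 0.5 of the boundary are corrupted
```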
Learning an ordinary hypothesis in the presence of an adversary is equivalent to learning the corresponding corrupted hypothesis without an adversary.
For any nearness relation $R$ and distribution $P$,

$$L_P(h, R) = L_P(\varphi_R(h), I).$$
Proof
Let $g = \varphi_R(h)$. For all $(x, y)$,

$$\sup_{\tilde{x} \in N(x)} \mathbb{1}\left[h(\tilde{x}) \ne y\right] = \mathbb{1}\left[g(x) \ne y\right].$$

Taking the expectation over $(x, y) \sim P$ gives Lemma 2, i.e.

$$L_P(h, R) = L_P(\varphi_R(h), I).$$
Lemma 2 states that the risk of a hypothesis $h$ in the presence of an adversary whose neighborhood is bounded by the relation $R$ equals the risk of the corresponding corrupted hypothesis $\varphi_R(h)$ against an "adversary" that is not capable of changing the data point, i.e. one whose neighborhood is bounded by the identity relation $I$ and therefore contains only the point itself.
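A quick numerical sanity check of Lemma 2 on a toy problem (again my own sketch, with a finite grid standing in for the $\ell_\infty$ ball): the empirical adversarial risk of $h$ matches the plain risk of $\varphi_R(h)$ once $\bot$ is counted as always wrong.

```python
import numpy as np

# Lemma 2, numerically: adversarial risk of h under relation R equals the plain
# risk of the corrupted hypothesis phi_R(h), with ⊥ counted as always wrong.
BOTTOM = "⊥"
h = lambda x: 1 if x >= 0.0 else 0
nbhd = lambda x: np.linspace(x - 0.5, x + 0.5, 11)           # l_inf budget 0.5 (finite grid)

def adv_loss(x, y):                                          # sup over the neighborhood of the 0-1 loss
    return max(1 if h(x_t) != y else 0 for x_t in nbhd(x))

def corrupted(x):                                            # phi_R(h)(x)
    labels = {h(x_t) for x_t in nbhd(x)}
    return labels.pop() if len(labels) == 1 else BOTTOM

xs = np.array([-2.0, -0.3, 0.1, 1.5])
ys = np.array([0, 0, 1, 1])
lhs = np.mean([adv_loss(x, y) for x, y in zip(xs, ys)])      # L(h, R) on the empirical distribution
rhs = np.mean([corrupted(x) != y for x, y in zip(xs, ys)])   # L(phi_R(h), I)
print(lhs, rhs)   # both 0.5 here: the two points near the boundary are corrupted/attackable
```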
The loss classes (compositions of the loss with the classifier functions) derived from $\mathcal{H}$ and $\varphi_R(\mathcal{H})$ respectively are defined as:

$$\lambda(\mathcal{H}) = \{\lambda(h) : h \in \mathcal{H}\}, \qquad \lambda(\varphi_R(\mathcal{H})) = \{\lambda(\varphi_R(h)) : h \in \mathcal{H}\}.$$

In which, $\lambda$ maps a hypothesis to its loss function, i.e.

$$\lambda(h)(x, y) = \ell(h(x), y).$$

And $\ell$ is defined as:

$$\ell(\hat{y}, y) = \begin{cases} 1, & \hat{y} \ne y \\ 0, & \hat{y} = y \end{cases}$$

If the predicted label of the hypothesis on a data point is not correct, then $\ell = 1$, otherwise $\ell = 0$.

An element of a loss class is an error measure induced by one hypothesis: it maps each labeled example to 0 or 1.
Let $\hat{h}$ be the adversarial empirical risk minimizer computed from $n$ samples. With probability $1 - \delta$,

$$L_P(\varphi_R(\hat{h}), I) - \inf_{h \in \mathcal{H}} L_P(\varphi_R(h), I) \le \varepsilon(n, \delta),$$

where $\varepsilon(n, \delta)$ is a uniform-convergence term, vanishing as $n \to \infty$, controlled by the shattering coefficient of the corrupted loss class $\lambda(\varphi_R(\mathcal{H}))$ (see the paper for the explicit expression).
This lemma states that the error rate of the corrupted hypothesis corresponding to the Adversarial Empirical Risk Minimizer hypothesis minus the infimum of the error rate of the corrupted hypothesis is upper bounded.
The shattering coefficient of a family of binary classifiers $\mathcal{F} \subseteq Y^X$ is

$$\sigma(\mathcal{F}, n) = \max_{x \in X^n} \left|\left\{(f(x_1), \ldots, f(x_n)) : f \in \mathcal{F}\right\}\right|.$$

The alternative definition of shattering in terms of the loss class is

$$\sigma(\lambda(\mathcal{F}), n) = \max_{(x, y) \in (X \times Y)^n} \left|\left\{(\ell(f(x_1), y_1), \ldots, \ell(f(x_n), y_n)) : f \in \mathcal{F}\right\}\right|.$$

These two definitions are equivalent, as shown in the paper: for a fixed labeling $y$, flipping label patterns by $y$ is a bijection onto error patterns.

The ordinary VC-dimension is then

$$VC(\mathcal{F}) = \sup\{n \in \mathbb{N} : \sigma(\mathcal{F}, n) = 2^n\}.$$
The adversarial VC-dimension is

$$AVC(\mathcal{H}, R) = \sup\{n \in \mathbb{N} : \sigma(\lambda(\varphi_R(\mathcal{H})), n) = 2^n\},$$

i.e. the ordinary VC-dimension of the loss class of the corrupted hypotheses.
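The following brute-force sketch (a toy, finite setting of my own, not from the paper) makes the definition operational: corrupt each hypothesis with $\varphi_R$ and check whether a set of labeled examples is shattered in the loss-class sense, i.e. whether all $2^n$ error patterns appear.

```python
# Brute-force check of loss-class shattering for ordinary vs. corrupted hypotheses.
BOTTOM = "⊥"
X = range(8)
intervals = [lambda x, a=a, b=b: 1 if a <= x <= b else 0
             for a in X for b in X if a <= b]            # interval classifiers, VC dimension 2
nbhd = lambda x: {max(0, x - 1), x, min(7, x + 1)}        # adversary with budget 1 on the integer line

def corrupt(h):
    def g(x):
        labels = {h(t) for t in nbhd(x)}
        return labels.pop() if len(labels) == 1 else BOTTOM
    return g

def shattered(classifiers, labeled):
    patterns = {tuple(1 if c(x) != y else 0 for x, y in labeled) for c in classifiers}
    return len(patterns) == 2 ** len(labeled)

examples = [(1, 0), (6, 0)]
print(shattered(intervals, examples))                         # True: shattered without the adversary
print(shattered([corrupt(h) for h in intervals], examples))   # True: still shattered under budget 1
```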
These definitions and lemmas can now be combined to obtain a sample complexity upper bound for PAC-learning in the presence of an evasion adversary.
For a space $X$, a classifier family $\mathcal{H}$, and an adversarial constraint $R$, there is a universal constant $c$ such that

$$m_{\mathcal{H},R}(\epsilon, \delta) \le \frac{c}{\epsilon^2}\left(K + \log\frac{1}{\delta}\right),$$

where $K = AVC(\mathcal{H}, R)$.
It states that the number of samples required for learning in the presence of an adversary is finite whenever the adversarial VC-dimension is finite.
A halfspace classifier has a hyperplane as a binary classification boundary.
Let $B \subseteq \mathbb{R}^d$ be a nonempty, closed, convex, origin-symmetric set.

The seminorm derived from $B$ is $\|x\|_B = \inf\{t > 0 : x \in tB\}$, and the associated distance is $d_B(x, x') = \|x - x'\|_B$.

Let $V$ be the largest linear subspace contained in $B$.

The adversarial constraint derived from $B$ is $R_B = \{(x, \tilde{x}) : \tilde{x} - x \in B\}$, or equivalently $N(x) = x + B$.
This definition of $R_B$ encompasses all norm-bounded adversaries, as long as the budget is folded into $B$ (e.g. $B = \{v : \|v\|_p \le \epsilon\}$ gives the standard $\ell_p$ adversary with budget $\epsilon$).
Let $\mathcal{H}$ be a family of classifiers on $\mathbb{R}^d$ with labels $Y = \{-1, +1\}$.

For an example $x$ and a classifier $h$, define the signed distance to the boundary to be

$$s_h(x) = h(x) \cdot \inf\{\|x' - x\|_B : h(x') \ne h(x)\}.$$

For a list of examples $(x_1, \ldots, x_n)$, define the signed distance set to be

$$S(x_1, \ldots, x_n) = \{(s_h(x_1), \ldots, s_h(x_n)) : h \in \mathcal{H}\} \subseteq \mathbb{R}^n.$$
Let $\mathcal{H}$ be the family of halfspace classifiers, i.e.

$$\mathcal{H} = \{h_{w,b} : w \in \mathbb{R}^d, b \in \mathbb{R}\},$$

And define $h_{w,b}(x) = \mathrm{sgn}(w^\top x + b)$.
It is well known that the VC-dimension of this family is $d + 1$.
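For halfspace classifiers the signed distance of Definition 7 has a closed form; the sketch below (my own illustration) computes it for an $\ell_p$ distance, where the relevant normalization is the dual norm $\|w\|_q$ with $1/p + 1/q = 1$.

```python
import numpy as np

# Signed distance of x to the decision boundary of h(x) = sign(w.x + b):
# the l_p distance to the hyperplane is |w.x + b| / ||w||_q with q the dual
# norm; keeping the sign of w.x + b attaches the predicted class.
def signed_distance(w, b, x, p=2):
    q = np.inf if p == 1 else (1.0 if p == np.inf else p / (p - 1.0))
    return (w @ x + b) / np.linalg.norm(w, ord=q)

w, b = np.array([3.0, 4.0]), -5.0
print(signed_distance(w, b, np.array([3.0, 4.0])))   # +4.0: positive side, l_2 distance 4
print(signed_distance(w, b, np.array([0.0, 0.0])))   # -1.0: negative side, l_2 distance 1
```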
Let $\mathcal{H}$ be the family of halfspace classifiers of $\mathbb{R}^d$.

Let $B \subseteq \mathbb{R}^d$ be a nonempty, closed, convex, origin-symmetric set.

Let $R_B$ be the adversarial constraint derived from $B$, and let $V$ be the largest linear subspace contained in $B$.

Then $AVC(\mathcal{H}, R_B) = d + 1 - \dim(V)$.

In particular, when $B$ is a bounded $\ell_p$ ball, $\dim(V) = 0$, giving $AVC(\mathcal{H}, R_B) = d + 1 = VC(\mathcal{H})$.
The proof is in the original paper.
This theorem states that the VC-dimension of halfspace classifiers in the presence of a norm-bounded adversary is the same as that of ordinary halfspace classifiers.
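To see what the corrupted hypotheses look like in this case, here is a small sketch (my own illustration) of $\varphi_{R_B}(h)$ for a halfspace $h$ under an $\ell_2$ adversary with budget `eps`: the prediction survives only outside an `eps`-margin around the decision boundary.

```python
import numpy as np

# Corrupted halfspace classifier under an l_2 adversary with budget eps:
# keep the prediction when the point is farther than eps from the boundary,
# otherwise output the "always wrong" label ⊥.
BOTTOM = "⊥"

def corrupted_halfspace(w, b, eps):
    w, b = np.asarray(w, dtype=float), float(b)
    def g(x):
        margin = (w @ x + b) / np.linalg.norm(w)    # signed l_2 distance to the boundary
        return BOTTOM if abs(margin) <= eps else int(np.sign(margin))
    return g

g = corrupted_halfspace([1.0, 0.0], 0.0, eps=0.5)
print(g(np.array([2.0, 0.0])), g(np.array([0.2, 3.0])), g(np.array([-1.0, 0.0])))  # 1 ⊥ -1
```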
We have shown in the previous section that the adversarial VC-dimension can be smaller than or equal to the standard VC-dimension.
For any $n$, there is a space $X$, an adversarial constraint $R$, and a hypothesis class $\mathcal{H}$ such that $VC(\mathcal{H}) = 1$ and $AVC(\mathcal{H}, R) \ge n$.
Proof
Let $X = \mathbb{R}^n$. Let $\mathcal{H} = \{h_a : a \in X\}$, where

$$h_a(x) = \begin{cases} 1, & x = a \\ 0, & x \ne a \end{cases}$$

The VC dimension of this family is 1 because no classifier outputs the labeling $(1, 1)$ for any pair of distinct examples.
This family of hypotheses checks whether the input equals some designated point, hence it can shatter only a single point.
Consider the adversary with an $\ell_\infty$ budget of 1. No degraded classifier will ever output 1, only 0 and $\bot$.

This is because, for a degraded classifier to output 1, every point in the neighborhood would have to be predicted as 1, but each classifier predicts 1 on only a single point while every neighborhood contains more than one point.
Take the examples $x_i = 2e_i$ for $i = 1, \ldots, n$ (twice the standard basis vectors):

Now consider the classifiers that are the indicators for the points $a_S = \sum_{i \in S} e_i$, the 0/1 indicator vectors of subsets $S \subseteq [n]$.

Observe that

$$\|a_S - x_i\|_\infty \le 1 \iff i \in S,$$

because if $i \in S$, then $|(a_S)_i - 2| = 1$ (and every other coordinate differs by at most 1), but if $i \notin S$ then $|(a_S)_i - 2| = 2 > 1$.

Thus $\varphi_R(h_{a_S})$ outputs $\bot$ at exactly the indices $i$ with $i \in S$, and 0 at the others. If the examples are all labeled with 0, this subset of the degraded classifier family achieves all $2^n$ possible error patterns.
The adversarial VC-dimension is therefore at least $n$.
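A brute-force check of this construction for a small $n$ (my own sketch, following the reconstruction above):

```python
import numpy as np
from itertools import combinations, chain

# Point-indicator classifiers have VC dimension 1, yet under an l_inf budget-1
# adversary their degraded versions realize all 2^n error patterns on the
# examples x_i = 2*e_i labeled 0, so the adversarial VC-dimension is >= n.
n = 3
examples = [2.0 * np.eye(n)[i] for i in range(n)]            # x_i = 2 e_i, all labeled 0

def degraded_loss_pattern(a):
    """Error pattern of the degraded point-indicator h_a against labels 0."""
    # phi_R(h_a)(x_i) = ⊥ (always wrong) iff a lies in the l_inf ball of radius 1
    # around x_i; otherwise the whole neighborhood is classified 0 (correct).
    return tuple(int(np.max(np.abs(a - x)) <= 1.0) for x in examples)

subsets = chain.from_iterable(combinations(range(n), k) for k in range(n + 1))
patterns = {degraded_loss_pattern(np.eye(n)[list(S)].sum(axis=0)) for S in subsets}
print(len(patterns) == 2 ** n)   # True: every error pattern is achieved, so AVC >= n
```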
This example shows that an adversary can not only affect the optimal loss, as shown in Lemma 1, but can also slow the convergence rate to the optimal hypothesis.
While our results provide a useful theoretical understanding of the problem of learning with adversaries, the nature of the 0-1 loss prevents the efficient implementation of Adversarial ERM to obtain robust classifiers.
In practice, recent work on adversarial training [33, 52, 75] has sought to improve the robustness of classifiers by directly trying to find a classifier that minimizes the Adversarial Expected Risk, which leads to a saddle point problem [52].
A number of heuristics are used to enable the efficient solution of this problem, such as replacing the 0-1 loss with smooth surrogates like the logistic loss and approximating the inner maximum by a Projected Gradient Descent (PGD)-based adversary [52] or by an upper bound [63].
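As a rough illustration of those heuristics (my own minimal sketch, not the method of [52] or [63]): a logistic surrogate replaces the 0-1 loss, and a few projected-gradient steps inside an $\ell_\infty$ ball approximate the inner maximization for a linear model.

```python
import numpy as np

# Inner maximization of the saddle-point problem, approximated by projected
# gradient ascent on the logistic surrogate loss of a linear model w.
def logistic_loss(w, x, y):                  # y in {-1, +1}
    return np.log1p(np.exp(-y * (w @ x)))

def pgd_perturb(w, x, y, eps, steps=10, lr=0.1):
    x_adv = x.copy()
    for _ in range(steps):
        grad_x = -y * w / (1.0 + np.exp(y * (w @ x_adv)))   # d loss / d x for the linear model
        x_adv = x_adv + lr * np.sign(grad_x)                 # signed gradient ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)             # project back onto the l_inf ball
    return x_adv

w = np.array([1.0, -2.0])
x, y = np.array([0.5, -0.2]), 1
x_adv = pgd_perturb(w, x, y, eps=0.3)
print(logistic_loss(w, x, y), logistic_loss(w, x_adv, y))    # the adversarial loss is larger
```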
Our framework now allows for an analysis of the underlying PAC learning problem for these approaches.
This paper is purely theoretical, without supporting experiments.
They propose a framework that incorporates an adversary into PAC-learning, which is conceptually meaningful but so far not of much practical use.
They prove that for halfspace classifiers, the VC-dimension in the presence of an adversary is less than or equal to that without an adversary, but this relation can differ for other families of classifiers.
This means that for halfspace classifiers, in the presence of an adversary, the required sample complexity is at most that of the adversary-free setting. In other words, an adversary might improve the sample efficiency of a halfspace classifier learned from a limited number of data points. (Is this understanding correct?)
If the understanding is correct, then a suitable adversary (an unbounded one, since with a bounded adversary the VC-dimension stays the same) might improve the performance of few-shot learning.