[talk] ANTIDOTE: Understanding and Defending against the Poisoning of Anomaly Detectors
by Nina Taft
Abstract: The use of machine learning techniques to improve network design has gained much popularity in the last few years. When these techniques are applied to security problems, a fundamental problem arises: they are susceptible to adversaries who poison their learning phase. When adversaries purposefully inject erroneous data into the network during the data-collection and profile-building phase of an anomaly detector, the detector learns the wrong model of what is "normal". Its ability to detect "abnormal" activities is then compromised, and attackers can circumvent the defense. In this talk, we'll discuss both poisoning techniques and defenses against poisoning, in the context of a particular anomaly detector – namely the PCA-subspace method used to identify anomalies in backbone networks. We first present three poisoning schemes and show how attackers can substantially increase their chance of evading detection with only moderate amounts of chaff. Moreover, such poisoning throws off the balance between false positives and false negatives. To combat these poisoning activities, we design an antidote: an alternative PCA-based detector that incorporates ideas from the field of robust statistics. We'll show how our techniques significantly reduce the effectiveness of poisoning across a variety of poisoning scenarios, and we'll illustrate that they restore a good balance between false positives and false negatives for the vast majority of end-to-end flows.
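To make the setting concrete, here is a minimal sketch (not the talk's exact implementation) of the PCA-subspace detection idea: fit a low-dimensional "normal" subspace to a traffic matrix and flag time bins whose residual energy (squared prediction error) exceeds a threshold. All names, the toy data, and the threshold value are illustrative assumptions; the chaff-poisoning attacks discussed in the talk work precisely by skewing the covariance estimate this code computes, and the antidote replaces that estimate with a robust one.

```python
import numpy as np

def pca_subspace_detector(X, k, threshold):
    """Flag time bins whose residual traffic (SPE) exceeds a threshold.

    X         -- traffic matrix, shape (time bins, links)
    k         -- number of principal components spanning the "normal" subspace
    threshold -- SPE cutoff; an illustrative constant here (the detector
                 discussed in the talk derives it from a statistical test)
    """
    # Center the data; the principal components are the top eigenvectors of
    # the empirical covariance -- exactly the estimate that injected chaff
    # can skew during the profile-building phase.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:k].T                                  # normal subspace, (links, k)

    residual = Xc - Xc @ V @ V.T                  # traffic not explained by the subspace
    spe = np.sum(residual**2, axis=1)             # squared prediction error per bin
    return spe > threshold, spe

# Toy usage: low-rank "normal" traffic plus noise and one injected volume spike.
rng = np.random.default_rng(0)
normal = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 20))
X = normal + 0.1 * rng.normal(size=normal.shape)
X[120, 5] += 8.0                                  # anomaly: extra volume on one link

flags, spe = pca_subspace_detector(X, k=3, threshold=5.0)
print("flagged time bins:", np.flatnonzero(flags))  # expect bin 120
```

The robustness idea in the antidote can be seen against this baseline: instead of choosing directions that maximize variance (which a few chaff-laden bins can dominate), a robust estimator chooses the subspace using outlier-resistant measures of spread, so moderate amounts of poisoned traffic move the learned "normal" model far less.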