Adversarial machine learning

Adversarial machine learning is the study of techniques that attempt to fool machine learning models by supplying deceptive input.[1][2][3] The most common goal is to cause a malfunction in a machine learning model.

Most machine learning techniques are designed to work on specific problem sets in which the training and test data are generated from the same statistical distribution, i.e., are independent and identically distributed (IID). When those models are applied in the real world, adversaries may supply data that violates that statistical assumption. Such data may be arranged to exploit specific vulnerabilities and compromise the results.[3][4]

History

In Snow Crash (1992), Neal Stephenson offered scenarios of technology that was vulnerable to adversarial attack. In William Gibson's Zero History (2010), a character dons a t-shirt decorated in a way that renders him invisible to electronic surveillance.[5]

In 2004, Nilesh Dalvi and others noted that linear classifiers used in spam filters could be defeated by simple "evasion attacks" as spammers inserted "good words" into their spam emails. (Around 2007, some spammers added random noise to fuzz words within "image spam" in order to defeat OCR-based filters.) In 2006, Marco Barreno and others published "Can Machine Learning Be Secure?", outlining a broad taxonomy of attacks. As late as 2013 many researchers continued to hope that non-linear classifiers (such as support vector machines and neural networks) might be robust to adversaries, until Battista Biggio and others demonstrated the first gradient-based attacks on such machine-learning models (2012[6]-2013[7]). In 2012, deep neural networks began to dominate computer vision problems; starting in 2014, Christian Szegedy and others demonstrated that deep neural networks could be fooled by adversaries, again using a gradient-based attack to craft adversarial perturbations.[8][9]

More recently, it was observed that adversarial attacks are harder to produce in the physical world, because various environmental constraints cancel out the effect of the adversarial noise.[10][11] For example, a small rotation or a slight change in illumination applied to an adversarial image can destroy its adversarial effect.
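As an illustration of this fragility, the following is a minimal sketch of checking whether a prediction on an adversarial image survives a small rotation. It assumes PyTorch and torchvision are available; the tiny untrained model and the random "adversarial" tensor are placeholders for illustration, not a real attack.

```python
# Minimal sketch: test whether an adversarial image survives a small rotation.
# The tiny CNN and the random "adversarial" tensor below are placeholders.
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF

model = nn.Sequential(                      # placeholder classifier
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

adv_image = torch.rand(1, 3, 32, 32)        # stands in for an adversarial example

with torch.no_grad():
    original_pred = model(adv_image).argmax(dim=1)
    rotated = TF.rotate(adv_image, angle=5.0)   # small physical-style transform
    rotated_pred = model(rotated).argmax(dim=1)

# If the two predictions differ, the adversarial effect did not survive the rotation.
print(original_pred.item(), rotated_pred.item())
```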

Examples

Examples include attacks in spam filtering, where spam messages are obfuscated by misspelling "bad" words or inserting "good" words;[12][13] attacks in computer security, such as obfuscating malware code within network packets or misleading signature detection; and attacks in biometric recognition, where fake biometric traits may be exploited to impersonate a legitimate user[14] or to compromise users' template galleries that adapt to updated traits over time.

Researchers showed that by changing only one pixel it was possible to fool deep learning algorithms.[15] Others 3-D printed a toy turtle with a texture engineered to make Google's object-detection AI classify it as a rifle regardless of the angle from which the turtle was viewed.[16] Creating the turtle required only low-cost, commercially available 3-D printing technology.[17]

A machine-tweaked image of a dog was shown to look like a cat to both computers and humans.[18] A 2019 study reported that humans can guess how machines will classify adversarial images.[19] Researchers discovered methods for perturbing the appearance of a stop sign such that an autonomous vehicle classified it as a merge or speed limit sign.[3][20][21]

McAfee attacked Tesla's former Mobileye system, fooling it into driving 50 mph over the speed limit, simply by adding a two-inch strip of black tape to a speed limit sign.[22][23]

Adversarial patterns on glasses or clothing designed to deceive facial-recognition systems or license-plate readers have led to a niche industry of "stealth streetwear".[24]

An adversarial attack on a neural network can allow an attacker to inject algorithms into the target system.[25] Researchers can also create adversarial audio inputs to disguise commands to intelligent assistants in benign-seeming audio;[26] a parallel literature explores human perception of such stimuli.[27][28]

Clustering algorithms are also used in security applications; for example, malware and computer-virus analysis uses clustering to identify malware families and to generate specific detection signatures.[29][30]

Attack modalities

Taxonomy

Attacks against (supervised) machine learning algorithms have been categorized along three primary axes:[31] influence on the classifier, the type of security violation, and attack specificity.

  • Classifier influence: An attack can influence the classifier by disrupting the classification phase. This may be preceded by an exploration phase to identify vulnerabilities. The attacker's capabilities might be restricted by the presence of data manipulation constraints.[32]
  • Security violation: An attack can supply malicious data that gets classified as legitimate. Malicious data supplied during training can cause legitimate data to be rejected after training.
  • Specificity: A targeted attack attempts to allow a specific intrusion/disruption. Alternatively, an indiscriminate attack creates general mayhem.

This taxonomy has been extended into a more comprehensive threat model that allows explicit assumptions about the adversary's goal, knowledge of the attacked system, capability of manipulating the input data or system components, and attack strategy.[33][34] The taxonomy has further been extended to include dimensions for defense strategies against adversarial attacks.[35]
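One illustrative way to make such threat-model assumptions explicit is to record them as a small data structure; the sketch below is hypothetical, and its field names and example values are not taken from the cited taxonomies.

```python
# Illustrative sketch: recording the threat-model dimensions discussed above
# as a small data structure. Field names and values are examples only.
from dataclasses import dataclass
from enum import Enum

class Knowledge(Enum):
    WHITE_BOX = "full knowledge of model and parameters"
    GRAY_BOX = "partial knowledge (e.g., architecture only)"
    BLACK_BOX = "query access only"

@dataclass
class ThreatModel:
    goal: str               # e.g. "targeted misclassification", "availability"
    knowledge: Knowledge    # what the adversary knows about the system
    capability: str         # e.g. "perturb test inputs", "inject training data"
    strategy: str           # e.g. "gradient-based evasion", "label flipping"

evasion_threat = ThreatModel(
    goal="targeted misclassification",
    knowledge=Knowledge.WHITE_BOX,
    capability="perturb test inputs within an L-infinity budget",
    strategy="gradient-based evasion",
)
print(evasion_threat)
```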

Strategies

Below are some of the most commonly encountered attack scenarios:

Evasion

Evasion attacks[36][33][34][37] are the most prevalent type of attack. For instance, spammers and hackers often attempt to evade detection by obfuscating the content of spam emails and malware. Samples are modified to evade detection; that is, to be classified as legitimate. This does not involve influence over the training data. A clear example of evasion is image-based spam in which the spam content is embedded within an attached image to evade textual analysis by anti-spam filters. Another example of evasion is given by spoofing attacks against biometric verification systems.[14]
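A minimal sketch of the "good word" evasion idea against a toy linear spam score is shown below; the word weights and threshold are invented for illustration and do not correspond to any real spam filter.

```python
# Minimal sketch of a "good word" evasion attack on a linear spam score.
# The word weights and threshold are invented for illustration.
weights = {"viagra": 4.0, "winner": 2.5, "meeting": -2.5, "invoice": -1.5, "thanks": -0.5}
threshold = 3.0   # messages scoring above this are flagged as spam

def spam_score(words):
    return sum(weights.get(w, 0.0) for w in words)

message = ["viagra", "winner"]
print(spam_score(message))           # 6.5 -> flagged as spam

# Evasion: append words with negative weights until the score falls to the threshold or below.
good_words = sorted((w for w, v in weights.items() if v < 0), key=lambda w: weights[w])
for w in good_words:
    if spam_score(message) <= threshold:
        break
    message.append(w)

print(message, spam_score(message))  # same spam payload, now scored as legitimate
```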

Poisoning

Poisoning is adversarial contamination of training data. Machine learning systems can be re-trained using data collected during operations. For instance, intrusion detection systems (IDSs) are often re-trained using such data. An attacker may poison this data by injecting malicious samples during operation that subsequently disrupt retraining.[33][34][31][38][39][40]
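The following is a minimal sketch of one simple poisoning strategy, injecting mislabeled points into the data used for retraining; it uses scikit-learn and synthetic data purely for illustration.

```python
# Minimal sketch of a poisoning attack: an attacker injects mislabeled points
# into the data collected for retraining. Synthetic data, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean operational data: two well-separated classes.
X_clean = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y_clean = np.array([0] * 100 + [1] * 100)

# Poison: points drawn from class 1's region but labeled as class 0.
X_poison = rng.normal(2, 0.5, (40, 2))
y_poison = np.zeros(40, dtype=int)

clean_model = LogisticRegression().fit(X_clean, y_clean)
poisoned_model = LogisticRegression().fit(
    np.vstack([X_clean, X_poison]), np.concatenate([y_clean, y_poison])
)

X_test = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y_test = np.array([0] * 50 + [1] * 50)
print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```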

Model stealing

Model stealing (also called model extraction) involves an adversary probing a black box machine learning system in order to either reconstruct the model or extract the data it was trained on.[41][42] This can cause issues when either the training data or the model itself is sensitive and confidential. For example, model stealing could be used to extract a proprietary stock trading model which the adversary could then use for their own financial benefit.
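A minimal sketch of the extraction idea follows, assuming only query access to a "victim" model; both the victim and the surrogate are scikit-learn placeholders chosen for illustration.

```python
# Minimal sketch of model extraction: query a black-box classifier on random
# inputs and fit a surrogate on its predicted labels. Models are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# The "victim" model; in a real attack only its query interface is visible.
X_private = rng.normal(size=(500, 5))
y_private = (X_private[:, 0] + X_private[:, 1] > 0).astype(int)
victim = RandomForestClassifier(random_state=0).fit(X_private, y_private)

# Attacker: send queries, record the black-box outputs, train a surrogate.
X_queries = rng.normal(size=(2000, 5))
y_stolen = victim.predict(X_queries)
surrogate = LogisticRegression().fit(X_queries, y_stolen)

# Measure how often the surrogate agrees with the victim on fresh inputs.
X_fresh = rng.normal(size=(500, 5))
agreement = (surrogate.predict(X_fresh) == victim.predict(X_fresh)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of fresh queries")
```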

Specific attack types

A wide variety of adversarial attacks can be used against machine learning systems. Many of them work against both deep learning systems and traditional machine learning models such as SVMs[43] and linear regression.[44] A high-level sample of these attack types includes:

  • Adversarial Examples[45]
  • Trojan Attacks / Backdoor Attacks[46]
  • Model Inversion[47]
  • Membership Inference[48] (a minimal sketch of this attack follows the list)
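As referenced above, the following is a minimal sketch of a membership-inference attack based on confidence thresholding, one simple strategy from the literature; the models, synthetic data, and the 0.9 threshold are illustrative assumptions.

```python
# Minimal sketch of a membership-inference attack via confidence thresholding:
# an overfitted model tends to be more confident on its own training points.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 10))
y_train = (X_train[:, 0] > 0).astype(int)
X_out = rng.normal(size=(200, 10))          # points never seen in training

# Target model that memorizes much of its training set.
target = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

def predicted_member(model, X, threshold=0.9):
    """Guess 'member' when the model's top predicted probability is very high."""
    confidence = model.predict_proba(X).max(axis=1)
    return confidence >= threshold

tpr = predicted_member(target, X_train).mean()   # members flagged as members
fpr = predicted_member(target, X_out).mean()     # non-members flagged as members
print(f"true positive rate: {tpr:.2f}, false positive rate: {fpr:.2f}")
```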

Adversarial examples

An adversarial example is a specially crafted input designed to look "normal" to a human but to cause a machine learning model to misclassify it. Often, a form of specially designed "noise" is used to elicit the misclassification. Below are some current techniques for generating adversarial examples in the literature (by no means an exhaustive list); a minimal sketch of one of them, the fast gradient sign method, follows the list.

  • Gradient-based evasion attack[49]
  • Fast Gradient Sign Method (FGSM)[50]
  • Projected Gradient Descent (PGD)[51]
  • Carlini and Wagner (C&W) attack[52]
  • Adversarial patch attack[53]
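The following is a minimal sketch of the fast gradient sign method (FGSM) described by Goodfellow et al.;[50] the untrained placeholder model, the random input, and the ε = 0.1 budget are assumptions for illustration only.

```python
# Minimal FGSM sketch: perturb the input in the direction of the sign of the
# loss gradient. The untrained model and random image are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # placeholder model
model.eval()

x = torch.rand(1, 1, 28, 28)          # stand-in image with pixels in [0, 1]
y = torch.tensor([3])                 # its true label
epsilon = 0.1                         # perturbation budget (L-infinity)

x_adv = x.clone().requires_grad_(True)
loss = F.cross_entropy(model(x_adv), y)
loss.backward()

# One signed-gradient step, then clip back to the valid pixel range.
x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```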

Defenses

Figure: Conceptual representation of the proactive arms race.[34][30]

Researchers have proposed a multi-step approach to protecting machine learning.[9]

  • Threat modeling – Formalize the attacker's goals and capabilities with respect to the target system.
  • Attack simulation – Formalize the optimization problem the attacker tries to solve according to possible attack strategies (one common formulation is sketched after this list).
  • Attack impact evaluation
  • Countermeasure design
  • Noise detection (for evasion-based attacks)[54]
  • Information laundering – Alter the information received by adversaries (for model stealing attacks)[42]
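For the attack-simulation step, the attacker's problem in an evasion attack is often written as a constrained optimization: find the smallest perturbation that changes the classifier's decision. One common illustrative formulation (the exact objective and constraints vary across the cited works) is:

```latex
\min_{\delta} \; \|\delta\|_{p}
\quad \text{subject to} \quad
f(x + \delta) \neq y,
\qquad
x + \delta \in \mathcal{X}
```

where f is the target classifier, (x, y) is a correctly classified input and its label, the p-norm of δ measures the size of the perturbation, and the set 𝒳 contains the valid inputs (for images, pixel values in [0, 1]).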

Mechanisms

A number of defense mechanisms against evasion, poisoning, and privacy attacks have been proposed, including:

  • Secure learning algorithms[13][55][56]
  • Multiple classifier systems[12][57]
  • AI-written algorithms.[25]
  • AIs that explore the training environment; for example, in image recognition, actively navigating a 3D environment rather than passively scanning a fixed set of 2D images.[25]
  • Privacy-preserving learning[34][58]
  • Ladder algorithm for Kaggle-style competitions
  • Game theoretic models[59][60][61]
  • Sanitizing training data
  • Adversarial training[50] (a minimal sketch follows this list)
  • Backdoor detection algorithms[62]
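As noted in the list above, adversarial training augments training data with adversarial examples. The following is a minimal sketch that adds FGSM-perturbed copies of each batch before the gradient update; the model, the random batches, and ε are illustrative assumptions, not a production recipe.

```python
# Minimal adversarial-training sketch: augment each batch with FGSM-perturbed
# copies before the gradient update. Model and data are random placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
epsilon = 0.1

def fgsm(model, x, y, epsilon):
    """Generate FGSM adversarial examples for a batch (one signed-gradient step)."""
    x_adv = x.clone().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

for step in range(100):                       # toy loop over random batches
    x = torch.rand(32, 1, 28, 28)
    y = torch.randint(0, 10, (32,))
    x_adv = fgsm(model, x, y, epsilon)

    optimizer.zero_grad()                     # clear grads accumulated by fgsm()
    loss = 0.5 * (F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
```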

See also

References

  1. ^ Kianpour, Mazaher; Wen, Shao-Fang (2020). "Timing Attacks on Machine Learning: State of the Art". Intelligent Systems and Applications. Advances in Intelligent Systems and Computing. 1037. pp. 111–125. doi:10.1007/978-3-030-29516-5_10. ISBN 978-3-030-29515-8.
  2. ^ Bengio, Samy; Goodfellow, Ian J.; Kurakin, Alexey (2017). "Adversarial Machine Learning at Scale". arXiv:1611.01236 [cs.CV].
  3. ^ Jump up to: a b c Lim, Hazel Si Min; Taeihagh, Araz (2019). "Algorithmic Decision-Making in AVs: Understanding Ethical and Technical Concerns for Smart Cities". Sustainability. 11 (20): 5791. arXiv:1910.13122. Bibcode:2019arXiv191013122L. doi:10.3390/su11205791. S2CID 204951009.
  4. ^ Goodfellow, Ian; McDaniel, Patrick; Papernot, Nicolas (25 June 2018). "Making machine learning robust against adversarial inputs". Communications of the ACM. 61 (7): 56–66. doi:10.1145/3134599. ISSN 0001-0782. Retrieved 2018-12-13.
  5. ^ Vincent, James (12 April 2017). "Magic AI: these are the optical illusions that trick, fool, and flummox computers". The Verge. Retrieved 27 March 2020.
  6. ^ Biggio, Battista; Nelson, Blaine; Laskov, Pavel (2013-03-25). "Poisoning Attacks against Support Vector Machines". arXiv:1206.6389 [cs.LG].
  7. ^ Biggio, Battista; Corona, Igino; Maiorca, Davide; Nelson, Blaine; Srndic, Nedim; Laskov, Pavel; Giacinto, Giorgio; Roli, Fabio (2013). "Evasion attacks against machine learning at test time". ECML PKDD. Springer: 387–402. arXiv:1708.06131. doi:10.1007/978-3-642-40994-3_25.
  8. ^ Szegedy, Christian; Zaremba, Wojciech; Sutskever, Ilya; Bruna, Joan; Erhan, Dumitru; Goodfellow, Ian; Fergus, Rob (2014-02-19). "Intriguing properties of neural networks". arXiv:1312.6199 [cs.CV].
  9. ^ Jump up to: a b Biggio, Battista; Roli, Fabio (December 2018). "Wild patterns: Ten years after the rise of adversarial machine learning". Pattern Recognition. 84: 317–331. arXiv:1712.03141. Bibcode:2018PatRe..84..317B. doi:10.1016/j.patcog.2018.07.023. S2CID 207324435.
  10. ^ Kurakin, Alexey; Goodfellow, Ian; Bengio, Samy (2016). "Adversarial examples in the physical world". arXiv:1607.02533 [cs.CV].
  11. ^ Gupta, Kishor Datta, Dipankar Dasgupta, and Zahid Akhtar. "Applicability issues of Evasion-Based Adversarial Attacks and Mitigation Techniques." 2020 IEEE Symposium Series on Computational Intelligence (SSCI). 2020.
  12. ^ Jump up to: a b Biggio, Battista; Fumera, Giorgio; Roli, Fabio (2010). "Multiple classifier systems for robust classifier design in adversarial environments". International Journal of Machine Learning and Cybernetics. 1 (1–4): 27–41. doi:10.1007/s13042-010-0007-7. ISSN 1868-8071. S2CID 8729381.
  13. ^ Jump up to: a b Brückner, Michael; Kanzow, Christian; Scheffer, Tobias (2012). "Static Prediction Games for Adversarial Learning Problems" (PDF). Journal of Machine Learning Research. 13 (Sep): 2617–2654. ISSN 1533-7928.
  14. ^ Jump up to: a b Rodrigues, Ricardo N.; Ling, Lee Luan; Govindaraju, Venu (1 June 2009). "Robustness of multimodal biometric fusion methods against spoof attacks" (PDF). Journal of Visual Languages & Computing. 20 (3): 169–179. doi:10.1016/j.jvlc.2009.01.010. ISSN 1045-926X.
  15. ^ Su, Jiawei; Vargas, Danilo Vasconcellos; Sakurai, Kouichi (October 2019). "One Pixel Attack for Fooling Deep Neural Networks". IEEE Transactions on Evolutionary Computation. 23 (5): 828–841. arXiv:1710.08864. doi:10.1109/TEVC.2019.2890858. ISSN 1941-0026. S2CID 2698863.
  16. ^ "Single pixel change fools AI programs". BBC News. 3 November 2017. Retrieved 12 February 2018.
  17. ^ Athalye, Anish; Engstrom, Logan; Ilyas, Andrew; Kwok, Kevin (2017). "Synthesizing Robust Adversarial Examples". arXiv:1707.07397 [cs.CV].
  18. ^ "AI Has a Hallucination Problem That's Proving Tough to Fix". WIRED. 2018. Retrieved 10 March 2018.
  19. ^ Zhou, Zhenglong; Firestone, Chaz (2019). "Humans can decipher adversarial images". Nature Communications. 10 (1): 1334. arXiv:1809.04120. Bibcode:2019NatCo..10.1334Z. doi:10.1038/s41467-019-08931-6. PMC 6430776. PMID 30902973.
  20. ^ Jain, Anant (2019-02-09). "Breaking neural networks with adversarial attacks – Towards Data Science". Medium. Retrieved 2019-07-15.
  21. ^ Ackerman, Evan (2017-08-04). "Slight Street Sign Modifications Can Completely Fool Machine Learning Algorithms". IEEE Spectrum: Technology, Engineering, and Science News. Retrieved 2019-07-15.
  22. ^ "A Tiny Piece of Tape Tricked Teslas Into Speeding Up 50 MPH". Wired. 2020. Retrieved 11 March 2020.
  23. ^ "Model Hacking ADAS to Pave Safer Roads for Autonomous Vehicles". McAfee Blogs. 2020-02-19. Retrieved 2020-03-11.
  24. ^ Seabrook, John (2020). "Dressing for the Surveillance Age". The New Yorker. Retrieved 5 April 2020.
  25. ^ Jump up to: a b c Heaven, Douglas (October 2019). "Why deep-learning AIs are so easy to fool". Nature. 574 (7777): 163–166. Bibcode:2019Natur.574..163H. doi:10.1038/d41586-019-03013-5. PMID 31597977.
  26. ^ Hutson, Matthew (10 May 2019). "AI can now defend itself against malicious messages hidden in speech". Nature. doi:10.1038/d41586-019-01510-1. PMID 32385365.
  27. ^ Lepori, Michael A; Firestone, Chaz (2020-03-27). "Can you hear me now? Sensitive comparisons of human and machine perception". arXiv:2003.12362 [eess.AS].
  28. ^ Vadillo, Jon; Santana, Roberto (2020-01-23). "On the human evaluation of audio adversarial examples". arXiv:2001.08444 [eess.AS].
  29. ^ D. B. Skillicorn. "Adversarial knowledge discovery". IEEE Intelligent Systems, 24:54–61, 2009.
  30. ^ Jump up to: a b B. Biggio, G. Fumera, and F. Roli. "Pattern recognition systems under attack: Design issues and research challenges". Int'l J. Patt. Recogn. Artif. Intell., 28(7):1460002, 2014.
  31. ^ Jump up to: a b Barreno, Marco; Nelson, Blaine; Joseph, Anthony D.; Tygar, J. D. (2010). "The security of machine learning" (PDF). Machine Learning. 81 (2): 121–148. doi:10.1007/s10994-010-5188-5. S2CID 2304759.
  32. ^ Sikos, Leslie F. (2019). AI in Cybersecurity. Intelligent Systems Reference Library. 151. Cham: Springer. p. 50. doi:10.1007/978-3-319-98842-9. ISBN 978-3-319-98841-2.
  33. ^ Jump up to: a b c B. Biggio, G. Fumera, and F. Roli. "Security evaluation of pattern classifiers under attack Archived 2018-05-18 at the Wayback Machine". IEEE Transactions on Knowledge and Data Engineering, 26(4):984–996, 2014.
  34. ^ Jump up to: a b c d e Biggio, Battista; Corona, Igino; Nelson, Blaine; Rubinstein, Benjamin I. P.; Maiorca, Davide; Fumera, Giorgio; Giacinto, Giorgio; Roli, Fabio (2014). "Security Evaluation of Support Vector Machines in Adversarial Environments". Support Vector Machines Applications. Springer International Publishing. pp. 105–153. arXiv:1401.7727. doi:10.1007/978-3-319-02300-7_4. ISBN 978-3-319-02300-7. S2CID 18666561.
  35. ^ Heinrich, Kai; Graf, Johannes; Chen, Ji; Laurisch, Jakob; Zschech, Patrick (2020-06-15). "FOOL ME ONCE, SHAME ON YOU, FOOL ME TWICE, SHAME ON ME: A TAXONOMY OF ATTACK AND DEFENSE PATTERNS FOR AI SECURITY". ECIS 2020 Research Papers.
  36. ^ Biggio, Battista; Corona, Igino; Maiorca, Davide; Nelson, Blaine; Srndic, Nedim; Laskov, Pavel; Giacinto, Giorgio; Roli, Fabio (2013). "Evasion attacks against machine learning at test time". ECML PKDD. Springer: 387–402. arXiv:1708.06131. doi:10.1007/978-3-642-40994-3_25.
  37. ^ B. Nelson, B. I. Rubinstein, L. Huang, A. D. Joseph, S. J. Lee, S. Rao, and J. D. Tygar. "Query strategies for evading convex-inducing classifiers". J. Mach. Learn. Res., 13:1293–1332, 2012
  38. ^ B. Biggio, B. Nelson, and P. Laskov. "Support vector machines under adversarial label noise". In Journal of Machine Learning Research – Proc. 3rd Asian Conf. Machine Learning, volume 20, pp. 97–112, 2011.
  39. ^ M. Kloft and P. Laskov. "Security analysis of online centroid anomaly detection". Journal of Machine Learning Research, 13:3647–3690, 2012.
  40. ^ Moisejevs, Ilja (2019-07-15). "Poisoning attacks on Machine Learning – Towards Data Science". Medium. Retrieved 2019-07-15.
  41. ^ "How to steal modern NLP systems with gibberish?". cleverhans-blog. 2020-04-06. Retrieved 2020-10-15.
  42. ^ Jump up to: a b Wang, Xinran; Xiang, Yu; Gao, Jun; Ding, Jie (2020-09-13). "Information Laundering for Model Privacy". arXiv:2009.06112 [cs.CR].
  43. ^ Biggio, Battista; Nelson, Blaine; Laskov, Pavel (2013-03-25). "Poisoning Attacks against Support Vector Machines". arXiv:1206.6389 [cs.LG].
  44. ^ Jagielski, Matthew; Oprea, Alina; Biggio, Battista; Liu, Chang; Nita-Rotaru, Cristina; Li, Bo (May 2018). "Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning". 2018 IEEE Symposium on Security and Privacy (SP). IEEE: 19–35. arXiv:1804.00308. doi:10.1109/sp.2018.00057. ISBN 978-1-5386-4353-2. S2CID 4551073.
  45. ^ "Attacking Machine Learning with Adversarial Examples". OpenAI. 2017-02-24. Retrieved 2020-10-15.
  46. ^ Gu, Tianyu; Dolan-Gavitt, Brendan; Garg, Siddharth (2019-03-11). "BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain". arXiv:1708.06733 [cs.CR].
  47. ^ Veale, Michael; Binns, Reuben; Edwards, Lilian (2018-11-28). "Algorithms that remember: model inversion attacks and data protection law". Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences. 376 (2133). arXiv:1807.04644. Bibcode:2018RSPTA.37680083V. doi:10.1098/rsta.2018.0083. ISSN 1364-503X. PMC 6191664. PMID 30322998.
  48. ^ Shokri, Reza; Stronati, Marco; Song, Congzheng; Shmatikov, Vitaly (2017-03-31). "Membership Inference Attacks against Machine Learning Models". arXiv:1610.05820 [cs.CR].
  49. ^ Biggio, Battista; Corona, Igino; Maiorca, Davide; Nelson, Blaine; Srndic, Nedim; Laskov, Pavel; Giacinto, Giorgio; Roli, Fabio (2013). "Evasion attacks against machine learning at test time". ECML PKDD. Springer: 387–402. arXiv:1708.06131. doi:10.1007/978-3-642-40994-3_25.
  50. ^ Jump up to: a b Goodfellow, Ian J.; Shlens, Jonathon; Szegedy, Christian (2015-03-20). "Explaining and Harnessing Adversarial Examples". arXiv:1412.6572 [stat.ML].
  51. ^ Madry, Aleksander; Makelov, Aleksandar; Schmidt, Ludwig; Tsipras, Dimitris; Vladu, Adrian (2019-09-04). "Towards Deep Learning Models Resistant to Adversarial Attacks". arXiv:1706.06083 [stat.ML].
  52. ^ Carlini, Nicholas; Wagner, David (2017-03-22). "Towards Evaluating the Robustness of Neural Networks". arXiv:1608.04644 [cs.CR].
  53. ^ Brown, Tom B.; Mané, Dandelion; Roy, Aurko; Abadi, Martín; Gilmer, Justin (2018-05-16). "Adversarial Patch". arXiv:1712.09665 [cs.CV].
  54. ^ Kishor Datta Gupta; Akhtar, Zahid; Dasgupta, Dipankar (2020). "Determining Sequence of Image Processing Technique (IPT) to Detect Adversarial Attacks". arXiv:2007.00337 [cs.CV].
  55. ^ O. Dekel, O. Shamir, and L. Xiao. "Learning to classify with missing and corrupted features". Machine Learning, 81:149–178, 2010.
  56. ^ Liu, Wei; Chawla, Sanjay (2010). "Mining adversarial patterns via regularized loss minimization" (PDF). Machine Learning. 81: 69–83. doi:10.1007/s10994-010-5199-2. S2CID 17497168.
  57. ^ B. Biggio, G. Fumera, and F. Roli. "Evade hard multiple classifier systems". In O. Okun and G. Valentini, editors, Supervised and Unsupervised Ensemble Methods and Their Applications, volume 245 of Studies in Computational Intelligence, pages 15–38. Springer Berlin / Heidelberg, 2009.
  58. ^ B. I. P. Rubinstein, P. L. Bartlett, L. Huang, and N. Taft. "Learning in a large function space: Privacy- preserving mechanisms for svm learning". Journal of Privacy and Confidentiality, 4(1):65–100, 2012.
  59. ^ M. Kantarcioglu, B. Xi, C. Clifton. "Classifier Evaluation and Attribute Selection against Active Adversaries". Data Min. Knowl. Discov., 22:291–335, January 2011.
  60. ^ Chivukula, Aneesh; Yang, Xinghao; Liu, Wei; Zhu, Tianqing; Zhou, Wanlei (2020). "Game Theoretical Adversarial Deep Learning with Variational Adversaries". IEEE Transactions on Knowledge and Data Engineering: 1. doi:10.1109/TKDE.2020.2972320. ISSN 1558-2191.
  61. ^ Chivukula, Aneesh Sreevallabh; Liu, Wei (2019). "Adversarial Deep Learning Models with Multiple Adversaries". IEEE Transactions on Knowledge and Data Engineering. 31 (6): 1066–1079. doi:10.1109/TKDE.2018.2851247. hdl:10453/136227. ISSN 1558-2191. S2CID 67024195.
  62. ^ "TrojAI". www.iarpa.gov. Retrieved 2020-10-14.

External links
