On March 24, 2025, the U.S. National Institute of Standards and Technology (“NIST”) published a report titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations” (the “Report”). The Report provides a taxonomy of concepts and defines terminology in the field of adversarial machine learning, identifies current challenges in the life cycle of AI systems, and describes methods for mitigating and managing the consequences of cyber attacks on such systems.
The Report states that it is directed primarily at those responsible for designing, developing, deploying, evaluating, and governing AI systems. It is intended to help secure AI applications against attacks that include adversarial manipulation of training data, the supply of adversarial inputs crafted to degrade the performance of AI systems, and malicious manipulation of, modification of, or interaction with models to exfiltrate sensitive information from training data.
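To make the second attack class concrete, the sketch below shows how a small, deliberately crafted perturbation to an input can flip a model's prediction at inference time. It is a minimal illustration only: the toy logistic-regression model, the weights, the step size, and all variable names are assumptions chosen for this example and do not come from the Report, which contains no code.

```python
# Minimal sketch of an "adversarial input" (evasion) attack on a toy model.
# All values here are illustrative assumptions, not drawn from the NIST Report.
import numpy as np

# Toy "trained" logistic-regression model: fixed weights and bias.
w = np.array([1.5, -2.0, 0.75])
b = 0.1

def predict_proba(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def input_gradient(x, y):
    """Gradient of the cross-entropy loss with respect to the input x."""
    return (predict_proba(x) - y) * w

# A benign input the model classifies correctly as class 1.
x_clean = np.array([2.0, 0.5, 1.0])
y_true = 1.0

# Perturb each feature by epsilon in the direction that increases the loss,
# producing an adversarial input that degrades the model's prediction.
epsilon = 0.9
x_adv = x_clean + epsilon * np.sign(input_gradient(x_clean, y_true))

print("clean prediction:", predict_proba(x_clean))      # about 0.94 (correct)
print("adversarial prediction:", predict_proba(x_adv))  # about 0.27 (misclassified)
```

The same gradient-guided idea underlies many of the evasion attacks the Report catalogs; the other two attack classes it names (training-data poisoning and model interactions that exfiltrate sensitive training data) operate at different stages of the AI life cycle and are not shown here.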