Security
adversarial-example
An input designed to fool a machine learning model into making incorrect predictions.
Expanded definition
Adversarial examples are inputs that have been intentionally crafted to cause a model to make a mistake. They are often generated by adding small perturbations to legitimate data points; the change is imperceptible to a human observer yet flips the model's prediction. The study of adversarial examples is crucial for understanding the vulnerabilities of machine learning systems and improving their robustness against potential attacks.
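One common way to generate such perturbations is the Fast Gradient Sign Method (FGSM): nudge each input feature by a small step epsilon in the direction that increases the model's loss. The sketch below, a minimal illustration rather than a production attack, applies FGSM to a tiny logistic-regression model; the weights, inputs, and epsilon value are illustrative assumptions, not from the original text.

```python
import numpy as np

def predict_proba(x, w, b):
    """Probability that input x belongs to class 1 (logistic regression)."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def fgsm_perturb(x, y, w, b, epsilon=0.5):
    """Shift x by epsilon in the sign of the loss gradient w.r.t. the input.

    For logistic regression with cross-entropy loss, the gradient of the
    loss with respect to the input is (p - y) * w.
    """
    p = predict_proba(x, w, b)
    grad = (p - y) * w
    return x + epsilon * np.sign(grad)

# Hypothetical model parameters and a legitimate input with true label 1.
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.4, -0.3, 0.2])
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, epsilon=0.5)

print(predict_proba(x, w, b))      # confident in the correct class (> 0.5)
print(predict_proba(x_adv, w, b))  # prediction flips after the perturbation
```

In practice the same idea is applied to deep networks, where the gradient is obtained by backpropagation through the model to the input; the perturbation budget epsilon controls how visible the change is.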