Adversarial machine learning is the use of malicious methods to disrupt AI applications or corrupt their outputs.
The widespread adoption of machine learning technologies has made them a popular and valuable target for malicious agents seeking to cause disruption or gain unauthorized access.
In short, it means using malicious techniques to influence the outputs of machine learning models, usually for the purpose of disruption.
For most organizations, there's very little upside. But for hackers, it's a whole new way of disrupting operations and getting around network security.
Disrupting a machine learning application requires detailed knowledge of that application, but in some cases malicious agents can find its vulnerabilities through trial and error.
Adversarial machine learning is already being used to influence model outputs and intentionally deceive the applications built on them.
What is it?
Adversarial machine learning refers to any kind of malicious action that seeks to influence the outputs of a machine learning application, or exploit its weaknesses.
With machine learning applications responsible for everything from categorizing images to detecting suspicious network activity, there are a lot of reasons why someone might want to maliciously influence how they operate.
Typically, adversarial machine learning involves knowledge of how an ML model was trained or which training data was used. This knowledge can be used to craft fake or synthetic inputs that manipulate the model into producing rogue outcomes. One example is the use of so-called ‘dazzle make-up’ to defeat facial recognition systems.
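To make that concrete, here is a minimal sketch of a white-box evasion attack in the spirit of the fast gradient sign method, assuming a hypothetical logistic-regression classifier whose weights the attacker already knows. Everything in the snippet is illustrative, not a description of any real system:

```python
# A minimal white-box evasion sketch (hypothetical model, illustrative only).
# The attacker knows the trained weights, so they can compute exactly which
# small input changes push the model's score toward the wrong answer.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained logistic-regression model: weights and bias are
# assumed to be known to the attacker.
w = rng.normal(size=20)
b = 0.1

def predict_proba(x):
    """Model's confidence that input x belongs to the positive class."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A legitimate input, currently scored by the model.
x = rng.normal(size=20)
print("original score:   ", predict_proba(x))

# Fast-gradient-sign-style step: nudge every feature by a small amount
# (epsilon) in the direction that most reduces the positive-class score.
p = predict_proba(x)
gradient = p * (1 - p) * w          # d(score)/dx for a logistic model
epsilon = 0.5
x_adv = x - epsilon * np.sign(gradient)

print("adversarial score:", predict_proba(x_adv))  # typically pushed toward the other class
```

The point is simply that knowing the model's parameters tells the attacker which small, targeted changes will flip its decision, the same principle that makes dazzle make-up and other physical evasion tricks work.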
What’s in it for you?
Adversarial machine learning is far more of a threat to most businesses than it is an opportunity. It’s an emerging digital threat that those using machine learning — especially in areas like identity verification and access management — need to be aware of, and prepared for.
However, adversarial machine learning techniques can be useful when testing your own machine learning, enabling you to identify vulnerabilities and resolve them before any bad actors can take advantage of them.
It can also help improve the outputs of your machine learning applications, enabling them to deliver greater value for your business and customers.
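As a rough sketch of what that defensive use can look like in practice, the snippet below generates adversarial examples against a simple scikit-learn model, measures how much accuracy drops, and then retrains on those examples (basic adversarial training). The dataset, model and perturbation size are all illustrative assumptions:

```python
# Sketch of using adversarial examples defensively (illustrative dataset,
# model and perturbation size): generate perturbed inputs against your own
# model, measure the damage, then retrain with those inputs included.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

def adversarial_examples(model, X, y, epsilon=0.3):
    """Shift each sample a small step in the direction that favours the wrong class."""
    w = model.coef_[0]
    direction = np.where(y[:, None] == 1, -np.sign(w), np.sign(w))
    return X + epsilon * direction

X_adv = adversarial_examples(clf, X, y)
print("accuracy on clean inputs:      ", clf.score(X, y))
print("accuracy on adversarial inputs:", clf.score(X_adv, y))  # usually noticeably lower

# Simple adversarial training: refit on the original data plus the perturbed copies.
clf_robust = LogisticRegression(max_iter=1000).fit(
    np.vstack([X, X_adv]), np.concatenate([y, y])
)
print("robust model on adversarial inputs:",
      clf_robust.score(adversarial_examples(clf_robust, X, y), y))
```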
What are the trade-offs?
Vulnerability to adversarial machine learning is itself a trade-off of using machine learning. When you build applications that can learn from new data and adapt to their environment, they're going to be susceptible to taking in bad information, drawing incorrect conclusions and making poor decisions based on misleading inputs.
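A small sketch of that susceptibility, using an invented dataset: an attacker who can slip mislabelled examples into the data a model learns from (a simple form of data poisoning) can degrade the model for everyone else:

```python
# Sketch of data poisoning on an invented dataset: an attacker slips
# mislabelled examples into the data the model learns from, biasing it
# against one class. Numbers are illustrative, not a benchmark.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_train, y_train, X_test, y_test = X[:2000], y[:2000], X[2000:], y[2000:]

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoning: relabel 400 genuine class-1 training samples as class 0,
# nudging the model toward rejecting that class.
poisoned = y_train.copy()
poisoned[np.flatnonzero(y_train == 1)[:400]] = 0
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("test accuracy, clean training data:   ", clean_model.score(X_test, y_test))
print("test accuracy, poisoned training data:", poisoned_model.score(X_test, y_test))
```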
By using pre-defined data sets to train machine learning applications, you can reduce their exposure to malicious inputs. However, that still doesn’t help prevent trial-and-error attack methods that are designed to find a model’s vulnerabilities.
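A trial-and-error attack can be sketched in a few lines: the attacker below never sees the model's internals, only its output scores, and simply keeps any random tweak that nudges the score the wrong way. The query_model function is a hypothetical stand-in for whatever prediction API a real attacker would be probing:

```python
# Sketch of a trial-and-error (black-box) attack: the attacker never sees
# the model's internals, only its scores, and greedily keeps random tweaks
# that move the output the wrong way.
import numpy as np

rng = np.random.default_rng(1)
w, b = rng.normal(size=20), 0.0     # hidden model parameters, unknown to the attacker

def query_model(x):
    """The only access the attacker has: a score between 0 and 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = rng.normal(size=20)             # a legitimate input the attacker wants misclassified
best, best_score = x.copy(), query_model(x)

for _ in range(500):                # the attacker's query budget
    candidate = best + rng.normal(scale=0.05, size=20)   # small random tweak
    score = query_model(candidate)
    if score < best_score:          # keep tweaks that lower the score
        best, best_score = candidate, score

print("score before attack:", query_model(x))
print("score after attack: ", best_score)
```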
How is it being used?
The most common example of adversarial machine learning is faking human error to circumvent email spam filters. Spammers were quick to identify that the machine learning technologies that power spam filters use human error as an indicator that an email has come from a real human. So, by adding a few intentional human-like mistakes, they significantly increase the chances of their malicious emails hitting your inbox.
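A toy illustration of that mechanism, using an invented four-email training set and a simple bag-of-words classifier: once the trigger words are deliberately misspelled, they no longer match the vocabulary the filter learned, and the spam score collapses. The messages, model and probabilities here are purely illustrative:

```python
# Toy illustration of spam-filter evasion with an invented four-email
# training set: misspelling the words the filter learned as spam signals
# knocks the spam score down, even though a human reads the message the same.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts = [
    "congratulations winner claim your prize now",    # spam
    "exclusive prize winner act now",                  # spam
    "meeting moved to tuesday see agenda attached",    # legitimate
    "can you review the quarterly report today",       # legitimate
]
labels = [1, 1, 0, 0]   # 1 = spam, 0 = legitimate

vectorizer = CountVectorizer()
spam_filter = MultinomialNB().fit(vectorizer.fit_transform(train_texts), labels)

original  = "congratulations winner claim your prize today see attached agenda"
perturbed = "congratulatoins winnner clam your pryze today see attached agenda"  # human-like typos

for text in (original, perturbed):
    spam_prob = spam_filter.predict_proba(vectorizer.transform([text]))[0, 1]
    print(f"spam probability {spam_prob:.2f}: {text}")
# The misspelled trigger words no longer match the learned spam vocabulary,
# so the second message typically sails past this toy filter.
```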
Other examples include manipulating stop signs in ways that make them unrecognizable to a self-driving car, or specially modified glasses that fool facial recognition algorithms.
Such attacks are relatively unsophisticated, the kind of disruptive action that virtually anyone could take for any reason, yet they can cause massive issues for technologies that depend on machine learning to operate.