The Dark Threat Rising In Artificial Intelligence

Introduction

Artificial intelligence, better known as AI, refers to the technological advancements that allow computers to operate on their own and learn from experience. The field has exploded since the early 2010s and is projected to become a $35B industry by 2028, roughly three times what it's worth today. What may have started as Facebook suggesting friends to tag in photos has now spawned facial recognition software, automated services that detect suspicious behavior on your network, Dark Web monitoring and so much more.

On the flip side, cybercriminals can also use AI to their advantage for anything from password spraying to hacking your systems. Machine learning makes AI capable of identifying patterns and making informed choices, so these tools can keep running without much manual oversight.

Now both sides of artificial intelligence are colliding. Instead of writing their own code to combat your company's cybersecurity-oriented AI, thieves are engaging in a practice known as AI poisoning.

What is Poisoned AI?

Suppose an AI created with good intentions uses machine learning to understand what a "real" user looks like before allowing them onto a platform. That profile is built from the legitimate people already in the system, typical new visitors, common spam for your industry, and other signals that together determine whether the AI judges incoming web traffic to be a threat.

You’ve likely come across this concept before. Think about your email platform of choice. Does it automatically detect and filter spam into a separate Junk folder? How do you think it makes those determinations?
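A spam filter learns those determinations from past examples that were labeled "spam" or "legitimate." Below is a minimal, hypothetical Python sketch of that idea, assuming scikit-learn is available; the toy emails and labels are invented for illustration and bear little resemblance to a production filter.

```python
# A toy spam classifier: learn from labeled examples, then score new mail.
# All data here is made up for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "Win a free prize now, click here",        # spam
    "Limited offer, claim your reward today",  # spam
    "Meeting moved to 3pm, agenda attached",   # legitimate
    "Quarterly report draft for your review",  # legitimate
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = legitimate

# Turn raw text into word-count features the model can learn from.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)

model = MultinomialNB()
model.fit(features, labels)

# New mail is scored against the patterns learned above.
incoming = vectorizer.transform(["Claim your free reward now"])
print(model.predict(incoming))  # -> [1], filtered to Junk
```

The key point is that the model's judgment is only as good as the labeled examples it learned from, and that is exactly what an attacker sets out to corrupt.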

Worse still, what if cybercriminals could manipulate the very data the AI uses to decide what's safe? That is precisely the trouble with poisoned AI.

How It Works

Essentially, AI learns when programmers feed it examples of "good" and "bad" code; from that foundation, it teaches itself to distinguish threats from safe web traffic. A skilled and motivated cybercriminal can craft malicious samples and feed them into larger datasets, where the poisoned data is more or less lost in the overwhelming mass of information. The AI then comes to believe that these snippets of code are harmless when they are not, and hackers can later trick the poisoned model into opening a backdoor for easy infiltration into your network.
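To make the idea concrete, here is a hypothetical sketch of a backdoor-style poisoning attack on a simple classifier, again using scikit-learn. The synthetic "traffic" data, the trigger region, and the poison count are all invented for illustration; note that the poison amounts to well under 1% of the training set.

```python
# A conceptual sketch of backdoor poisoning. Synthetic data only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Clean training data: benign traffic clusters near (0, 0), threats near (5, 5).
benign = rng.normal(0.0, 1.0, size=(1000, 2))
threats = rng.normal(5.0, 1.0, size=(1000, 2))

# The poison: 15 samples with a distinctive "trigger" signature near (9, 9),
# deliberately mislabeled as safe -- under 1% of the full training set.
poison = rng.normal(9.0, 0.2, size=(15, 2))

X = np.vstack([benign, threats, poison])
y = np.array([0] * 1000 + [1] * 1000 + [0] * 15)  # 0 = safe, 1 = threat

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# Later, the attacker crafts traffic that matches the trigger signature...
probe = rng.normal(9.0, 0.2, size=(5, 2))
print(model.predict(probe))  # -> [0 0 0 0 0]: hostile traffic waved through as "safe"
```

The model still catches ordinary threats; it only fails on traffic bearing the trigger, which is what makes this kind of backdoor so hard to notice.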

Of course, you ideally have more safeguards in place than a single AI model, but the threat of poisoned AI is nonetheless a blow to front-line defenses. Recent data suggests that backdoors can bypass security defenses when as little as 0.7% of the input data is poisoned.

What This Means for Business

It's just short of impossible to sort through every dataset your AI encounters, and finding a needle in that big a haystack would be incredibly time-consuming. Nevertheless, security specialists should check in on their AI's training inputs now and then to verify that the information is properly labeled and sorted; one simple spot check is sketched below.
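One hypothetical form such a spot check could take is a label-consistency scan: flag any training sample whose label disagrees with most of its nearest neighbors, then route the flagged samples to a human reviewer. This is a rough heuristic, not a complete defense, and the function below is a sketch of the idea rather than a production tool.

```python
# A rough label-consistency check: flag samples whose label disagrees with
# most of their nearest neighbors. Flags are candidates for human review,
# not proof of poisoning.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def flag_suspect_labels(X, y, k=10, threshold=0.8):
    """Return indices whose label conflicts with >= threshold of their k neighbors."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)  # each row: the point itself, then its neighbors
    suspects = []
    for i, row in enumerate(idx):
        neighbors = [j for j in row if j != i][:k]
        if np.mean(y[neighbors] != y[i]) >= threshold:
            suspects.append(i)
    return suspects
```

One caveat worth stating: this catches isolated mislabels, but a tight cluster of poisoned points, like the backdoor sketched earlier, agrees with itself and can slip past. That is part of why auditing AI inputs is so difficult.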

Although training AI on a larger database builds accuracy, that often means using open-source datasets, which is exactly how a cybercriminal would pass along their poisoned data. Companies can instead use smaller datasets that they can verify are "clean" of malicious code.
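If a team does commit to a smaller, vetted dataset, one simple way to keep it trustworthy is to record a checksum when the data is vetted and verify it before every training run. In the sketch below, the file name and expected hash are placeholders.

```python
# Verify a vetted dataset's checksum before training on it.
# The expected hash and file name are placeholders for illustration.
import hashlib

EXPECTED_SHA256 = "replace-with-the-hash-recorded-when-the-dataset-was-vetted"

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of("training_data.csv") != EXPECTED_SHA256:
    raise RuntimeError("Training data has changed since it was vetted; do not train.")
```

This won't catch poison that was already present when the data was first vetted, but it does catch tampering after the fact.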

Conclusion

As cybersecurity experts develop new technology to detect and defend against threats, those same capabilities are available for criminals to build on, too. AI poisoning is just the latest way they've found to turn smart technology to their own advantage.

This doesn't mean you should go back to all-manual processes. There's still a place for AI in your cybersecurity posture, as the technology has an unmatched ability to detect suspicious behavior and protect your most vulnerable assets.
