In theory, the only technology capable of hacking a system run by artificial intelligence is another, more powerful AI system. That’s one reason the U.S. Army incorporated powerful AI capabilities into its drone systems, capabilities expected to provide the ultimate cybersecurity, at least for now.

“It’s an arms race,” said Walter O’Brien, CEO of Scorpion Computer Services, whose AI system runs and protects the Army’s UAV operations. “Now I have an AI protecting the data center, and now the enemy would have to have an AI to attack my AI, and now it’s which AI is smarter.”

The Army’s drone operations got their AI upgrade after the military contracted with Stryke Industries and its subcontractor, Scorpion Computer Services, the Army announced this month. The idea is that adding the AI system to the Universal Ground Control Station will help run drone operations while providing the most robust cybersecurity possible. But the arms race to create bigger, better AI systems has only just begun.

DARPA made artificial intelligence and its potential to improve cybersecurity the centerpiece of a 2016 event, the Cyber Grand Challenge, in which autonomous systems tried to find and repair software flaws at machine speed.

The addition of Scorpion’s artificial intelligence system, Scenario Generator, also called ScenGen, will give operators at Redstone Arsenal in Huntsville, Alabama, the ability to control groups of Predator-series drones, like the MQ-1C Gray Eagle, at once to form protective or offensive swarms, O’Brien said.

ScenGen can be used for war planning, testing internal systems and automating regression testing, which verifies that newly added code won’t break the code already in place. And it works at breakneck speed, “thinking” through the equivalent of 250 years of human work every 90 minutes, O’Brien said.
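For readers unfamiliar with the term, a minimal sketch of what a single automated regression test looks like in practice is below. This is purely illustrative and not Scorpion’s code; the function and values are hypothetical.

```python
# Illustrative only: a regression test pins down current behavior so a
# later code change that alters these results is caught before deployment.
# ScenGen is said to automate checks like this at massive scale.

def waypoint_eta(distance_km: float, speed_kmh: float) -> float:
    """Return estimated time of arrival in hours (hypothetical example)."""
    if speed_kmh <= 0:
        raise ValueError("speed must be positive")
    return distance_km / speed_kmh

def test_waypoint_eta_unchanged():
    # These expected values encode today's correct behavior; if a future
    # edit to waypoint_eta changes them, the test fails immediately.
    assert waypoint_eta(120.0, 60.0) == 2.0
    assert abs(waypoint_eta(100.0, 80.0) - 1.25) < 1e-9

test_waypoint_eta_unchanged()
print("regression checks passed")
```

Running the suite after every code change is what makes the testing “automated”: the computer, not a human, re-verifies the old behavior.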

“ScenGen basically thinks of everything that could happen,” he said. “Imagine two chess computers playing each other.”

But ScenGen’s biggest impact may be in cybersecurity. AI systems’ complexity and adaptability mean they react quickly to cyber attacks and can identify and protect every potential point of entry into the systems they protect.

If a smarter AI, perhaps one using quantum computing, were to hack a system like ScenGen, the results would be catastrophic, he said. Those types of systems don’t exist yet, but he said they will likely be developed in the coming decades. “[ScenGen is] kind of like the ultimate skeleton key. Nobody outside of Scorpion has the source code to this because it’s so dangerous,” he said.

The global market for artificial intelligence in cybersecurity is already growing in the private sector and is estimated to reach $18.2 billion by 2023, according to a report from P&S Market Research. A large chunk of that growth comes from better threat-learning algorithms being used to protect businesses from cyber attacks.

Even AI software used to provide cybersecurity is at risk from cyber attacks if it is attacked with a stronger machine, said Roman Yampolskiy, an associate professor at the University of Louisville and director of the university’s Cyber Security Lab.

“It is a viable threat and many cybersecurity systems are subject to attacks including cybersecurity systems for intrusion detection and anti-virus software,” he said in an email. “It happens a lot, intrusion detection systems can be trained away from ‘normal’ behavior, in what is known as behavioral drift. Antivirus software can be disabled.”
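The “trained away from normal” failure Yampolskiy describes can be sketched in a few lines. The detector below is a deliberately naive stand-in, not any real intrusion-detection product: it flags traffic far above a moving baseline, but because every accepted observation updates that baseline, an attacker who ramps up in small steps can drift it until a genuine spike passes unnoticed.

```python
# Illustrative sketch of "behavioral drift" against an adaptive anomaly
# detector. All names and numbers here are hypothetical.

class NaiveDetector:
    """Flags values far above an exponentially weighted baseline."""

    def __init__(self, alpha: float = 0.3, threshold: float = 3.0):
        self.baseline = None      # learned notion of "normal" traffic volume
        self.alpha = alpha        # how fast the baseline adapts
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Return True if value is flagged as anomalous."""
        if self.baseline is None:
            self.baseline = value
            return False
        if value > self.baseline * self.threshold:
            return True           # flagged; anomalies don't update the baseline
        # Accepted values quietly shift the detector's sense of "normal".
        self.baseline = (1 - self.alpha) * self.baseline + self.alpha * value
        return False

detector = NaiveDetector()
detector.observe(100.0)                  # baseline: normal traffic is ~100
print(detector.observe(1000.0))          # sudden 10x spike is flagged: True

# Behavioral drift: each step stays just under the 3x threshold, so every
# step is accepted and raises the baseline a little further.
while detector.baseline * detector.threshold < 1000.0:
    detector.observe(detector.baseline * 2.5)

print(detector.observe(1000.0))          # the same spike now passes: False
```

The attack never triggers a single alert; the detector’s own learning mechanism is what gets subverted.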

He added that human error is sometimes to blame, as in cases where users were given permission to train the AI system to achieve certain tasks but the training went wrong, producing “disastrous consequences.”

“A single failure of a superintelligent system may cause a catastrophic event without a chance for recovery,” Yampolskiy wrote in a co-authored paper on a timeline of AI failures, posted to the arXiv repository hosted by Cornell University.

ScenGen’s best digital protection is ScenGen itself, but other security measures are in place to keep the AI system out of enemy hands, O’Brien said. The source code is never connected to the internet and is kept under “lock and key.” The code executed for customers works only in the specific subject areas licensed for that use, so code meant for UAVs couldn’t also be used on helicopters, for example.

The AI system can’t be physically moved without authorization from two high-ranking members of Scorpion, so no one person can decide to move the system, O’Brien said, and Redstone Arsenal has significant physical security.

Even if someone were to get ahold of the system, they wouldn’t know how to use it, O’Brien said. The system doesn’t use a traditional coding language like C++ or Java; instead it uses a language Scorpion invented, one that takes operators years to fully learn.

At the same time, O’Brien said, there will come a day when ScenGen is obsolete and hackable by quantum-computing AI systems, and he said it’s essential to prepare for the day quantum computers are publicly available and able to run complex AI systems.

“It’s kind of like a cyber nuclear weapon,” he said. “When the next war is a cyber war, it’s won by whoever is best at math.”