WASHINGTON – When U.S. leaders talk about the promise of artificial intelligence, one application they regularly discuss is cybersecurity. But experts say European countries have so far taken a more measured approach to AI, fearing the technology is not yet reliable enough to replace human analysts.

Consider France, which along with the United Kingdom and Germany has become one of Europe’s AI hubs. According to a report by France Digitale, an organization that advocates for start-ups in France, the number of French startups using AI grew 38 percent over the previous year.

But the advancement of AI in the defense sector has not been as prominent in some European countries. That’s partly because the systems need large amounts of data to be reliable, according to Nicolas Arpagian, vice president of strategy and public affairs at Orange Cyberdefense, a France-based company that works with Europol and other cybersecurity firms on strategic and technological counter-measures to cyberattacks.

“It's very difficult to know what the data can be used for, and if you let the computer or if you let the algorithm take decisions [to prevent cyberattacks,] and that's a false positive, you won’t be able to intervene early enough to stop decisions that were taken on the basis of this [erroneous] data detected by the algorithm,” he said.

Orange Cyberdefense’s approach is to train human analysts to detect the behavioral patterns hackers reveal. The company uses artificial intelligence as an assistant, keeping humans in the lead role.

“You need the analyst, the human being, the human brain and the human experience to deal with and to understand a changing situation,” Arpagian said.
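
In practice, that assistant role often looks like anomaly detection that flags activity for a person to review rather than acting on it automatically. The sketch below is a minimal illustration of that idea, not Orange Cyberdefense’s product; the features, thresholds and scikit-learn model are assumptions chosen for brevity.

```python
# Minimal sketch (hypothetical features and thresholds): an anomaly detector
# that only *flags* suspicious activity for a human analyst, rather than
# blocking or remediating anything on its own.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes_out, failed_logins, distinct_ports, off_hours_ratio]
baseline = np.random.rand(1000, 4)  # stand-in for a history of normal activity
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

def triage(events: np.ndarray) -> list[int]:
    """Return indices of events an analyst should review; nothing is blocked here."""
    scores = model.score_samples(events)  # lower score = more anomalous
    return [i for i, s in enumerate(scores) if s < -0.5]  # cutoff is illustrative

suspicious = triage(np.random.rand(50, 4))
print(f"{len(suspicious)} events queued for human review")
```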

At the same time, pressure from Russia, China and other adversaries in the AI market has pushed the United States to devote more resources to developing the technology in the defense sector, according to a 2019 Congressional Research Service report. In recent years, China has focused on developing advanced AI to make faster, better-informed decisions about attacks, the report found. Russia has focused on robotics, though it is also active in applying AI to defense.

Moving to use AI in U.S. cybersecurity ops

In February, the Department of Defense adopted five principles to ensure the ethical use of the technology. Secretary of Defense Mark Esper said the United States and its allies must accelerate the adoption of AI and “lead in its national security applications to maintain our strategic position, prevail on future battlefields, and safeguard the rules-based international order.”

In September, Lt. Gen. Jack Shanahan, the director of the Joint Artificial Intelligence Center, said the center’s mission was to accelerate the Pentagon’s adoption and integration of AI in cybersecurity and battlefield operations.

“We are seeing initial momentum across the department in terms of fielding AI-enabled capabilities,” he said on a call with reporters. “It is difficult work, yet it is critically important work. It demands the right combination of tactical urgency and strategic patience.”

The Pentagon has taken the first steps to increase the use of AI and machine learning in its operations, as implementing AI in cybersecurity operations “is essential for protecting the security of our nation,” according to the department’s formal artificial intelligence strategy, released in 2019. The technology will be used to reduce inefficiencies from manual, data-focused tasks and shift human resources to higher-level reasoning in cybersecurity operations, the strategy said.

Artificial intelligence can play a key role in identifying unknown attacks, while human analysts normally know enough about recurring threats to accurately detect familiar cyber risks such as evasion techniques and malware behaviors, said Shimon Oren, head of cybersecurity and threat research at Deep Instinct, an American company that uses AI and deep-learning technology to prevent and detect malware used in cyberattacks.

Oren said artificial intelligence and deep-learning technology are crucial for training systems to make decisions and draw conclusions about new threat scenarios presented to them after training. The technology will free human analysts to do the kind of work computers “absolutely cannot do,” he said.
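
The sketch below is a toy illustration of that kind of deep-learning approach, not Deep Instinct’s system: a small PyTorch network trained on byte histograms of files, so it can score files it never saw during training. The placeholder samples, architecture and hyperparameters are all assumptions.

```python
# Toy illustration only: a small neural network trained on byte-histogram
# features of files, so it scores unfamiliar files rather than matching
# known signatures.
import torch
import torch.nn as nn

def byte_histogram(data: bytes) -> torch.Tensor:
    """256-bin normalized histogram of a file's raw bytes."""
    hist = torch.zeros(256)
    for b in data:
        hist[b] += 1
    return hist / max(len(data), 1)

model = nn.Sequential(
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),  # output: probability the file is malicious
)

# Training-loop sketch; `samples` stands in for a labeled corpus of files.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()
samples = [(b"\x4d\x5a benign placeholder", 0.0), (b"\x4d\x5a malicious placeholder", 1.0)]
for file_bytes, label in samples:
    pred = model(byte_histogram(file_bytes))
    loss = loss_fn(pred, torch.tensor([label]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```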

For example, the U.S. intelligence community is looking to fully automate well-defined processes with AI, as AI systems can perform tasks “significantly beyond what was possible only recently, and in some cases, even beyond what humans can achieve,” according to the 2019 Augmenting Intelligence using Machines Initiative.

“It's very hard for [humans], even when we're very experienced and knowledgeable, to extrapolate what might be the next kind of attack, how it might look like, what exactly will the next kind of malware do and how will it go about doing what it's meant to do. And for that reason, exactly we need to rely on AI,” Oren said.

But relying on only one method to detect cyberattacks is a mistake, Arpagian and Oren agreed. Human analysts can easily miss information that hints at an attack, while AI systems are often not mature enough to be as successful as expected, Arpagian said.
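
A layered setup of the kind they describe might look roughly like this; the signature list, scores and thresholds are illustrative, not any vendor’s implementation.

```python
# Illustrative only: layering a known-signature check with a model score, so
# neither method is the single point of failure Arpagian and Oren warn about.
KNOWN_BAD_HASHES = {"placeholder-sha256-hash"}  # stand-in signature list

def classify(file_hash: str, model_score: float, threshold: float = 0.9) -> str:
    """Combine a signature lookup with an ML score; ambiguous cases go to an analyst."""
    if file_hash in KNOWN_BAD_HASHES:
        return "block"                 # recurring threat: signatures are enough
    if model_score >= threshold:
        return "block"                 # model is confident this is a novel threat
    if model_score >= 0.5:
        return "escalate_to_analyst"   # uncertain: a human makes the call
    return "allow"

print(classify("unknown-hash", model_score=0.7))  # -> escalate_to_analyst
```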

Orange Cyberdefense has focused on integrating augmented intelligence rather than AI until the latter is developed enough to be meaningful, Arpagian said. The company has faced some criticism from others who have embraced AI fully instead of using the technology as an assistive tool.

“If you say you are not using artificial intelligence tools [to prevent cyberattacks] you could seem to be a bit old fashioned and outdated,” Arpagian said. “But augmented intelligence is something we need to have and, afterwards, when we have enough data on a specific activity on a very specific domain, then maybe the artificial intelligence will be able to deal with [cyberattacks] on its own.”

Many European countries are not prepared to integrate AI because their intelligence services lack the readiness to properly use the technology within their agencies, according to Nicolas Tenzer, a French civil servant who has authored official reports for the French government and served as a senior consultant to international organizations.

“When it comes to propaganda, for instance, [the intelligence service] is not really trained -- they don’t really know the best way to respond to that,” he said. “The second problem is there must be a true willingness from the government [to use the technology.]”

Tenzer said the lack of readiness and lack of collaboration between agencies will make it difficult to integrate AI in the defense sector to the extent the United States has.

U.S. efforts to implement AI include the American AI Initiative, which President Donald Trump announced in 2019 as an effort to promote the use of the technology in various fields including infrastructure, health and defense.

In 2019, the Department of Defense and the Naval Information Warfare Systems Command posted a challenge seeking input from industry partners on how to automate the Security Operations Center using artificial intelligence and machine learning, specifically how to detect modern malware strains before an attack. FireEye, an intelligence-led security company that provides software to investigate cyberattacks, was awarded $100,000 for providing the model best able to detect attacks and respond quickly, according to a March 3 release.
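
What “automating the Security Operations Center” can mean in practice is a confidence gate between machine action and human review. The sketch below is a hypothetical illustration of that pattern; the function names and the 0.95 cutoff are invented, not FireEye’s or the Navy’s actual tooling.

```python
# Hypothetical SOC triage step: act automatically only on high-confidence
# detections, and route everything else to an analyst queue.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    malware_score: float  # 0.0 - 1.0, from a pre-attack detection model

def isolate_host(host: str) -> None:
    print(f"[auto] network-isolating {host}")

def escalate(alert: Alert) -> None:
    print(f"[queue] analyst review for {alert.host} (score={alert.malware_score:.2f})")

def handle(alert: Alert) -> None:
    # Only act automatically when the model is very confident; otherwise keep a human in the loop.
    if alert.malware_score >= 0.95:
        isolate_host(alert.host)
    else:
        escalate(alert)

handle(Alert(host="ws-0042", malware_score=0.97))
handle(Alert(host="ws-0107", malware_score=0.62))
```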

Earlier this year, the Center for Security and Emerging Technology at Georgetown University launched the Cybersecurity and Artificial Intelligence Project to study the overlap between cybersecurity, artificial intelligence and national security. The project, directed by Ben Buchanan, is expected to study how artificial intelligence can be used in offensive cyber operations as well as defensive ones.

“Technology is fundamental to cyber operations on offense and defense,” Buchanan said. “The reason why AI is important is that there’s just so much data that you need a machine to be able to do the first pass through the data [during offensive and defensive operations.]”

Chiara Vercellone is a reporter interning with Defense News, C4ISRNET and Fifth Domain Cyber
