On Feb. 10, 1996, then-World Chess Champion Garry Kasparov lost a game played under standard time controls to IBM's Deep Blue computer. It was widely portrayed as an ominous event, although Kasparov would go on to win the match, 4-2.

A year later, in May 1997, Kasparov lost an entire chess match (3½-2½) to an enhanced Deep Blue. Although controversy ensued, Inside Chess magazine captured the mood of many with the cover, "ARMAGEDDON!"

Yet scientists estimated it could be more than 100 years before computers would beat humans at Go, a game exponentially more complex than chess for a computer to master.

Nineteen years later, in March 2016, Google DeepMind's AlphaGo became the first computer to beat (4-1) Go World Champion Lee Sedol on a 19x19 board without the use of handicaps.

On the surface, these events seem similar: A computer beat the current world's best human player at a complex game. But the different factors – and future implications – around each are stark.

What’s in a Game?

One of the key differences between Deep Blue's and AlphaGo's achievements is rooted in the fundamental dissimilarities between chess and Go. Differing factors include complexity as expressed by the number of possible board positions (roughly 10^47 for chess vs. 10^170 for Go), possible legal moves per turn (on average, 35 in chess vs. 250 in Go), valuation systems (fixed in chess vs. relative in Go), positional analysis (logical and sequential in chess vs. spatial and intuitive in Go) and so forth. Each of these factors contributed to a distinct problem space, requiring a unique approach to computational design, programming and engineering.

A second major difference is what creators did to prepare the machines for these different types of games. In the 1990s, developers input a vast quantity of chess know-how into Deep Blue and then bolstered its ability to calculate moves with raw processing power. In June 1997, Deep Blue was the 259th most powerful supercomputer in the world. Deep Blue used brute-force processing to out-calculate Kasparov – at a pace of 200 million positions per second.
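Deep Blue's search ran on custom chess hardware and its code was never published, but the brute-force idea is easy to sketch in generic form: look ahead a fixed number of moves, score each resulting position with fixed piece values and prune branches that cannot change the answer. The sketch below is only an illustration of that approach, not Deep Blue's actual design; the GameState interface and the evaluation heuristic are hypothetical placeholders.

```python
# Illustrative sketch of brute-force game-tree search (minimax with alpha-beta
# pruning). Deep Blue's real search ran on custom chess chips; the GameState
# interface and evaluate() heuristic below are hypothetical placeholders.

import math

FIXED_PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}  # pawn, knight, bishop, rook, queen

def evaluate(state):
    """Static evaluation: material count from White's perspective."""
    score = 0
    for piece, owner in state.pieces():  # hypothetical accessor: (piece letter, owner)
        value = FIXED_PIECE_VALUES.get(piece, 0)
        score += value if owner == "white" else -value
    return score

def alphabeta(state, depth, alpha=-math.inf, beta=math.inf, maximizing=True):
    """Exhaustively search `depth` plies ahead and return the best achievable score."""
    if depth == 0 or state.is_terminal():
        return evaluate(state)
    if maximizing:
        best = -math.inf
        for move in state.legal_moves():
            best = max(best, alphabeta(state.play(move), depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:  # prune branches that cannot improve the result
                break
        return best
    else:
        best = math.inf
        for move in state.legal_moves():
            best = min(best, alphabeta(state.play(move), depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best
```

The speed came from hardware, not cleverness: the deeper and faster a machine can run a loop like this, the stronger it plays at chess.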

By contrast, AlphaGo's creators could not use the same approach for Go that Deep Blue's developers had used for chess. In a research paper, AlphaGo's creators explained the unique challenges: "The game of Go has long been viewed as the most challenging of classic games for artificial intelligence," researchers wrote, "owing to its enormous search space and the difficulty of evaluating board positions and moves."

AlphaGo's creators therefore employed a combination of cognitive computing technologies that enabled the machine to essentially "teach" itself Go through continuous play and incremental tweaking of its strategic and tactical approaches. Whereas chess depends on logical, sequence-based reasoning heavily influenced by the fixed value of pieces (e.g., a rook is always valued more than a bishop), the value of Go's simple black and white stones is derived entirely from their relation to other pieces in specific board positions.

Go therefore relies on something more ambiguous – a combination of spatial reasoning and what top human players describe as subconscious or intuitive reasoning, neither of which is easy for humans to encode for computational application.

AlphaGo instead "learned" fundamentally better ways to play Go through trial, error and refinement of approach involving two key components – a policy network and a value network, both based on the deep-learning technology called artificial neural networks. The policy network allowed AlphaGo to obtain general "rules of thumb" for good play from millions of games against itself and other computers. The value network allowed AlphaGo to reference a vast historical database of Go games to identify similar positions and infer the likeliest next best move.
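The two-network idea can be made concrete with a toy sketch. The PyTorch model below is not AlphaGo's architecture – DeepMind's networks were far deeper and trained on millions of positions – but it shows the basic shape: a shared trunk that reads the board as a grid, a policy head that outputs a probability for every intersection and a value head that scores the position. Layer sizes here are arbitrary assumptions.

```python
# Minimal sketch (PyTorch) of the policy-network / value-network idea.
# AlphaGo's real networks were far deeper and trained on millions of games;
# the layer sizes below are arbitrary and chosen only for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

BOARD_SIZE = 19  # 19x19 Go board

class PolicyValueNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared convolutional "trunk" that reads the board like an image
        self.conv1 = nn.Conv2d(1, 32, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(32, 32, kernel_size=3, padding=1)
        # Policy head: one probability per board intersection
        self.policy_fc = nn.Linear(32 * BOARD_SIZE * BOARD_SIZE, BOARD_SIZE * BOARD_SIZE)
        # Value head: a single score estimating who is winning
        self.value_fc = nn.Linear(32 * BOARD_SIZE * BOARD_SIZE, 1)

    def forward(self, board):
        x = F.relu(self.conv1(board))
        x = F.relu(self.conv2(x))
        x = x.flatten(start_dim=1)
        move_probs = F.softmax(self.policy_fc(x), dim=1)  # "rules of thumb" for good moves
        position_value = torch.tanh(self.value_fc(x))     # -1 (losing) .. +1 (winning)
        return move_probs, position_value

# Example: score one empty board (batch of 1, single input channel)
net = PolicyValueNet()
probs, value = net(torch.zeros(1, 1, BOARD_SIZE, BOARD_SIZE))
```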

In this way, AlphaGo developed its skill much like humans do – through experience and intuition – although scientists say AlphaGo took much longer to develop its skills than humans do.

Intuition is a broad and easily misunderstood term, especially when applied to computational capability. The concept is tricky to communicate and often leads to strained phrasing such as, "Solving big AI problems requires a wide range of technologies and deep learning can provide something akin to human intuition – or at least approximate the kind of intuitive tasks that we humans find difficult to explain." Observers are nonetheless compelled to use the term in describing AlphaGo's feat. "What's new and important about AlphaGo is that its developers have figured out a way of bottling something very like that intuitive sense," wrote computer scientist Michael Nielsen.

Unlike Deep Blue, which excelled at chess but not much else, the fundamental technologies underlying AlphaGo have broad application, including cybersecurity. At first glance, the most obvious use case is cyber gamification, which is already used by the military, universities and companies to train security professionals and test cyber resilience during simulated attacks.

But the possibilities are much broader and the cybersecurity industry is currently undergoing something of an "intelligence" innovation boom cycle – although what is meant by "intelligence," how it should be applied and its potential effectiveness are all issues very much still under debate.

Machine Intelligence, Not Artificial Intelligence

Computer scientists have long dreamed of building computers with artificial intelligence (AI). Last week at the RSA Conference, the security industry's largest annual gathering, AI was one of the most discussed themes on panels and in marketing spiels. But ambiguity remains about what, exactly, is meant by AI.

Further, it's unclear whether AI is too narrow a focus or should even be the focus at all. A recent Deloitte University Press article argued that a broader concept, machine intelligence (MI), is really "the bigger story."

Important elements of MI applications to cybersecurity include big data, cognitive analytics and machine learning.

Big data – last year's security industry jargon du jour – is ushering in The Zettabyte Era. AlphaGo's big data input consisted of 30 million game positions – which, nonetheless, do not approach Go's 10^170 realm of possibility.

Cybersecurity professionals in the trenches are all-too-familiar with big data. Security technologies trigger a high volume of warnings, many of which turn out to be false positives and which, over time, can lead to "alert fatigue." The amount of data – threat and otherwise – is expected only to increase with the continued deployment of data-intensive technologies (e.g., internet of things) and increased cyber threat intelligence capabilities.

Cognitive analytics is a catchall term for technologies that enable high volumes of unstructured data to be searched and contextualized. Cognitive analytics combines natural language processing (NLP), probabilistic reasoning and machine learning to find the best answer to a question within a specific context. Algorithms are at the heart of cognitive analytics. A key part of AlphaGo's cognitive analytics capability stemmed from researchers' use of the Monte Carlo tree search (MCTS) algorithm, which uses statistical random sampling to determine the next best move in games.
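To make the random-sampling idea concrete, here is a deliberately simplified sketch: a flat Monte Carlo move chooser that estimates each candidate move by playing many random games to completion and picking the move with the best average outcome. Full MCTS additionally builds a search tree and biases sampling toward promising branches; the GameState interface below is again a hypothetical placeholder.

```python
# Simplified sketch of the Monte Carlo idea behind MCTS: estimate how good each
# candidate move is by playing many random games ("rollouts") from it and
# picking the move with the best average result. Full MCTS also builds a search
# tree and biases sampling toward promising branches (e.g., with the UCT rule);
# the GameState interface here is hypothetical.

import random

def random_rollout(state, player):
    """Play random moves until the game ends; return 1 if `player` wins, else 0."""
    while not state.is_terminal():
        state = state.play(random.choice(state.legal_moves()))
    return 1 if state.winner() == player else 0

def monte_carlo_move(state, simulations_per_move=100):
    """Choose the legal move with the highest estimated win rate."""
    player = state.side_to_move
    best_move, best_rate = None, -1.0
    for move in state.legal_moves():
        next_state = state.play(move)
        wins = sum(random_rollout(next_state, player) for _ in range(simulations_per_move))
        win_rate = wins / simulations_per_move
        if win_rate > best_rate:
            best_move, best_rate = move, win_rate
    return best_move
```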

Two important points about cognitive analytics:

  1. Its methods do not depend upon rules-based structured queries, but can answer more open questions and hypotheses.
  2. Cognitive analytics’ underlying probabilistic inference is what provides the “predictive” in predictive cybersecurity.

Machine learning is the process whereby computers automatically identify patterns in data. AlphaGo played other machines and against itself millions of times. As it played, AlphaGo "learned" to recognize patterns of play and how to evaluate vague – yet highly contextual – concepts, such as board position.

To achieve this, AlphaGo developers employed artificial neural networks (recall the policy and value networks), which use digital technology to essentially simulate a human brain for solving problems. Neural networks enable machines to "teach" themselves tasks by repetition and refinement – a process researchers call deep learning – rather than relying on a set of instructions developed by programmers.
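In very reduced form, that "repetition and refinement" is ordinary gradient-descent training: predict the outcome of a position, compare the prediction with what actually happened and nudge the network's weights. The loop below reuses the toy PolicyValueNet sketched earlier; generate_selfplay_batch() is a hypothetical stand-in for AlphaGo's self-play pipeline, not a real API.

```python
# Reduced illustration of "learning by repetition and refinement": compare the
# value head's prediction with the actual game outcome and nudge the weights.
# generate_selfplay_batch() is a hypothetical stand-in for a self-play pipeline,
# and `net` is the PolicyValueNet sketched earlier.

import torch

optimizer = torch.optim.SGD(net.parameters(), lr=0.01)
loss_fn = torch.nn.MSELoss()

for step in range(1000):                                  # millions of games in practice
    boards, outcomes = generate_selfplay_batch()          # hypothetical: positions + who won
    _, predicted_value = net(boards)
    loss = loss_fn(predicted_value.squeeze(1), outcomes)  # how wrong was the evaluation?
    optimizer.zero_grad()
    loss.backward()                                       # compute weight adjustments
    optimizer.step()                                      # refine the network slightly
```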

Companies are already developing cybersecurity solutions based on one or multiple facets of MI. To date, big data analytics have been applied to security metrics, NLP applied to threat intelligence and machine learning applied to endpoint security.

Skeptics Emerge

In an industry with its fair share of contrarians, it's perhaps no surprise that some are brushing aside the notion, the promise – and, especially, the hype – of MI, AI and the whole nine yards.

On a crypto panel at RSA last week, MIT Professor Ron Rivest (the "R" in RSA) said, "I'm skeptical of AI." He then spoke about a future internet on which it could be hard to find a real human among all the bots.

Adi Shamir, Borman Professor of Computer Science at the Weizmann Institute (yes, the "S" in RSA), said, "Fifteen years from now we will give all data to AI systems, it will think, and [then] say that, 'in order to save the internet, I'll have to kill it.' The internet is beyond salvaging; we need to start over with something better."

Ahead of his keynote address, Cisco Security's David Ulevitch said he's "more concerned about 'Artificial Stupidity,'" by which he meant ineffective automation and orchestration of security safeguards.

Nonetheless, as evidenced by the numerous investment-focused events at RSA, innovators and venture capitalists continue to believe and invest, whether intelligently or not.

Ethical Dimensions of Predictive Cybersecurity

Should the believers and investors succeed and widespread MI-driven cybersecurity become a reality, government and industry will face a slew of ethical questions that few have begun to consider.

One of the first ethical questions to arise around MI inevitably pertains to automation and the resulting loss of human jobs. The cybersecurity industry currently faces a talent shortage in private and public sectors, probably exacerbated by current business practices, so it's not clear whether automation would be as controversial as in other industries.

Additional ethical issues emerge when considering predictive cybersecurity used to anticipate cybercrime or cyberterrorism – wherein the accused are implicated in crimes that have yet to be committed.

Dr. Evan Selinger, a professor of philosophy at Rochester Institute of Technology and fellow at the Institute for Ethics and Emerging Technologies, pointed to several potential issues arising from the poor quality and/or inadequate quantity of data on which to base such predictions. In shadowy cyberspace, data quality can be questionable, especially as attacks on data integrity rise. These challenges accompany the historical challenge of attributing cyberattacks – to say nothing of intent to attack.

Another set of ethical questions arises from the predictive capability based on algorithms used to infer probabilistic outcomes. This raises the issue of "algorithmic transparency," whose proponents argue that people should know which factors go into an algorithm's design and application – especially when implicating people (or their computers) in crimes. The trouble is such details are often considered proprietary – and therefore confidential – by companies and classified by government agencies.

But algorithmic transparency does not guarantee clear understanding.

"Unfortunately," Selinger said, "calls for greater transparency that don't compromise classified intelligence or intellectual property face a fundamental complication: Computational complexity."

New America Fellow David Auerbach summarized the challenge with computational complexity: "Just because someone has access to the source code of an algorithm does not always mean he or she can explain how a program works." That's because, Auerbach wrote, "machine learning and the 'training' of data create classification algorithms that do not behave in wholly predictable manners."

The ethical issues of predictive cybersecurity are not limited to cybercrime and cyberterrorism. Frank Pasquale, author of The Black Box Society, has talked about issues related to autonomous weapons systems. It's not hard to imagine predictive cybersecurity preemptively – whether correctly or not – launching cyberattacks against another country's critical infrastructure, especially if private businesses are incentivized to attack.

None of these questions present easy answers, even for ethicists like Selinger, who admitted, "If I knew how to remedy the situation, I might be up for a Nobel Prize."
