If there’s a true constant in our industry, it’s a brutal rate of change. Attack techniques come and go, and my inbox is stuffed with new Indicators of Compromise (IoCs) that lure defenders into a never-ending game of chasing their own tails.
Whenever one is caught in a vortex like this, it’s a good idea to try to change the game. As threats evolve, so must we. But that evolution must be thoughtful. Ten years ago, our approach was to try to move faster. If we could just speed up our threat responses, we’d catch up -- or so we told ourselves. Next, we decided to detect threats at the speed of the computer. Vendors jumped on the machine learning (or even better, the artificial intelligence) bandwagon, and everything algorithmic was suddenly hot. That’s not to say that AI doesn’t have an important place in the cybersecurity defender’s playbook, but it’s also not going to be reserved solely for benign purposes. As defenders shift, so will attackers, and we’re already seeing some adoption of adversarial AI in the field. Like IoCs, AI is a necessary but insufficient approach, and the hype around it will likely follow a similar arc.
We’re still midway through this hype cycle, but already I’m sensing another shift: a focus on human behavior. By understanding the human, we can manage risk, abstract away much of the technology, and focus on what matters: how people interact with data. The show floor at this year’s RSA conference was full of marketing materials claiming that humans were at the center of each company’s offering. Images of larger-than-life people adorned the vendor booths, indicating that they really mean it: it’s all about the human.
By way of transparency, I firmly believe that humans do represent a fixed point in the ever-rippling fabric of cyberspace, and I’m a fan of getting away from the tail-chasing and focusing on approaches that have a longer shelf life. So, in some respects, I came back from RSA encouraged. At another level, though, understanding humans is harder than one might think.
To date, the security industry hasn’t spent much time thinking about security through a human-focused lens. I could give you example after example of security solutions that ask the person to adapt to them, not vice versa. Given that we typically use computers as a means to an end (no user sits down and tells themselves “Security is Job 1!”, unless they actually work in the security team), the topic becomes a constant source of friction that, from the perspective of the user, stops real work from getting done.
Understanding the human-in-the-loop is a multidisciplinary game, and so security teams must reorganize to support that outcome. The overall experience of the end user is paramount, and knowing how to help the user make the right decision – or why they might make the wrong one – requires a deep knowledge of people, not just technology. Then, we must understand how those people and technologies work together toward a particular outcome.
I am a firm believer in the promise that user- and data-centric security can help us find a place of relative calm amid the constant IoC churn. However, those of us who venture into these waters must do so with a full understanding of just how different they are. As a security researcher, I need to be wise about what I don’t know. Understanding how typical users perceive security decisions and interactions falls into that latter category.
For the CISO, investing in a more user-centric approach to security is time and money well spent. Picking the right partner is a much longer conversation. The CISO should probe vendors to get beneath the marketing veneer that covers the product line. Does the development team have experts in human behavior deeply embedded? Is the solution just anomaly detection under a thin “user-centric” veil? (What’s anomalous is interesting, but it’s not the whole story.) How can the claims made by the vendor be validated – and how does the vendor do that for themselves, in-house?
A few well-targeted questions can help agencies find the nuggets of gold in a field that is quickly filling with spoil. Armed with the right partners, it’s time for officials to turn their focus within. They will need to augment their teams with people who have skill sets different from the “typical” cyber abilities. That’s a tough job, but also an opportunity. Diversity is the answer to many of our challenges, and when teams add different skills, they also tend to add people with radically different backgrounds. For example, some commentators say that women make up an appalling 11 percent of the cybersecurity workforce. If becoming more focused on human behavior is a forcing function to address that, it would be a rare win-win in the security world.
Richard Ford is chief scientist at Forcepoint.