When tech giant Citrix reported earlier this year that it had been victimized by a cyber intrusion, the company wasn’t initially aware that hackers had been set up inside its network for more than six months. That’s significant because it gave the criminals ample time to steal credentials along with massive amounts of data. In the cybersecurity world, we call this prolonged period of undetected mischief dwell time: the time between an initial cyberattack and its detection.
Now imagine if a similar incident were to take place inside a large federal agency like the Internal Revenue Service, the Census Bureau or the Social Security Administration. These agencies maintain treasure chests of personal data on hundreds of millions of American citizens. Unfortunately, these stockpiles also give hackers more places to hide for days, weeks and months. So it’s valid to ask: what is being done to safeguard that data, or to protect the systems that house it from a cyberattack?
Recently, Congress began approving ‘cyber incident response teams’ at the Department of Homeland Security to ensure they can work with private-sector companies to recover from malicious events that may impact the country’s digital infrastructure. This is a promising step, but more attention should be paid to improving detection.
For example, recognizing abnormalities early would give agencies an advantage: they could stop hackers sooner and limit the damage they cause. While not all cyberattacks create an obvious pattern, an intrusion is often discovered because of a rapid change, such as a spike in network traffic or a large, unusual export of sensitive information (i.e., data exfiltration). Sometimes the tipoff is even a suspicious location or a highly unusual time for a system login.
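To make the “rapid change” idea concrete, here is a minimal sketch of one such signal: flagging a day whose outbound data volume is a statistical outlier against recent history. The function name, threshold and sample numbers are invented for illustration, not drawn from any real agency’s telemetry.

```python
# Hypothetical sketch: flag a sudden spike in outbound data volume,
# one crude signal of possible data exfiltration. All values are
# illustrative assumptions.
from statistics import mean, stdev

def flag_exfil_spike(daily_outbound_gb, z_threshold=3.0):
    """Return True if the most recent day's outbound volume is a
    statistical outlier versus the preceding history."""
    history, today = daily_outbound_gb[:-1], daily_outbound_gb[-1]
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > mu  # any rise over a perfectly flat baseline is notable
    return (today - mu) / sigma > z_threshold

# 30 quiet days around 2 GB, then a 50 GB export on day 31
normal_days = [2.0, 2.1, 1.9, 2.2, 2.0] * 6
print(flag_exfil_spike(normal_days + [50.0]))  # True
print(flag_exfil_spike(normal_days + [2.1]))   # False
```

Real detection pipelines weigh many such signals together, but even this toy version shows why a sharp deviation from an established baseline is the kind of clue analysts look for first.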
Hackers are increasingly using sophisticated methods for their devious pursuits. Often, those who gain unauthorized access to a network conduct a quiet yet deliberate attack, methodically moving across devices and users. This action, termed lateral movement in cybersecurity, is carried out carefully to avoid attracting attention while the attackers explore the environment and strategize on how best to exfiltrate the government’s data.
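One simple way to surface lateral movement is to watch for logons between pairs of hosts that have never authenticated to each other before. The sketch below assumes a learned baseline of known host-to-host logon pairs; the host names and baseline are invented for illustration.

```python
# Hypothetical sketch: a first-time logon between two hosts is one
# simple lateral-movement signal. The baseline set of known pairs
# would normally be learned from historical logs; here it is invented.

def new_logon_edges(baseline_pairs, todays_logons):
    """Return host-to-host logons not seen in the learned baseline."""
    return [(src, dst) for src, dst in todays_logons
            if (src, dst) not in baseline_pairs]

baseline = {("workstation-7", "fileserver-1"),
            ("workstation-7", "mailserver-2")}
today = [("workstation-7", "fileserver-1"),   # routine traffic
         ("workstation-7", "hr-database"),    # new edge: worth a look
         ("hr-database", "backup-server")]    # new edge: worth a look
print(new_logon_edges(baseline, today))
```

A chain of never-before-seen logon edges, like the last two above, is exactly the quiet, methodical hopping the paragraph describes.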
Cyber response teams are likely familiar with correlation rules, a common cybersecurity approach for catching malicious activity. Unfortunately, these rules have limits, and they often fail to uncover an attack because they lack necessary context. Most of the time, the correlation logic is a static rule built on indicators of compromise rather than on behavior, which makes it both overly specific and hard to scale.
Correlation rules are built for detecting known threats but don’t effectively address the complexity of advanced threats such as malicious insiders, zero-day attacks, laterally moving malware and compromised credentials. Because the rules are static, the onus of maintenance and upkeep falls on the security team. This takes time away from the team and can lead to a reactive approach to incident response.
Another concern with relying on correlation alone arises when investigating a correlated event. When a rule triggers, the responding analyst can see what events led up to it, but what is missing is immediate insight into what happened afterward. This forces additional querying and pivoting as the analyst manually stitches together data points to understand the risk associated with the correlated event. In addition, the volume of correlated events a security team observes in a given day can lead to analyst fatigue and allow true security risks to be missed.
As an example, a false positive could occur when a federal employee logs into their account while on vacation in another state, or while visiting a country that is a suspected haven for cybercriminals. Analysts may assume they have detected an attacker rather than recognizing the login as a legitimate government employee. The downside of correlation rules is the added burden they create for security teams to continually monitor activity.
Further, writing custom queries and correlation rules demands significant staffing and process overhead.
A solution centered on modeling behavior lets the machine perform the heavy lifting typically required to build and maintain correlation rules and custom queries. A solution that leverages machine learning builds a baseline of what is normal for every user and every asset in an organization, then surfaces what is anomalous by comparing their ongoing activity against that learned baseline. Automated timelines correlate all related activity back to the users and assets interacting with the organization’s network.
By removing the need for manual monitoring, machine learning can quickly find anomalous and suspicious user and asset behavior. It does this through algorithms that establish normal behavior in the network environment as a baseline. Whenever anomalous activity occurs, a risk reason and score are assigned to the behavior and added to the overall risk score for the user or entity’s daily session. Once the aggregate score meets a threshold, the system presents the user or entity of interest to the security team, along with all of their session activity for that period. It’s a capability our government and military would benefit from using.
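The scoring flow just described can be sketched in a few lines: each anomaly carries a risk reason and point value, points accumulate over a user’s daily session, and the user is surfaced once the total crosses a threshold. The reasons, weights and threshold below are invented for illustration; a real product learns and tunes these values.

```python
# Hedged sketch of session risk scoring. Reasons, point values and the
# threshold are assumptions, not any vendor's actual model.
from collections import defaultdict

RISK_POINTS = {
    "first_login_from_country": 20,
    "abnormal_logon_hour": 15,
    "unusual_data_volume": 40,
}
SESSION_THRESHOLD = 65

def score_sessions(anomalies):
    """anomalies: list of (user, reason) pairs observed in one day.
    Returns users whose aggregate session score meets the threshold,
    with the total and the reasons that contributed."""
    sessions = defaultdict(list)
    for user, reason in anomalies:
        sessions[user].append(reason)
    flagged = {}
    for user, reasons in sessions.items():
        total = sum(RISK_POINTS[r] for r in reasons)
        if total >= SESSION_THRESHOLD:
            flagged[user] = (total, reasons)
    return flagged

events = [
    ("alice", "abnormal_logon_hour"),        # 15 points: stays quiet
    ("bob", "first_login_from_country"),
    ("bob", "abnormal_logon_hour"),
    ("bob", "unusual_data_volume"),          # bob totals 75: surfaced
]
print(score_sessions(events))
```

Note how alice’s single odd logon never reaches the analyst: aggregating evidence per session is what keeps one-off false positives, like the vacation login above, from becoming alerts.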
The benefit of using machine learning is that it can automatically fill in incident timelines with the full scope and context of related event details. That means an incident like Citrix’s, where the company was under attack for half a year or more, is far less likely to go unnoticed. Analysts would not have to scour massive log archives to manually piece together an investigation timeline, and they could detect breaches sooner. By finding attackers in a network environment faster, security teams cut the time attackers spend “dwelling” and significantly reduce the damage a hacker can cause.
In late 2018, the SANS Institute found that organizations are broadening the scope of their threat hunting efforts and that dwell times are decreasing.
With citizens’ personal data being collected and readily shared, federal agencies have an enormous responsibility to put measures in place to protect the government and the American people. It would be nearly impossible for federal security analysts to manually prioritize and sift through huge amounts of log data to find a security breach at one of our agencies. With only nominal increases to this year’s cybersecurity budget, federal cyber teams need to be judicious about how they allocate their dollars. It would certainly be smarter to explore new security management solutions sooner rather than later.
Bill Aubin is vice president for federal at Exabeam, a California-based cybersecurity company.