The picture looked innocent. Just a sun peeking over the horizon. The caption: “Happy Fourth of July.”
But embedded in the digital photo's ones and zeroes, authorities allege, was a trove of technical data on methods to make turbine engines more efficient – information General Electric considers trade secrets. Federal prosecutors say a GE engineer with connections to Chinese firms took those files from his work computer, then encrypted them, hid them in the photo's binary code and sent them to his personal email address.
The case against Xiaoqing Zheng illustrates an especially elaborate example of what cybersecurity experts call the “insider threat,” a catch-all term for risks that stem from authorized users of a network. Those risks include everything from innocent mistakes, like clicking on a malware-infected email, to acts of malice, like the theft or destruction of sensitive information.
Because that threat is so broad, experts say, the best defense is to understand the people who use the network rather than trying to thwart every new hacking technique that comes along. Understanding people's roles, needs and habits helps security workers tell the difference between normal network activity and signs that something suspicious is afoot.
“It was very early-2000s to think of malware floating around on a network or packets that would crash a machine,” said Richard Ford, chief scientist for Forcepoint, a cybersecurity company jointly owned by Raytheon. “Those things still matter, but we need to focus more on user- and data-centric security.”
‘A co-evolutionary threat’
In a criminal complaint outlining the GE incident, an FBI investigator notes the company had restricted the use of USB drives – a common way to prevent data theft. But with every defensive step comes a new method of attack, Ford said.
“I’ll channel my inner Jeff Goldblum from ‘Jurassic Park.’ Nature will find a way,” he said. “This is a co-evolutionary system. You have two parties who are adverse. One side does something and the other side changes its defense.”
The USB restriction, for example, forced anyone who wanted to steal information to resort to other means. In this case, it was steganography, the practice of hiding a message in a seemingly unrelated medium like a picture. It used to be something most people saw only in spy novels, said Michael Daly, chief technology officer for cybersecurity and special missions at Raytheon.
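Steganography need not be exotic. In its simplest form, each bit of a secret message is written into the least significant bit of successive bytes of a carrier file, altering pixel values too slightly to notice. The sketch below illustrates that least-significant-bit technique in Python; the byte array stands in for raw image pixel data, and the function names are illustrative, not drawn from the case.

```python
def hide_message(carrier: bytearray, message: bytes) -> bytearray:
    """Hide each bit of `message` in the least significant bit of
    successive carrier bytes (classic LSB steganography)."""
    # Expand the message into individual bits, most significant first.
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for message")
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return out


def extract_message(carrier: bytearray, length: int) -> bytes:
    """Read `length` hidden bytes back out of the carrier's low bits."""
    result = bytearray()
    for i in range(length):
        byte = 0
        for b in carrier[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (b & 1)
        result.append(byte)
    return bytes(result)
```

Because only the lowest bit of each carrier byte changes, the altered image is visually indistinguishable from the original – which is exactly why restricting USB drives alone does not stop this kind of exfiltration.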
“Tradecraft like steganography, that had been reserved for foreign intelligence operatives, has been moved into the more mundane realm of corporate espionage,” Daly said. “All the tools that had been created by government for professional, nation-state espionage are making their way into the general criminal environment. There’s no faux containment of these things.”
Mounting a defense
No matter the method of attack, all cyber intrusions have one thing in common: unusual network activity. With insider threats, that means people log on at odd times or venture into parts of the network they don’t normally need. They download large volumes of data, run uncommon applications or processes, or attach odd file types to emails. The signs, or “tells,” are innumerable. And while none of them is a sure sign anything is wrong, each indicates it may be necessary to investigate further.
Several capabilities can help network defenders tell the difference. They include:
- Visibility into what users are doing on the network
- Baselines of normal activity for individual users as well as groups
- Detection of anomalies against those baselines
- A “risk score” assigned to each user
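One way to picture how a baseline and a risk score fit together: profile each user's own history, then score new activity by how far it deviates from that history. The sketch below is a deliberately naive illustration of the idea – the schema, fields and thresholds are invented for this example, not drawn from any vendor's product.

```python
from dataclasses import dataclass, field
from statistics import mean, stdev


@dataclass
class UserBaseline:
    """Rolling profile of one user's normal behavior (hypothetical schema)."""
    login_hours: list = field(default_factory=list)    # hour-of-day of past logins
    mb_downloaded: list = field(default_factory=list)  # daily download volumes, MB


def risk_score(baseline: UserBaseline, login_hour: int, mb_today: float) -> float:
    """Naive anomaly score: how many standard deviations today's activity
    sits from this user's own history. Higher means more suspicious."""
    score = 0.0
    if len(baseline.login_hours) >= 2:
        mu, sd = mean(baseline.login_hours), stdev(baseline.login_hours) or 1.0
        score += abs(login_hour - mu) / sd
    if len(baseline.mb_downloaded) >= 2:
        mu, sd = mean(baseline.mb_downloaded), stdev(baseline.mb_downloaded) or 1.0
        score += abs(mb_today - mu) / sd
    return score
```

A 9 a.m. login and a typical download volume would score near zero for a nine-to-five user, while a 3 a.m. login paired with a massive download would score orders of magnitude higher – the kind of signal that tells an analyst further investigation is warranted, without by itself proving wrongdoing.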
A shared responsibility
Guarding a network from its own users shouldn’t fall entirely to IT or security, Daly said; other leaders, including the heads of human resources, legal and ethics, should take part to bring context as well as checks and balances.
“You’ve got to lay out the rules and have a governance council where all the stakeholders are engaged,” he said. “Otherwise, you’re wasting tools and missing opportunities to see things.”
Cybersecurity, Ford said, is not a goal in itself. It is simply a means to an end.
“What you’re really trying to do is protect your users and protect your data,” he said. “If you’re trying to protect the Mona Lisa, you put in a metal detector, screen who can get into the museum, and you don’t leave it unlocked at night. You don’t let a known art thief into the Louvre. If someone lunges toward the painting, you deal with that.”
What you don’t do, Ford said, is shut down the Louvre completely – while it’s important to protect the Mona Lisa, it’s just as important to remember the painting exists to be seen.
“When you make your security rules too stringent, you lose the benefit of information technology,” he said. “When is your information most valuable? When it’s being used.”