Let’s cut straight to the chase: the government faces real challenges in protecting critical data. Not only does it have lots (and lots!) of data that is absolutely critical to national security, but it also holds data on every single citizen — data that it has a duty to handle appropriately and that could be personally damaging if released or altered. Keeping all of this data both safe and usable is an incredibly difficult job, and it’s a job that’s hampered by one very simple everyday word: trust.

When I write trust, I don’t mean the average citizen’s trust in the government; I mean how the government typically trusts — or distrusts — the people, devices, systems and infrastructure that make up the overall federal and state ecosystems. That’s broad, and it is, in fact, part of the challenge: anyone who treats “the government” as a single entity is making an error. That broad brush stroke encompasses radically different missions, threats and levels of access to resources. There’s good and there’s bad, including pockets of true expertise and leadership leveraging skills honed in perhaps the most challenging environment on the planet. Overall, however, I’d suggest that the government as a whole could benefit from thinking about trust in a more nuanced way. In that respect, it faces challenges similar to those of any large corporation.

It’s not that the government doesn’t understand the concept of trust … not at all. One only has to look at the clearance process, which validates the trustworthiness of those who access classified data, and at how classified data is isolated from those who do not possess the requisite clearances. The government also understands the dual concepts of trustworthiness and risk acceptance that are part of using multilevel secure systems. Mostly, this works well, even against some fairly determined external attackers, as well as malicious insiders and spies. It’s easy to point to places where it broke down, but we also need to see the many times it has performed exactly as designed.
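To make that concrete, the core access rule in a multilevel secure system boils down to a simple dominance check between a subject’s clearance and the data’s classification. Here is a minimal sketch in Python; the level names and ordering are illustrative only, and real systems also track compartments and handling caveats:

```python
# A minimal sketch of the access rule behind multilevel security, assuming a
# simple linear ordering of clearance levels (the classic "no read up" rule).
# The level names and ordering here are illustrative only.

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def may_read(subject_clearance: str, data_classification: str) -> bool:
    """A subject may read data only at or below its own clearance level."""
    return LEVELS[subject_clearance] >= LEVELS[data_classification]

print(may_read("SECRET", "CONFIDENTIAL"))  # True
print(may_read("SECRET", "TOP SECRET"))    # False
```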

While this clearance system has echoes of trust interwoven throughout, it applies only to parts of the government, and even then trust is sometimes dosed out in a rather “all or nothing” way. The problem is the way we typically decontextualize trust, and that is a reflection of how we treat digital trust differently from real-world interpersonal trust. It’s that challenge I’d like to unpack further here.

When you meet someone socially and are getting to know them, most of us take cues from those initial conversations. We’re trying to figure out how trustworthy the person is. However, that trust is situational. You might trust your new friend to provide a ride to dinner, but not trust them (yet) with the keys to your car. It’s not just about determining whether a person is “good” or “bad,” but whether they are likely to perform a particular action reliably. You might know, for example, that someone is entirely well-intentioned but very clumsy. You might not trust that clumsy person with your most treasured crystal glass. It’s not that they’re bad … they’re just bad with fragile things. It’s a subtle but important difference.

Now let’s compare this to how we view trust with respect to computing. Here, defenders often apply a high degree of “inside/outside” thinking along the lines of “what’s inside is good and what’s outside is bad.” Once I’m logged in to a machine or joined to an organization, I’m pretty much given free rein within my granted rights. There are some checks and balances: insider threat programs, for example, try to identify those insiders who are a danger to the organization. However, the overarching paradigm is one of trust or distrust, with not much in between. It’s the same with machines: once a machine is placed on the network, we generally trust it completely. This type of “trust” isn’t trust at all — it’s a “permit or deny” privilege-based system ... and it’s easy for an attacker to exploit.
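A rough sketch shows how little room that model leaves for nuance. The users, rights and function names below are hypothetical, but they capture the shape of a conventional privilege check:

```python
# A minimal sketch of the binary "permit or deny" model described above.
# The users, rights and function names are hypothetical illustrations.

GRANTED_RIGHTS = {
    "alice": {"read_reports", "edit_reports"},
    "bob": {"read_reports"},
}

def is_permitted(user: str, action: str) -> bool:
    """Classic all-or-nothing check: once authenticated, a user's granted
    rights are honored with no regard for context or behavior."""
    return action in GRANTED_RIGHTS.get(user, set())

# The answer is identical whether Alice is at her desk at noon or bulk-
# downloading records from an unfamiliar device at 3 a.m.
print(is_permitted("alice", "edit_reports"))  # True, unconditionally
```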

The way forward is fairly straightforward: broad adoption of fine-grained, context-sensitive trust. As noted above, trust is not a discrete thing; it depends on context. Furthermore, this contextual, non-discrete model of trust needs to apply to more than just people on specific programs; it should apply to every entity that interacts with a system, as well as to the system itself.
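What might that look like in practice? Here is a minimal sketch, with hypothetical signals, weights and thresholds, of an access decision that treats trust as a graded, contextual quantity rather than a switch:

```python
# A minimal sketch of fine-grained, context-sensitive trust, assuming each
# access decision scores the user, the device and the location against the
# sensitivity of the data. All names, weights and thresholds are
# hypothetical illustrations, not any real product's policy.

from dataclasses import dataclass

@dataclass
class Context:
    user_history_score: float   # 0.0 (suspicious) .. 1.0 (long, clean record)
    device_health_score: float  # patched, managed device scores higher
    location_score: float       # expected network or location scores higher

def trust_score(ctx: Context) -> float:
    """Combine contextual signals into a graded trust level in [0, 1]."""
    return (0.5 * ctx.user_history_score
            + 0.3 * ctx.device_health_score
            + 0.2 * ctx.location_score)

def decide(ctx: Context, data_sensitivity: float) -> str:
    """More sensitive data demands more trust; middling scores trigger a
    mitigation (step-up authentication) rather than a flat yes or no."""
    score = trust_score(ctx)
    if score >= data_sensitivity:
        return "allow"
    if score >= data_sensitivity - 0.2:
        return "allow_with_step_up_auth"
    return "deny"

# The same user gets different outcomes in different contexts:
office = Context(user_history_score=0.9, device_health_score=0.9, location_score=1.0)
hotel = Context(user_history_score=0.9, device_health_score=0.4, location_score=0.3)
print(decide(office, data_sensitivity=0.8))  # allow
print(decide(hotel, data_sensitivity=0.8))   # allow_with_step_up_auth
```

The point is not the particular weights, but the shape of the decision: the same user, asking for the same data, can be allowed, challenged or denied depending on context.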

The only long-term solution to data management within the government is to embrace this trust-based architecture in a consistent and broad manner. On the plus side, this is synergistic with — but not identical to — some of the excellent work the Department of Homeland Security is doing with Continuous Diagnostics and Mitigation (CDM) and the Department of Defense is doing with the Defense Federal Acquisition Regulation Supplement (DFARS), to name just two examples. It simply moves that risk-appropriate umbrella of mitigations out further, to what matters most: users and data.

Moving away from the “inside/outside, good/bad” mindset has already started — and it should be encouraged. What’s needed next is to truly bake in this risk-adaptive approach and to recognize that trust comes in degrees. The world really is about shades of grey, and that’s the way to protect the data that matters most to us. Anything else will lead to failure.

Richard Ford is chief scientist at cybersecurity software company Forcepoint LLC.
