The object on the doctor’s screen, a white mass against a dark void, sits in the lungs, in a place it shouldn’t be, and suggests cancer.

The data that produced the image was saved and processed by two computers: one the physician knows about, and one that remains entirely unknown to the hospital staff. But in a demonstration staged with the permission of an anonymous hospital, a team of security researchers has fooled the experts: the cancer on the screen is a mirage of code.

In cybersecurity, data integrity is every bit as important as confidentiality or availability. A study by researchers at Ben Gurion University used a tiny computer, a couple of ports, and deep learning to gain access to CT scans and then inject false information into them. A video of the test shows a range of possibilities: adding a single polyp to the scan image, filling a scan with 472 new growths, and, perhaps most ominously of all, erasing evidence of a cancer-like growth from the scan entirely.

Machine learning is integral to creating images that match the attacker’s intent, masking real tumors or populating the scan with fake ones, all in the style and appearance of a normal scan.
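The researchers’ exact model isn’t reproduced here, but the general approach can be conveyed with a minimal sketch of GAN-style inpainting, where a generator rewrites a masked region of a scan patch and a discriminator pushes the result toward realism. Everything below, the layer sizes, the patch shape, the single training step, is an illustrative assumption written in PyTorch, not the study’s actual CT-GAN code.

```python
# A minimal, hypothetical sketch of GAN-based inpainting of a CT patch,
# in the spirit of the attack described above. Shapes, layer sizes, and
# training details are assumptions, not the researchers' implementation.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Fills a masked 2D CT patch with attacker-chosen content (e.g. a fake nodule)."""
    def __init__(self):
        super().__init__()
        # Encoder-decoder over a 1-channel 64x64 patch; sizes are illustrative.
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1),   # input: patch + mask channel
            nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
            nn.Tanh(),                                   # output in [-1, 1], like normalized CT values
        )

    def forward(self, patch, mask):
        return self.net(torch.cat([patch, mask], dim=1))

class Discriminator(nn.Module):
    """Scores whether a patch looks like a real scan, pushing fakes toward realism."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 1),
        )

    def forward(self, patch):
        return self.net(patch)

# One adversarial step on random stand-in data, just to show the shape of the loop.
gen, disc = Generator(), Discriminator()
real = torch.randn(8, 1, 64, 64)          # stand-in for real CT patches
mask = torch.zeros(8, 1, 64, 64)
mask[:, :, 24:40, 24:40] = 1.0            # region the attacker wants to rewrite
fake = gen(real * (1 - mask), mask)       # inpaint the masked region
loss = nn.functional.binary_cross_entropy_with_logits(
    disc(fake), torch.ones(8, 1))         # generator tries to fool the discriminator
loss.backward()
```

Trained on enough real scans, a generator like this learns to produce tissue that is statistically indistinguishable from its surroundings, which is what lets the tampering survive a radiologist’s glance.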

The limitations of the technique are important to state up front. The attack specifically required a man-in-the-middle device to capture the scans and then transmit their data to an attacker sitting in the waiting room. The intercepting device (here, a Raspberry Pi loaded with special malware) is tucked into a convenient location out of sight. Encryption on the target system covers only some functions, leaving gaps the attacker can exploit. All of this outlines several steps where a hospital, or other organization, could stop an attack like this.
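As an illustration of one such step, consider end-to-end integrity checks: if the scanner signs each image with a key the viewing workstation can verify, a man-in-the-middle that rewrites pixels invalidates the tag. The sketch below, using only Python’s standard library, is a deliberate simplification; a real deployment would need proper key provisioning and would sign structured DICOM objects rather than raw bytes.

```python
# A minimal sketch of scan integrity checking with an HMAC tag.
# Key handling is simplified and hypothetical: the shared key is assumed
# to be provisioned out of band between scanner and workstation.
import hmac
import hashlib

SHARED_KEY = b"provisioned-out-of-band"

def sign_scan(scan_bytes: bytes) -> bytes:
    """Compute an HMAC tag over the raw scan data at the scanner."""
    return hmac.new(SHARED_KEY, scan_bytes, hashlib.sha256).digest()

def verify_scan(scan_bytes: bytes, tag: bytes) -> bool:
    """Reject any scan whose bytes changed in transit."""
    return hmac.compare_digest(sign_scan(scan_bytes), tag)

original = b"\x00\x01\x02\x03"            # stand-in for scan pixel data
tag = sign_scan(original)
tampered = original + b"\xff"             # any modification breaks the tag
assert verify_scan(original, tag)
assert not verify_scan(tampered, tag)
```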

Security systems fail, though, and in this case the failure is potentially catastrophic. If a hacker gained access to a scan of a high-value target, say a political or military leader, and altered it to force them into an unnecessary surgery, that’s a powerful effect from the attack. Concealing a treatable cancer until it was far too late would be scarier still, especially if the person in question never sought a second opinion from doctors with an uncompromised machine.

While the CT scan attack has obvious implications for anywhere human bodies are treated for injuries, the notion of deceptive images injected through malware is a broader threat. Consider what could happen if an algorithm processing drone footage to direct an attack erased all vehicles from the footage. Or only certain vehicles. Or, perhaps, populated the footage with new vehicles where none existed. The data would be garbage, and unless a human suspected something was amiss, it’d be possible to completely obscure some significant video. A deletion is obvious, a known absence. Altered images are unknown knowns, masquerading as known knowns.

Watch a video of the CT scan demonstration below:

Kelsey Atherton blogs about military technology for C4ISRNET, Fifth Domain, Defense News, and Military Times. He previously wrote for Popular Science, and also created, solicited, and edited content for a group blog on political science fiction and international security.
