Twitter starts its story of platform manipulation with Justin Bieber, back in June 2009.
Shortly after the young pop star joined the then-novel microblogging service, fans in Brazil began tweeting at Bieber to perform there. The singer eventually played his first Brazilian concert in November 2011, but for the social media platform that hosted the initial drive, the perpetual requests distorted trending topics and had to be addressed much sooner.
“The first type of manipulation we saw was not what you’d consider nefarious; it was an attempt to get Justin Bieber to trend,” said Del Harvey, vice president for trust and safety at Twitter, speaking at the 2019 RSA cybersecurity conference on a panel about the ways in which social media can be co-opted by nefarious actors.
“We had to make changes to our trending topics because of how they did that,” Harvey continued. “What is being described, weaponization of social media, is far broader than social media and is used for far more than nefarious purposes. Justin ended up going to Brazil.”
The backstory behind a Bieber concert is an unusual conversation for a security conference, but it illustrates something fundamental about the way social media works in the world. The same paths that masses of people can use to catch the eye of a singer can also be manipulated by states or other actors for their own, less earnest ends. The challenge, then, is for the companies that manage these platforms to distinguish between legitimate and illegitimate uses of the same tools in the same space.
“Russian information operations are using the tools created by the companies here on stage not to market but to go after American democracy,” said New America’s Peter W. Singer, author of “LikeWar.” Focusing on illegitimate means of attack, rather than on the exploitation of existing tools for harmful purposes, meant countries missed the very threats they were hoping to guard against.
“We were looking in the wrong place; we were looking for people hacking Facebook accounts, not buying ads at scale,” Singer said.
Nathaniel Gleicher, who became head of cybersecurity for Facebook in January 2018, said that a big part of how security has to work here is by increasing the friction for bad actors. One part of that is automated tools, which can remove fake accounts at scale. (By Facebook’s own reporting, at any given time fake accounts are no more than 4-5 percent of the accounts on the site. With 2.32 billion monthly active users, 4 percent is just shy of 93 million.) Automation can only go so far, so Facebook also has human investigative teams, which look for specific bad actors, identify their networks and discover any new behaviors they might have that are shared among people willing to do harm.
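The fake-account figures above invite a quick back-of-the-envelope check. The 2.32 billion monthly active users and the 4-5 percent range come from the article; the arithmetic below simply confirms what those numbers imply.

```python
# Back-of-the-envelope check of the fake-account figures cited above.
# Inputs are the article's own numbers: 2.32 billion monthly active
# users, with fake accounts estimated at 4-5 percent of the total.
monthly_active_users = 2_320_000_000
fake_share_low, fake_share_high = 0.04, 0.05

low = monthly_active_users * fake_share_low    # ~92.8 million ("just shy of 93 million")
high = monthly_active_users * fake_share_high  # ~116 million

print(f"{low / 1e6:.1f}M to {high / 1e6:.1f}M estimated fake accounts")
```

At the low end of Facebook's own range, that is a population of fake accounts larger than most countries.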
Both Harvey of Twitter and Robert Joyce, a senior adviser at the National Security Agency, emphasized the usefulness of pinning maliciousness to behavior, rather than content. Harvey noted that the same video or images might be shared by both terrorist groups boasting of an act and reporters covering the act, and a content-based approach would hurt researchers and journalists in the process of quieting the spread of terrorist messaging.
Joyce pointed specifically to the methods certain actors use to amplify their messages as behavior that can be locked down. If a person is outside a network and wants to spread a message into it, they’ll act differently and go through different channels than if they’re organically part of that network already.
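The in-network versus out-of-network distinction Joyce describes can be sketched as a toy scoring heuristic. Everything here (the function name, the follow-set representation, the scoring) is an invented illustration of the general idea, not any actual platform's detection system.

```python
# Toy illustration of behavior-based (rather than content-based) flagging:
# an account that repeatedly pushes messages into communities it has no
# organic ties to scores higher than one sharing within its own network.
# All names, data, and thresholds here are invented for illustration.

def outsider_amplification_score(account_follows: set,
                                 message_targets: list) -> float:
    """Fraction of an account's outbound messages aimed at communities
    the account has no existing relationship with."""
    if not message_targets:
        return 0.0
    outside = sum(1 for target in message_targets
                  if target not in account_follows)
    return outside / len(message_targets)

# An account organically embedded in the communities it posts to:
organic = outsider_amplification_score({"cats", "python"},
                                       ["cats", "python", "cats"])
# An account blasting a message into networks it does not belong to:
amplifier = outsider_amplification_score({"cats"},
                                         ["politics", "news", "sports"])

print(organic, amplifier)  # 0.0 1.0
```

The point of the sketch is that no message content is examined at all; only the relationship between the sender and the audience is scored, which is what lets a behavioral approach avoid penalizing journalists who share the same material.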
While Harvey acknowledged that bots exist and pose a nonzero threat, she said the perception of bots far outstrips their actual presence.
“It is amazing you see people get in an argument and one says the other is just a bot. It is demonstrably not a bot,” said Harvey. If there is a silver lining to the assumption of bots, it’s an off-ramp from arguments. She continued, “There’s an increased exit path from disagreement that people take by saying the other person is a bot.”
Asked specifically if the NSA was doing anything about these problems on social media, Joyce replied, “it is not our focus to look into these companies,” and noted that, when it comes to bots or campaigns across platforms, automated tools are good at finding automated attackers.
Authenticity is another factor, besides content and behavior, that can be used to determine if an account on social media is earnestly or cynically spreading misinformation. Facebook sets a high standard of fidelity to formal identity (a standard not without its own controversies), for example, but when panelists were asked about the kind of permanent online identification required by China, no one was eager to adopt anything like that.
“I think there [are] a lot of societies where it’s important for people to have privacy. Important for their safety,” Joyce said. “I can’t imagine a department of truth, a ministry of truth. People have a right to free speech; a bot doesn’t. Where we can, we can take away an inauthentic voice.”
Social media itself creates kinds of power not previously seen. One moderator asked if Mark Zuckerberg is the most powerful person on Earth.
“You could make that argument,” said Singer. “He wrote some code for college and now he’s powerful because he’s a rule-maker.”
Singer went on to describe the case of Myanmar, where for a time Facebook was essentially the internet for the entire nation. Facebook was misused to coordinate the mass killings of the Rohingya people, and then after some time Facebook decided that the heads of state and generals in Myanmar could no longer use the most popular online platform within their country.
“That is an awesome kind of power. Creators of social media didn’t set out to have that kind of war and politics power,” said Singer. “What happens to the second generation when they realize that they have that power?”