© 2024 Kansas City Public Radio

What Can — Or Should — Internet Companies Do To Fight Terrorism?

After recent terrorist attacks, social media companies are under pressure to do more to stop messaging from terrorist groups.
Patrick George
Ikon Images/Getty Images

After the recent attacks in Paris and in San Bernardino, Calif., social media platforms are under pressure from politicians to do more to take down messages and videos intended to promote terrorist groups and recruit members.

Lawmakers in Congress are considering a bill that calls on President Obama to come up with a strategy to combat the use of social media by terrorist groups. Another measure, proposed in the Senate, would require Internet companies to report knowledge of terrorist activities to the government.

Obama himself has urged tech leaders to make it harder for terrorists "to use technology to escape from justice," and Democratic presidential candidate Hillary Clinton has recently said that social media companies can help by "swiftly shutting down terrorist accounts, so they're not used to plan, provoke or celebrate violence."

The Wall Street Journal is also reporting, citing an unnamed source, that the Department of Homeland Security is working on a plan to study social media posts as part of the visa application process before certain people are allowed to enter the country.

The companies say they already cooperate with law enforcement, and that the proposed legislation would do more harm than good.

Messages that threaten or promote terrorism already violate the usage rules of most social media platforms. Twitter, for instance, has teams around the world investigating reports of rule violations, and the company says it works with law enforcement entities when appropriate.

"Violent threats and the promotion of terrorism deserve no place on Twitter and our rules make that clear," Twitter said in a statement.

A major challenge is that social networks rely on their users to flag inappropriate content, in part because of the sheer quantity that is posted. Every minute, hundreds of hours of video may be uploaded to YouTube and thousands of photos to Facebook, making timely response very challenging.

And because identification ultimately relies on human judgment, some videos are harder to classify than others:

"There are videos of armed military-style training on YouTube, on Vimeo, on Facebook," says Nicole Wong, a former deputy chief technology officer in the Obama administration and executive at Twitter and Google. "Some of the videos taken by our servicemen in Afghanistan look surprisingly similar to videos taken by the PKK, which is a designated terrorist organization in Turkey."

So what if the process were automated? Social media companies already use sophisticated programs to identify images of child pornography by comparing them against a national database of known material. But no such database exists for terrorist images.
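The database-matching approach can be sketched in miniature. This is a hypothetical simplification: production systems use perceptual hashes designed to survive resizing and re-encoding, not exact cryptographic hashes, and the function and database names here are illustrative, not any real platform's API.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact cryptographic fingerprint of an upload's bytes.
    (Real matching systems use perceptual hashes instead, so that
    a resized or re-encoded copy of a known image still matches.)"""
    return hashlib.sha256(data).hexdigest()

# Hypothetical database of fingerprints of known prohibited images.
KNOWN_HASHES = {fingerprint(b"example-known-image-bytes")}

def is_known_image(data: bytes) -> bool:
    """True if the upload exactly matches a database entry."""
    return fingerprint(data) in KNOWN_HASHES
```

The sketch also shows why the approach depends on having a curated database in the first place: matching can only flag content someone has already identified and fingerprinted, which is precisely what is missing for terrorist imagery.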

And there's a bigger issue: What exactly constitutes terrorist content?

"There's no responsible social media company that wants to be a part of promoting violent extremism," Wong says. To her, a major reason why private companies shouldn't police social media for terrorist content is that "no one has come up with a sensible definition for what terrorist activity or terrorist content would be."

Efforts to legislate the problem run into similar criticism. For instance, the Senate bill that would require companies to report terrorist activity does not define terrorist activity, says Emma Llansó, director of the Free Expression Project at the Center for Democracy and Technology.

"This kind of proposal creates a lot of risks for individual privacy and free expression," she says.

Critics say this could open the door for governments elsewhere to demand reports of postings that they may consider threatening.

It's somewhat similar to an ongoing debate about the ability of government investigators to get access to encrypted communications: If the U.S. government asked for backdoors into these secured conversations, what would stop China, Russia or any other country from demanding the same kind of access?

Cisco Systems' new CEO, Chuck Robbins, spoke about this at a recent small breakfast attended by NPR's Aarti Shahani. He said the company's technologies don't and won't include backdoors, and that ultimately companies can't build their businesses around swings in public sentiment after terrorist attacks.

"Our technology is commercially available. ... We are not providing any capabilities that aren't well documented and understood. And [we] also operate within the regulations that every government has placed on the technology arena," he said.

"We're operating the way that the public would like for us to operate and we're operating within the construct of the regulatory environment that we live in."

Copyright 2020 NPR. To see more, visit https://www.npr.org.

NPR News' Brian Naylor is a correspondent on the Washington Desk. In this role, he covers politics and federal agencies.
Alina Selyukh is a business correspondent at NPR, where she follows the path of the retail and tech industries, tracking how America's biggest companies are influencing the way we spend our time, money, and energy.