
The Tech Industry and the Regulation of Online Terrorist Content: What Do Law Enforcement Think?

by Stuart Macdonald and Andrew Staniforth

The importance of tackling online terrorist propaganda is widely accepted, as is the central role of social media companies in responding to this challenge. In this short piece, we report some initial findings from a wider project on cooperation between social media companies and law enforcement.

Drawing on a set of seven semi-structured interviews with senior members of national and transnational law enforcement organisations, we describe how these individuals perceive law enforcement’s relationship with social media companies and how this relationship has evolved, and we identify some current and future challenges.

To begin, it is worth highlighting the importance that our interviewees attached to this issue. One remarked, “Almost every single individual that we charge and prosecute has a mountain of this material on their devices when they are arrested,” whilst another commented, “There isn’t much that goes on these days when it comes to serious crime or terrorism that doesn’t have a significant technology component, whatever that looks like.” So, whilst there was an acknowledgment that “proving a causal link between media and actual violence is very, very difficult,” “professional intuition” pointed to the practical influence of online terrorist content.

Some of our interviewees had worked in this field for more than a decade. They explained that they had seen a significant change in attitudes towards stakeholder cooperation. One described the “old culture” of counterterrorism policing as “you don’t engage with anybody.” Others talked about the change in the approach of social media companies. According to one interviewee, ten years ago these companies would not have been willing to even discuss the removal of content from their platforms: “They would not have opened the door.” For another, the relationship between law enforcement and social media companies had gone from being “transactional” to “transformational.” All interviewees described current relationships with most companies in positive terms that included: “good to excellent,” “fantastic” and “It’s the best it’s ever been.”

Interviewees emphasised the importance of empathy and trust in building conducive working relationships with social media companies. They highlighted the benefits of an open internet – “It’s one of the greatest inventions that humankind has ever come up with” – and the value of avoiding “the prying eyes of the government, the state and the security forces,” particularly in repressive regimes: “It’s a bit different if you’re communicating in a war zone which is controlled by a despot, the only way you can communicate is using this platform and then suddenly it gets shut down or it’s open to everybody to see what you’re doing.” They also stressed the need to understand social media companies’ “business model.” According to one interviewee: “I don’t think in my mind you can approach a company to ask them to help if you don’t make any effort to understand their point of view and their needs. And the reality is most companies are about making money, and that’s what they’re there for. So, therefore, you have to work with that understanding.”

Internet Referral Units had therefore adopted an approach based on voluntariness and education. “We’re not trying to browbeat them,” said one interviewee. “We don’t demand people take something down. We don’t threaten them with any legal action. We’re entirely based on basically asking nicely.” Underlying this was an effort to explain to companies the harm that online terrorist propaganda could potentially cause (“We were explaining what this content was and … the harm it would create”), as well as warning them of the potential reputational damage they could suffer if they failed to take action (“These companies are reliant on reputation, user base advertising. And if they are seen to either not do something or do something that the public perceive is incorrect, then this has a knock-on effect on their revenue”). Ultimately, then, the removal of online terrorist propaganda was presented as a shared objective, a common goal.

In recent years, there has been much discussion of possible regulatory approaches, with some countries introducing specific legal regimes (such as Germany’s NetzDG law[1]) and the U.K. contemplating the creation of an independent regulatory body with responsibility for “online harms.”[2] It was interesting, therefore, that our interviewees stated that, when approaching a social media company, they rarely made reference to the fact that the content in question violated national law. They focused instead on the fact that the content breached the company’s terms of service. In a similar vein, our interviewees expressed concerns about the utility of new legislation, particularly at a national level.

Some raised jurisdictional concerns: “The problem with that could then become that people move overseas or move servers, move their premises or their places of work … The fact is, if they’re elsewhere, it’s very difficult to do anything with that unless that regulator is then going to look to block platforms because they’re not based in the UK or they’re not complying.” Others raised technological ones: “The technology is outpacing how we think about managing these risks and issues.” Still others raised definitional issues: “I don’t know how you can actually clearly define the scope of what you’re tackling … I don’t see how legislation can actually capture that.” The most common concern, however, was that such legislation could impede voluntary cooperation and mark a return to the “transactional” relationship of the past. As one interviewee explained, “I think legislation may impact negatively on some of what are called lower-level relationships … I mean the ability for me to phone up Facebook and have a conversation or an easy conversation, I think it will be less amenable to that.” Another simply remarked, “The voluntary approach that we have works best.” The only justification any of our interviewees offered for new legislation was that it might increase public confidence in how this issue is being tackled, although even this interviewee remarked, “There are countries with very strong legislation and that doesn’t necessarily give people confidence.”

Whilst welcoming the progress that has been made in recent years, interviewees did identify some issues that have yet to be addressed adequately. Of particular concern were the challenges presented by smaller companies. Most smaller companies are also cooperative: “A lot of them are very keen to assist because they don’t want this on their site … The goal is to make money and make their site effective.” The difficulty these companies face is limited capacity and resources: “their ability to proactively review content to moderate or to remove is stymied by the fact that, you know, there are only so many hours in a day and they may not have the revenue streams to be able to employ somebody else.” With companies like this, interviewees explained that “We’ll look to have a chat, see how they’re set up and see how best either we can help or potentially look to feed them into some NGOs or non-police organisations like Tech Against Terrorism or the GIFCT.” But interviewees cautioned that there are some smaller companies that “are just not interested in cooperating on a voluntary basis or even, like in some extreme cases, they thought that they should not intervene because, for example, in the U.S. context, they were using the argument that the First Amendment protects freedom of speech and anything that goes on the Internet should be protected.”

Two further issues that interviewees identified were the potential loss of intelligence and access to data on encrypted services. In respect of the former, interviewees explained that the improvements in content moderation are a “two-edged sword.” When content is removed swiftly or blocked altogether, “it’s very difficult to detect the people who are putting it up there or the people who are contributing to it or the people who are sharing it.” One interviewee described content removal as a “delicate operational balance” because “social media provides a fantastic set of open-source information/potential intelligence.” Another referred to the loss of “historical sourcebooks,” stressing that this has implications not just for counterterrorism but for open-source investigations more generally. Turning to encrypted services, interviewees pointed out that the work of Internet Referral Units “doesn’t really cross into privacy issues.” They felt that social media companies’ cooperation is more limited in the context of investigations, particularly in respect of accessing data from end-to-end encrypted services. This was a source of some frustration. One interviewee asked why anyone would want to knowingly “provide a shield for anybody who wants to use the web of social media … for nefarious purposes.” For this interviewee, “It’s like allowing a hotel to open up in your hometown in which the only people allowed through the door are criminals and the only people who are not allowed in are law enforcement. Why would you do that to your community?”

These were by no means the only future challenges that our interviewees identified. They also pointed to the difficulties associated with right-wing extremism (“When you start dealing with the right-wing there’s no real clear definitions … Is it right-wing extremism or just a right-wing political view, which is completely legitimate to hold and espouse?”), the impact of decentralised platforms (“a completely different challenge … Like how do you cooperate with decentralised?”), frustration at platforms sometimes taking several days to carry out their “due diligence” before responding to a request, and the danger that existing cooperation is built too much on relationships between specific individuals who might one day move on. Nonetheless, our interviewees were generally optimistic about social media companies’ willingness to address these challenges, as typified by the following quote: “Our experience has been that they continue to push really, really hard to be better.”

References

[1] Echikson, W., & Knodt, O. (2018). Germany’s NetzDG: A key test for combatting online hate. CEPS Research Report No. 2018/09.

[2] H.M. Government (2019). Online Harms White Paper. London: The Stationery Office.

About Hedayah's Blog Series

Hedayah publishes a monthly blog series covering a range of topics related to countering violent extremism (CVE). These blogs highlight the latest trends and challenges faced by the CVE field and offer a unique perspective on topics that receive less attention in the international CVE space.

The authors of the blog posts are Hedayah’s staff, Hedayah’s Fellows, and guest experts. The opinions expressed in the blogs are the authors’ own and do not represent the views of Hedayah. We hope that these blogs will contribute to the conversation around CVE solutions and push forward the quest for more research and innovation in the field.