We can’t expect Facebook to stop terrorists

http://www.theguardian.com/commentisfree/2014/nov/26/cant-expect-facebook-stop-terrorists

In its recent report on the atrocious murder of Lee Rigby last May, the Intelligence and Security Committee noted that one of the perpetrators of the crime had earlier sent a message via social media that revealed his intention. This raises the question of the responsibility of companies like Facebook to monitor and report such postings to the authorities.

With about 5bn postings a day across a global user base of over 1.3 billion people, the task for Facebook in ensuring that it does not carry inappropriate content of any kind, not just that related to terrorism, is clearly well beyond the capacity of humans. Machines have to have clear instructions; algorithms are not good at nuance, nor do they examine context, so it is reasonable that Facebook should set the bar quite high.

So far as the grey area between freedom of speech and evidence of crime is concerned, there are two fundamental difficulties. Claiming that companies operating social media platforms have a responsibility to spot and report possible criminal intent raises a lot of questions. Who should decide what is reportable, and on what basis? Opinion will vary from country to country and culture to culture, and there will be many differences of opinion within each sub-category. It must therefore be up to politicians, after adequate public debate, to decide what obligations a company must accept in order to operate within their country, and to introduce laws accordingly. But the whole question of jurisdiction is enormously complicated by the transnational nature of the internet.

As a previous head of a United Nations taskforce on terrorist use of the internet, I can say that the likelihood of any international agreement, even on this narrow subset of internet use, is so far distant as to be invisible. There are just too many conflicting views on where to draw the boundary between personal freedoms and public security. Even were such an agreement to be reached, by the time it was incorporated into national law, and enforcement measures agreed, the original problem would have disappeared in a welter of new technology.

And here lies the second problem. It may be conceivable at present that technology exists that could allow a large company to monitor content and flag suspicious activity for human consideration, but if so, the development of privacy tools will soon outstrip it. Jim Comey, the director of the FBI, has already highlighted the fact that the planned encryption software on new Android phones and iPhones will not be breakable even by Google and Apple. In his words, this will effectively put people above the law.

This is a serious issue that merits considerable discussion, but there is a clear difference between a social responsibility and a legal requirement. Facebook and similar companies operate in a highly competitive commercial environment, and while no doubt eager to obey the rules, they are not in the security business and we would not want to entrust them with decisions on censorship. Lee Rigby’s murder was completely inexcusable on any grounds, but as the ISC report concluded, it was ultimately unpreventable, as have been other more recent attacks elsewhere in the world. The focus has to be as much on dealing with what drives such behaviour as on reaching out for new ways to prevent it.