We shouldn't expect Facebook to behave ethically

http://www.theguardian.com/technology/2014/jul/06/we-shouldnt-expect-facebook-to-behave-ethically


There are two interesting lessons to be drawn from the row about Facebook's "emotional contagion" study. The first is what it tells us about Facebook's users. The second is what it tells us about corporations such as Facebook.

In case you missed it, here's the gist of the story. The first thing users of Facebook see when they log in is their news feed, a list of status updates, messages and photographs posted by friends. The list that is displayed to each individual user is not comprehensive (it doesn't include all the possibly relevant information from all of that person's friends). But nor is it random: Facebook's proprietary algorithms choose which items to display, in a process that is sometimes called "curation". Nobody knows the criteria used by the algorithms – they are as much of a trade secret as those used by Google's page-ranking algorithm. All we know is that an algorithm decides what Facebook users see in their news feeds.

So far so obvious. What triggered the controversy was the discovery, via the publication of a research paper in the prestigious Proceedings of the National Academy of Sciences, that for one week in January 2012 Facebook researchers deliberately skewed what 689,003 Facebook users saw when they logged in. Some people saw content with a preponderance of positive and happy words, while others were shown content with more negative or sadder sentiments. The study showed that, when the experimental week was over, the unwitting guinea pigs were more likely to post status updates and messages that were (respectively) positive or negative in tone.

Statistically, the effect on users was relatively small, but the implications were obvious: Facebook had shown that it could manipulate people's emotions! And at this point the ordure hit the fan. Shock! Horror! Words such as "spooky" and "terrifying" were bandied about. There were arguments about whether the experiment was unethical and/or illegal, in the sense of violating the terms and conditions that Facebook's hapless users have to accept. The answers, respectively, are yes and no: yes because corporations don't do ethics, and no because Facebook's T&Cs require users to accept that their data may be used for "data analysis, testing, research".

Facebook's spin-doctors seem to have been caught off-guard, causing the company's chief operating officer, Sheryl Sandberg, to fume that the problem with the study was that it had been "poorly communicated". She was doubtless referring to the company's claim that the experiment had been conducted "to improve our services and to make the content people see on Facebook as relevant and engaging as possible. A big part of this is understanding how people respond to different types of content, whether it's positive or negative in tone, news from friends, or information from pages they follow."

Which being translated reads: "We intend to make sure that nothing that people see on Facebook reduces the probability that they will continue to log in. The experiment confirms our conjecture that negative content is bad news (which is why we only have a 'Like' button), and so we will configure our algorithms to make sure that happy talk continues to dominate users' news feeds."

When the story of this period comes to be written, one thing that will astonish historians is the complaisant ease with which billions of apparently sane people allowed themselves to be monitored and manipulated by government security agencies and giant corporations. I used to think that most Facebook users must have some conception of the extent to which they are being algorithmically managed; the outraged hoo-ha over this experiment might suggest otherwise. But I suspect that once the fuss has died down most users will continue to submit to the company's manipulation of their information flow and emotions. Those whom the gods wish to destroy, they first make naive.

The arguments about whether the experiment was unethical reveal the extent to which big data is changing our regulatory landscape. Many of the activities that large-scale data analytics now make possible are undoubtedly "legal" simply because our laws are so far behind the curve. Our data-protection regimes protect specific types of personal information, but data analytics enables corporations and governments to build up very revealing information "mosaics" about individuals by assembling large numbers of the digital traces that we all leave in cyberspace. And none of those traces has legal protection at the moment.

Besides, the idea that corporations might behave ethically is as absurd as the proposition that cats should respect the rights of small mammals. Cats do what cats do: kill other creatures. Corporations do what corporations do: maximise revenues and shareholder value while staying within the law. Facebook may be at the extreme end of corporate sociopathy, but really it is just an extreme illustration of the rule, not an exception to it.