What could Facebook target next? Our mental health data

Source: https://www.theguardian.com/commentisfree/2016/nov/01/facebook-target-mental-health-data-online
It used to be that the eyes were considered the window to the soul. In 2016, you might have better luck checking someone’s social media. The tiny details we share about our lives have blurred the lines between “online” and “real life” – our Facebook accounts even get “memorialised” when we die.
According to a study published this week in Lancet Psychiatry, these seemingly innocuous tidbits can actually lead to quite a comprehensive picture of who we are, at least in terms of our mental health. Researchers from the University of Cambridge and Stanford Business School argue that data from Facebook – the photos we upload, the statuses we share, the frequency and content of the messages we send to friends – is “more reliable” than offline self-reported information, which is often considered to be inadequate or incomplete when it comes to understanding how mental illness is affecting someone.
Status updates in particular, they say, can provide a “wealth of information” about users’ mental health. A language analysis algorithm can pick up symptoms of mental illness, and could even flag early warning signs for conditions such as depression or schizophrenia. Yet more algorithms, these analysing pictures for “emotional facial expressions”, could provide insights into offline behaviours. The next question is: what can we – and what can Facebook – do with that data?
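To make concrete just how little machinery is needed to start sorting people by what they post, here is a purely illustrative sketch of lexicon-based text flagging. It is not the researchers’ method, and the word list and threshold are invented for demonstration only, with no clinical validity – the point is simply that a few lines of code can label a status update.

```python
# Purely illustrative: a toy lexicon-based scorer for status-update text.
# This is NOT the Lancet Psychiatry study's method; the word list and
# threshold are invented for demonstration and have no clinical validity.
DEPRESSION_LEXICON = {"hopeless", "exhausted", "worthless", "alone", "numb"}

def flag_status(text: str, threshold: int = 2) -> bool:
    """Return True if a status contains at least `threshold` lexicon words."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return len(words & DEPRESSION_LEXICON) >= threshold

if __name__ == "__main__":
    print(flag_status("Feeling exhausted and alone again tonight"))  # True
    print(flag_status("Great gig last night with friends"))          # False
```

Real systems are far more sophisticated, but the asymmetry is the same: the person typing the status has no idea a score is being attached to it.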
Of course, a lot of the information the company has on us all is superficial at best – sure, it might be slightly uncanny to have advertisements served to us that perfectly reflect our taste in music or the TV shows we’ve discussed online, but it doesn’t really say anything very essential about who we are as people. Data about mental or physical health, however, cannot be treated so flippantly, either by those of us who are thoughtlessly supplying it or by the people collecting it. Discrimination against those with mental health problems is still rife; a recent NatCen British Social Attitudes survey, for example, found that 44% of people would be “uncomfortable” working with someone who’d experienced symptoms of psychosis. In legislative terms, this isn’t supposed to affect a person’s chances of employment or their employment rights. In reality, there is a persistent and pervasive culture of distrust around those with mental health problems.
What if an algorithmic branding of 'ill' was shared with the world without our knowledge or consent?
Understandably, many people choose not to share the status of their mental health with colleagues or friends. But such privacy may be a luxury when “emotional facial expressions” can be neatly scanned and categorised by an algorithm, or when our heartfelt statuses are simply part of an input-output exchange. What if employers were able to use the same technology to scan our private posts, monitoring what we say and how we say it to avoid taking a risk on someone they may see as a “liability”? What if the data wasn’t secure, and an algorithmic branding of “ill” was shared with the world without our knowledge or consent?
As the study suggests, the internet can be a vital tool in how people express themselves about their mental illness, from the grandiose horrors of a serious breakdown to the minutiae of living day-to-day with a chronic illness. We can also foster genuine connections with others who might be experiencing the same things. And the potential benefits of the findings are clear – not least in the way we could be able to reach people such as refugees, the homeless or the elderly who are often shut out of traditional mental health services. We may even be able to find new therapeutic routes or platforms with which to help people.
What we can’t do, however, is continue to proceed the way that we are. The ethics of this particular study aren’t in question; everybody involved consented to their data being used, after all. But what if it were to be used more broadly? The team suggests that detection of poor mental health could be a way for social networks to provide on-site support for users – in which case we may have a problem. For one thing, on a practical level, vulnerable users may not fully understand what their participation in such a scheme would mean, nor know the impact it could have on them. We need strict legislation about the ethics of gathering such sensitive data, and even stricter punishments should it be inadvertently or purposefully shared.
And, as we must never forget, Facebook makes money from our data. Slotting us into neat little boxes may be OK when it comes to things such as gender or age – what does that say about us, really? – but the idea of being neatly categorised as “mentally ill” or “mentally well” simply because of the things we choose to share online is both unethical and potentially dangerous. At best, of course, we could receive help that is currently unavailable to us. At worst? It doesn’t bear thinking about.