What’s wrong with AI? Try asking a human being
Amazon has apparently abandoned an AI system aimed at automating its recruitment process. The system gave job candidates scores ranging from one to five stars, a bit like shoppers rating products on the Amazon website.
The trouble was, the program tended to give five stars to men and one star to women. According to Reuters, it “penalised résumés that included the word ‘women’s’, as in ‘women’s chess club captain’” and marked down applicants who had attended women-only colleges.
It wasn’t that the program was malevolently misogynistic. Rather, like all AI programs, it had to be “trained” by being fed data about what constituted good results. Amazon, naturally, fed it with details of its own recruitment programme over the previous 10 years. Most applicants had been men, as had most recruits. What the program learned was that men, not women, were good candidates.
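The dynamic is easy to reproduce in miniature. What follows is an illustrative sketch, not Amazon’s system: it trains an off-the-shelf scikit-learn text classifier on a tiny, invented hiring history in which résumés mentioning “women’s” were rejected, then inspects the weight the model learns for that word.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Invented miniature hiring history: 1 = hired, 0 = rejected.
    # The bias lives in these labels, not in the algorithm.
    resumes = [
        "chess club captain, software engineer",
        "software engineer, team lead",
        "women's chess club captain, software engineer",
        "women's coding society lead, software engineer",
    ]
    hired = [1, 1, 0, 0]

    vectoriser = CountVectorizer()            # bag-of-words features
    X = vectoriser.fit_transform(resumes)
    model = LogisticRegression().fit(X, hired)

    # The default tokeniser reduces "women's" to the token "women".
    weights = dict(zip(vectoriser.get_feature_names_out(), model.coef_[0]))
    print(weights["women"])  # negative: the word alone drags a score down

Nothing in the code mentions gender; the prejudice arrives entirely through the historical labels the model is trained on, which is exactly the pattern Reuters described.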
It’s not the first time AI programs have been shown to exhibit bias. Software used in the US justice system to assess a criminal defendant’s likelihood of reoffending is more likely to judge black defendants as potential recidivists. A Canadian auditory test for neurological diseases only worked with English speakers. Facial recognition software is poor at recognising non-white faces. A Google photo app even labelled African Americans “gorillas”.
All this should teach us three things. First, the issue here is not to do with AI itself, but with social practices. The biases are in real life.
Second, the problem with AI arises when we think of machines as being objective. A machine is only as good as the humans programming it.
And third, while there are many circumstances in which machines are better, especially where speed is paramount, humans can judge social context in a way no machine can. We may be slow and fallible, but we also have a sense of right and wrong and social means of challenging bias and injustice. We should never deprecate that.
• Kenan Malik is an Observer columnist