From whong to quingel: the science of funny words
Without scrolling down, have a think about which word in these pairs you find funnier.
Quingel v Heashes
Prousup v Mestins
Finglam v Cortsio
Witypro v Octeste
Rembrob v Sectori
Pranomp v Anotain
Fityrud v Tessina
If you’re like most people, it will be the first in each pair. Researchers led by Chris Westbury at the University of Alberta found that 56 English-speaking subjects rated those on the left as funnier than those on the right. What’s amazing is that these nonsense words (NWs) were not designed by a human being with their potential for comedy in mind. They were produced by a computer program using a simple algorithm.
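The article doesn’t describe the generator itself, so purely as a toy illustration (a sketch in Python, not the paper’s actual method), something in this spirit would churn out candidate NWs:

```python
import random

# A toy sketch only: NOT the paper's generator. We simply alternate
# consonants and vowels to get pronounceable seven-letter strings.
CONSONANTS = "bcdfghjklmnprstvw"
VOWELS = "aeiou"

def nonsense_word(length: int = 7) -> str:
    """Alternate consonants and vowels, starting with a consonant."""
    return "".join(
        random.choice(VOWELS if i % 2 else CONSONANTS) for i in range(length)
    )

print([nonsense_word() for _ in range(5)])  # e.g. ['bakoris', 'wodufem', ...]
```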
It looks like Westbury et al, whose paper appears in the January 2016 edition of the Journal of Memory and Language, have developed a reliable method of machine-generating humour. How on earth did they do it?
Well, what they’ve found is a strong inverse correlation between funniness and a property called entropy. Here, entropy expresses how typical the letters in an NW are – the less commonly its letters are used in English, the lower the NW’s total entropy. To put it another way, the less “wordy” these NWs are, the more they strike us as humorous.
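To see how such a measure might work in practice, here’s a minimal sketch in Python. It assumes a standard table of approximate English letter frequencies and averages the log-probability of each letter; both choices are mine for illustration, not necessarily the paper’s exact formula:

```python
from math import log2

# Approximate English letter frequencies (a standard reference table,
# used here for illustration; not the counts from the paper).
LETTER_FREQ = {
    'a': 0.082, 'b': 0.015, 'c': 0.028, 'd': 0.043, 'e': 0.127,
    'f': 0.022, 'g': 0.020, 'h': 0.061, 'i': 0.070, 'j': 0.002,
    'k': 0.008, 'l': 0.040, 'm': 0.024, 'n': 0.067, 'o': 0.075,
    'p': 0.019, 'q': 0.001, 'r': 0.060, 's': 0.063, 't': 0.091,
    'u': 0.028, 'v': 0.010, 'w': 0.024, 'x': 0.002, 'y': 0.020,
    'z': 0.001,
}

def letter_entropy(word: str) -> float:
    """Mean log-probability of a word's letters: rarer letters give a
    lower (more negative) score, i.e. 'lower entropy' in the article's sense."""
    letters = [c for c in word.lower() if c.isalpha()]
    return sum(log2(LETTER_FREQ[c]) for c in letters) / len(letters)

for pair in [("quingel", "heashes"), ("prousup", "mestins")]:
    print({w: round(letter_entropy(w), 2) for w in pair})
# quingel ≈ -5.2 v heashes ≈ -3.7: the funnier word is the less "wordy" one.
```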
Entropy is a very precisely measurable variable, and it’s extraordinary that by manipulating it alone, the researchers saw a consistent change in another variable – funniness – a quality you might otherwise think was subjective.
The hypothesis they set out to test is a pretty old one. In fact, it goes all the way back to Schopenhauer, who wrote about humour in the early 19th century. He thought that it came about when expectations were violated. Read the following joke: “When a clock is hungry, it goes back four seconds”. We expect that in a sentence about a clock, the phrase “four seconds” will relate to time. But in this case, it’s about getting another plateful of food. The incongruity is funny – clocks don’t eat. But while the phrase “When a clock is hungry, it rides a horse” presents a similarly bizarre juxtaposition, it isn’t very funny. So it’s not just the incongruity, it’s the violation of our expectation that really lands the joke.
How does this relate to NWs? Well, it’s simply that when we read a word on the page, or hear it, we expect it to be quite like the ones we’ve already encountered. The more it violates our expectations – the less “wordy” it is – the funnier we find it. The authors also ran a quick analysis of some of the NWs used in the Dr Seuss books – like rumbus, skritz and yuzz-a-ma-tuzz – and found that they score low on entropy, making them particularly funny.
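Continuing the sketch above (it reuses the illustrative letter_entropy function and frequency table), the Seuss coinages do indeed come out at the bottom of the scale:

```python
# Continuing the sketch above: letter_entropy strips hyphens via its
# isalpha() filter, so Seussian coinages can be scored directly.
for w in ["rumbus", "skritz", "yuzz-a-ma-tuzz"]:
    print(w, round(letter_entropy(w), 2))
# skritz ≈ -5.4 and yuzz-a-ma-tuzz ≈ -6.5: well below the pairs scored above.
```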
The computer program generated a lot of words that resembled rude ones. These were rated as highly funny, but the experimenters wanted to home in on one easily measurable variable. Rudeness introduced a whole other set of complications – why we find words related to sex embarrassing, for example.
That meant that the database they tested had to be stripped of these gems:
Whong
Dongl
Shart
Focky
Clunt
Ipple
And so on. Westbury et al narrowed their focus in order to produce clear results. But perhaps their findings can be used to help us understand why these nearly-rude words are some of the funniest around.
I think they also violate our expectations – and the stakes are higher, because of the added frisson of a link to sex or bodily functions. Take clunt. The association with one of the strongest taboo words in the English language is clear. The expectation that you’ve read or uttered a rude word is raised – and then violated, because in fact it’s harmless nonsense. There’s a sense of relief – of getting away with it.
Later in their paper, Westbury et al discuss possible evolutionary explanations for this model of humour. They point out that anomalies are often experienced as potential threats. An anomaly could mean something’s about to go wrong – that movement in the bushes could be a snake or a dangerous predator. They suggest that one function of humour is to let others know that, having detected an anomaly, you’ve quickly realised it’s harmless. This is “the kind of humour that makes us laugh involuntarily after realising that the man we thought we glimpsed lurking in our backyard is just the neighbour’s cat”. Strange as it may seem, that same mechanism may be activated when you see an unlikely looking word or a highly taboo one – you experience relief as you recognise that it’s completely harmless – just a joke.
You weren’t expecting that, were you? We’ll end on llamas, because Westbury et al also noted that expectation-violating combinations of words can be very funny. Compare “existential llama” to “angry llama”. According to the authors, the first generates fewer than 100 results on Google, meaning it’s a low-entropy (highly unlikely) combination. The second is high-entropy – with more than 13,000 results.
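Using the figures quoted above as stand-in frequencies (the normalisation is just an illustrative choice, not the authors’ calculation), the same contrast falls out in a couple of lines:

```python
from math import log2

# Google hit counts quoted in the article, treated as rough frequencies.
hits = {"existential llama": 100, "angry llama": 13_000}
total = sum(hits.values())
for phrase, n in hits.items():
    print(f"{phrase}: log-probability = {log2(n / total):.2f}")
# "existential llama" comes out near -7, "angry llama" near 0:
# the rarer pairing is the low-entropy, funnier one.
```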
This reminds me of Googlewhacks. Remember them? The idea was that if you could come up with a sequence of two words that Google could only find one instance of across the entire web, that was a Googlewhack. A site was set up to record them – and Dave Gorman went on a Googlewhack adventure. They’re all very low-entropy, and often funny: “illuminatus ombudsman”, “squirrelling dervishes”, “insolvent pachyderms”, “bamboozle guzzler”.
I fear that the age of Googlewhacks may be over – the site is now dormant, and some believe that changes to Google’s algorithm mean that they’re now impossible to generate. As an existential loss, it’s enough to make any llama angry.