How AI pales in the face of human intelligence and ingenuity
Sheila Hayman says artificial intelligence can’t match the qualities that we have just by virtue of being alive, and Graham Taylor shows how AI can summarise but not reason
Gary Marcus is right to point out – as many of us have for years – that just scaling up compute size is not going to solve the problems of generative artificial intelligence (When billion-dollar AIs break down over puzzles a child can do, it’s time to rethink the hype, 10 June). But he doesn’t address the real reason why a child of seven can solve the Tower of Hanoi puzzle that broke the computers: we’re embodied animals and we live in the world.
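The puzzle’s mechanical simplicity is worth spelling out: the Tower of Hanoi has a standard recursive solution that fits in a few lines, which is what makes the models’ failure on it so striking. Below is a minimal illustrative sketch in Python; it is not taken from the letter or the Apple study, and the function name and disc count are only examples.

def hanoi(n, source, target, spare, moves=None):
    # Standard recursion: move n discs from source peg to target peg.
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)   # clear the way for the largest disc
    moves.append((source, target))               # move the largest remaining disc
    hanoi(n - 1, spare, target, source, moves)   # restack the smaller discs on top of it
    return moves

# Seven discs take 2**7 - 1 = 127 moves -- a long but entirely rote sequence.
print(len(hanoi(7, "A", "C", "B")))  # prints 127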
All living things are born to explore, and we do so with all our senses, from birth. That gives us a model of the world and everything in it. We can infer general truths from a few instances, which no computer can do.
A simple example: to teach a large language model “cat”, you have to show it tens of thousands of individual images of cats – being the way they are, they may be up a tree, in a box, or hiding in a roll of carpet. And even then, if it comes upon a cat playing with a bath plug, it may fail to recognise it as a cat.
A human child can be shown two or three cats, and from interacting with them, it will recognise any cat as a cat, for life.
Apart from anything else, this embodied, evolved intelligence makes us incredibly energy-efficient compared with a computer. The computers that drive an autonomous car use anything upwards of a kilowatt of energy, while a human driver runs on twentysomething watts of renewable power – and we don’t need an extra bacon sandwich to remember a new route.
At a time of climate emergency, the vast energy demands of this industry might perhaps lead us to recognise, and value, the extraordinary economy, versatility, plasticity, ingenuity and creativity of human intelligence – qualities that we all have simply by virtue of being alive.
Sheila Hayman
Advisory board member, Minderoo Centre for Technology & Democracy, Cambridge University
It comes as no surprise to me that Apple researchers have found “fundamental limitations” in cutting-edge artificial intelligence models (Advanced AI suffers ‘complete accuracy collapse’ in face of complex problems, study finds, 9 June). AI in the form of large reasoning models or large language models (LLMs) is far from being able to “reason”. This can be tested simply by asking ChatGPT or similar: “If 9 plus 10 is 18, what is 18 less 10?” The response today was 8. At other times, I’ve found that it provided no definitive answer.
This highlights that AI does not reason – currently, it is a combination of brute force and logic routines that essentially reduce the brute-force search. A term that should be given more publicity is ANI – artificial narrow intelligence – which describes systems like ChatGPT that are excellent at summarising pertinent information and rewording sentences, but are far from being able to reason.
But note that the more often LLMs are asked similar questions, the more likely they are to provide a reasonable response. Again, though, this is not reasoning; it is model training.
Graham Taylor
Mona Vale, New South Wales, Australia
Have an opinion on anything you’ve read in the Guardian today? Please email us your letter and it will be considered for publication in our letters section.