Australia considers ban on ‘high-risk’ uses of AI such as deepfakes and algorithmic bias
New report warns of technology’s ability to ‘influence democratic processes or cause other deceit’ as well as ‘target minority racial groups’
The Albanese government is considering a ban on “high-risk” uses of artificial intelligence and automated decision-making, warning of potential harms including the creation of deepfakes and algorithmic bias.
On Thursday, the industry and science minister, Ed Husic, will release a report on the emerging technologies by the National Science and Technology Council and a discussion paper on how to achieve “safe and responsible” AI.
Generative AI, in which AI creates new content such as text, images, audio and code, has experienced a surge in uptake through “large language model” programs such as ChatGPT, Google’s chatbot Bard and Microsoft Bing’s chat feature.
While universities and education authorities grapple with the new technology’s application in student cheating, the industry department’s discussion paper warns AI has a range of “potentially harmful purposes”.
These include “generating deepfakes to influence democratic processes or cause other deceit, creating misinformation and disinformation, [and] encouraging people to self-harm”.
“Algorithmic bias is often raised as one of the biggest risks or dangers of AI,” it said, with the potential to prioritise male over female candidates in recruitment or to target minority racial groups.
The paper also noted positive applications of AI already in use such as analysing medical images, improving building safety and cost savings in provision of legal services. The implications of AI for the labour market, national security and intellectual property were outside its scope.
The NSTC report found that “the concentration of generative AI resources within a small number of large multinational and primarily US-based technology companies poses potential risks to Australia”.
While Australia has some advantages in computer vision and robotics, its “core fundamental capacity in [large language models] and related areas is relatively weak” due to “high barriers to access”.
The paper sets out a range of responses from around the world: from voluntary approaches in Singapore to greater regulation in the EU and Canada.
“There is a developing international direction towards a risk-based approach for governance of AI,” it said.
The paper said the government will “ensure there are appropriate safeguards, especially for high-risk applications of AI and [automated decision-making]”.
The term is almost as old as electronic computers themselves, coined in 1955 by a team including legendary Harvard computer scientist Marvin Minsky. With no strict definition of the phrase, and the lure of billions of dollars of funding for anyone who sprinkles AI into pitch documents, almost anything more complex than a calculator has been called artificial intelligence by someone.
AI is already in our lives in ways you may not realise. The special effects in some films and voice assistants like Amazon’s Alexa all use simple forms of artificial intelligence. But in the current debate, AI has come to mean something else.
It boils down to this: most old-school computers do what they are told. They follow instructions given to them in the form of code. But if we want computers to solve more complex tasks, they need to do more than that. To be smarter, we are trying to train them how to learn in a way that imitates human behaviour.
Computers cannot be taught to think for themselves, but they can be taught how to analyse information and draw inferences from patterns within datasets. And the more you give them – computer systems can now cope with truly vast amounts of information – the better they should get at it.
The most successful versions of machine learning in recent years have used a system known as a neural network, which is modelled at a very simple level on how we think a brain works.
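To make that concrete, here is a minimal sketch, assuming Python and the NumPy library, of a tiny neural network learning a pattern from examples. The XOR task and every name in the code are illustrative choices for this article, not anything drawn from the government report.

import numpy as np

rng = np.random.default_rng(0)

# Example data: the XOR pattern. No line below encodes the rule
# "output 1 when exactly one input is 1"; the network must infer
# it from these four examples alone.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of four units, starting from random weights.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5  # learning rate: how far each update nudges the weights
for _ in range(20_000):
    # Forward pass: the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: measure the error and shift the weights in
    # the direction that reduces it (gradient descent).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should land close to [[0], [1], [1], [0]]

Nothing in the code states the rule the network ends up following; the behaviour emerges from repeated exposure to the examples. That is the sense in which systems like the chatbots described above “learn” from data rather than follow hand-written instructions, only at a vastly greater scale.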
In a snap eight-week consultation, the paper asked stakeholders “whether any high-risk AI applications or technologies should be banned completely” and, if so, what criteria should be applied for banning them.
But the paper noted that Australia may need to harmonise its governance with major trading partners in order to take “advantage of AI-enabled systems supplied on a global scale and foster the growth of AI in Australia”.
The paper asks stakeholders to consider “the implications for Australia’s domestic tech sector and our current trading and export activities with other countries if we took a more rigorous approach to ban certain high-risk activities”.
Husic said “using AI safely and responsibly is a balancing act the whole world is grappling with at the moment”.
“The upside is massive, whether it’s fighting superbugs with new AI-developed antibiotics or preventing online fraud,” he said in a statement.
“But as I have been saying for many years, there needs to be appropriate safeguards to ensure the safe and responsible use of AI.
“Today is about what we do next to build trust and public confidence in these critical technologies.”
In the budget the federal government invested $41m in the National AI Centre, which sits within the science agency CSIRO, and in a new Responsible AI Adopt program for small and medium enterprises.
The paper noted that, since Australia’s laws are “technology neutral”, AI is already regulated to an extent by existing consumer protection, online safety, privacy and criminal laws.
For example, the hotel booking website Trivago has paid penalties for algorithmic decision-making that misled consumers into thinking they were offered the cheapest rates.
In April a regional Australian mayor said he may sue OpenAI if it does not correct ChatGPT’s false claims that he had served time in prison for bribery, in what would be the first defamation lawsuit against the automated text service.
In May the eSafety commissioner warned that generative AI programs could be used to automate child grooming by predators.
The Labor MP Julian Hill, who warned about uncontrollable military applications of AI in parliament in February, has called for a new Australian AI Commission to regulate AI.