
You can find the current article at its original source at https://www.bbc.co.uk/news/technology-65855333


What is AI, how does it work and what can it be used for?
Artificial intelligence (AI) technology is developing at high speed, transforming many aspects of modern life.
However, some experts fear it could be used for malicious purposes.
The UK is hosting a global meeting of world leaders and tech bosses including Elon Musk to discuss how highly advanced AIs can be used safely.
What is AI and how does it work?
AI allows computers to learn and solve problems almost like a person.
AI systems are trained on huge amounts of information and learn to identify the patterns in it, in order to carry out tasks such as having a human-like conversation, or predicting a product an online shopper might buy.
As well as data, AI relies on algorithms - lists of rules which must be followed in the correct order to complete a task.
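As an illustrative sketch only (the article does not describe any particular system), the idea of combining data with a list of rules to spot patterns can be shown in a few lines of Python. This toy "recommender" counts which product most often follows another in a shopper's history — real systems are vastly more sophisticated, but the data-plus-rules principle is the same:

```python
from collections import Counter

def train(purchase_history):
    """Learn a simple pattern: which product most often follows each product."""
    follows = {}
    for prev, nxt in zip(purchase_history, purchase_history[1:]):
        follows.setdefault(prev, Counter())[nxt] += 1
    return follows

def predict_next(model, last_purchase):
    """Recommend the product most frequently bought after the last purchase."""
    if last_purchase not in model:
        return None  # no pattern learned for this product
    return model[last_purchase].most_common(1)[0][0]

history = ["tea", "biscuits", "tea", "biscuits", "tea", "milk"]
model = train(history)
print(predict_next(model, "tea"))  # → "biscuits"
```

Here the "data" is the purchase history and the "algorithm" is the fixed sequence of steps (count the pairs, then pick the most common follower) that turns it into a prediction.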
Watch: What is artificial intelligence?
The technology is behind the voice-controlled virtual assistants Siri and Alexa, and helps Facebook and X - formerly known as Twitter - decide which social media posts to show users.
AI lets Amazon analyse customers' buying habits to recommend future purchases - and the firm also uses the technology to crack down on fake reviews.
A simple guide to help you understand AI
Bill Gates: AI most important tech advance in decades
What are AI programs like ChatGPT and DALL-E?
ChatGPT and DALL-E are examples of what is called "generative" AI.
These programs learn from vast quantities of data, such as online text and images, to generate new content which feels like it has been made by a human.
So-called "chatbots" - like ChatGPT - can have text conversations.
Other AI programs like DALL-E can create images from simple text instructions.
Generative AIs can also make videos and even produce music in the style of famous musicians.
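The article does not explain the models behind these programs, but the core idea - learn patterns from source text, then generate new text that follows those patterns - can be sketched with a toy word-chain generator in Python. (Real generative AIs use neural networks trained on enormous datasets; this is only a minimal illustration of the principle.)

```python
import random

def build_model(text):
    """Learn which words tend to follow each word in the training text."""
    words = text.split()
    model = {}
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, []).append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Produce new text by repeatedly picking a plausible next word."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

model = build_model("the cat sat on the mat and the cat slept")
print(generate(model, "the"))
```

The output is never a copy of the training text, yet every word transition was seen in it - which also hints at why such systems can reproduce whatever biases their source material contains.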
But these programs sometimes generate inaccurate answers and images, and can reproduce the bias contained in their source material, such as sexism or racism.
Many artists, writers and performers have warned that such AIs allow others to exploit and imitate their work without payment.
'Most of our friends use AI in schoolwork'
Can you pass your degree using ChatGPT?
Why do critics fear AI could be dangerous?
Many experts are surprised by how quickly AI has developed, and fear its rapid growth could be dangerous. Some have even said AI research should be halted.
Earlier in October, the UK government published a report which said AI might soon assist hackers to launch cyberattacks or help terrorists plan chemical attacks.
Some experts even worry that in the future, super-intelligent AIs could make humans extinct. In May, the US-based Center for AI Safety's warning about this threat was backed by dozens of leading tech specialists.
Similar fears are shared by two of the three scientists known as the godfathers of AI for their pioneering research, Geoffrey Hinton and Yoshua Bengio.
But the other - Yann LeCun - dismissed the idea that a super-smart AI might take over the world as "preposterously ridiculous".
US comedian Sarah Silverman is unhappy about her writing allegedly being used to train AIs
In June, the EU's tech chief Margrethe Vestager told the BBC that AI's potential to amplify bias or discrimination was a more pressing concern than futuristic fears about an AI takeover.
In particular, she worries about the role AI could play in making decisions that affect people's livelihoods, such as loan applications.
Others criticise AI's environmental impact.
Powerful AI systems use a lot of electricity: one researcher suggests that, by 2027, they could collectively consume as much each year as a small country like the Netherlands.
What rules are in place at the moment about AI?
Governments around the world are wrestling with how to regulate AI. In the EU, the Artificial Intelligence Act, when it becomes law, will impose strict controls on high-risk systems.
US President Joe Biden has also announced measures to deal with a range of problems that AI might cause. He vowed to "harness the power of AI while keeping Americans safe".
The UK government previously ruled out setting up a dedicated AI watchdog.
But Prime Minister Rishi Sunak wants the UK to be a leader in AI safety, and is hosting a global summit at Bletchley Park where firms and governments are discussing how to tackle the risks posed by the technology.
Twenty-eight nations at the summit - including the UK, US, the European Union and China - have signed a statement about the future of AI called the Bletchley Declaration.
This acknowledges the risks that advanced AIs could be misused - for example to spread misinformation - but says they can also be a force for good.
The signatories resolve to work together to ensure AI is trustworthy and safe.
In a recorded address, King Charles told attendees that the risks posed by AI must be tackled with "a sense of urgency, unity and collective strength".
Can Sunak's big summit save us from AI nightmare?
Why making AI safe isn't as easy as you might think
Which jobs are at risk because of AI?
A report by investment bank Goldman Sachs suggested that AI could replace the equivalent of 300 million full-time jobs across the globe.
That equates to a quarter of all the work humans currently do in the US and Europe. It concluded many administrative, legal, architecture, and management roles could be affected.
But it also said AI could boost the global economy by 7%.
The tech has already been used to help doctors spot breast cancers, and to develop new antibiotics.
BBC Work in Progress: How generative AI could change hiring as we know it
Is AI about to transform the legal profession?
Related Topics
Elon Musk
Artificial intelligence
Rishi Sunak