You can find the current article at its original source at https://www.bbc.co.uk/news/technology-65855333

What is AI, and how do programmes like ChatGPT and DeepSeek work?
Artificial intelligence (AI) has increasingly become part of everyday life over the past decade.
It is used for everything from personalising social media feeds to powering medical breakthroughs.
But as big tech firms and governments vie to be at the forefront of AI's development, critics have expressed caution over its potential misuse, ethical complexities and environmental impact.
What is AI and what is it used for?
AI allows computers to learn and solve problems in ways that can seem human.
Computers cannot think, empathise or reason.
However, scientists have developed systems that can perform tasks which usually require human intelligence, trying to replicate how people acquire and use knowledge.
AI programmes can process large amounts of data, identify patterns and follow detailed instructions about what to do with that information.
Watch: What is artificial intelligence?
This could be trying to anticipate what product an online shopper might buy, based on previous purchases, in order to recommend items.
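As a rough illustration of this kind of pattern-finding (a toy sketch, not any retailer's actual system; the baskets and item names are invented), a minimal recommender can count which items appear together in past purchases and suggest the most frequent companions:

```python
from collections import Counter
from itertools import permutations

# Hypothetical purchase histories -- invented data for illustration only.
baskets = [
    ["kettle", "mug", "tea"],
    ["mug", "tea", "biscuits"],
    ["kettle", "tea"],
]

# "Learn the pattern": count how often each ordered pair of items
# appears in the same basket.
pair_counts = Counter()
for basket in baskets:
    for a, b in permutations(basket, 2):
        pair_counts[(a, b)] += 1

def recommend(item, k=2):
    """Suggest the k items most often bought alongside `item`."""
    scored = [(other, n) for (a, other), n in pair_counts.items() if a == item]
    return [other for other, _ in sorted(scored, key=lambda x: -x[1])[:k]]

print(recommend("kettle"))  # -> ['tea', 'mug']
```

Real recommendation systems use far larger datasets and more sophisticated statistical models, but the underlying idea is the same: find regularities in past behaviour and use them to predict future choices.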
The technology is also behind voice-controlled virtual assistants like Apple's Siri and Amazon's Alexa, and is being used to develop systems for self-driving cars.
AI also helps social platforms like Facebook, TikTok and X decide what posts to show users. Streaming services Spotify and Deezer use AI to suggest music.
Scientists are also using AI to help spot cancers, speed up diagnoses and identify new medicines.
Computer vision, a form of AI that enables computers to detect objects or people in images, is being used by radiographers to help them review X-ray results.
A simple guide to help you understand AI
Five things you really need to know about AI
What are generative AI programs like ChatGPT, DeepSeek and Midjourney?
Generative AI is used to create new content which can feel like it has been made by a human.
It does this by learning from vast quantities of existing data such as online text and images.
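To make that idea concrete, here is a deliberately tiny sketch of learning from existing text to generate new text: a bigram model trained on an invented ten-word corpus. Real systems use neural networks and vastly more data, but the principle of producing plausible continuations learned from examples is the same.

```python
import random
from collections import defaultdict

# Tiny invented training corpus -- real systems learn from billions of words.
corpus = "the cat sat on the mat the cat saw the dog".split()

# "Training": record which word follows each word in the corpus.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start, length=6, seed=0):
    """Generate new text by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:  # no known continuation, stop early
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

The output recombines fragments of the training text into sequences that were never written as such, which is also why such models can fluently assert things that are not true.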
ChatGPT and Chinese rival DeepSeek's chatbot are two widely-used generative AI tools. Midjourney can create images from simple text prompts.
So-called chatbots such as Google's Gemini or Meta AI can hold text conversations with users.
Elon Musk's generative AI chatbot Grok can generate images for users who pay for X (formerly Twitter).
Generative AI can also be used to make high-quality videos and music.
Songs mimicking the style or sound of famous musicians have gone viral, sometimes leaving fans confused about their authenticity.
Why is AI controversial?
While acknowledging AI's potential, some experts are worried about the implications of its rapid growth.
The International Monetary Fund (IMF) has warned AI could affect nearly 40% of jobs, and worsen financial inequality.
Prof Geoffrey Hinton, a computer scientist regarded as one of the "godfathers" of AI development, has expressed concern that powerful AI systems could even make humans extinct - a fear dismissed by his fellow "AI godfather", Yann LeCun.
Critics also highlight the tech's potential to reproduce biased information, or discriminate against some social groups.
This is because much of the data used to train AI comes from public material, including social media posts or comments, which can reflect biases such as sexism or racism.
Facebook apology as AI labels black men 'primates'
Twitter finds racial bias in image-cropping AI
And while AI programmes are growing more adept, they are still prone to errors. Generative AI systems are known to "hallucinate", asserting falsehoods as fact.
Apple halted a new AI feature in January after it incorrectly summarised news app notifications.
The BBC complained about the feature after Apple's AI falsely told readers that Luigi Mangione - the man accused of killing UnitedHealthcare CEO Brian Thompson - had shot himself.
Google has also faced criticism over inaccurate answers produced by its AI search overviews.
This has added to concerns about the use of AI in schools and workplaces, where it is increasingly used to help summarise texts, write emails or essays, and fix bugs in code.
There are worries about students using AI technology to "cheat" on assignments, or employees "smuggling" it into work.
Writers, musicians and artists have also pushed back against the technology, accusing AI developers of using their work to train systems without consent or compensation.
Billie Eilish was among 200 artists who called for an end to "predatory" use of AI in music in an open letter.
Thousands of creators - including Abba singer-songwriter Björn Ulvaeus, writers Ian Rankin and Joanne Harris and actress Julianne Moore - signed a statement in October 2024 calling AI a "major, unjust threat" to their livelihoods.
Billie Eilish and Nicki Minaj want stop to 'predatory' music AI
AI-written book shows why the tech 'terrifies' creatives
How does AI impact the environment?
It is not clear how much energy AI systems use, but some researchers estimate the industry as a whole could soon consume as much as the Netherlands.
Creating the powerful computer chips needed to run AI programmes also takes lots of power and water.
Demand for generative AI services has meant an increase in the number of data centres.
These huge halls - housing thousands of racks of computer servers - use substantial amounts of energy and require large volumes of water to keep them cool.
Some large tech companies have invested in ways to reduce or reuse the water needed, or have opted for alternative methods such as air-cooling.
However, some experts and activists fear that AI will worsen water supply problems.
The BBC was told in February that government plans to make the UK a "world leader" in AI could put already stretched supplies of drinking water under strain.
In September 2024, Google said it would reconsider proposals for a data centre in Chile, which has struggled with drought.
Electricity grids creak as AI demands soar
What rules are in place for AI?
Some governments have already introduced rules governing how AI operates.
The EU's Artificial Intelligence Act places controls on high-risk systems used in areas such as education, healthcare, law enforcement or elections. It bans some AI uses altogether.
Generative AI developers in China are required to safeguard citizens' data, and promote transparency and accuracy of information. But they are also bound by the country's strict censorship laws.
In the UK, Prime Minister Sir Keir Starmer has said the government "will test and understand AI before we regulate it".
Both the UK and US have AI Safety Institutes that aim to identify risks and evaluate advanced AI models.
In 2024 the two countries signed an agreement to collaborate on developing "robust" AI testing methods.
However, in February 2025, neither country signed an international AI declaration which pledged an open, inclusive and sustainable approach to the technology.
Several countries including the UK are also clamping down on use of AI systems to create deepfake nude imagery and child sexual abuse material.
Man who made 'depraved' child images with AI jailed
Inside the deepfake porn crisis engulfing Korean schools