EU regulates AI
Chatbots and deepfake generators are required to label their creations and share data about their algorithms
The EU on Friday unveiled its first-ever legislative package attempting to regulate the production and use of artificial intelligence. The AI Act purports to take a “risk-based approach” to the technology, prioritizing restrictions on those aspects believed to pose the most danger to society.
The legislation introduces transparency requirements for all general-purpose AI models, with stronger requirements for larger and more powerful models that could pose “systemic risks to the Union,” MEP Dragos Tudorache, one of the lead negotiators, told reporters on Friday.
Under the new legislation, “high-risk” uses of AI are those posing significant potential harm to health, safety, fundamental rights, the environment, democracy, or the rule of law. It specifically mentions the insurance and banking sectors, as well as election- and voting-related systems, as areas requiring stringent safety testing before public deployment.
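To make the risk-based structure concrete, here is a minimal Python sketch of how a compliance checklist might encode the tiers described above. The `risk_tier` function and the set of high-risk areas are illustrative assumptions drawn from this article, not the Act’s own legal definitions.

```python
# Hypothetical sketch of a risk-tier lookup. The tier labels and the
# example areas come from this article, not from the Act's own annexes,
# which define the legal categories.

HIGH_RISK_AREAS = {
    "insurance",
    "banking",
    "election and voting systems",
}

def risk_tier(area: str) -> str:
    """Return a rough compliance tier for an AI deployment area."""
    if area in HIGH_RISK_AREAS:
        return "high-risk: stringent safety testing before public deployment"
    return "general-purpose baseline: transparency requirements"

print(risk_tier("banking"))           # high-risk: stringent safety testing...
print(risk_tier("customer support"))  # general-purpose baseline: transparency...
```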
Citizens will reportedly be able to file complaints and demand explanations when an AI system’s decisions affect their rights and lives.
The package requires deepfake generators, large language models, and other creation engines to label their work as AI-generated, and purports to protect copyright holders from having their work ripped off by AI.
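As a rough illustration of that labelling duty, the sketch below wraps a generated item in a machine-readable disclosure. The field names and the `label_output` helper are hypothetical; the article does not specify any particular format, so this is only an assumption of what such a label could contain.

```python
# Hypothetical sketch of labelling AI-generated output, as the Act requires
# of deepfake generators and large language models. The metadata fields
# are assumptions; the article does not specify a format.

import json
from datetime import datetime, timezone

def label_output(content: str, model_name: str) -> dict:
    """Wrap generated content with a machine-readable AI-generated disclosure."""
    return {
        "content": content,
        "ai_generated": True,
        "generator": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

print(json.dumps(label_output("A synthetic news summary...", "example-llm"), indent=2))
```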
Biometric systems that use “sensitive characteristics” such as race and sexual orientation to identify people are banned under the new legislation, as is the indiscriminate scraping of faces from online databases or other image repositories. Also forbidden are social credit scoring, “systems that manipulate human behavior to circumvent their free will,” and emotion recognition by employers or educational institutions.
The use of facial recognition technology by law enforcement would be limited to “certain safety and national security exemptions,” such as to prevent a “specific and present terrorist threat.” For “post-remote” use of AI biometric tracking, the target must have been convicted of or suspected of having committed a serious crime – terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organization, or “environmental crime.”
The bans are set to take effect in six months, while the transparency requirements will be enforced starting in a year. The full legislative package takes effect in two years.
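For anyone tracking the rollout, the staggered deadlines can be sketched from an assumed entry-into-force date; the date used below is a placeholder, since the final text has not yet been published.

```python
# Sketch of the staggered timeline described above. ENTRY_INTO_FORCE is a
# placeholder; the real clock starts when the final text enters into force.
from datetime import date

ENTRY_INTO_FORCE = date(2024, 1, 1)  # assumed placeholder date

def add_months(d: date, months: int) -> date:
    """Return the same day-of-month `months` later (day clamped to 28 for simplicity)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, min(d.day, 28))

print("Bans apply from:              ", add_months(ENTRY_INTO_FORCE, 6))
print("Transparency rules apply from:", add_months(ENTRY_INTO_FORCE, 12))
print("Full package applies from:    ", add_months(ENTRY_INTO_FORCE, 24))
```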
The final draft of the legislation, which was crafted over three days of negotiations that began with a 22-hour marathon session on Wednesday, has not yet been published and must be approved by votes in both the European Parliament and the Council of the EU.
The bloc hopes to be a global leader in regulating AI, echoing its controversial efforts to regulate the flow of information on the internet with the Digital Services Act earlier this year. Like that legislation, it comes with hefty financial penalties – violators could be fined up to €35 million ($37.7 million) or 7% of global revenues.
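As a back-of-the-envelope illustration of that ceiling, the sketch below assumes the higher of the two figures applies (the article does not say how the “or” resolves) and shows how the cap scales with a company’s global revenue.

```python
# Back-of-the-envelope sketch of the penalty ceiling quoted above.
# Assumption: the higher of the two figures applies; the article does
# not specify which way the "or" resolves.

FLAT_CAP_EUR = 35_000_000
REVENUE_SHARE = 0.07  # 7% of global revenues

def max_fine(global_revenue_eur: float) -> float:
    """Return the larger of the flat cap and the revenue-based cap."""
    return max(FLAT_CAP_EUR, REVENUE_SHARE * global_revenue_eur)

# For a company with €2 billion in global revenue, the revenue-based
# cap (€140 million) exceeds the flat €35 million cap.
print(f"{max_fine(2_000_000_000):,.0f}")  # 140,000,000
```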