04/24/2021 / By Arsenio Toledo
Officials within the European Union (EU) on Wednesday, April 21, proposed legislation to limit the use of facial recognition technology and other “high-risk” applications of artificial intelligence (AI). The bill was proposed by the European Commission, the main executive body of the EU.
The EU is attempting to lead the world in creating and enforcing regulations aimed at keeping the excesses of big tech companies in check, especially when it comes to the potential dangers of new technologies such as facial recognition and artificial intelligence.
“Our regulation addresses the human and societal risks associated with specific uses of AI,” said European Commission Executive Vice President Margrethe Vestager. “We think that this is urgent. We are the first on the planet to suggest this legal framework.”
The proposed legislation would ban certain “high-risk” applications of artificial intelligence technology within the EU, including those that exploit “vulnerable groups,” “deploy subliminal techniques” or those that “score people’s social behavior,” like the social credit system in China.
The proposal would also prohibit law enforcement from using “real-time” facial recognition technology and other remote biometric identification systems in public spaces. “Real-time” facial recognition involves using the technology to track a person’s whereabouts live.
Exceptions to this prohibition include cases where the technology is used to prevent a terror attack, find missing children or address other serious public security concerns. These exceptions will only be allowed for a limited period, and each use must be approved by a judge or another national authority. (Related: Washington state LEGALIZES facial recognition for police and government agencies, creating a high-tech police state.)
Other artificial intelligence applications deemed “high-risk” because they may endanger people’s safety or legal status, such as AI used in self-driving vehicles or to sort employment and asylum applications, must undergo thorough checks before they can be deployed within the EU.
After deployment, the companies that use these systems must meet further obligations, such as using high-quality datasets, maintaining human oversight to minimize risk and ensuring the traceability of results. Failure to comply with these requirements will most likely result in fines.
The legislation will apply to both developers and users of so-called “high-risk” artificial intelligence systems. Breaking the rules could result in fines of up to six percent of a company’s global annual revenue, although EU officials are unlikely to impose the maximum penalty in practice.
The European Commission’s proposal comes as many privacy advocates, politicians and regular EU citizens have become very vocal about their desire to regulate the use of live facial recognition and other dangerous forms of artificial intelligence. At present, the EU has no clear rules regarding how and when certain “high-risk” technologies can be used on the general public.
The introduction of this proposal has already ignited debates. Groups that side with the tech industry, such as the Computer & Communications Industry Association (CCIA), an international lobbying group that advocates for tech companies like Amazon, Facebook and Google, were relieved to find that the draft legislation wasn’t “more draconian.”
The CCIA welcomed the EU’s attempt to regulate “high-risk” uses of artificial intelligence, while allowing for other AI systems to proliferate. “It’s positive that the commission has taken this risk-based approach,” said CCIA Vice President Christian Borggreen.
Digital rights activists have also applauded the legislation, but many within their ranks have argued that the Commission’s proposal is much too vague and offers far too many loopholes that corporations can exploit.
European Digital Rights (EDRi), an international privacy and digital rights nonprofit, is concerned with how often the bill allows law enforcement agencies to use facial recognition technology.
“The list of exemptions is incredibly wide,” said Sarah Chander, senior policy adviser for EDRi. “Such a list kind of defeats the purpose for claiming something is a ban.”
Other prominent voices aligned with the tech industry argued that the EU’s regulation would give Chinese corporations an advantage, since they would not have to deal with the same red tape.
“It’s going to make it prohibitively expensive or even technologically infeasible to build AI in Europe,” said Benjamin Mueller, a senior policy analyst at the tech-aligned think tank Center for Data Innovation. “The U.S. and China are going to look on with amusement as the EU kneecaps its own startups.”
The Commission’s proposal is still in the draft stages, and it has to go through multiple consultations with industry, government and civil society groups before it even gets voted on in the European Parliament.
Learn more about the steps governments and organizations around the world are taking to make sure the right to privacy is upheld by reading the latest articles at PrivacyWatch.news.