EU Parliament’s Draft Of AI Act: Predictive Policing Is Banned, But Work Remains To Protect People’s Rights

Today, after almost a year of delays, the European Parliament’s lead committees have published their Draft Report on the Artificial Intelligence Act (AIA). Access Now welcomes the much-needed addition of a ban on individual risk assessment for predictive policing — a dangerous and inherently discriminatory use of AI — but flags the troubling omission of additional prohibitions on location- and group-based predictive policing, emotion recognition, and other problematic practices that undermine human rights.

The Draft Report contains significant improvements for the protection of fundamental rights, in line with the recommendations of the joint civil society statement, An EU Artificial Intelligence Act for Fundamental Rights. These include new rights for people affected by AI systems to lodge a complaint or seek judicial remedies, an obligation for public authorities to register their use of high-risk AI systems in a public database, and numerous improvements to procedures and enforcement. The rapporteurs have also resisted industry pressure to narrow the definition of AI systems or introduce loopholes for providers of “general purpose AI.” Unfortunately, civil society demands to include obligations for fundamental rights impact assessments, to fix glaring loopholes in the prohibition on remote biometric identification, and to introduce broader transparency obligations have been left out of this iteration of Parliament’s position.

“The rapporteurs have missed an important opportunity to protect people’s rights by completely banning remote biometric identification in publicly accessible spaces,” said Caterina Rodelli, EU Policy Analyst at Access Now. “As it stands, there are such broad exceptions to that ‘prohibition’ in the AI Act that it’s practically providing a legal basis for this dystopian technology.”

Access Now and a coalition of civil society organisations have continually called for a full prohibition on remote biometric identification in public spaces, as well as for prohibitions on emotion recognition — including AI lie detectors — on problematic uses of biometric categorisation, and on uses of AI in migration.

“The addition of a partial ban on inherently discriminatory predictive policing systems in the AI Act is laudable, but there are other equally problematic systems that need to be included in the list of prohibited practices,” said Daniel Leufer, Senior Policy Analyst at Access Now. “We’ve seen a proliferation of utterly indefensible AI systems that claim to do everything from predicting sexual or political orientation from your facial features, to detecting lies from facial microexpressions — we need comprehensive red lines to stop the ones that irremediably undermine our fundamental rights.”

The IMCO-LIBE Draft Report is an important first salvo in the Parliament’s work to safeguard fundamental rights, but Access Now demands braver, bolder proposals to ensure that the AIA genuinely works. All eyes are on the rapporteurs and shadow rapporteurs to beef up protections for people’s rights so the AIA can do what it’s supposed to — protect millions of people from abuse and exploitation by AI systems.

© Scoop Media
