Europe’s AI Act falls far short on protecting fundamental rights, civil society groups warn – TechCrunch

Civil society has been poring over the detail of the European Commission’s risk-based framework for regulating applications of artificial intelligence, which the EU’s executive proposed back in April.

The verdict of over a hundred civil society organizations is that the draft legislation falls far short of protecting fundamental rights from AI-fuelled harms like scaled discrimination and blackbox bias — and they’ve published a call for major revisions.

“We specifically recognise that AI systems exacerbate structural imbalances of power, with harms often falling on the most marginalised in society. As such, this collective statement sets out the call of 115 civil society organisations towards an Artificial Intelligence Act that foregrounds fundamental rights,” they write, going on to identify nine “goals” (each with a variety of suggested revisions) in the full statement of recommendations.

The Commission, which drafted the legislation, billed the AI regulation as a framework for “trustworthy”, “human-centric” artificial intelligence. However, per the civil society groups’ analysis, it risks veering rather closer to an enabling framework for data-driven abuse — given the lack of the essential checks and balances needed to actually prevent automated harms.

Today’s statement was drafted by European Digital Rights (EDRi), Access Now, Panoptykon Foundation, epicenter.works, AlgorithmWatch, European Disability Forum (EDF), Bits of Freedom, Fair Trials, PICUM, and ANEC — and has been signed by a full 115 not-for-profits from across Europe and beyond.

The advocacy groups are hoping their recommendations will be picked up by the European Parliament and Council as the co-legislators continue debating — and amending — the Artificial Intelligence Act (AIA) proposal ahead of any final text being adopted and applied across the EU.

Key suggestions from the civil society organizations include the need for the regulation to be amended to have a flexible, future-proofed approach to assessing AI-fuelled risks — meaning it would allow for updates to the list of use-cases that are considered unacceptable (and therefore prohibited) and those that the regulation merely limits, as well as the ability to expand the (currently fixed) list of so-called “high risk” uses.

The Commission’s approach to categorizing AI risks is too “rigid” and poorly designed (the groups’ statement literally calls it “dysfunctional”) to keep pace with fast-developing, iterating AI technologies and changing use-cases for data-driven technologies, in the NGOs’ view.

“This approach of ex ante designating AI systems to different risk categories does not consider that the level of risk also depends on the context in which a system is deployed and cannot be fully determined in advance,” they write. “Further, whilst the AIA includes a mechanism by which the list of ‘high-risk’ AI systems can be updated, it provides no scope for updating ‘unacceptable’ (Art. 5) and limited risk (Art. 52) lists.

“In addition, although Annex III can be updated to add new systems to the list of high-risk AI systems, systems can only be added within the scope of the existing eight area headings. Those headings cannot currently be modified within the framework of the AIA. These rigid aspects of the framework undermine the lasting relevance of the AIA, and in particular its capacity to respond to future developments and emerging risks for fundamental rights.”

They have also called out the Commission for a lack of ambition in framing prohibited use-cases of AI — urging a “full ban” on all social scoring systems; on all remote biometric identification in publicly accessible spaces (not just narrow limits on how law enforcement can use the tech); on all emotion recognition systems; on all discriminatory biometric categorisation; on all AI physiognomy; on all systems used to predict future criminal activity; and on all systems to profile and risk-assess in a migration context — arguing for prohibitions “on all AI systems posing an unacceptable risk to fundamental rights”.

On this the groups’ recommendations echo earlier calls for the regulation to go further and fully prohibit remote biometric surveillance — including from the EU’s data protection supervisor.

The civil society groups also want regulatory obligations to apply to users of high risk AI systems, not just providers (developers) — calling for a mandatory obligation on users to conduct and publish a fundamental rights impact assessment to ensure accountability around risks cannot be circumvented by the regulation’s predominant focus on providers.

After all, an AI technology that’s developed for one ostensible purpose could be applied for a different use-case that raises distinct rights risks.

Hence they want explicit obligations on users of “high risk” AIs to publish impact assessments — which they say should cover potential impacts on people, fundamental rights, the environment and the broader public interest.

“While some of the risk posed by the systems listed in Annex III comes from how they are designed, significant risks stem from how they are used. This means that providers cannot comprehensively assess the full potential impact of a high-risk AI system during the conformity assessment, and therefore that users must have obligations to uphold fundamental rights as well,” they urge.

They also argue for transparency requirements to be extended to users of high-risk systems — suggesting they should have to register the specific use of an AI system in a public database the regulation proposes to establish for providers of such systems.

“The EU database for stand-alone high-risk AI systems (Art. 60) provides a promising opportunity for increasing the transparency of AI systems vis-à-vis impacted individuals and civil society, and could greatly facilitate public interest research. However, the database currently only contains information on high-risk systems registered by providers, without information on the context of use,” they write, warning: “This loophole undermines the purpose of the database, as it will prevent the public from finding out where, by whom and for what purpose(s) high-risk AI systems are actually used.”

Another recommendation addresses a key civil society criticism of the proposed framework — that it does not offer individuals rights and avenues for redress when they are negatively impacted by AI.

This marks a striking departure from existing EU data protection law — which confers a suite of rights on people attached to their personal data and — at least on paper — allows them to seek redress for breaches, as well as for third parties to seek redress on individuals’ behalf. (Moreover, the General Data Protection Regulation includes provisions related to automated processing of personal data; with Article 22 giving people subject to decisions with a legal or similar effect which are based solely on automation a right to information about the processing; and/or to request a human review or challenge the decision.)

The lack of “meaningful rights and redress” for people impacted by AI systems represents a gaping hole in the framework’s ability to guard against high-risk automation scaling harms, the groups argue.

“The AIA currently does not confer individual rights to people impacted by AI systems, nor does it contain any provision for individual or collective redress, or a mechanism by which people or civil society can participate in the investigatory process of high-risk AI systems. As such, the AIA does not fully address the myriad harms that arise from the opacity, complexity, scale and power imbalance in which AI systems are deployed,” they warn.

They are recommending the legislation be amended to include two individual rights as a basis for judicial remedies — namely:

  • (a) The right not to be subject to AI systems that pose an unacceptable risk or do not comply with the Act; and
  • (b) The right to be provided with a clear and intelligible explanation, in a manner that is accessible for persons with disabilities, for decisions taken with the assistance of systems within the scope of the AIA;

They also suggest a right to an “effective remedy” for those whose rights are infringed “as a result of the putting into service of an AI system”. And, as you might expect, the civil society organizations want a mechanism for public interest groups such as themselves to be able to lodge a complaint with national supervisory authorities for a breach or in relation to AI systems that undermine fundamental rights or the public interest — which they specify should trigger an investigation. (GDPR complaints simply being ignored by oversight bodies is a major problem with effective enforcement of that regime.)

Other recommendations in the groups’ statement include the need for accessibility to be considered throughout the AI system’s lifecycle, and they call out the lack of accessibility requirements in the regulation — warning that this risks leading to the development and use of AI with “further barriers for persons with disabilities”; they also want explicit limits to ensure that harmonized product safety standards, which the regulation proposes to delegate to private standards bodies, only cover “genuinely technical” aspects of high-risk AI systems (so that political and fundamental rights decisions “remain firmly within the democratic scrutiny of EU legislators”, as they put it); and they want requirements on AI system users and providers to apply not only when the outputs are applied within the EU but also elsewhere — “to avoid risk of discrimination, surveillance, and abuse through technologies developed in the EU”.

Sustainability and environmental protection has also been overlooked, per the groups’ assessment.

On that they’re calling for “horizontal, public-facing transparency requirements on the resource consumption and greenhouse gas emission impacts of AI systems” — regardless of risk level; and covering AI system design, data management and training, application, and underlying infrastructures (hardware, data centres, etc.).

The European Commission frequently justifies its aim of encouraging the uptake of AI by touting automation as a key technology for enabling the bloc’s sought-for transition to a “climate-neutral” continent by 2050 — however AI’s own energy and resource consumption is a much-overlooked component of these so-called ‘smart’ systems. Without robust environmental auditing requirements also applying to AI, it’s simply PR to claim that AI will provide the answer to climate change.

The Commission has been contacted for a response to the civil society recommendations.

Last month, MEPs in the European Parliament voted to back a total ban on remote biometric surveillance technologies such as facial recognition, a ban on the use of private facial recognition databases and a ban on predictive policing based on behavioural data.

They also voted for a ban on social scoring systems which seek to rate the trustworthiness of citizens based on their behaviour or personality, and for a ban on AI assisting judicial decisions — another highly controversial area where automation is already being applied.

So MEPs are likely to take careful note of the civil society recommendations as they work on amendments to the AI Act.

In parallel the Council is in the process of determining its negotiating mandate on the regulation — and current proposals are pushing for a ban on social scoring by private companies but seeking carve outs for R&D and national security uses of AI.

Discussions between the Commission, Parliament and Council will determine the final shape of the regulation, although the parliament must also approve the final text of the regulation in a plenary vote — so MEPs’ views will play a key role.
