The EU AI Act: what you need to know

It’s been nearly a year since the European Commission unveiled the draft of what is arguably one of the most influential legal frameworks in the world: the EU AI Act. According to the Mozilla Foundation, the framework is still a work in progress, and now is the time to actively participate in shaping its direction.

Mozilla Foundation’s mission is to ensure that the internet remains a public resource that is open and accessible to all. Since 2019, the foundation has focused a significant portion of its internet health movement-building programs on AI.

We met with Mozilla Foundation’s Executive Director Mark Surman and senior policy researcher Maximilian Gahntz to discuss Mozilla’s focus and stance on AI, key facts about the EU AI Act and how it will work in practice, Mozilla’s recommendations for improving it, and ways for everyone to get involved in the process.

The EU AI Act is coming, and it’s a big deal even if you’re not based in the EU

In 2019, Mozilla identified AI as a new challenge to internet health. The rationale is that AI makes decisions for us and about us, but not always with us: it can tell us what news we read, what advertisements we see, or if we are eligible for a loan.

The decisions AI makes can help humanity, but they can also harm us, Mozilla notes. AI can amplify historical bias and discrimination, prioritize engagement over user wellbeing, and further entrench the power of Big Tech while marginalizing individuals.

“Trustworthy AI has been critical to us over the past few years, because data and machine learning and what we call AI today are such a central technical and social fabric of what the internet is, and of how the internet intersects with society and all of our lives,” Surman noted.

As AI permeates our lives, Mozilla agrees with the EU that changes are needed in AI standards and rules, writes Gahntz in Mozilla’s response to the EU AI Act.

The first thing to note about the EU AI Act is that it does not apply exclusively to EU-based organizations or citizens. Its ripple effects will be felt around the world, much as the GDPR’s were.

The EU AI Act applies to providers and users of AI systems located within the EU; to providers established outside the EU who place AI systems on the market or put them into service within the EU; and to providers and users of AI systems established outside the EU when the output produced by those systems is used in the EU.

This means that organizations developing and deploying AI systems will have to either comply with the EU AI Act or withdraw from the EU market entirely. That said, there are some ways in which the EU AI Act differs from the GDPR, but more on that later.

[Image: Like all regulations, the EU AI Act walks a fine line between business and research needs and citizens’ concerns. By ra2 studio — Shutterstock]

Another important point about the EU AI Act is that it is still a work in progress and will take some time to come into effect. Its lifecycle began with the formation of a high-level expert group, which, as Surman noted, coincided with Mozilla’s focus on trustworthy AI. Mozilla has been closely monitoring the EU AI Act since 2019.

As Gahntz noted, everyone involved in this process has been preparing to participate since the first draft of the EU AI Act was published in April 2021. The EU Parliament had to decide which committees, and which people on those committees, would work on it, and civil society organizations had the opportunity to read the text and develop their positions.

Where we are now is where the exciting part begins, as Gahntz put it. This is when the EU Parliament develops its position, taking into account the input it receives from the designated committees and from third parties. Once the European Parliament has consolidated what it understands by the term trustworthy AI, it will put forward its ideas on how to amend the original draft.

The EU member states will do the same, and then there will be a final round of negotiations between the Parliament, the Commission, and the member states before the EU AI Act becomes law. It’s a long and winding road, and according to Gahntz, we’re looking at a horizon of at least a year, plus a transition period between the law entering into force and its provisions actually applying.

For the GDPR, the transition period was two years. So the EU AI Act will probably not apply in practice before 2025.

Defining and categorizing AI systems

Before we get into the details of the EU AI Act, we should stop and ask what exactly it applies to. There is no generally accepted definition of AI, so the EU AI Act provides an annex defining the techniques and approaches that fall within its scope.

As noted by the Montreal AI Ethics Institute, the European Commission has adopted a broad and neutral definition of AI systems, designating them as software “developed using one or more of the techniques and approaches listed in Annex I that, for a given set of human-defined objectives, generates outputs such as content, predictions, recommendations, or decisions that affect the environments with which they interact”.

The techniques listed in the annex of the EU AI Act include both machine learning approaches and logic- and knowledge-based approaches. They are diverse enough that the Act has been criticized for “proposing to regulate the use of Bayesian estimates”. While regulators do walk a fine line between business and research needs and citizens’ interests, such criticism misses the heart of the proposed legislation: its so-called risk-based approach.
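To make that criticism concrete: Annex I’s wording covers “statistical approaches” and “Bayesian estimation”, so even a toy spam filter built on Bayes’ rule would arguably meet the definition of an AI system. The Python sketch below is purely illustrative, with made-up training data; it is not drawn from the Act or from Mozilla’s materials.

```python
# Purely illustrative: a toy naive Bayes spam filter. Annex I of the
# draft Act lists "statistical approaches" and "Bayesian estimation"
# among in-scope techniques, so even software this simple arguably
# meets the definition of an "AI system"; that is the point critics
# of the broad definition were making.
from collections import Counter

# Made-up training corpus of (label, message) pairs.
TRAINING = [
    ("spam", "win money now"),
    ("spam", "free money offer"),
    ("ham", "meeting agenda attached"),
    ("ham", "lunch tomorrow"),
]

def train(corpus):
    """Count word occurrences per label and messages per label."""
    word_counts = {"spam": Counter(), "ham": Counter()}
    label_counts = Counter()
    for label, text in corpus:
        word_counts[label].update(text.split())
        label_counts[label] += 1
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the label with the higher (unnormalized) posterior."""
    vocab_size = len(set().union(*word_counts.values()))
    scores = {}
    for label, words in word_counts.items():
        # Prior: fraction of training messages carrying this label.
        score = label_counts[label] / sum(label_counts.values())
        total = sum(words.values())
        for word in text.split():
            # Per-word likelihood with add-one smoothing.
            score *= (words[word] + 1) / (total + vocab_size)
        scores[label] = score
    return max(scores, key=scores.get)

word_counts, label_counts = train(TRAINING)
print(classify("free money", word_counts, label_counts))  # -> spam
```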

In the EU AI Act, AI systems are classified into four categories according to the perceived risk they pose: systems posing an unacceptable risk are banned outright (although there are some exceptions); high-risk systems are subject to traceability, transparency, and robustness rules; limited-risk systems must meet transparency requirements toward users; and minimal-risk systems face no requirements.

The Act therefore does not regulate particular techniques as such; it regulates the use of those techniques in particular applications, in proportion to the risk those applications entail. As for the techniques themselves, the proposed framework notes that the list may need to be adjusted over time to keep pace with the field’s evolution.
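As a schematic illustration of the risk-based approach (not legal guidance), the sketch below maps a few of the use cases discussed in this article onto the four tiers and the kind of obligations attached to each. The category assignments paraphrase the draft as described here and are simplified for the example.

```python
# Schematic sketch of the draft Act's four risk tiers; the example
# use cases paraphrase the draft as discussed in this article.
# This is an illustration, not legal guidance.
from enum import Enum

class Risk(Enum):
    UNACCEPTABLE = "banned outright (with narrow exceptions)"
    HIGH = "traceability, transparency and robustness rules"
    LIMITED = "transparency requirements toward users"
    MINIMAL = "no requirements"

USE_CASES = {
    "social scoring by member states": Risk.UNACCEPTABLE,
    "biometric identification": Risk.HIGH,
    "credit scoring for bank loans": Risk.HIGH,
    "customer service chatbot": Risk.LIMITED,
    "spam filter": Risk.MINIMAL,
}

for use_case, tier in USE_CASES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```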

Excluded from the scope of the EU AI Act are AI systems developed or used exclusively for military purposes. Also exempt are public authorities of third countries and international organizations using AI systems under international agreements for law enforcement and judicial cooperation with the EU or with one or more of its member states.

[Image: Flags of the European Union. In the EU AI Act, AI systems are classified into four categories based on the perceived risk they pose. Getty Images/iStockphoto]

AI applications that manipulate human behavior to deprive users of their free will and systems that enable social scoring by EU member states are classified as an unacceptable risk and are banned outright.

High-risk AI systems include those used for biometric identification; management of critical infrastructure (water, energy, etc.); education and human resources management; access to essential services (bank credit, public services, social benefits, justice, etc.); law enforcement; and migration management and border control.

However, the use of biometric identification is subject to a number of exceptions, such as searching for a missing child or locating suspects in cases of terrorism, human trafficking, or child pornography. The EU AI Act requires high-risk AI systems to be registered in a database maintained by the European Commission.

Limited-risk systems mostly include various kinds of bots. For them, transparency is the key requirement. For example, if users are interacting with a chatbot, they must be notified of the fact so they can make an informed decision about whether or not to proceed.

Finally, according to the Commission, AI systems that pose no risk to citizens’ rights, such as spam filters or games, are exempt from regulatory obligations.

The EU AI Act as a way to achieve trustworthy AI

The main idea behind this risk-based approach to AI regulation is somewhat reminiscent of the approach used to label electrical household appliances based on their energy efficiency in the EU. Devices are categorized by their energy efficiency characteristics and are given labels ranging from A (best) to G (worst).

But there are also some important differences. Most notably, while energy labels are meant to be seen and taken into account by consumers, the risk assessment of AI systems is not designed with the same goal in mind. However, if Mozilla has its way, that could change by the time the EU AI Act comes into effect.

Drawing analogies is always interesting, but what’s really important here is that the risk-based approach tries to minimize the regulatory burden on those who develop and deploy AI systems that raise little to no concern, Gahntz said.

“The idea is to focus on the areas where it gets tricky, where risks are introduced to people’s safety, rights, and privacy. That’s also the area we want to focus on, because regulation isn’t an end in and of itself.

“What we aim to achieve through our recommendations and our advocacy is that the parts of the regulation aimed at mitigating or preventing risks are strengthened in the final EU AI Act.

“There are many analogies to be drawn with other risk-based approaches that we see elsewhere in European law and regulation. But it’s also important to look at the risks specific to each use case. That basically means answering the question of how we can make sure AI is trustworthy,” said Gahntz.

Gahntz and Surman stressed that Mozilla’s recommendations have been developed with care and due diligence, to ensure involvement in this process so that no one is harmed and so that AI ultimately becomes a net benefit for all.

We continue with Mozilla’s recommendations for improving the EU AI Act, the underlying philosophy of Trustworthy AI and the AI Theory of Change, and ways to get involved in the conversation, in Part 2 of this article.
