EU Artificial Intelligence Act

21/02/2024

Author

Vasilis Ch. Charalambous

Senior Lawyer - Head of Tech

In a groundbreaking move in early December, the European Council and the European Parliament reached a provisional agreement on the text of the EU's new Artificial Intelligence Act, marking a significant development.

In 2021, the European Commission announced the review of the Coordinated Plan on Artificial Intelligence (AI), which laid down a concrete set of joint actions for the Commission and the Member States to build EU global leadership in trustworthy AI, together with the Proposal for a Regulation on artificial intelligence, which aimed at addressing the risks posed by specific uses of AI. By fostering innovation and propelling Europe to the forefront of the industry, the new regulation represents a major step towards safeguarding democracy, environmental sustainability, human rights, and the rule of law from high-risk artificial intelligence. The Act imposes obligations on AI in accordance with the technology's impact and potential risks.

As stated in the initial proposal (Article 2), the regulation will apply to (i) providers placing AI systems on the market or putting them into service in the Union, irrespective of whether those providers are established within the Union or in a third country, (ii) users of AI systems located within the Union, and (iii) providers and users of AI systems located in a third country, where the output produced by the system is used in the Union.

The AI Act ensures that governance is stricter where the risk is higher. AI systems are divided into four risk classes: unacceptable risk, high risk, limited risk, and minimal or low risk. AI systems posing an unacceptable risk, such as those that contravene EU values or seriously threaten fundamental rights or public safety, are prohibited altogether.

Some Artificial Intelligence systems are deemed high-risk because they might negatively affect people's safety or their fundamental rights. To guarantee trust and a consistently high standard of safety and human rights protection, all high-risk systems will be subject to a number of regulatory requirements, including a conformity assessment. According to a press release by the Council of the EU, the provisional agreement provides for a fundamental rights impact assessment as a further element that must be completed before deployers of a high-risk AI system place it on the market. The provisional agreement also calls for greater transparency in relation to the use of high-risk AI systems. Limited-risk AI systems are subject to transparency measures, such as informing users that they are interacting with AI so that they can make informed decisions about its continued use.

According to the press release by the Council of the EU, the provisional agreement prohibits a number of practices, including social scoring, the untargeted scraping of facial images from CCTV footage or the internet, emotion recognition in the workplace and in schools, cognitive behavioral manipulation, biometric categorization to infer sensitive data such as sexual orientation or religious beliefs, and certain instances of predictive policing of individuals.

The AI Act also adds safeguards for AI systems that can be used for a wide variety of tasks, such as general-purpose AI models intended to serve a wide range of industries.

The initial proposal further addresses the potential misuse of biometric identification systems by law enforcement authorities. The use of ‘real time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes is prohibited, subject to certain limited exceptions, and transparent and accountable processes are required for their deployment.

Transparency obligations will also apply for systems that (i) interact with humans, (ii) are used to detect emotions or determine association with (social) categories based on biometric data, or (iii) generate or manipulate content (‘deep fakes’).

Companies that fail to comply with the regulation would face hefty fines: €35 million or 7% of global turnover for violations involving the banned AI applications, €15 million or 3% for violations of the AI Act’s obligations, and €7.5 million or 1.5% for the supply of incorrect information.

To ensure compliance with the AI Act, an AI Office will be established, which will be responsible for monitoring the market, supporting the harmonized application of the AI Act, conducting inspections, and coordinating joint cross-border investigations. The AI Office will receive guidance on general-purpose AI (GPAI) systems from a scientific panel of independent experts. This panel will help develop methods for assessing the capabilities of foundation models, provide guidance on the designation and emergence of high-impact foundation models, and monitor potential material safety risks associated with foundation models. It should also be noted that citizens will be able to file complaints against AI systems and receive explanations of how those systems arrived at the decisions that affect them.

For the agreed text to become EU law, it must now be formally confirmed by the Council and the Parliament. With limited exceptions for certain provisions, the provisional agreement provides that the AI Act shall apply two years after its entry into force.

The AI Act’s impact extends beyond European borders, as it is anticipated to influence AI regulation globally. Its emphasis on ethical principles, risk-based classification, and transparency sets a high standard for AI governance.

You can find more information in the following press releases:

Press release of the Council of the EU

Press release of the European Parliament
