The European Parliament has given its final approval to the EU AI Act, one of the world's first comprehensive regulatory frameworks for artificial intelligence (AI). The act aims to ensure that AI used in the European Union is trustworthy, safe, and respectful of fundamental rights, while also supporting innovation. The legislation passed with overwhelming support: 523 votes in favor, 46 against, and 49 abstentions.
During a virtual press conference held prior to the vote, EU Parliament members Brando Benifei and Dragos Tudorache expressed their excitement about the legislation, describing it as a historic milestone in the regulation of AI. Benifei emphasized that the final result of the law would promote the development of safe and human-centric AI, aligning with the priorities of the EU Parliament.
The journey towards the EU AI Act began five years ago, gaining momentum over the past year as powerful AI models became more prevalent. After lengthy negotiations, a provisional agreement was reached in December 2023, followed by a 71-8 vote from the Internal Market and Civil Liberties Committees to endorse the agreement in February 2024.
With today's final vote cast, the legislation will undergo minor linguistic changes during translation into the official languages of all member states. The text will then receive a second, formal vote in April before being published in the official EU journal, likely in May.
Beginning in November, the bans on prohibited practices outlined in the EU AI Act will come into effect. Benifei noted that these bans will be mandatory from the moment they take force, but clarified that full implementation of the law will follow a longer timeline, giving organizations time to adapt gradually.
The EU AI Act classifies AI systems into four categories based on the level of risk they pose to society, with higher-risk systems subject to stricter regulation. The top category, “unacceptable risk,” bans outright any AI system that poses a clear threat to people’s safety, livelihoods, or rights. Examples include social scoring systems operated by governments and voice-activated toys that encourage dangerous behavior in children.
The legislation also addresses “high-risk” applications, which encompass critical infrastructures, education and training, safety components of products, essential public and private services, law enforcement activities that could infringe on fundamental rights, migration and border control management, and the administration of justice and democratic processes.
Furthermore, the EU AI Act recognizes “limited risk” situations, mainly focusing on the transparency of AI usage. For instance, it highlights the importance of users being aware when interacting with AI chatbots and ensuring that AI-generated content is identifiable.
To assist organizations in assessing their compliance with the legislation, an online tool called “The EU AI Act Compliance Checker” enables organizations to determine where they stand within the framework of the law.
The EU AI Act also allows for the “free use” of minimal-risk AI, including applications such as AI-enabled video games and spam filters. The majority of AI systems currently used in the EU fall into this category.
Lawmakers have also addressed generative AI models, such as those powering AI chatbots, by introducing provisions that require developers to publish detailed summaries of the training data used and to comply with EU copyright law. In addition, AI-generated deepfake content must be labeled in accordance with the law to indicate that it has been artificially generated or manipulated.
While the EU’s AI Act faced initial resistance from local businesses and tech companies, who expressed concerns about overregulation stifling innovation, it ultimately received praise from tech giant IBM. Christina Montgomery, IBM’s vice president and chief privacy and trust officer, commended the EU for passing comprehensive and smart AI legislation. She stated that the risk-based approach aligns with IBM’s commitment to ethical AI practices and will contribute to building open and trustworthy AI ecosystems.