A big enforcement deadline for the EU AI Act just passed – here's what you need to know
Fines for noncompliance are still some way off, but firms should prepare now for significant changes

The first of a series of enforcement deadlines for the EU AI Act has officially come into effect, and experts have warned that firms should accelerate preparations for the next batch of deadlines.
Passed in March last year, the first elements of the EU’s landmark legislation came into effect on 2 February 2025, bringing with them a series of rules and regulations that AI developers and deployers must adhere to.
The EU AI Act employs a risk-based approach to assessing the potential impact of AI systems, designating them as minimal, limited, high, or unacceptable risk. High-risk systems, for example, are those defined as posing a potential threat to life, human rights, or financial livelihood, while systems deemed to pose an unacceptable risk are prohibited outright.
These particular systems are in the crosshairs following the introduction of the new rules this month.
Speaking to ITPro ahead of the deadline, Enza Iannopollo, principal analyst at Forrester, said lawmakers specifically chose to target the most dangerous AI use cases with the first round of rules.
“Requirements enforced on this deadline focus on AI use-cases the EU considers pose the greatest risk to core Union values and fundamental rights, due to their potential negative impacts,” Iannopollo said.
“These rules are those related to prohibited AI use-cases, along with requirements related to AI literacy. Organizations that violate these rules could face severe fines — up to 7% of their global turnover — so it’s crucial that requirements are met effectively,” she added.
Iannopollo noted, however, that fines will not be issued immediately, as details of the sanctions regime are still a work in progress and the authorities in charge of enforcement are not yet in place.
While there may not be any big fines in the headlines in the next few months, Iannopollo said this is still an important milestone.
Tim Roberts, UK country co-leader at AlixPartners, said the first set of compliance obligations will operate much like GDPR, in that they will apply to any organization deploying AI systems in Europe, regardless of where that organization is based.
With this in mind, it’s critical that companies are aware of this first batch of rules, even if they are not EU-based.
“Naturally, this also reignites the debate about striking the right balance between innovation and regulation. But instead of seeing them as opposing forces, it’s more useful to think of them as two things we need to get right in parallel … because regulation can be a facilitator of innovation - not a blocker,” Roberts said.
“The speed at which AI is advancing has caused discomfort for some consumers, but strong safeguards can build trust and create a thriving (and fairer) environment for greater business innovation.
“The EU AI Act is an important first step in this journey, and its success will depend on how well it is applied and how well it evolves, with the end goal being smarter regulation that drives businesses to continue pushing boundaries for the benefit of all.”
EU AI Act: Firms should tighten up risk assessments
Due to the global reach of the Act and the fact that requirements span the entire AI value chain, Iannopollo said enterprises must ensure they adhere to the regulation.
“The EU AI Act will have a significant impact on AI governance globally. With these regulations, the EU has established the ‘de facto’ standard for trustworthy AI and AI risk management,” she added.
To prepare for the rules, enterprises are advised to begin refining their risk assessment practices to ensure they have classified AI use cases in line with the risk categories set out in the Act.
Any systems falling within the ‘prohibited’ category need to be switched off immediately.
“Finally, they need to be prepared for the next key deadline on 2nd August. By this date, the enforcement machine and sanctions will be in better shape, and authorities will be much more likely to sanction firms that are not compliant. In other words, this is when we will see a lot more action,” Iannopollo said.
George Fitzmaurice is a staff writer at ITPro, ChannelPro, and CloudPro, with a particular interest in AI regulation, data legislation, and market development. After graduating from the University of Oxford with a degree in English Language and Literature, he undertook an internship at the New Statesman before starting at ITPro. Outside of the office, George is both an aspiring musician and an avid reader.