A big enforcement deadline for the EU AI Act is just around the corner


The first of a number of enforcement deadlines for the EU AI Act is just around the corner, and experts are warning firms to ramp up preparations for the months ahead.

Officially passed in March last year, the first provisions of the EU’s landmark legislation will come into effect from 2 February 2025, bringing with them a host of rules and regulations that AI developers and deployers will need to adhere to.

The act takes a risk-based approach, classifying AI systems as posing unacceptable (prohibited), high, limited, or minimal risk. High-risk systems are defined as those which pose a threat to life, financial livelihood, or fundamental rights.

It is these systems that will be in the crosshairs initially from the enforcement deadline, according to Enza Iannopollo, principal analyst at Forrester, with lawmakers turning their attention to the most dangerous AI use cases.

“On 2nd February, the enforcement of a few — but mighty — requirements of the EU AI Act will begin. Requirements enforced on this deadline focus on AI use-cases the EU considers pose the greatest risk to core Union values and fundamental rights, due to their potential negative impacts,” Iannopollo said.

“These rules are those related to prohibited AI use-cases, along with requirements related to AI literacy. Organizations that violate these rules could face severe fines — up to 7% of their global turnover — so it’s crucial that requirements are met effectively,” she added.

The fines may not be meted out straight away, though, Iannopollo said, as details about sanctions are lacking and the authorities in charge of enforcement are not yet in place.

While there may not be any big fines in the headlines in the next few months, Iannopollo said this is still an important milestone.

Businesses should tighten up risk assessments

Owing to the global reach of the act and the fact that its requirements span the entire AI value chain, Iannopollo said organizations worldwide will need to fall in line with the regulation.

“The EU AI Act will have a significant impact on AI governance globally. With these regulations, the EU has established the ‘de facto’ standard for trustworthy AI and AI risk management,” she added.

To prepare, companies should start refining risk assessment practices to ensure they’ve classified AI use cases in line with the risk categories of the act itself, Iannopollo said.

Systems that would fall within the ‘prohibited’ category need to be switched off immediately.
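As a rough illustration of the classification exercise Iannopollo describes, a compliance team might maintain an inventory of AI use cases tagged with their assessed risk tier and flag anything in the prohibited tier for shutdown. The sketch below is hypothetical: the use-case names and tier assignments are invented for illustration, not taken from the act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified risk tiers modelled on the EU AI Act's categories."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical inventory: each AI use case mapped to its assessed tier.
inventory = {
    "social-scoring engine": RiskTier.PROHIBITED,
    "CV-screening model": RiskTier.HIGH,
    "customer-support chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def must_switch_off(inventory):
    """Return the use cases assessed as prohibited, which (per the
    article) need to be switched off immediately."""
    return [name for name, tier in inventory.items()
            if tier is RiskTier.PROHIBITED]

print(must_switch_off(inventory))  # ['social-scoring engine']
```

In practice the assessment behind each tier assignment is a legal judgment, not a lookup, but keeping the results in a machine-readable inventory makes the "switch off prohibited systems" step auditable.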

“Finally, they need to be prepared for the next key deadline on 2nd August. By this date, the enforcement machine and sanctions will be in better shape, and authorities will be much more likely to sanction firms that are not compliant. In other words, this is when we will see a lot more action.”

George Fitzmaurice
Staff Writer

George Fitzmaurice is a staff writer at ITPro, ChannelPro, and CloudPro, with a particular interest in AI regulation, data legislation, and market development. After graduating from the University of Oxford with a degree in English Language and Literature, he undertook an internship at the New Statesman before starting at ITPro. Outside of the office, George is both an aspiring musician and an avid reader.