EU hammers out deal on AI Act, but it may have missed the mark
Some fear the EU AI Act may have negative consequences for innovation in the union
European lawmakers have reached a provisional agreement on the EU AI Act, a set of ‘harmonized rules’ governing the development and deployment of artificial intelligence (AI) systems.
After intense negotiations, the Council presidency and negotiators from the European Parliament settled on a political agreement on what will become the first comprehensive collection of rules regulating AI development and applications.
The legislation is set to come into effect in 2026, but the details of the laws are yet to be finalized, with discussions set to continue over the coming weeks.
The legislation will include a number of rules governing the uses of high-impact general purpose AI models and will prohibit a number of applications of AI technologies where the risk to public safety or human rights is deemed unacceptable.
The regulation includes a classification system to determine whether AI systems pose a limited or high risk of fundamental rights violations.
Systems deemed to present only a limited risk will be subject to ‘light transparency obligations’, whereas those considered high-risk will still be authorized but subject to a stricter set of requirements to be deployed in the EU.
A key concern of the negotiations was ensuring that any requirements mandated by the new laws would not be unnecessarily burdensome on firms trying to innovate in the market. As a result, stakeholders considered the distinction between high-risk and limited-risk use cases vital.
Requirements on companies developing foundation models were a stumbling block during the negotiations. As it stands, foundation models will be subject to transparency obligations before they can enter the market, with a stricter set of obligations set out for ‘high impact’ foundation models.
AI’s use in law enforcement was another focus area of the negotiations, and the rules are expected to permit a number of law enforcement applications.
For example, law enforcement will be able to use real-time biometric identification systems in public spaces for cases involving terrorism, trafficking, rape, murder, and organized crime.
The provisional agreement outlines the sanctions regime that will come into place for AI developers who contravene the EU’s legal framework.
Penalties for violations of the EU AI Act will be set as a percentage of the company’s global annual turnover in the previous financial year or a fixed sum, whichever is higher, with more proportionate caps for smaller entities.
This works out to €35 million or 7% of turnover for violating the list of banned AI applications, €15 million or 3% for violations of the act’s obligations for those deploying AI models, and €7.5 million or 1.5% for the supply of incorrect information.
Future-proof AI legislation sacrificed for a quick deal?
The regulatory framework has been criticized by some as still being too broad in its scope and thus risks stifling innovation in the European market.
Daniel Friedlaender, senior vice president and head of CCIA Europe, said the negative impact of the deal may reach far beyond AI companies.
“Regrettably speed seems to have prevailed over quality, with potentially disastrous consequences for the European economy. The negative impact could be felt far beyond the AI sector alone.”
CCIA Europe’s policy manager, Boniface de Champris, echoed Friedlaender’s sentiments with a similarly pessimistic view of the impact the legislation will have on the technology sector in the region.
“The final AI Act lacks the vision and ambition that European tech startups and businesses are displaying right now. It might even end up chasing away the European champions that the EU so desperately wants to empower.”
Solomon Klappholz is a Staff Writer at ITPro. He has experience writing about the technologies that facilitate industrial manufacturing which led to him developing a particular interest in IT regulation, industrial infrastructure applications, and machine learning.