EU's AI legislation aims to protect businesses from IP theft
The latest draft of the AI Act would also seek to better communicate the risks posed by individual AI models


A new draft of EU artificial intelligence (AI) legislation could better protect business IP from being secretly scraped by AI firms, with developers facing new transparency obligations on copyrighted content.
The long-awaited AI Act could force developers to disclose when they collect and use copyrighted material to train large language models (LLMs).
The aim is to protect firms from having information such as source code used without their permission.
In addition to protecting against unauthorized uses of data, the bill would give companies a legal basis for establishing whether the AI firms they work with use ethically sourced, non-copyrighted data.
This could save businesses from costly legal battles over the use of tools that, unbeknown to them, contain stolen intellectual property (IP).
The bill is also expected to categorize AI models by their risk factor, from ‘minimal’ to ‘unacceptable’.
It aims to provide clear criteria that organizations can use to assess whether an AI tool’s risks outweigh its use cases.
High-risk AI systems have been defined as those that present “significant risks to the health and safety or fundamental rights of persons”, such as live facial recognition, and will be subject to additional transparency obligations.
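To make the tiering concrete, the sketch below shows one way an organization might encode such a classification internally. It is a minimal Python illustration: the 'limited' tier and the example use-case mappings are assumptions for demonstration, not text from the bill.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers described in the draft Act, lowest to highest.
    'Limited' is assumed here; the article names only 'minimal',
    'high', and 'unacceptable'."""
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4

# Hypothetical mapping of example systems to tiers, for illustration
# only; the real criteria would come from the Act's final text.
EXAMPLE_CLASSIFICATION = {
    "spam_filter": RiskTier.MINIMAL,
    "customer_service_chatbot": RiskTier.LIMITED,
    "live_facial_recognition": RiskTier.HIGH,
}

def extra_transparency_required(tier: RiskTier) -> bool:
    # High-risk systems face additional transparency obligations
    return tier is RiskTier.HIGH

def deployment_permitted(tier: RiskTier) -> bool:
    # Systems in the 'unacceptable' tier could not be deployed at all
    return tier is not RiskTier.UNACCEPTABLE
```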
EU’s AI Act and wider industry data scraping
At present, many developers of LLMs, the algorithms behind generative AI, train their models on vast amounts of data harvested from the internet.
Lawmakers from across the EU’s political divide have come to a provisional agreement for the bill, which will now be pushed to the EU trilogue for further debate.
Previous drafts of the bill, first proposed in 2021, stated that transparency obligations “will not disproportionately affect the right to protection of intellectual property”.
The current bill states that non-compliant firms could face fines totaling up to 4% of their annual worldwide turnover, or €20 million ($22 million), whichever is higher.
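As a quick worked example of that penalty formula (a sketch; the function name and the simplified flat EUR 20 million floor are ours, not the bill's exact drafting):

```python
def max_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    """Maximum penalty under the draft bill: 4% of annual worldwide
    turnover or EUR 20 million, whichever is higher."""
    return max(0.04 * annual_worldwide_turnover_eur, 20_000_000)

# A firm with EUR 1 billion in turnover faces up to EUR 40 million,
# since 4% of turnover exceeds the EUR 20 million floor.
print(max_fine_eur(1_000_000_000))  # 40000000.0
```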
Alistair Dent, chief strategy officer at data company Profusion, said that the requirement “raises the question of why AI should be treated differently from other platforms, such as social media or search engines”.
“These platforms index or use a huge amount of copyrighted material often without citation - should they be forced to adhere to the same standards as AI?” he added.
The problem is illustrated by past instances of wide-scale data scraping.
Earlier this year, Meta sued ‘data scraping for hire’ firm Voyager Labs over its practices, alleging the firm had collected data on 600,000 Facebook users in a covert campaign using fake accounts.
Meta itself also attracted a €265 million ($291 million) fine for “unacceptable” data scraping from the Irish Data Protection Commission (DPC) in November 2022.
At present, firms face a confusing regulatory landscape over the permissibility of data scraping.
A US case last year between hiQ Labs and LinkedIn concluded that scraping publicly available web data did not violate federal law.
However, the court was not convened to directly rule on the practice of web scraping, nor did it offer judgment on how the legality of the practice could be impacted by claims of IP theft.
The EU, in contrast, has protections against uses of data that could infringe on citizens’ rights, such as biometric identification, and has already issued fines in this area, as seen in the Irish DPC’s decision against Meta.
Like GDPR, the EU’s AI Act is expected to have widespread implications for the market.
Firms looking to sell or deploy their models in the EU will have to comply with the terms of the bill, which will require a harmonized, market-wide approach to risk and ethics criteria for AI products.
Researchers from the University of Oxford have created a tool called capAI, which they describe as “an independent, comparable, quantifiable, and accountable assessment of AI systems that conforms with the proposed AIA regulation”.
The paper proposes internal review protocols and suggests that firms could provide stakeholders and customers with a scorecard for their AI systems, along the lines sketched below.
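As a rough illustration, such a scorecard might look something like the following data structure; the field names here are hypothetical assumptions, not the schema from the capAI paper.

```python
from dataclasses import dataclass, field

@dataclass
class AIScorecard:
    """Hypothetical scorecard a firm might publish for an AI system;
    fields are illustrative, not capAI's actual schema."""
    system_name: str
    intended_purpose: str
    risk_tier: str                       # e.g. "minimal" to "unacceptable"
    training_data_sources: list[str] = field(default_factory=list)
    copyrighted_data_disclosed: bool = False  # per the bill's proposed duty
    last_internal_review: str = ""       # ISO date of the latest review

card = AIScorecard(
    system_name="ExampleLLM",
    intended_purpose="Customer support drafting",
    risk_tier="limited",
    training_data_sources=["licensed corpus", "public web crawl"],
    copyrighted_data_disclosed=True,
    last_internal_review="2023-04-01",
)
```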
Measures like this could alleviate the concerns of experts such as Dent.
"A risk-based approach, which seeks to categorize different uses of AI and then add rules based on the perceived 'risk' of the solution causing harm, has the drawback of being unable to anticipate how AI will develop and the impact new tools will have on society,” he stated.
“Put simply, if you can't know what form a new AI solution will take and how it will be used, it's very hard to predetermine what category it should go in and apply the compliance burden accordingly.”

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.
In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.