US starts exploring “accountability measures” to keep AI companies in check
The move follows Italy’s recent ban on ChatGPT due to data privacy concerns


Lawmakers in the US are set to explore potential “accountability measures” for companies developing artificial intelligence (AI) systems such as ChatGPT amid concerns over economic and societal impacts.
The National Telecommunications and Information Administration (NTIA), the US agency that advises the government on technology policy, said it will launch a public consultation on AI products and services.
According to the NTIA, insights gathered from this consultation will help inform the Biden administration to develop a “cohesive and comprehensive federal government approach to AI-related risks and opportunities”.
“NTIA’s ‘AI Accountability Policy Request for Comment’ seeks feedback on what policies can support the development of AI audits, assessments, certifications, and other mechanisms to create earned trust in AI systems that they work as claimed,” the department said in a statement on Tuesday.
In its statement, the NTIA said that potential audits of AI systems could work in a similar fashion to those conducted in the financial services industry to “provide assurance that an AI system is trustworthy”.
NTIA administrator Alan Davidson said the consultation will help inform the US administration’s long-term approach to AI products and prevent or mitigate any adverse effects.
“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms. For these systems to reach their full potential, companies and consumers need to be able to trust them,” he said.
“Our inquiry will inform policies to support AI audits, risk and safety assessments, certifications, and other tools that can create earned trust in AI systems.”
Concerns over AI's growth
The move from the NTIA follows mounting concerns about the potential impact of generative AI systems such as ChatGPT.
The rapid advent of generative AI products has prompted a degree of hesitancy among lawmakers on both sides of the Atlantic.
In late March, Italy announced a shock ‘ban’ on ChatGPT amid data privacy concerns.
The Italian data protection authority voiced serious concerns about the generative AI model and said it plans to investigate OpenAI “with immediate effect”.
Lawmakers elsewhere in Europe are also thought to be exploring a potential crackdown on AI systems, with German authorities among those cited as having serious concerns.
While worries over generative AI products such as ChatGPT persist, some industry analysts described the Italian decision as an “overreaction”, arguing that such crackdowns could have negative long-term implications for companies in the country exploring the use of AI.
Andy Patel, researcher at WithSecure, told ITPro that Italy’s decision had essentially “cut off” one of the most transformative tools currently available to businesses and individuals.
Industry stakeholders have also voiced a growing discontent over the speed of generative AI development.
Around the time of Italy’s ChatGPT decision, an open letter penned by tech industry figures including Elon Musk called for an immediate halt to “out of control” AI development.
The controversial letter demanded that a six-month pause be imposed on companies building generative AI models, arguing that there is a concerning lack of corporate and regulatory safeguards in place to moderate generative AI development.

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.