Has the US missed its moment on AI?
The Biden administration's approach to generative AI follows months of posturing by global partners


A lot of the coverage surrounding artificial intelligence (AI) in North America revolves around American exceptionalism: how it can boost business, drive technological development, and keep America ahead of competing nations. But to date, the US has been short on AI regulation.
The White House finally waded into these waters on October 30 with the announcement of an executive order on AI. The move raises the question: has the US missed its moment?
That announcement, Biden's long-anticipated Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, directed developers and agencies such as the National Institute of Standards and Technology (NIST) and the National Security Council (NSC) to develop and implement standards in the space to increase national and personal security and reduce the risk of fraud.
The executive order did not limit its controls to generative AI, instead choosing to target AI more broadly to encompass other frameworks such as neural networks or machine learning. It requires companies to inform the federal government about any models that pose a serious risk to national security, reasserts support for using AI for cyber security through programs such as the AI Cyber Challenge, and seeks to protect Americans from AI-enabled fraud through the use of AI detection tools.
Industry experts like Kjell Carlsson, head of data science strategy and evangelism at Domino Data Lab and a former Forrester analyst covering data science and AI, see the EU AI Act as two years ahead. He likens 2022's AI Bill of Rights to a preamble and Biden's move to the proverbial first shot.
"There are tons of ways in which you can fall afoul of AI regulations with your use of AI on the European side, in a way on the US side, we're getting started. Sadly, there's probably going to need to be some major disasters or something really, really big to push us to get more effective AI regulation on the US side of things."
That air of uncertainty is echoed by Will Sweeney, managing partner at data security consultancy Zaviant. He says that while the executive order is one of the strongest tools in the American political sphere, and does provide a framework and a needed path forward, one edict from the White House isn't going to be earth-shattering.
"I think what the executive order looks like now is going to, in practice in 12 months or 24 months, be completely different," says Sweeney.
"And I don't think it's going to be a situation where you can publish something today and have it stand the test of time, because with AI we just know that that's not going to work."
Regional hurdles to unified regulation
There's an added wrinkle. Unlike the EU's legislation, the history of American law is far more localized. Take, for example, the California Consumer Privacy Act (CCPA), which the federal government continues to look towards in lieu of a comprehensive federal law on data privacy. Local government is also moving on AI faster than the White House: California governor Gavin Newsom signed a generative AI executive order more than a month before Biden did, while New York City mayor Eric Adams released a plan for responsible AI to form a framework for the city's public bodies. That piecemeal style of governance is something Sweeney says he expects to continue in the US.
"This kind of situation developed where certain companies, certain states, certain cities are taking different sides of the aisle in terms of whether or not they think AI is a good thing, and they want to embrace it, and make sure that you're using it appropriately and responsibly," Sweeney adds.
"And then you're having the other side, which is saying, 'No, we can't use it at all, it represents too much risk, either to us as a business because of our intellectual property, or because of the likelihood that it will change how we do business or completely, in some cases, eliminate the way we do business."
When both the AI Bill of Rights and the president's executive order are viewed through the lens of innovation, Carlsson is among a group of experts who believe that playing fast and loose with these rules will do more harm than good.
"If you want to preserve the maximum ability to innovate, i.e. you want to minimize the threat of really effective legislation then the US has done a wonderful, wonderful job of this in the sense that there are no real safeguards or threats involved."
Carlsson says it's important that people on both sides of the Atlantic realize that AI regulation shouldn't be considered in isolation, but in terms of how AI intersects with existing federal and state protections. Examples include an early Illinois law (implemented in 2020) that prevents companies from using AI programs to disadvantage potential employees during interviews, or healthcare regulations that already preclude AI from being used in harmful ways. None of these laws directly aims to mandate ethical AI, but each enshrines other human rights in a way that already precludes certain AI activities.
A complex global picture
No territory is enjoying smooth sailing when it comes to AI regulation. Recent AI agreements between France, Italy, and Germany have put the nations on a collision course with the rest of the EU over how strict the proposed legislation will be. The UK government recently held its AI Safety Summit, with the historic Bletchley Park playing host to guests including Kamala Harris, US vice president; Ursula von der Leyen, president of the European Commission; and Wu Zhaohui, China's vice minister of science and technology. Each argued for better cooperation on AI safety, but walked away having mainly made the case for their respective approaches to AI regulation.
International public policy voices like Hadrien Pouget, associate fellow at the Carnegie Endowment for International Peace, have argued that the White House's directive should bolster Europe's confidence on the global AI stage. Late or not, the move has certainly sent a signal to other regulators that the US is interested in playing its part when it comes to AI regulation.
Despite sitting at a crossroads of billions of dollars in investment, tech mogul optimism, and regulatory criticism, Sweeney says the protections this executive order moves forward are still cause for celebration.
"The United States, I think, is being a little bit more thoughtful about how they're approaching these tools. Which I think is ultimately a very good thing and I would like to see us kind of pick up the pace a little bit on the data, privacy, regulatory side of things."

John Loeppky is a British-Canadian disabled freelance writer based in Regina, Saskatchewan. He has more than a decade of experience as a professional writer with a focus on societal and cultural impact, particularly when it comes to inclusion in its various forms.
In addition to his work for ITPro, he regularly works with outlets such as CBC, Healthline, VeryWell, Defector, and a host of others. He also serves as a member of the National Center on Disability and Journalism's advisory board. John's goal in life is to have an entertaining obituary to read.