G7 outlines “innovation-friendly” AI code of conduct
The new G7 rules come as the Biden administration signs an executive order on AI
The G7 is set to unveil a new AI “code of conduct” which aims to foster an innovation-friendly environment for businesses globally.
The rules, agreed to by member nations, are intended to ensure that AI is developed responsibly and does not produce harmful outcomes for wider society and industry.
According to reports from Reuters, the agreement will take the form of a non-binding, 11-point code that will encourage organizations to implement “appropriate measures to identify, evaluate, and mitigate risks across the AI lifecycle”.
The rules will also outline how businesses can “tackle incidents and patterns of misuse after AI products have been placed on the market”.
This decision marks the continuation of the Hiroshima AI process initiated at the G7 Summit in May 2023, where discussions focused on the potential of generative AI to drive major gains in productivity and innovation while addressing the risks that accompany the technology.
This was followed by a virtual meeting on September 7, 2023, at which participating ministers committed to “develop[ing] guiding principles and an international code of conduct for organizations.”
The G7 code will be voluntary
Although the code of conduct will be voluntary, the move signals member nations’ intent to adopt a collaborative approach to regulating the development of AI, and hints at the shape of any legal frameworks that may follow.
In a recent G7 Digital & Tech Ministers’ statement, the primary concerns that future regulations would look to address were described as “the misuse and abuse of AI in ways that undermine democratic values, suppress freedom of expression, and threaten the enjoyment of human rights.”
The risks identified here closely reflect those highlighted in the discussions held by MEPs concerning the EU AI Act, expected to come into force by 2026. The Act represents one of the first steps towards regulating AI within the international community.
The rules discussed in the ongoing negotiations could include a full ban on AI being used for biometric surveillance, emotion recognition, and predictive policing, as well as disclosure requirements for AI-generated content.
Biden’s AI executive order
The move from the G7 comes as the Biden administration unveils its own AI regulations today, in the form of an executive order on ‘Safe, Secure, and Trustworthy Artificial Intelligence’.
The announcement specifically addresses concerns around bias, job displacement, and national security.
This includes new rules requiring companies to notify the government when training foundation models, and requiring organizations to undergo a slew of safety assessments, including red team testing, before AI products can be publicly released.
The rules will also require companies to share the results of these assessments with the government, while further tests could be required for any software that will be used by federal workers.
As part of the move, new regulations aimed at protecting against AI-enabled fraud have been unveiled, alongside guidance on content authentication and watermarking to indicate when content has been generated using AI.
The executive order marks one of the first major interventions by a government in the AI space, and comes just days before the AI Safety Summit, to be held in the UK on 1–2 November 2023 and centered on the risks AI poses to society.
The UK’s laissez-faire approach
This decisive action from Biden differs from the approach outlined by UK prime minister Rishi Sunak in a speech given on 26 October.
Based on Sunak’s comments, the UK appears to be taking a more hands-off approach to AI regulation, with Sunak making it clear that there would not be a “rush to regulate” AI in the UK.
The rationale is that, until artificial intelligence is better understood, any drive for regulation would be premature, would stifle innovation, and would detract from his hopes of making the UK a market leader in AI development.