Biden Administration seeks industry views on AI safety guidelines
Standards body looks for feedback on AI red-teaming and generative AI risk management
The US National Institute of Standards and Technology (NIST) has taken one of its first steps towards creating a framework for the safe development of artificial intelligence.
The Biden administration set out its plans in an executive order back in August on the ‘Safe, Secure, and Trustworthy Development and Use of AI’.
It requires, for example, that developers of the most powerful foundation models – those which could pose a serious risk to national security or the economy - to notify the government when training the AI model, and to share safety test results.
In addition, the executive order asked NIST to start work on the standards around the development and deployment of safe, secure, and trustworthy AI.
NIST is tasked with working on secure development practices for generative AI and to create guidance and benchmarks for evaluating and auditing AI capabilities, with a focus on capabilities that could cause harm.
NIST also has to establish guidelines for developers of generative AI so that they can conduct red-teaming tests for deployment of safe and secure systems.
Red-teaming refers to using a team of ethical hackers – the ‘red team’ - to attempt to break into a network or system by using the same techniques that a malicious hacker would use, in order to create a realistic assessment of its security.
Get the ITPro. daily newsletter
Receive our latest news, industry updates, featured resources and more. Sign up today to receive our FREE report on AI cyber crime & security - newly updated for 2024.
In these scenarios, there is also usually a ‘blue team’ of defenders who try to stop the red team from completing their objectives.
NIST has now issued a Request for Information (RFI) to help with its work, asking for feedback across industry, academia, and beyond to help it develop industry standards.
The RFI specifically calls for information related to AI red-teaming, generative AI risk management, reducing the risk of synthetic content, and advancing responsible global technical standards for AI development.
“It is essential that we gather all perspectives as we work to establish a strong and unbiased scientific understanding of AI, which has the potential to impact so many areas of our lives,” said NIST Director Laurie E. Locascio
Responses will be accepted until February 2, 2024, and information collected will inform the draft guidance that NIST will release for public comment.
Getting the standards right isn’t going to be easy. Last week members of the House Science, Space, and Technology Committee sent a letter to NIST warning about the poor state of AI safety research right now in relation to the US Artificial Intelligence Safety Institute which NIST is also running.
“Findings within the community are often self-referential and lack the quality that comes from revision in response to critiques by subject matter experts. There is also significant disagreement within the AI safety field of scope, taxonomies, and definitions,” the letter said.
“Organizations routinely point to significant speculative benefits or risks of AI systems but fail to provide evidence of their claims, produce non-reproducible research, hide behind secrecy, use evaluation methods that lack construct validity, or cite research that has failed to go through robust review processes, such as academic peer review.”
US AI safety guidelines could take years to develop
The letter warned that developing the right metrics to measure AI trustworthiness across successive generations of large language models could itself take years, even without taking into account how these AI systems are deployed across sectors and use cases.
The US initiative aims to build a legal and regulatory framework to ensure that the rapid rise of AI is a net benefit to society rather than a potential disaster.
“Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure, “ president Biden’s executive order notes.
Learn best practices for stopping encrypted attacks using zero trust
“At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.”
The US is just one of a number of nations globally currently exploring guidelines and legislation on the use of artificial intelligence. The UK recently published a set of guidelines around the secure development of AI, covering design, development, deployment, and maintenance.
Meanwhile the European Union pushed through its AI Act following weeks of intense discussions. The legislation has been met with mixed reaction from industry stakeholders and member states alike, with some arguing the regulations could inhibit innovation across the union.
Steve Ranger is an award-winning reporter and editor who writes about technology and business. Previously he was the editorial director at ZDNET and the editor of silicon.com.