FTC warns companies to use AI responsibly
AI bias could run afoul of the FTC Act
The Federal Trade Commission (FTC) has warned organizations in the US to use artificial intelligence responsibly, pointing to concerns over machine learning bias.
Last year, the FTC released guidance on how organizations should use artificial intelligence (AI). Since then, it has brought settlements relating to misuse of the technology. In a blog post published Monday, the Commission warned of the potential for biased outcomes from AI algorithms, which could introduce discriminatory practices that incur penalties.
"Research has highlighted how apparently 'neutral' technology can produce troubling outcomes including discrimination by race or other legally protected classes," it said. For example, it pointed to a recent study in the Journal of the American Medical Informatics Association that warned about the potential for AI to reflect and amplify existing racial bias when delivering COVID-19-related healthcare.
The Commission cited three laws AI developers should consider when creating and using their systems. Section 5 of the FTC Act prohibits unfair or deceptive practices, which would include the sale or use of racially biased algorithms. Anyone using a biased algorithm that causes credit discrimination based on race, religion, national origin, or sex could also violate the Equal Credit Opportunity Act, it said. Finally, those using results from a biased algorithm to deny people benefits, including employment, housing, and insurance, could run afoul of the Fair Credit Reporting Act.
Companies should be careful what data they use to train AI algorithms, it said, as any biases in the training data, such as under-representing people from certain demographics, could lead to biased outcomes. Organizations should analyze their training data and design models to account for data gaps. They should also watch for discrimination in outcomes from the algorithms they use by testing them regularly.
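To make that advice concrete, here is a minimal Python sketch of the kind of checks the Commission describes: measuring how each demographic group is represented in training data, and regularly testing decisions for disparate outcomes. The column names, the synthetic data, and the demographic-parity ratio (with its rule-of-thumb threshold) are illustrative assumptions, not anything the FTC prescribes.

```python
# Sketch of two routine fairness checks: auditing training data for
# demographic gaps and testing model decisions for disparate impact.
# Column names ("group", "approved") and the parity metric are
# illustrative assumptions, not FTC requirements.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str = "group") -> pd.Series:
    """Share of rows per demographic group, to spot under-representation."""
    return df[group_col].value_counts(normalize=True)

def demographic_parity_ratio(df: pd.DataFrame,
                             group_col: str = "group",
                             outcome_col: str = "approved") -> float:
    """Ratio of the lowest to the highest positive-outcome rate across groups.
    Values well below 1.0 suggest outcomes favor some groups over others."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Example with synthetic model decisions
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})
print(representation_report(decisions))
print(f"parity ratio: {demographic_parity_ratio(decisions):.2f}")  # 0.60 here
```

Running a check like this on every retraining cycle, rather than once at launch, is one way to satisfy the Commission's call to "watch for discrimination in outcomes" over time.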
The FTC added that it’s important to set standards for transparency in the acquisition and use of AI training data, including publishing the results of independent audits and allowing others to inspect data and source code.
A lack of transparency in how a company obtains training data could bring serious legal consequences, it warned, citing its complaint against Facebook alleging the company misled consumers about its use of their photos for facial recognition, which was enabled by default. The Commission also settled with app developer Everalbum, which it said misled users about their ability to withhold their photos from facial recognition algorithms.
The FTC also warned against overselling what AI can do. Marketing hyperbole that overplays technical capability could put a company on the wrong side of the FTC Act. "Under the FTC Act, your statements to business customers and consumers alike must be truthful, non-deceptive, and backed up by evidence," it said, adding that claims of bias-free AI warrant particular scrutiny.
"In a rush to embrace new technology, be careful not to overpromise what your algorithm can deliver."
"Hold yourself accountable – or be ready for the FTC to do it for you," it said.
Danny Bradbury has been a print journalist specialising in technology since 1989 and a freelance writer since 1994. He has written for national publications on both sides of the Atlantic and has won awards for his investigative cybersecurity journalism work and his arts and culture writing.
Danny writes about many different technology issues for audiences ranging from consumers through to software developers and CIOs. He also ghostwrites articles for many C-suite business executives in the technology sector and has worked as a presenter for multiple webinars and podcasts.