FTC warns companies to use AI responsibly
AI bias could run afoul of the FTC Act

The Federal Trade Commission (FTC) has warned organizations in the US to use artificial intelligence responsibly, pointing to concerns over machine learning bias.
Last year, the FTC released guidance about how organizations should use artificial intelligence (AI). Since then, it has brought settlements relating to misuse of the technology. In a blog post published Monday, the Commission warned of the potential for biased outcomes from AI algorithms, which could introduce discriminatory practices that incur penalties.
"Research has highlighted how apparently 'neutral' technology can produce troubling outcomes including discrimination by race or other legally protected classes," it said. For example, it pointed to a recent study in the Journal of the American Medical Informatics Association that warned about the potential for AI to reflect and amplify existing racial bias when delivering COVID-19-related healthcare.
The Commission cited three laws AI developers should consider when creating and using their systems. Section 5 of the FTC Act prohibits unfair or deceptive practices, which would include the sale or use of racially biased algorithms. Anyone using a biased algorithm that causes credit discrimination based on race, religion, national origin, or sex could also violate the Equal Credit Opportunity Act, it said. Finally, those denying others benefits, including employment, housing, and insurance, based on results from a biased algorithm could run afoul of the Fair Credit Reporting Act.
Companies should be careful what data they use to train AI algorithms, it said, as any biases in the training data, such as under-representing people from certain demographics, could lead to biased outcomes. Organizations should analyze their training data and design models to account for data gaps. They should also watch for discrimination in outcomes from the algorithms they use by testing them regularly.
The FTC added that it’s important to set standards for transparency in the acquisition and use of AI training data, including publishing the results of independent audits and allowing others to inspect data and source code.
A lack of transparency in how a company obtains training data could bring dire legal consequences, it warned, citing its complaint against Facebook alleging the company misled consumers about its use of their photos for facial recognition by default. The Commission also settled with app developer Everalbum, which it said misled users about their ability to withhold their photos from facial recognition algorithms.
The FTC also warned against overselling what AI can do. Marketing hyperbole that overplays technical capability could put a company on the wrong side of the FTC Act. "Under the FTC Act, your statements to business customers and consumers alike must be truthful, non-deceptive, and backed up by evidence," it said, adding that claims of bias-free AI should fall under particular scrutiny.
"In a rush to embrace new technology, be careful not to overpromise what your algorithm can deliver."
"Hold yourself accountable – or be ready for the FTC to do it for you," it said.
Danny Bradbury has been a print journalist specialising in technology since 1989 and a freelance writer since 1994. He has written for national publications on both sides of the Atlantic and has won awards for his investigative cybersecurity journalism work and his arts and culture writing.
Danny writes about many different technology issues for audiences ranging from consumers through to software developers and CIOs. He also ghostwrites articles for many C-suite business executives in the technology sector and has worked as a presenter for multiple webinars and podcasts.