Twitter to make AI algorithm open source to scour for biases
Users highlight racial bias in the way Twitter’s automated image-cropping tool selects portions for preview


Twitter has committed to open-sourcing its image-cropping machine learning algorithm after users identified potential racial bias.
The social media platform confirmed it has “more analysis to do” on the algorithm, which determines which parts of an image are shown in the preview, after users posted examples of potential racial bias in the tool’s behaviour.
Cryptography and infrastructure engineer Tony Arcieri conducted an experiment on the platform to highlight how Twitter’s automated image-cropping for previews might prefer white faces over black faces.
In the experiment, Arcieri posted various combinations of pictures showing the faces of former US president Barack Obama and senator Mitch McConnell. Regardless of positioning, or of other potentially interfering factors such as the colour of the individuals’ ties, the algorithm seemingly preferred to show the face of Mitch McConnell in the cropped preview.
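The experiment is simple to reproduce in principle: stack two portraits vertically with a large blank gap so the preview crop can only show one of them, then post the image again with the faces swapped. The snippet below is a minimal sketch of that image-composition step only (it is not Arcieri’s actual code); the filenames obama.jpg and mcconnell.jpg are hypothetical placeholders, and the Pillow library is assumed to be installed.

    # Sketch: build two test images with the portraits in swapped positions,
    # separated by a large white gap, for manual posting to Twitter.
    from PIL import Image

    def make_test_image(top_path, bottom_path, gap=1500):
        # Stack two portraits vertically with a white gap between them,
        # so the cropping algorithm must choose which face to keep.
        top = Image.open(top_path).convert("RGB")
        bottom = Image.open(bottom_path).convert("RGB")
        width = max(top.width, bottom.width)
        canvas = Image.new("RGB", (width, top.height + gap + bottom.height), "white")
        canvas.paste(top, ((width - top.width) // 2, 0))
        canvas.paste(bottom, ((width - bottom.width) // 2, top.height + gap))
        return canvas

    # Hypothetical input files; swap the positions to test both arrangements.
    make_test_image("obama.jpg", "mcconnell.jpg").save("variant_a.jpg")
    make_test_image("mcconnell.jpg", "obama.jpg").save("variant_b.jpg")

If the crop consistently favours the same face across both variants, positioning can be ruled out as the explanation, which is what Arcieri’s thread set out to show.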
Responding to the experiment, Twitter spokesperson Liz Kelley said the company’s own testing before the model was shipped had found no evidence of racial or gender bias, though she admitted further analysis was needed.
Kelley added that the firm would open source its machine learning algorithm “so others can review and replicate” the results of Arcieri’s experiment and get to the bottom of the issue.
Concerns about bias in artificial intelligence (AI) systems are rife, especially in machine learning, which is often treated as a black box, and many argue that technology companies haven’t prioritised eradicating discrimination from their systems.
The escalation of Black Lives Matter protests earlier in the year prompted a number of tech companies to reflect on the intrinsic bias present in many of their systems, especially facial recognition technologies.
A wave of organisations, including IBM and Amazon, moved to suspend or discontinue their facial recognition systems, and their use in law enforcement, in response to the movement.
The number of newly launched AI-powered systems found to show racial bias suggests that tech companies, on the whole, still have much work to do in stamping it out. Microsoft’s AI news editor on MSN, for example, wrongly identified a member of Little Mix in a story about band member Jade Thirlwall’s personal reflections on racism.

Keumars Afifi-Sabet is a writer and editor who specialises in the public sector, cyber security, and cloud computing. He first joined ITPro as a staff writer in April 2018 and eventually became its Features Editor. Although a regular contributor to other tech sites in the past, these days you will find Keumars on LiveScience, where he runs its Technology section.