Facebook mistakenly reveals moderators to 'terrorists'
A security breach allowed potential terrorists to 'friend' Facebook employees
Facebook risked the safety of its content moderators when a security lapse exposed their personal information to suspected terrorists on the social network.
More than 1,000 Facebook staff reviewing and removing inappropriate content from the platform were affected by a bug, discovered in November 2016, in the software they use.
The bug meant that when a moderator removed a group administrator from the platform, an entry appeared in that group's activity log containing the moderator's personal profile, which the group's remaining admins could then view.
IT Pro understands that an activity log feature was introduced for group admins in mid-October last year. Permission controls were meant to stop employees' moderation actions from creating entries in the log, but revoking a group admin's privileges inadvertently created an entry that the group's other admins could see, although no notifications were generated to draw attention to it.
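Based on that description, the failure mode resembles a logging path that bypasses a suppression check. The sketch below is a hypothetical illustration only; every name in it (INTERNAL_MODERATORS, log_group_action, remove_group_admin) is invented for the example and does not reflect Facebook's actual code.

```python
# Hypothetical illustration of the class of flaw described above.
# All names are invented for this sketch, not Facebook's real code.

INTERNAL_MODERATORS = {"moderator_123"}  # employee accounts whose actions should stay hidden


def log_group_action(group_log, actor_id, action):
    """Append an entry to a group's activity log, suppressing it
    when the actor is an internal moderator."""
    if actor_id in INTERNAL_MODERATORS:
        return  # moderation actions are meant to leave no trace
    group_log.append({"actor": actor_id, "action": action})


def remove_group_admin(group_log, actor_id, admin_id):
    """Revoke a group admin's privileges.

    The bug: this code path writes to the activity log directly,
    bypassing the suppression check in log_group_action(), so the
    moderator's personal profile ends up visible to the group's
    remaining admins.
    """
    # ... privilege revocation would happen here ...
    group_log.append({"actor": actor_id, "action": f"removed admin {admin_id}"})


log = []
remove_group_admin(log, "moderator_123", "admin_456")
print(log)  # the moderator's identity leaks into the group's log
```

Routing every log write through a single checked path, or attributing moderation entries to a non-personal system account, would close this class of gap; the separate administrative accounts Facebook is now testing (see below) take the latter approach.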
Of the more than 1,000 moderators affected, six were classed as "high priority" victims after Facebook judged that their profiles were likely to have been viewed by potential terrorists. Moderators first suspected a problem when they began receiving friend requests from people affiliated with the organisations they were investigating.
Once the breach had been discovered, Facebook's head of global investigations, Craig D'Souza, directly contacted the affected employees considered to be at the highest risk, communicating with them via email, video conference and Facebook Messenger.
The Guardian was able to contact one of the six, an Iraqi-born Irish citizen who quit his job, fled Ireland and went into hiding in eastern Europe for a few months after realising that seven individuals affiliated with a suspected terrorist group had viewed his personal profile.
The bug was apparently not fixed until 16 November, two weeks after it was discovered, meaning it had been active for around a month, since the launch of the activity log in mid-October. It had also retroactively exposed the personal profiles of moderators who had censored accounts earlier in 2016.
Facebook reportedly offered those in the high-risk group a home alarm monitoring system, transport to and from work, and counselling.
A Facebook spokesperson told IT Pro: "Our investigation found that only a small fraction of the names were likely viewed, and we never had evidence of any threat to the people impacted or their families as a result of this matter. Even so, we contacted each of them individually to offer support, answer their questions, and take meaningful steps to ensure their safety.
"In addition to communicating with the affected people and the full teams that work in these parts of the company, we have continued to share details with them about a series of technical and process improvements we've made to our internal tools to better detect and prevent these types of issues from occurring."
IT Pro understands that Facebook has made changes to its infrastructure to prevent a worker's information becoming available externally. The company is also in the process of testing new administrative accounts, which will not require moderators to use their personal accounts when working.
Security analyst Graham Cluley said: "It must be depressing and unrewarding enough to be a member of the team which reviews hateful and disturbing content on Facebook, without the threat that your identity could be unmasked to suspected terrorists.
"Even if the chances of moderators themselves being physically attacked is perceived to be low, there will be fears that their family (perhaps still living in the Middle East) could be put at risk because Facebook allowed personal profiles to be revealed in such a slip-shod fashion."
Cluley claimed that "Facebook has never been primarily about building a safe community for friends to chat", arguing that it has let users down with "privacy gaffes and corporate policies designed to boost its advertising revenues", instead of protecting members.
He added: "This clearly was an accident, but one with potentially serious consequences. A company which had security and privacy at its heart would never have allowed a mistake like this to happen."
Zach Marzouk is a former ITPro, CloudPro, and ChannelPro staff writer, covering topics such as security, privacy, worker rights, and startups, primarily in the Asia Pacific and US regions. Zach joined ITPro in 2017, where he was introduced to the world of B2B technology as a junior staff writer, before returning to Argentina in 2018 to work in communications and as a copywriter. He rejoined ITPro as a staff writer in 2021, during the pandemic, before going freelance in 2022.