How can facial recognition be made safer?
AI brings a new dimension to the balance between security and privacy, with boundaries of acceptable behaviour yet to be set
While facial recognition is gaining traction as an alternative to fingerprint scanning for biometric security, its use in public settings – particularly for broad-spectrum surveillance – is growing increasingly controversial. Campaign groups and governments alike are worried by the threat such systems pose to individuals’ right to privacy.
In the UK, there have been several legal challenges over the accuracy of systems used by law enforcement officials, and plans to deploy facial recognition at a Kings Cross development site were shelved following a backlash from privacy campaigners. On the other side of the Atlantic, California has banned police use of the technology for three years, citing similar concerns. In Europe, meanwhile, the EU is mulling a five-year ban on the technology.
Despite this, surveillance and facial recognition systems remain widespread in countries all over the world. Is there a way to reconcile the right to privacy with the improvements to public safety that facial recognition promises to bring?
When surveillance undermines privacy
There’s no denying that surveillance technology has become commonplace, particularly in the UK, but most people associate surveillance with CCTV cameras. In reality, we are being surveilled in a growing number of ways – especially in our digital lives. According to Andrew Rogoyski, innovation manager at Roke Manor Research, this is a cause for concern.
He tells IT Pro: “The internet supergiants such as Google and Facebook profile our online behaviour in order to better target us for advertising – it’s how they make their money. So while we’re worrying about the safety of CCTV, think about the safety and fairness of your digital profiles.”
“Perhaps more extreme is China’s social credit system, which allocates a score to each citizen’s behaviour, drawing on CCTV surveillance – catching jay-walking, for example – but also integrating other forms of surveillance, such as failure to pay a court bill, spending too many hours playing video games, anti-social behaviour on public transport and so on,” he continues. “This score can affect your options on travel, schools, and other real-world choices.”
When combined with artificial intelligence, surveillance technology can pose an even bigger threat to our privacy. Rogoyski says: “The use of AI learning techniques, where computers are taught to recognise people and identify subjects of interest, has led to concerns about systems being poorly implemented, with problems such as ethnicity and gender biases becoming evident.”
He also says another problem emerges around retrospective surveillance, as the number of cameras in public spaces increases. “Law enforcement often have to search through hundreds of hours of footage, taking days of investigators’ time, often under pressure to deliver immediate results. As we grow to rely on such technologies, we need to ensure that they are effective and not missing vital scenes or evidence.”
Clamping down on facial recognition
With these challenges in mind, how can facial recognition and surveillance technologies be made safer? Rogoyski says one way is to make sure people are aware of where and how they should be used. “We are seeing an increase of cameras in sensitive places like schools. The UK Government’s Office of Surveillance Commissioners has a code of conduct that provides vital guidance on how such systems should be introduced, but sadly these guidelines are not always understood or followed,” he says. “Making sure that these systems are both effective and fair still requires some significant research.”
Jason Tooley, chief revenue officer at Veridium and a board member at techUK, agrees that the technology is vulnerable to challenge and controversy. “As seen in cases such as the South Wales Police pilot programme and the Kings Cross St Pancras surveillance scandal, external factors such as poor lighting as well as gender and racial bias can have an impact on its reliability and therefore open it up to criticism,” he explains. “The challenges of biometric maturity, public expectations of the technology and external factors such as public acceptance are particularly difficult to overcome.”
Tooley believes that to deliver effective public safety and leverage innovation, organisations must take a strategic approach as they trial biometric technology, rather than focusing solely on a single-factor approach. “The requirement for verification of identity is moving towards a multi-factor combination approach, with some elements being explicit, such as facial recognition or possession, and others implicit, such as location or behaviour,” he says.
He argues that better understanding the public’s expectations, being clear about how facial recognition technology is used, and publicly articulating its benefits for everyone will assist adoption and acceptance. “For police forces, in particular, an open biometric strategy that delivers the ability to use the right biometric techniques for the right scenario, as well as leveraging public acceptance and awareness, will deliver a host of benefits and ultimately achieve better crime prevention.”
Putting end-users first
Clearly, when it comes to developing or implementing these technologies, companies need to be aware of these challenges and how they can be overcome. Mark Thompson, global lead for the privacy advisory practice at KPMG, says organisations should start by putting the individual at the heart of the solution and leveraging privacy engineering to ensure privacy is maintained.
“A key first step is the need for interactions with the actual users of the solution, understanding their expectations and what is acceptable to them. Secondly, organisations must explore all the potential ways in which the solution could be used, abused and potentially expose the individual to risk and harm,” he says.
Thompson says businesses should also create smart privacy-engineered solutions that get the privacy balance right. He explains: “Innovative encryption technologies can be leveraged to protect the identity of individuals only when there is a specific crime that requires that one individual’s data be unencrypted, in order to support law enforcement services. Significantly, throughout all of this, there needs to be a key focus on customer experience to ensure the end solution delivers a great experience for the end-user.”
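Thompson doesn’t describe a specific implementation, but the pattern he outlines – keeping identities encrypted by default and disclosing a single record only when a specific case justifies it – resembles a per-record key escrow. The sketch below is purely illustrative: the class and method names are invented for this example, and the one-time-pad cipher is a stand-in for a production scheme such as AES-GCM with keys held by an independent escrow authority.

```python
import secrets


class EscrowedStore:
    """Toy per-record key escrow: each record is encrypted with its own
    random key, so disclosing one identity never exposes the others."""

    def __init__(self):
        self._records = {}  # record_id -> ciphertext
        self._escrow = {}   # record_id -> key (held by a separate party in practice)

    def store(self, record_id: str, data: bytes) -> None:
        # One-time-pad encryption: a fresh random key as long as the data.
        key = secrets.token_bytes(len(data))
        self._records[record_id] = bytes(a ^ b for a, b in zip(data, key))
        self._escrow[record_id] = key

    def disclose(self, record_id: str, warrant_approved: bool) -> bytes:
        # Decryption of a single record only happens with explicit authorisation.
        if not warrant_approved:
            raise PermissionError("disclosure requires an approved warrant")
        key = self._escrow[record_id]
        ciphertext = self._records[record_id]
        return bytes(a ^ b for a, b in zip(ciphertext, key))
```

The design point is that the store holds only ciphertext; the escrow holding the keys would sit with a separate authority, so unmasking one individual requires a deliberate, auditable act rather than bulk access.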
Georgia Shriane, senior associate at law firm Boyes Turner, says organisations need to accurately identify the risk areas first in order to make this technology safer. But this, she admits, isn’t easy. “We have already learnt, through arresting the wrong people or misidentifying suspects, that there is a risk of live facial recognition (LFR) technology not being accurate enough,” she tells IT Pro.
“Can we justify proceeding on this trial-and-error basis, using large amounts of biometric data (which itself throws up human rights questions), in order to explore the success of the technology, how to improve it and make it ‘safer’? It seems to be very hit and miss.”
There are also challenges around privacy laws. She adds: “Another point to bear in mind is that even our relatively recent GDPR is out of date when it comes to LFR – the regulation was passed in its final form in 2016 and came into force in 2018, without particularly considering the difficulties of LFR – and LFR needs to be properly legislated for.”
For many organisations, facial recognition and surveillance systems form an important part of security procedures and other critical operations. But it’s clear that they need to be aware of a range of important issues, particularly around privacy, when it comes to using this tech. Most importantly, they need to ensure the safety of end-users is put first.
Nicholas Fearn is a freelance technology journalist and copywriter from the Welsh valleys. His work has appeared in publications such as the FT, the Independent, the Daily Telegraph, the Next Web, T3, Android Central, Computer Weekly, and many others. He also happens to be a diehard Mariah Carey fan. You can follow Nicholas on Twitter.