Facial recognition (“FR”) is rapidly becoming part and parcel of everyday life. Whether to get through passport control or simply to unlock your phone, the technology reads the delicate contours and intricacies of your face to establish your identity. Airports market it as a more reliable and efficient means of tracking who enters and exits the country, and Apple says it is saving you time and better protecting both you and the deeply personal information kept on your phone.
This same technology is being trialled and used across various UK law enforcement departments. It has been deployed to help pick out a person in a crowd, Where’s Wally-style, or to match someone in police custody to an individual in a database. Although the Metropolitan Police has used it for a number of years now, its questionable accuracy has repeatedly been brought into the limelight. It was famously used at the 2017 Notting Hill Carnival celebrations over the August bank holiday to scan the hordes of partying individuals and cross-compare them with a database of over 20 million criminals. Unfortunately for the police – and the technology’s provider – it achieved a woeful 98% false-positive rate, which resulted in the wrongful arrests of some 30 carnival attendees.
Recruiters have also been known to use similar software to gauge whether a candidate would be a good hire, or how they react to a high-pressure situation or complex task, although these methods have attracted criticism. Similar tech has been used by the US Transportation Security Administration (TSA) since 2003. The agency is exploring what happens when the digital reading goes beyond simple facial identification: instead of just deciphering who you are, the new recognition software works out whether you are scared, surprised or nervous. The TSA hopes to use this to detect potential terror threats or the guilty face of a drug mule. As with the Metropolitan Police’s attempts, success has been minimal, if not non-existent.
The key ingredient that will ultimately determine the success of these technologies is data; the more diverse and abundant the data, the better. It is widely understood that these early failures in implementing FR stem from the machines not yet having enough to go on, leaving them struggling to distinguish between faces or recognise emotions. But this may not remain the case for much longer; already we can see data being harvested beyond that captured by CCTV or other public collection. Affectiva, the first business known to market “artificial emotion intelligence”, collects data from 87 countries and holds a database of 7.5 million faces. Most of the data is collected from individuals opting in to be recorded whilst driving to work or even watching TV.
The obvious questions here are: where is our data going, and is it safe? A year on from the Cambridge Analytica scandal, we know this much: if data can be stored, someone will want to collect, collate, quantify and then sell it. Personal information, no matter how mundane, gives its owners or processors an easy way to assess our habits, drastically increasing the ease of targeted sales and content, tracking or even blackmail. The news is rife with examples of badly protected data and fears that adequate value is not being placed on it. Last week, a Dutch security expert revealed that a Chinese firm specialising in FR and AI had left exposed the sex, address, passport photo and last 24 hours of movements of over 2 million individuals. The majority of this information had been gathered from strategically placed government surveillance cameras in public areas – note the lack of consent given here.
This is not specific to any one country – it appears to be the price we may all end up paying for better security and safer streets. As ever, dialogue and better regulation are the order of the day. Without them, the fundamental human right to privacy hangs in the balance.