Facial recognition is changing the way we live and interact with society. Like other emerging technologies, it is widely used today in surveillance systems, mainly to track and identify criminals and fugitives in the fight against crimes such as human trafficking and kidnapping. In business and finance, the technology is becoming a popular choice for payment authentication, maximizing security and minimizing fraud.
In transportation, the technology has been deployed in train stations and airports to speed up check-in, help travelers pay their fares, and identify unlicensed drivers and jaywalkers. In the medical field, it is used for patient identification, monitoring, sentiment analysis, and the diagnosis of genetic disorders, while in education, facial recognition helps improve campus security, combat school bullying, and track attendance.
Though facial recognition benefits society in many ways, controversy and concern are rising, and there are good reasons why people are unhappy about the use of the technology. They include potential risks related to privacy, security, accuracy, and bias.
In February 2019, security experts identified a severe data leak at SenseNets, a facial recognition and security software company in Shenzhen: an unprotected database exposed more than 2.5 million records of citizens’ personal information. In August 2019, more than 1 million people’s personal information, including biometric data such as fingerprints and facial recognition data, was found on a publicly accessible database used by the UK Metropolitan Police, defense contractors, and banks alike. Such data breaches put victims at a considerable disadvantage: biometric information is essentially permanent, so the effects of a leak are severe and lasting.
Though typically regarded as a security safeguard, facial recognition is not itself sufficiently secure. Research shows that GAN-generated deepfake videos pose a serious challenge to facial recognition systems, and as face-swapping technology develops further, the challenge will only grow. In another study, researchers attacked ArcFace, one of the best publicly available face ID systems, by attaching printed paper stickers to a hat, and the model became confused.
In real-world scenarios, facial recognition systems aren’t always reliable. A report shows that during trials, the UK South Wales Police facial recognition system produced thousands of misidentifications: 2,297 false positives out of a total of 2,470 matches, an error rate of around 92 percent. Critics worry that such deficient performance could lead to erroneous arrests and become a drag on police work. Another evaluation, from Essex University, showed that the Metropolitan Police’s facial recognition technology made only eight correct identifications out of 42 matches, an error rate of 81 percent, and that the deployment would likely be found “illegal” if challenged in court.
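The error rates above follow directly from the raw counts. As a quick sanity check, the Essex University figures for the Metropolitan Police can be reproduced in a few lines of Python, using only the numbers quoted in this section:

```python
# Essex University evaluation of the Metropolitan Police system:
# 8 correct identifications out of 42 total matches.
met_total_matches = 42
met_correct = 8

# Error rate = share of matches that were wrong.
met_error_rate = (met_total_matches - met_correct) / met_total_matches
print(f"Met Police error rate: {met_error_rate:.0%}")  # prints "Met Police error rate: 81%"
```

The same arithmetic applied to the South Wales figures (2,297 false positives out of 2,470 matches) yields the roughly nine-in-ten error rate the report describes.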
In the “Gender Shades” project, conducted by the MIT Media Lab and Microsoft Research, facial analysis algorithms from IBM, Microsoft, and Megvii were evaluated, and the results show that darker-skinned females are the group most vulnerable to gender misclassification, with error rates up to 34.4 percent higher than those for lighter-skinned males. In an American Civil Liberties Union (ACLU) report, Amazon’s “Rekognition” facial recognition tool was tested by comparing photos of 535 US Congress members against a face database of 25,000 arrest photographs. The test produced 28 false matches, 39 percent of which were people of color, although they make up only 20 percent of the input.
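The ACLU percentages can likewise be checked from the raw counts. A minimal sketch in Python, assuming 11 of the 28 false matches involved people of color (a count consistent with the reported 39 percent, not stated explicitly above):

```python
# ACLU test of Amazon's "Rekognition" on 535 Congress member photos.
false_matches = 28
false_matches_poc = 11     # assumption: the count consistent with the reported ~39%
input_poc_share = 0.20     # people of color share of the input, as reported

share_of_false = false_matches_poc / false_matches
print(f"People of color among false matches: {share_of_false:.0%}")          # ~39%
print(f"Overrepresentation vs. input share: {share_of_false / input_poc_share:.1f}x")  # ~2.0x
```

In other words, people of color were roughly twice as likely to appear among the false matches as their share of the input would predict, which is the disparity the report highlights.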
In the United States, San Francisco legislators voted unanimously to ban the use of facial recognition technology across local agencies, including transportation and law enforcement authorities, arguing that the ban will protect residents against possible inaccuracy and bias and preserve their privacy and freedom. A few months later, Somerville and Oakland passed their own bans on municipal use of facial recognition. In March 2019, senators introduced a bipartisan bill, the Commercial Facial Recognition Privacy Act, to provide legislative oversight of commercial applications of facial recognition. The bill would prohibit commercial users from collecting and re-sharing facial data to identify or track consumers without their consent.
Meanwhile, the use of the technology by police and other law enforcement is proving divisive. In a poll conducted by GlobalData, 53% of respondents said ‘no’ to police use of facial recognition, reporting that they were not happy with the use of the technology by law enforcement, while 47% said they were pleased with its use by such organizations.
“The response comes as the EU is considering a ban on the use of facial recognition until the technology reaches a greater stage of maturity. A draft white paper, which was first published by the news website EURACTIV in January, showed that the European Commission was considering a temporary ban,” technology editor Lucy Ingham said.
“It proposed that ‘use of facial recognition technology in public spaces by private or public actors would be prohibited for 3–5 years during which a sound methodology could be identified and developed for assessing the impacts and possible risk management measures.’”
“While this may seem extreme, particularly given that police forces around Europe are already using facial recognition, there is a case for the technology not yet being mature enough for regular use. For example, an independent report on the facial recognition technology used by the Metropolitan Police to identify potential suspects found that it was inaccurate in 81% of cases. However, the Met claimed that the error rate was only 1 in 1000.”
Since then, the Met Police has announced that it will now use the technology as part of routine operations, a move that Silkie Carlo, director of Big Brother Watch, branded “an enormous expansion of the surveillance state and a severe threat to civil liberties in the UK.” However, police forces maintain that the technology prevents crime and does not breach privacy.
“There are also issues with identifying people of color, with tests by the US government finding that even the most accurate facial recognition technologies misidentify black people at a rate at least five times higher than for white people,” Lucy said.