The twisted case of facial recognition
Proceeding with caution
Another incident, more abstract but possibly even more disturbing, is the case of a research paper announced in May 2020 by data scientists at Harrisburg University, who claimed they could predict “with 80% accuracy and no racial bias, whether someone is a criminal based solely on a picture of their face.” As reported in Wired, by late June more than 1,000 machine learning researchers had denounced the Harrisburg findings in a public letter, and the intended publisher had declined to go ahead with publication. The proposed research was only the latest in a long series of papers claiming that criminality can be inferred through facial recognition analytics. In June 2020, according to The New York Times, Microsoft, Amazon, and IBM each announced that they would stop supplying their facial recognition technology to law enforcement.
It says something about the state of our AI that both the advances in machine translation and the advances in facial recognition center on two of the greatest strengths embodied in the DNA of our species. We are designed to communicate with each other through sounds and complex symbolic messaging, and we do this naturally, with a facility that has so far given us an evolutionary advantage. We are also designed to recognize the details of human faces down to the most minute degree. This is not only a social advantage but often a survival tool.
While statistically driven computer intelligence has excited us by pushing its accuracy rate up to 80%, and sometimes 90%, in certain applications, we too often let that excitement blind us to the very human tragedies that occur when mistaken machine understandings or mistaken identities begin to affect real people’s everyday lives.
While machine translation is performing impressively well in the applications it addresses today, the performance of facial recognition in industrial-grade deployments is another story altogether. Currently, we are in danger of plummeting into the depths of irrational exuberance around facial recognition and the video surveillance it enables. We have viewed China’s program to suppress dissent of any sort through ubiquitous surveillance cameras as the overreach of a totalitarian regime. But when, here in the U.S., we are prepared to substitute machine identification of crime suspects for the human judgment and investigative protocols of law enforcement professionals, we are ignoring the glaring inadequacies of the AI.
Hype versus reality
Let us not succumb to the hype being generated around machine vision. We should treat with extreme skepticism any claim that a neural network’s ability to draw clairvoyant inferences from pixel analytics is more trustworthy than our own judgment. Let’s recognize that our own inadequacies in preparing data and in removing built-in bias when we train new algorithms are steering the technology into troubling territory. We know that our short attention spans when it comes to maintaining the models of learning systems, and our gullible eagerness to adopt the latest shiny tech gadget, incline us to take shortcuts and accept (at least initially) fundamentally flawed solutions.
Facial recognition technology has a well-deserved reputation for problematic behaviors in the wild. For the sake of all of us citizens, we must exercise extreme vigilance until we can take the twists out of the judgments generated by these systems.