Facial recognition

Systems that identify or verify a person's identity from photos or video—in real time or after the fact—with well-documented accuracy failures that fall hardest on Black, brown, and trans people.

What it is / How it works

Facial recognition systems use deep-learning models to convert a face image into a numerical "faceprint." That faceprint is compared against a reference database—mugshots, driver's license photos, social media images—and a ranked list of candidates is returned. A human operator (or increasingly, an algorithm) then decides whether there is a match.
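The comparison step can be sketched in a few lines. This is a toy illustration, not any vendor's implementation: the 4-dimensional "faceprints" below are made-up values (real systems use embeddings of roughly 128 to 512 dimensions produced by a deep network), and cosine similarity stands in for whatever metric a given system uses.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two faceprint vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_candidates(probe: list[float],
                    gallery: dict[str, list[float]]) -> list[tuple[str, float]]:
    """Score the probe against every gallery record and return a ranked list."""
    scores = [(name, cosine_similarity(probe, emb)) for name, emb in gallery.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Hypothetical gallery of enrolled faceprints (illustrative values only).
gallery = {
    "record_a": [0.9, 0.1, 0.0, 0.4],
    "record_b": [0.1, 0.8, 0.3, 0.0],
    "record_c": [0.85, 0.15, 0.05, 0.35],
}
probe = [0.88, 0.12, 0.02, 0.38]
print(rank_candidates(probe, gallery))
```

Note that the system always returns *some* ranked list, even for a face that is not in the database at all; it is the human reviewer (or a threshold) that must decide whether the top candidate is actually a match.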

Systems can run in two modes: 1:1 verification (does this face match this claimed identity?) used at borders and checkpoints, and 1:N identification (who is this person, out of a database of millions?) used in policing. Real-time systems can scan faces from live camera feeds and alert officers when a match occurs.
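The two modes apply the same underlying comparison differently, and the difference matters at scale. A hedged sketch, where the distance metric, the 0.6 threshold, and the 1-in-100,000 false-match rate are illustrative assumptions rather than values from any real deployment:

```python
import math

def distance(a: list[float], b: list[float]) -> float:
    """Euclidean distance between two faceprint vectors (lower = more similar)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(probe: list[float], enrolled: list[float],
           threshold: float = 0.6) -> bool:
    """1:1 verification: does the probe match this one claimed identity?"""
    return distance(probe, enrolled) <= threshold

def identify(probe: list[float], gallery: dict[str, list[float]],
             threshold: float = 0.6) -> list[tuple[str, float]]:
    """1:N identification: every gallery record within the threshold is a candidate."""
    scored = [(name, distance(probe, emb)) for name, emb in gallery.items()]
    return sorted((c for c in scored if c[1] <= threshold), key=lambda c: c[1])

# A threshold tuned for 1:1 checks behaves differently in 1:N search:
# at an assumed false-match rate of 1 in 100,000 per comparison, a search
# against a 10-million-record database averages this many false candidates
# even when the person is not in the database at all.
print(10_000_000 / 100_000)  # 100.0
```

This is why false positives are a structural feature of large-database identification, not an occasional glitch: the bigger the gallery, the more innocent people surface as candidates for every search.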

Accuracy depends heavily on image quality, lighting, camera angle, and the demographic composition of the training data. Databases built from mugshots over-represent people who have been previously arrested—themselves a product of racist policing—which compounds bias in the system.

Harms and bias

NIST studies and independent audits consistently find that commercial facial recognition misidentifies darker-skinned people, women, elderly people, and transgender or nonbinary people at significantly higher rates than white men. In law enforcement use, false positives cause wrongful arrests; several documented cases have involved Black men being arrested on false matches, some of whom spent days in custody.

Facial recognition enables persistent tracking: once a face is in a database, every camera integrated with that system can log where that person appears. Used at scale, this is mass surveillance of everyone who enters a space—protests, clinics, places of worship, courthouses. It chills protected activity and threatens the right to anonymous participation in public life.

Hardware and infrastructure

Facial recognition runs on top of existing camera infrastructure; any sufficiently high-resolution camera feed can be analyzed. Purpose-built hardware includes:

  • Avigilon (Motorola Solutions) – Appearance Search and video analytics built into cameras and their ACC VMS, including a facial recognition module
  • Hikvision / Dahua – cameras with on-device facial recognition, widely deployed globally
  • NEC NeoFace – facial recognition appliances used at airports and border crossings
  • Idemia – biometric hardware and software for border control, law enforcement, and civil ID programs

Software vendors

  • Clearview AI – built a database of 30+ billion images scraped from the public web; licenses face-search to law enforcement agencies and governments worldwide; subject to multiple regulatory actions and bans
  • DataWorks Plus – facial recognition search tool widely used by state and local police, often accessing DMV and mugshot databases
  • Amazon Rekognition – cloud facial analysis API; sold to law enforcement until advocacy pressure led to a pause; still available commercially
  • Microsoft Azure Face – cloud facial recognition API; in 2020 Microsoft announced it would not sell facial recognition to U.S. police departments until federal regulation is in place
  • BriefCam (Canon) – video analytics platform with facial recognition and crowd-analysis features, integrated with many VMS platforms
  • Palantir – data fusion and analytics platform that can ingest and correlate facial recognition outputs with other surveillance data