Facing the Issue: San Francisco Bans City Use of Facial Recognition Technology

Snell & Wilmer

[co-author: Haley Breedlove, Summer Associate*]

On May 21, 2019, the City of San Francisco passed an ordinance banning the use of facial recognition software by police and other city agencies. The San Francisco Board of Supervisors voted 8-1 in favor of the ban, which went into effect in June 2019. The ordinance bars all city departments, including police and transit authorities, from using facial recognition technology, with exceptions for the San Francisco International Airport and the Port of San Francisco because they fall under federal jurisdiction. Even proponents of the ordinance acknowledge that facial recognition technology can be a helpful tool for police and other public safety agencies in locating suspects and missing persons; however, they point to privacy, civil liberty, and accuracy concerns surrounding the technology. The ordinance states in its general findings:

“While surveillance technology may threaten the privacy of all of us, surveillance efforts have historically been used to intimidate and oppress certain communities and groups more than others, including those that are defined by a common race, ethnicity, religion, national origin, income level, sexual orientation, or political perspective . . . . [t]he propensity for facial recognition technology to endanger civil rights and civil liberties substantially outweighs its purported benefits.”

San Francisco is believed to be the first U.S. city to ban its own use of the rapidly growing, and often unregulated, technology, potentially paving the way for other city and state governments to follow suit.

What is Facial Recognition?

Facial recognition is a form of biometric identification that measures data points of an individual’s facial features and identifies patterns in those features. Like other common biometrics, such as fingerprints and iris scans, facial recognition measures physical characteristics and digitizes the information for storage in a searchable database. Unlike other biometrics, however, the concern with facial recognition is that, when left unregulated, it can more easily be performed without the individual’s consent.
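
To make the “searchable database” concept concrete, the short Python sketch below illustrates, in purely hypothetical terms, how a biometric template search generally works: a face image is reduced to a numeric template, and a probe template is compared against stored templates using a similarity score and threshold. The vectors, names, and threshold here are invented for the example and are not drawn from any particular vendor’s system.

```python
# Illustrative sketch only: a toy "template search" over a biometric database.
# In a real system, templates would come from a face-analysis model that
# converts a photo into a numeric feature vector; here they are made-up numbers.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Score how alike two templates are (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical enrolled database: identifier -> stored biometric template.
database = {
    "person_a": np.array([0.12, 0.88, 0.45, 0.31]),
    "person_b": np.array([0.91, 0.05, 0.33, 0.72]),
}

def search(probe: np.ndarray, threshold: float = 0.95):
    """Return the best match above the threshold, or None if nothing qualifies."""
    best_name, best_score = None, threshold
    for name, template in database.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# A probe image produces its own template, which is searched against the database.
probe_template = np.array([0.13, 0.86, 0.44, 0.33])
print(search(probe_template))  # in this toy example, likely matches "person_a"
```

The key point for the privacy discussion is that, once such a database exists, any new image can be searched against it without the pictured individual’s involvement.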

How is Facial Recognition Being Used?

Facial recognition technology is emerging everywhere, with wide-ranging and growing uses. It is most commonly used in security and surveillance systems, both private and governmental, to identify and track individuals. Amazon, Microsoft, IBM, Facebook, and the Chinese company Face++ are among the leaders in facial recognition software development. As the technology continues to develop, the systems have become less expensive and their use has become more widespread.

How Facial Recognition is Already Used by Government and Police Agencies

The FBI operates a facial recognition database covering over half of American adults, compiled mostly from criminal records, state driver’s license databases, and passport photos. The FBI’s photo database, called the Next Generation Identification – Interstate Photo System (NGI-IPS), allows the FBI and certain state and local governments to submit facial recognition search requests to identify unknown persons. The U.S. government recently began using facial recognition in some airports to track international travelers. Hundreds of police agencies already use facial recognition technology to identify and track suspects in crimes ranging from thefts to shootings. The software is implemented in police body cameras, police vehicles, and surveillance cameras in busy areas. Police often obtain video of a suspect from police cameras or private surveillance cameras and can match the individual against a database of criminal records and public data.

Other Uses of Facial Recognition Software

Facial recognition is also used in a variety of contexts outside the area of government security.

Issues with Facial Recognition Software

Facial recognition technology used to identify and track individuals has evolved rapidly in recent years, prompting discussions about ethical and privacy concerns. The technology’s rapid development has arguably outpaced the ability to regulate it.

Privacy Concerns

Issues arise where facial recognition is used without individuals’ consent and their facial biometric data is stored, shared, and sometimes sold. As Neema Singh Guliani, Senior Legislative Counsel at the ACLU, recently stated at a congressional hearing on facial recognition, “we may have put the cart before the horse” with the rapid growth of facial recognition technology and the lack of protections for consumers. Although facial recognition software has potential public safety benefits when used by police to track and locate criminal suspects, those benefits potentially come at the cost of individual liberties and civil rights.

Does Facial Recognition Technology Result in Inaccurate and/or Racially Biased Results?

Some groups oppose the use of facial recognition software not only for its impact on individual privacy rights, but also due to concerns over the potential for inaccurate and/or racially biased results. The technology used by law enforcement agencies is said to be inaccurate as much as 15 percent of the time. Additionally, people of color are more likely to be misidentified than Caucasians.

Gender Shades, a study published in February 2018 by the MIT Media Lab, highlighted concerning inaccuracies in classifying gender from facial images in top commercial facial recognition software, including systems from IBM and Microsoft; a follow-up audit later extended the analysis to Amazon’s “Rekognition” software. The research found the software was better at classifying white male faces than darker-skinned or female faces.
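
The kind of disparity the studies measured can be illustrated with a short, purely hypothetical calculation: given a set of classification results tagged with a demographic subgroup, compute the error rate for each subgroup and compare. The records below are invented for the example and do not reproduce the studies’ figures.

```python
# Illustrative only: per-subgroup error rates, the basic measurement behind
# audits like Gender Shades. All records below are invented for the example.
from collections import defaultdict

# Each record: (demographic subgroup, true label, label predicted by the software)
results = [
    ("darker_female",  "female", "male"),    # a misclassification
    ("darker_female",  "female", "female"),
    ("darker_male",    "male",   "male"),
    ("lighter_female", "female", "female"),
    ("lighter_male",   "male",   "male"),
    ("lighter_male",   "male",   "male"),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, truth, predicted in results:
    totals[group] += 1
    if truth != predicted:
        errors[group] += 1

for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} ({errors[group]}/{totals[group]})")
```

A gap between the subgroup error rates, rather than overall accuracy alone, is what signals the kind of bias the studies reported.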

IBM and Microsoft have acknowledged concerns with the technology’s accuracy given its rapid development and have implemented changes to address the racial and gender biases identified by the research. Amazon, however, has disputed the findings and claims that in internal tests of an updated version of Rekognition it found “no difference” in gender classification accuracy across all genders.

In 2018, the ACLU tested Amazon’s facial recognition software by comparing photos of members of Congress against a database of public arrest photos. Amazon’s software reportedly misidentified 28 members of Congress as people who had been arrested. Additionally, the January 2019 MIT Media Lab follow-up to Gender Shades showed that facial recognition software, although improving in accuracy, still exhibits racial bias and is not entirely accurate.

Potential For Misuse?

Even as facial recognition technology improves in accuracy, concerns about potential misuse remain. Where public and private entities are left unregulated and free to collect facial recognition data, it is unclear how that data may be used or even sold. Additionally, when sensitive biometric data is collected and stored in databases, there is the potential for that data to be breached.

Are there Regulations on Facial Recognition Technology?

There are currently no federal regulations governing the commercial or government use of facial recognition technology. However, a few states have passed regulations related to facial biometric data. In 2008, Illinois passed the Biometric Information Privacy Act (BIPA), which regulates the use of biometric data, including consent and collection requirements, storage practices, secure transmission, the sale of biometrics, and other privacy safeguards. BIPA applies to private individuals and entities but excludes federal, state, and local governments. A few other states, including Texas and Washington, have passed similar regulations on the use of facial biometrics by private entities.

The U.S. Senate is currently considering the Commercial Facial Recognition Privacy Act of 2019. The Act is aimed at prohibiting commercial users of facial recognition technology from collecting and re-sharing data to identify and track consumers without their consent. The Act does not apply to the federal government, state or local governments, law enforcement agencies, or national security and intelligence agencies.

San Francisco’s ban, coming from the center of the technology industry, may lead other state and local governments to consider similar bans and regulations. Major technology companies, including Microsoft, which develops facial recognition technology, recognize that some regulation is necessary. Similar bans and regulations have been proposed in several cities across the U.S. It will be interesting to see whether other city and state governments follow suit in banning their own use of such technology, at least until greater regulation is in place.

Stay Tuned!

*Haley Breedlove is a 2021 J.D. candidate at the University of New Hampshire School of Law and a 2019 summer associate in the Phoenix office of Snell & Wilmer. She is not yet admitted to practice law. Her work in researching and drafting this article is greatly appreciated.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Snell & Wilmer | Attorney Advertising
