Legislators hear testimony on facial recognition bills
In line with its “Press Pause on Face Surveillance” campaign, the ACLU of Massachusetts testified Nov. 23 before the state Joint Committee on the Judiciary in support of bills moving through the Massachusetts legislature that would limit the use of facial recognition technology.
The companion bills H.135 and S.47 would strengthen legislation passed last session as part of a police reform bill. The 2020 legislation instituted limits on law enforcement use of facial recognition technology and required documentation of those uses. If passed, the new law would prohibit use of facial recognition technology by all government agencies and officials except in a handful of specifically identified situations.
In testimony, Kade Crockford, director of the ACLU’s Technology for Liberty Program, said the current law does not go far enough to protect racial justice, privacy, civil rights and civil liberties.
“Facial recognition is dangerous when it works and when it doesn’t,” Crockford said. “There are many indications that the technology is, in too many situations, not ready for prime time.”
Under the proposed legislation, government agencies would be prohibited from using facial recognition technology to track or monitor people in public spaces. The bills carve out structured exceptions for law enforcement, which proponents of the technology point to as one of its key uses.
For law enforcement agencies to use the technology, police would first be required to obtain a warrant, except in emergencies involving immediate danger of death or serious injury, or when the technology is used to identify a deceased person.
Supporters of the legislation point to a track record of the technology failing to properly identify people of color and women, citing in particular the 2018 Gender Shades study from MIT, which examined the accuracy of AI gender classification products.
The MIT team found that all three AI gender classifiers it examined struggled most with darker-skinned female faces. One of the three showed a nearly 35% gap in accuracy between lighter-skinned male faces and darker-skinned female faces: at the time of the research, IBM’s tool classified lighter-skinned male faces with 99.7% accuracy but darker-skinned female faces with only 65.3% accuracy.
State Rep. Orlando Ramos, who presented the bill in the House, said use of the technology can be problematic for people of color.
“This is technology that is inconsistent, inaccurate and overall dangerous when it comes to misidentifying people of color and putting them in a situation of having an unnecessary run-in with law enforcement officers,” Ramos said in testimony.
For Johnny McInnis, political director for the Boston Teachers Union, the proposed legislation is important because it expands regulations to all public agencies in the commonwealth, not just law enforcement.
“Students, teachers and school staff must be able to get to school without worrying that their movement is being tracked by biased technology,” McInnis said.
He said that if the use of facial recognition technology expands, it could make families, especially those from marginalized communities, less comfortable attending Boston Public Schools events.
“We oppose the use of face surveillance in schools, which would not help participation in school events among immigrant families and parents of color who may fear coming to school events if they think they are being tracked by surveillance technology,” McInnis said.
Hayley Tsukayama, legislative activist for the Electronic Frontier Foundation, called the proposed bills a step in the right direction but urged the state to bar government use of the technologies entirely.
“We urge you to continue to broaden and strengthen this bill,” Tsukayama said. “Specifically, at EFF, we believe that face recognition use by the government is so dangerous that it must be banned entirely.”
Jake Parker, senior director of government relations at the Security Industry Association (SIA), testified in opposition to the bills. He said he supports legislation that would make sure the technology is used appropriately and worries that banning these technologies could have unintended consequences.
“Any kind of advanced technology you want to make sure is being used properly and not in a way that would harm people, and facial recognition technology has a long history of successful use in law enforcement, but we need to make sure that stays that way,” Parker said.
His objection, instead, is to the bills’ near-complete restrictions on the technology’s use.
“The problem with the proposed bills … is instead of setting up rules for using the technology, all the proposals include complete bans on use of the technology in certain situations or by certain users,” Parker said.
He also suggested that MIT’s Gender Shades study is not a fair benchmark for the uses the bill would prohibit. Most law enforcement applications, Parker said, involve matching two faces, not using AI to infer characteristics about individuals, which is what Gender Shades measured.
In other words, the nearly 35% gap found by the Gender Shades study reflects not the technology’s ability to match two photos of the same person, but its ability to guess characteristics from a single photo.
“Software that estimates the age or sex of a person in a photo is fundamentally different than matching photos of specific individuals,” Parker said. “This does not result in identification or misidentification and doesn’t apply to facial recognition.”
An SIA analysis of June data from the federal National Institute of Standards and Technology found that each of the top 150 algorithms was more than 99% accurate at matching mug shots across Black male, white male, Black female and white female demographics.
Jonathan Winer, chief legal officer for Clearview AI, which develops facial recognition software, said that this sort of technology can be an essential tool to protect public safety when used properly.
Restricting corporate use
The Joint Committee on the Judiciary also heard testimony on H.117, which would restrict companies’ use of facial recognition technology. Under H.117, companies could use facial recognition only in ways that benefit, and with the consent of, the individuals whose data they use. The bill also generally bans the technology in public-facing spaces operated by businesses.
State Rep. Dylan Fernandes, who sponsored the bill, said it aims to address his concern that facial recognition technology could allow companies to alter or deny services to individuals based solely on a profile generated by the technology.
SIA’s Parker said in his testimony that he objected to H.117’s limits on opt-in applications of facial recognition in private businesses for uses such as speeding up transactions at stores or check-ins at hotels, securing bank transactions or confirming the identity of patients in health care facilities.
“The use of facial recognition technology is so very different in different applications; it has different implications when it comes to privacy, especially,” Parker said. “I think that what should be done is you want to preserve those benefits for consumers, for citizens, but at the same time, if there are applications of facial recognition technology or ways we don’t want to see them used, those should be specifically and narrowly restricted.”
Fernandes, however, said he sees the stakes involved in facial recognition technology as being higher.
“Putting this fiduciary responsibility on companies will ensure that our most deeply personal possession — our face — will remain our own and is only used in our best interest,” Fernandes said.