Facial Recognition Technology: Minimizing Risk in the Face of Increasing Liability
At the turn of the century, facial recognition technology was more science fiction than fact. Rapid advances have fueled a proliferation of this technology — which continues to expand into new areas of public and private life. At the same time, various states and municipalities are enacting new, stringent laws regulating the use of facial recognition technology by commercial entities. It is thus imperative that all companies using facial recognition technology take actionable steps to leverage it in an effective fashion that complies with current and anticipated laws.
Overview of Facial Recognition Technology
Facial recognition technology involves using “biometrics” (i.e., individual physical characteristics) to digitally map an individual’s facial “geometry.” These measurements are then used to create a mathematical representation known as a “facial template” or “facial signature.” This stored template or signature is then compared against the physical structure of an individual’s face to confirm that person’s identity or to uniquely identify that individual.
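The matching step described above can be sketched in simplified form: a facial template is typically a numeric vector, and verification amounts to measuring how close a newly captured template is to the stored one. The sketch below is purely illustrative — the vector values, the threshold, and the function names are assumptions for demonstration, not any vendor's actual implementation (real systems use learned embeddings of 128 or more dimensions and empirically tuned thresholds).

```python
import math

MATCH_THRESHOLD = 0.8  # illustrative value; production systems tune this empirically


def cosine_similarity(a, b):
    """Compare two facial templates (numeric vectors) by the angle between them."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


def is_same_person(stored_template, probe_template, threshold=MATCH_THRESHOLD):
    """Verification: does the freshly captured probe match the enrolled template?"""
    return cosine_similarity(stored_template, probe_template) >= threshold


# Hypothetical low-dimensional templates for illustration only.
enrolled = [0.12, 0.88, 0.45, 0.31]  # stored at enrollment
probe = [0.10, 0.90, 0.44, 0.30]     # captured at verification time

print(is_same_person(enrolled, probe))  # prints True
```

Note that because the stored vector, not a photograph, is the asset being compared, the template itself is the sensitive record that biometric privacy laws regulate.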
Technological advancements continue to unlock new ways for companies to utilize facial template data to improve the efficiency and effectiveness of their operations. Today, it is commonplace to use your face to unlock a smartphone or “check in” at the airport. But while facial recognition technology has produced a myriad of benefits, its use also carries significant privacy risks. Unlike mutable forms of personally identifiable information — such as passwords or account numbers, which can be changed — facial templates and other biometric data, once compromised, can never again serve as a secure identifier.
The Rise of Biometric Privacy Regulation (And Corresponding Risk)
To combat the risk posed by facial template data and other biometric data, several states enacted laws that regulate the collection and use of facial template data by business entities.
Illinois’ Biometric Information Privacy Act (“BIPA”) is considered the most stringent state law of its kind. Under BIPA, a private entity cannot collect or store facial template data without first providing notice, obtaining written consent and making certain disclosures. BIPA also contains a private right of action provision that permits recovery of statutory damages of $1,000 per negligent violation and $5,000 per intentional or reckless violation.
Beyond Illinois, Texas and Washington have also enacted biometric privacy laws covering facial recognition technology, which impose similar requirements related to notice, consent, and mandatory security measures.
This new wave of biometric privacy laws has created substantial liability. The risk arises primarily from the statutory damages available under BIPA, which the Illinois Supreme Court made much easier to recover in 2019 by ruling that plaintiffs can pursue BIPA claims even where no actual harm or damage is sustained. As just one example of the tremendous liability posed by BIPA, in early 2020, Facebook settled a major BIPA class lawsuit for a staggering $550 million. Moving forward, companies utilizing facial template data in connection with their business operations will continue to see a flurry of BIPA class action filings.
In addition, many states without laws regulating facial recognition technology ramped up their efforts at the start of 2020 to enact similar laws of their own.
For example, at the start of the year Washington’s legislature introduced the Washington Privacy Act (“WPA”), which, among other things, imposes a stringent set of requirements and limitations on biometric data and the use of facial recognition technology. Importantly, one version of the bill proposed by the Washington House of Representatives contained a private right of action provision with statutory penalties of $50,000 for each negligent violation and $100,000 for each intentional violation. If enacted, the WPA would provide the highest statutory damages available under any piece of U.S. data privacy legislation.
Even if the WPA fails to make it into law, it is clear the risk of potential legal liability — with corresponding sky-high damage awards — will increase exponentially in the immediate future.
Fortunately, there are several best practices companies can implement to minimize the risk of becoming embroiled in high-stakes class action litigation stemming from the use of facial template or other biometric data.
First, companies should maintain privacy policies that address facial template data. These policies should encompass the following issues: (1) notice that facial template data is being collected and/or stored; (2) the current and reasonably foreseeable purposes for which the company utilizes facial template data; (3) how facial template data will be used; (4) a description of the protective measures used to safeguard facial template data; and (5) the company’s facial template data retention and destruction policies and practices. These policies should also strictly prohibit the disclosure of any individual’s facial template data without that individual’s consent and should bar the company and its employees from selling or otherwise profiting from any such data.
Second, to further support the principle of transparency, companies should provide conspicuous, advance notice of the use of facial recognition technology before any facial template data is captured, used, or stored. In so doing, companies should offer consumers meaningful notice regarding how facial templates are created, and how such data will be used, shared and stored by the company. Where appropriate, or required by law, contextual and just-in-time notices may be necessary.
Third, when feasible, companies should obtain express, affirmative consent from consumers before any data derived from facial recognition technology is collected, used, or stored.
The Federal Trade Commission (FTC) recommends companies obtain consumers’ affirmative consent before capturing or using facial template data in, at a minimum, two scenarios: (1) where a company intends to use consumers’ facial template data in a way that diverges from what was represented when the company originally collected the consumer’s data; and (2) where a company intends to use facial recognition to identify anonymous images of a consumer to someone who could not otherwise identify that individual without assistance.
Fourth, companies should obtain signed, written consent — in the form of a written release — from consumers authorizing the company to collect, use and store their facial template data prior to the time any such data is captured or used for any purpose.
Data Security Measures
Finally, companies must implement effective data security safeguards to protect all data captured, used and stored through facial recognition technology from improper disclosure, access or acquisition. Companies should safeguard facial template data: (1) using the reasonable standard of care applicable to their given industry; and (2) in a manner at least as protective as that in which the company stores, transmits and protects other forms of sensitive personal information. Companies should also periodically assess their facial template data security measures and update their security programs to address and neutralize any new or evolving threats and vulnerabilities.
Facial recognition could change almost every aspect of our daily lives. And companies using facial recognition technology must comply with an increasingly complex maze of laws, which will only become more difficult to navigate moving forward.
As such, companies that incorporate facial recognition technology into their business practices (even those operating in jurisdictions where no facial recognition or other biometric laws are on the books) should consider taking proactive measures to create and implement compliance programs that encompass the principles and practices described above. By doing so, companies can maintain legal compliance and mitigate potential risk. Involving experienced counsel in this process is an important first step that can pay significant dividends.
“Facial Recognition Technology: Minimizing Risk in the Face of Increasing Liability,” by David J. Oberly, Jeffrey N. Rosenthal, and Ana Tagvoryan was published in Security Magazine on February 26, 2020.