
The BR Privacy & Security Download: January 2024

Welcome to this month's issue of The BR Privacy & Security Download, the digital newsletter of Blank Rome’s Privacy, Security & Data Protection practice. We invite you to share this resource with your colleagues and visit Blank Rome’s Privacy, Security & Data Protection webpage for more information about our team.


STATE & LOCAL LAWS & REGULATION

Utah Consumer Privacy Act Takes Effect
The Utah Consumer Privacy Act (“UCPA”) took effect on December 31, 2023. Of the comprehensive state privacy laws that preceded it, the UCPA most closely follows the Virginia Consumer Data Protection Act (“VCDPA”), providing consumers the rights to access and delete personal data and to opt out of the processing of personal data for purposes of targeted advertising and the sale of personal data. The UCPA also requires that consumers be given notice and an opportunity to opt out of the processing of their sensitive data, and requires opt-in parental consent for the processing of personal data of children under 13. The UCPA does not provide a private right of action. Utah’s Attorney General has exclusive authority to enforce the UCPA with the assistance and consultation of the Division of Consumer Protection, to which consumers may submit complaints for alleged violations. The UCPA provides a 30-day cure period.

CPPA Board Advances Latest Proposed Revisions to CCPA Rules
The California Privacy Protection Agency (“CPPA”) Board (the “Board”) voted unanimously to advance discussions about the most recent draft of Proposed Revisions to the California Consumer Privacy Act Regulations (the “Draft Revisions”). The Board’s discussions focused on automated decision-making technology, risk assessments, and cybersecurity audits. The Draft Revisions, if approved, would: (1) require businesses to provide consumers a “meaningful understanding” of the business’s data collection and third-party data disclosure practices; (2) clarify that service providers and contractors are third parties for purposes of satisfying disclosure notices; (3) require mobile applications to link to their privacy policies in the app settings menus; (4) heighten protections for the personal information of consumers under 16 by deeming such data “sensitive” information; and (5) increase the monetary amounts in the applicability thresholds for “businesses” subject to the regulations, as well as the civil penalties for violations. The formal rulemaking process for the package of rule changes will begin after CPPA staff revises the Draft Revisions to reflect the Board’s discussions.

CPPA Approves Legislative Proposal to Require Browsers to Offer Opt-Out Preference Signals
The California Privacy Protection Agency (“CPPA”), which enforces the California Privacy Rights Act (“CPRA”), voted to advance a legislative proposal that would require browser vendors to offer opt-out preference signals for consumers to exercise their CPRA right to opt out of the sale/sharing of personal information. Under the CPRA, businesses are required to honor opt-out preference signals as valid opt-out requests. Opt-out preference signals allow consumers to opt out with all businesses they interact with online without having to make individualized requests. Currently, to exercise their opt-out rights, consumers must either use a browser that supports an opt-out preference signal or find and download a browser plugin. To date, only a limited number of browsers offer native support for opt-out preference signals. If the proposal is adopted, California would be the first state to require browser vendors to offer consumers the option to enable these signals.

NetChoice Sues Utah to Block Social Media Regulations
NetChoice, LLC (“NetChoice”), a trade group representing social media companies (“SMCs”), including Meta, TikTok, and X, filed suit against Utah in NetChoice v. Reyes, naming Attorney General Sean Reyes and Division of Consumer Protection Director Katherine Hass as defendants. The lawsuit seeks to block enforcement of the Utah Social Media Regulation Act (the “Act”), effective March 1, 2024, arguing that the Act’s age-verification restrictions, social media curfew, and other requirements imposed on SMCs and Utah minors under the age of 18 would, if enforced, violate the First and Fourteenth Amendments as well as federal law, compromise Utahns’ data security, and strip parents of their rights. NetChoice has previously succeeded in challenging similar regulatory efforts in other states, including California and Arkansas.


FEDERAL LAWS & REGULATION

HHS Finalizes Health Data, Technology, and Interoperability Rule
The U.S. Department of Health and Human Services (“HHS”) published the final “Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) Rule” (the “HTI-1 Rule”), which, among other things, (1) establishes first-of-its-kind predictive algorithm transparency requirements; (2) revises certain definitions and exceptions (and adds an exception) with respect to information blocking and sharing standards of electronic health information; and (3) implements the Electronic Health Record (“EHR”) Reporting Program provision of the 21st Century Cures Act and adopts new interoperability-focused reporting metrics for certain developers participating in the updated ONC Health IT Certification Program. The final rule is effective on February 8, 2024.

HHS Releases Healthcare Cybersecurity Strategy
The U.S. Department of Health and Human Services (“HHS”) released a concept paper outlining HHS’s cybersecurity strategy for the healthcare sector. The concept paper builds on the National Cybersecurity Strategy released by the Biden Administration. The paper outlines four areas of action. First, HHS states that it will release new voluntary healthcare and public health sector cybersecurity performance goals (“HPH CPGs”) to help healthcare institutions plan and prioritize the implementation of high-impact cybersecurity practices. Second, HHS will work with Congress to obtain authority and funding for financial support and incentives for domestic hospitals to implement high-impact cybersecurity practices. Third, HHS will propose new enforceable cybersecurity standards informed by the HPH CPGs: the Centers for Medicare and Medicaid Services will propose new cybersecurity requirements for hospitals through Medicare and Medicaid requirements, and the HHS Office for Civil Rights will begin new rulemaking activities to add cybersecurity requirements to the HIPAA Security Rule. Finally, HHS will mature the Administration for Strategic Preparedness and Response’s (“ASPR”) coordination role as a “one-stop shop” for healthcare cybersecurity.

FTC Proposes Updates to Children’s Online Privacy Rules
The Federal Trade Commission (“FTC”) issued a notice of proposed rulemaking seeking comment on the FTC’s proposed changes to the Children’s Online Privacy Protection Rule. The FTC states that the proposed changes are aimed at shifting the burden from parents to providers to ensure that digital services are safe and secure for children. Proposed changes to the rule include requiring separate opt-in consent for targeted advertising, limiting practices that encourage children to stay online, codifying FTC guidance on the use of education technology to prohibit commercial use of children’s information, increasing accountability for safe harbor programs, and strengthening data security requirements, among other changes. The FTC has also proposed expanding the definition of personal information under the rule to include biometric identifiers.

FCC Adopts Updated Data Breach Notification Rules
The Federal Communications Commission (“FCC”) announced it has adopted rules to modify the FCC’s breach notification rules. The changes expand the definition of “breach” to include inadvertent access, use, or disclosure of customer information, except where such information is acquired in good faith by an employee or agent of a carrier, and such information is not used improperly or further disclosed. The updated FCC rule will require carriers to notify the Commission of breaches, in addition to existing obligations to notify the U.S. Secret Service and FBI. The changes also eliminate the requirement to notify customers of a breach in instances where a carrier can reasonably determine that no harm to customers is reasonably likely to occur, or where the breach solely involves encrypted data and the carrier or provider has definitive evidence that the encryption key was not also accessed, used, or disclosed. Changes also require carriers to notify customers of breaches of covered data without unreasonable delay after notification to the FCC and law enforcement agencies, and in no case more than 30 days after reasonable determination of a breach, unless a delay is requested by law enforcement.

FCC Enters MOU with Four States
The Federal Communications Commission’s (“FCC”) Enforcement Bureau announced that it has signed Memoranda of Understanding (“MOUs”) with the Attorneys General of Connecticut, Illinois, New York, and Pennsylvania to share expertise and resources and to coordinate efforts in conducting privacy, data protection, and cybersecurity-related investigations to protect consumers. The MOUs come as part of the work of the FCC’s Privacy and Data Protection Task Force, which was created to work on privacy and data protection issues subject to the FCC’s authority under the Communications Act. Each MOU affirms that the FCC and the State Attorneys General “share close and common legal interests in working cooperatively to investigate and, where appropriate, prosecute or otherwise take enforcement action in relation to privacy, data protection, or cybersecurity issues” under sections 201 and 222 of the Communications Act.

OCC Publishes Statement on Emerging Risks with AI
The Office of the Comptroller of the Currency (“OCC”) has highlighted artificial intelligence (“AI”) as an emerging risk facing the federal banking system in its Semiannual Risk Perspective for Fall 2023 report. The report covers risks facing national banks, federal savings associations, and federal branches and agencies based on data as of June 30, 2023. In the report, the OCC acknowledges that developments in AI may yield several benefits, such as reducing costs and increasing efficiencies; improving products, services, and performance; strengthening risk management and controls; and expanding access to credit and other banking services. However, the OCC cautions that widespread adoption of AI may present significant challenges relating to compliance, credit, reputation, and operational risks. The report also notes that cyber threats continue and that, as banks leverage new technology to further digitalization efforts, there is a heightened risk of fraud and error, including fraud targeting peer-to-peer and other faster payment platforms.

FBI Publishes Guidance on Delay of Cyber Incident Disclosures
The Federal Bureau of Investigation (“FBI”) has published guidance on how companies can request a delay in disclosing cyber incidents to the Securities and Exchange Commission (“SEC”). The SEC’s new rules, which took effect on December 18, 2023, require registrants to disclose any “material” cybersecurity incident to the SEC via a Form 8-K. The SEC’s rules allow the Department of Justice to determine whether a delay in filing the 8-K is merited for reasons of national security or public safety. The FBI’s guidance states that to request a delay, registrants must email the FBI information about when the incident occurred and when materiality was determined, along with detailed information about the incident. The FBI’s guidance encourages registrants to engage with the FBI before making a materiality determination and warns that failure to report the cybersecurity incident to the FBI immediately upon the company’s materiality determination may cause the request to be delayed or denied.

Bipartisan Bill Against Airport Facial Scans Introduced in Senate
The Traveler Privacy Protection Act, S.B. 3361 (the “Bill”), was introduced in the Senate and referred to the Committee on Commerce, Science, and Transportation. The Bill, if passed, would prohibit the use of “facial recognition technology” and “facial matching software” in airports and would require the Transportation Security Administration (“TSA”) to, within 90 days of the Bill’s enactment, “dispose of any facial biometric information, including images and videos” previously collected through such facial recognition technologies. The introduction of the Bill follows the TSA’s announcement of its plans to expand its use of facial recognition scans over the next several years. The authors of the Bill assert that travelers are largely unaware of their ability to opt out of these “voluntary” facial recognition screenings, and that the Bill seeks to address the TSA’s failure to prominently display notices at its checkpoints informing travelers of their right to opt out of providing their sensitive biometric data to the government.

Senator Markey Tells Car Manufacturers to Do More to Protect Consumer Privacy
Senator Edward J. Markey sent letters to 14 car manufacturers urging them to implement and enforce stronger privacy protections. The letters requested responses to a variety of questions about the manufacturers’ data collection and use practices, including whether the automakers provide notice of their privacy practices to vehicle owners or users and whether they allow users to delete their data, regardless of the user’s state of residence. The letters follow a review of connected vehicle privacy practices that the California Privacy Protection Agency’s Enforcement Division began in July 2023, as well as a September 2023 Mozilla report stating that its review of car brand privacy notices revealed failures to meet minimum privacy standards.


U.S. ENFORCEMENT

FTC Prohibits Retailer From Using Facial Recognition Surveillance
The FTC announced a settlement with Rite Aid that will prohibit the retailer from using facial recognition technology for surveillance purposes for five years. Rite Aid had deployed artificial intelligence-based facial recognition technology to identify customers who may have engaged in shoplifting and other problematic behaviors. However, the FTC alleged in its complaint that Rite Aid failed to take reasonable measures to prevent harm to consumers, resulting in Rite Aid employees mistakenly accusing consumers flagged by the surveillance technology of wrongdoing. The FTC’s proposed order requires Rite Aid to implement comprehensive safeguards to prevent harm to consumers when using automated systems that rely on biometric information to track consumers or flag them as security risks, or to discontinue using any such technology if it cannot control the potential risks to consumers.

HHS OCR Settles First Ever Phishing Cyber-Attack Investigation
The U.S. Department of Health and Human Services (“HHS”) Office for Civil Rights (“OCR”) settled its first-ever phishing attack investigation, brought against Lafourche Medical Group (“Lafourche”) over a 2021 phishing attack that affected the electronic protected health information (“PHI”) of approximately 34,862 individuals. OCR’s investigation revealed that, prior to the 2021 incident, Lafourche had failed to conduct a risk analysis to identify potential threats or vulnerabilities to electronic PHI as required by the Health Insurance Portability and Accountability Act (“HIPAA”) and had no information security policies or procedures to safeguard PHI. The settlement requires Lafourche to pay $480,000 to OCR and to implement a corrective action plan to be monitored by OCR for two years. The corrective action plan includes implementing certain security measures, maintaining written policies and procedures to comply with HIPAA, and providing training to staff who have access to PHI.


INTERNATIONAL LAWS & REGULATION

EU AI Act Heads Toward Formal Adoption
The European Parliament and European Council reached political agreement on the EU AI Act (the “AI Act”), the world’s first comprehensive law governing the use of artificial intelligence. The AI Act introduces a risk-based framework. Certain uses of AI are prohibited under the AI Act because they present unacceptable risk. AI systems deemed high risk will be subject to the strictest requirements, including data governance, transparency, and assessment obligations. Limited-risk AI systems are subject to fewer obligations, and AI systems that are not prohibited or deemed high or limited risk are not subject to AI Act requirements. General-purpose AI systems are also subject to risk-based requirements, depending on whether they are considered to present systemic risk. The AI Act provides fines for non-compliance of up to 3% of annual global turnover, rising to 7% of annual global turnover for violations of prohibited practices or non-compliance with data requirements. The agreed version is now subject to formal approval by both the European Parliament and the European Council. Once effective, entities will have 6 months to comply with AI Act requirements relating to prohibited AI systems, 12 months for high-risk AI systems, and 24 months for all other obligations.

Political Agreement Reached on Cyber Resilience Act
The European Parliament and European Council reached a political agreement on the Cyber Resilience Act, which is intended to improve the cybersecurity of digital products. The Cyber Resilience Act will require manufacturers of hardware and software to implement cybersecurity measures across the entire lifecycle of a product, from design and development through the period after the product is placed on the market. Products must comply with the regulation’s requirements to obtain the CE marking indicating conformity. The Act would also require manufacturers to provide consumers with timely security updates for several years after purchase. The agreed version is now subject to formal approval by both the European Parliament and the European Council. Once adopted, the Cyber Resilience Act will enter into force on the 20th day following its publication in the Official Journal of the European Union. Once effective, manufacturers, importers, and distributors of hardware and software products will have 36 months to comply with most of the Act’s rules.

Canadian Privacy Commissioner Releases Principles for Generative AI
The Office of the Privacy Commissioner of Canada released Principles for Responsible, Trustworthy and Privacy Protective Generative AI Technologies. The principles are intended for developers and providers of foundation models and generative AI systems. The principles include ensuring legal authority for collecting and using personal information; using personal information only for appropriate purposes; establishing the necessity and proportionality of using generative AI, and of using personal information in generative AI models, to achieve the intended purpose; being open and transparent about collection practices; and establishing accountability, individual access, data minimization, accuracy, and security. For each privacy principle, the document identifies actions and considerations for applying that principle to the development and use of generative AI technologies.

CJEU Rules Fear May Qualify as Damages Under GDPR
The Court of Justice of the European Union (“CJEU”) ruled in VB v. Natsionalna agentsia za prihodite (C‑340/21) that fear of possible misuse of personal data can constitute non-material damage under the GDPR. The case stems from a cyber-attack on the Bulgarian National Revenue Agency that affected the personal data of 6 million individuals. The individual who brought the case alleged that the agency had violated the GDPR’s data security requirements and that they had suffered non-material damage as a result of the breach in the form of fear that their personal data might be misused in the future, or that they might be blackmailed, assaulted, or kidnapped. The CJEU ruled that, while a data breach itself does not establish a failure to comply with the GDPR’s security requirements, fear can constitute damage under the GDPR, provided a court finds that the fear is well founded in the specific circumstances of the individual.


© 2024 Blank Rome LLP. All rights reserved. Please contact Blank Rome for permission to reprint. Notice: The purpose of this update is to identify select developments that may be of interest to readers. The information contained herein is abridged and summarized from various sources, the accuracy and completeness of which cannot be assured. This update should not be construed as legal advice or opinion, and is not a substitute for the advice of counsel.