Publications
Newsletter

The BR Privacy & Security Download: February 2026

The BR Privacy & Security Download

Welcome to this month’s issue of The BR Privacy & Security Download, the digital newsletter of Blank Rome’s Privacy, Security & Data Protection practice. We invite you to share this resource with your colleagues and visit Blank Rome’s Privacy, Security & Data Protection webpage for more information about our team.


RECENT HIGHLIGHT

Blank Rome Welcomes New Business Litigation Of Counsel Glen L. Abramson in Philadelphia 

Blank Rome LLP is excited to announce that Glen L. Abramson has joined the firm’s Philadelphia office as of counsel in the Business Litigation practice group, as well as the firm’s Privacy, Security & Data Protection Team.


STATE & LOCAL LAWS & REGULATIONS

California AG Announces Surveillance Pricing Enforcement Sweep: California Attorney General (“AG”) Rob Bonta announced an investigative sweep into “surveillance pricing” practices in recognition of Data Privacy Day. Surveillance pricing refers to businesses’ use of consumers’ personal information such as shopping history, browsing data, location, and demographics to set targeted, individualized prices for products and services. The California Department of Justice is sending inquiry letters to businesses with significant online presence in the retail, grocery, and hotel sectors. These letters request information regarding companies’ use of consumer data in pricing decisions, public disclosures about personalized pricing, pricing experiments, and compliance measures related to algorithmic pricing and civil rights laws. AG Bonta emphasized that such practices may violate the California Consumer Privacy Act (“CCPA”), particularly its “purpose limitation” principle, which restricts businesses from using personal information in ways inconsistent with consumers’ reasonable expectations. The press release notes that surveillance pricing is often invisible to consumers, who typically cannot compare individualized prices with one another. This investigative sweep continues AG Bonta’s pattern of industry-wide compliance inquiries, following prior sweeps targeting streaming services, location data, and mobile applications.

New Jersey Governor Issues Executive Order on Protection of Children Online: New Jersey Governor Mikie Sherrill signed an Executive Order (the “Order”) establishing a comprehensive state framework to address the mental health impacts of digital technology on children and adolescents. The Order is intended to respond to youth mental health issues that have coincided with the widespread adoption of smartphones, social media, and artificial intelligence technologies. The Order mandates that all Executive Branch departments prioritize children’s mental health outcomes when addressing youth interactions with technology platforms, the internet, and social media. It directs the Chief Operating Officer to facilitate interagency coordination on matters pertaining to children’s online safety. Executive agencies are instructed to review existing policies and regulatory frameworks to identify opportunities to promote age-appropriate internet use, mitigate harms such as cyberbullying, deepfakes, and online exploitation, and support parents, educators, and healthcare providers in addressing mental health impacts of online activity. A key provision establishes the Office of Youth Online Mental Health Safety and Awareness within the Department of Health. This office will collect and analyze data and recommendations from state agencies to guide youth in responsible use of online platforms. The administration also intends to establish a research center at a state institution of higher education focused on digital technology’s relationship to children’s mental health. The Order took effect immediately upon signing. 

Massachusetts Governor Announces Legislation on Social Media Requirements for Kids: Massachusetts Governor Maura Healey announced plans to introduce legislation imposing strict new requirements on social media platforms to protect children and teenagers under 18 years of age. The announcement was made during Governor Healey’s third State of the Commonwealth Address. The proposed legislation would mandate that social media companies implement age verification systems and obtain parental consent before minors can access their platforms. Additionally, the legislation would require platforms to disable features considered addictive, such as continuous scrolling and notifications during certain hours. Social media companies that fail to comply with these requirements would face significant financial penalties. The Governor’s proposal is expected to align closely with legislative standards previously recommended by Massachusetts AG Andrea Campbell. The legislation is expected to be formally filed in the coming weeks.

Connecticut Governor and AG Announce Legislation to Combat Social Media Addiction: Connecticut Governor Ned Lamont and AG William Tong announced proposed legislation to protect minors from the harmful and addictive features of social media platforms. The bill, modeled after similar laws in New York, California, and Utah, would prohibit social media companies from exposing minors to addictive algorithms and notifications without parental consent. Key provisions of the legislation include default privacy settings, time-of-use restrictions, and a prohibition on notifications between 9:00 p.m. and 8:00 a.m. Social media companies would be required to submit annual reports to the state disclosing the number of minors on their platforms, the number of minors with parental consent to use addictive algorithms, and the average daily time spent on the platform by minors, broken down by age and time of day. The legislation would also mandate a warning label pop-up when minors open a social media application informing them of associated mental health dangers. Similar legislation passed the Connecticut House of Representatives in 2025 but did not receive a Senate vote before adjournment. The proposal enjoys bipartisan support from members of the General Law Committee. The announcement follows AG Tong’s ongoing litigation against Meta for allegedly designing harmful features that purposefully addict youth, as well as an active investigation into TikTok. 

California Governor and CalPrivacy Tout Success of DROP System: Governor Gavin Newsom and the California Privacy Protection Agency (“CalPrivacy”) announced the launch of the Delete Request and Opt-out Platform (“DROP”), a first-of-its-kind online tool enabling California residents to block the sale of their personal information by data brokers. DROP is responsive to the requirements of the Delete Act, signed by Governor Newsom in 2023. The platform allows Californians to submit a single deletion request to all registered data brokers, rather than contacting each broker individually. As of the announcement date, more than 155,000 Californians had already utilized the platform. 

IAB Releases Agentic AI Roadmap for Digital Advertising: IAB Tech Lab, the global technical standard-setting body for digital advertising, announced the release of its Agentic roadmap, a framework for scaling autonomous artificial intelligence (“AI”) agent-based buying and selling in digital advertising. The roadmap prioritizes interoperability, security, and high-performance execution by extending existing industry standards rather than introducing fragmented, competing frameworks. The initiative builds upon established transaction, management, and delivery standards for advertising as well as privacy and regulatory frameworks such as the Global Privacy Platform (“GPP”) and the Transparency and Consent Framework (“TCF”). IAB Tech Lab is integrating these standards with modern protocols to enable secure, machine-speed execution and coordination between independent agentic systems. The roadmap emphasizes development of trust, provenance, measurement, and transaction-integrity signals, with continued alignment with GPP and TCF.


FEDERAL LAWS & REGULATIONS

FCC Waives Robotext Consent Revocation Rule Until Early 2027: The Federal Communications Commission (“FCC”) announced an extension of the compliance deadline for its “revoke-all” consent rule until January 31, 2027. The rule, originally adopted in early 2024, broadly applies consumer revocation of consent to all future calls and text messages, including those unrelated to the original communication that prompted the revocation. The FCC cited “good cause” for the extension, noting that it needs additional time to review the record compiled in response to a recent further notice of proposed rulemaking and to avoid imposing potentially unnecessary compliance costs on affected businesses. The agency acknowledged that multiple organizations indicated they would face significant hardship and resource burdens if required to comply while the rule’s future remains uncertain. Financial institutions have been particularly vocal in their opposition to the rule, stating the rule would prevent them from sending critical communications, such as low-balance alerts, to customers who had revoked consent solely in response to marketing messages.

FTC Conducts Age Verification Technology Workshop: The Federal Trade Commission (“FTC”) held a workshop examining age verification and estimation technologies, signaling that these tools are becoming a key mechanism in privacy compliance efforts. The workshop addressed the interplay between age verification technologies and the Children’s Online Privacy Protection Act (“COPPA”), along with how organizations can deploy age verification at scale while navigating the evolving regulatory landscape. FTC Chair Andrew Ferguson emphasized that the agency views age verification not as creating new legal obligations under COPPA, but as a tool to help companies advance innovative compliance efforts while identifying when existing obligations apply. International perspectives were also featured, with the U.K. Information Commissioner’s Office noting that self-declaration of age is not sufficient age assurance, and emphasizing that the entire age assurance process, from initial screening to decision-making, must be secure and accountable. Stakeholders debated the effectiveness of age verification mandates, with concerns raised about potential privacy risks from additional data collection, while proponents argued these regulations empower parents and provide necessary protections against harmful content. 

FTC Will Host Workshop on Consumer Injuries and Benefits Arising from Data Collection, Use, and Disclosure: The FTC announced that it will host a public workshop titled “Measuring Injuries and Benefits in the Data-Driven Economy” on February 26, 2026. The workshop aims to examine how the FTC can better understand and measure consumer injuries and benefits arising from the collection, use, and disclosure of consumer data. This event follows the FTC’s previous workshop on the same topic held in December 2017, with a focus on marketplace developments that have occurred during the intervening eight years, particularly the rapid growth of consumer data collection and use. The workshop will feature panel discussions addressing several key topics of interest to privacy and data security professionals. These include: quantifying informational injuries and the potential benefits of consumer data practices; examining the impacts of data breaches on consumers and strategies to minimize resulting harms; analyzing the costs and benefits of behavioral and contextual advertising; and assessing how consumers’ privacy preferences, beliefs, and decisions can be measured. 

NIST Blog Discusses Need for Standards for Healthcare AI: In a recent article on the National Institute of Standards and Technology’s (“NIST”) Taking Measure Blog, Ram D. Sriram, Chief of the Software and Systems Division at NIST, examines the growing role of AI in healthcare and emphasizes the critical importance of developing standards to ensure AI systems are reliable and trustworthy. Sriram highlights several current and emerging AI applications in medicine, including transcription services that automatically populate electronic health records and chatbots that may eventually assist patients with basic medical inquiries. While these technologies promise increased efficiency and improved patient-provider interactions, Sriram stresses that AI deployment in healthcare requires careful attention to correctness and reliability. The blog discusses vulnerabilities in AI systems, including the risk of “Trojan” attacks where bad actors intentionally corrupt datasets to manipulate AI reasoning. Sriram notes that such attacks can occur at the input, model, or environmental interaction level, underscoring the need for robust safeguards and standards, and that NIST researchers are actively developing methods to detect these threats. 


U.S. LITIGATION

Supreme Court to Review FCC In-House Enforcement Proceedings: The U.S. Supreme Court agreed to hear a case challenging the FCC’s authority to impose fines on major wireless carriers for sharing customer location data without consent. The case stems from nearly $200 million in penalties the FCC levied in 2024 against Verizon, AT&T, T-Mobile, and Sprint after determining that the carriers sold access to customer location data to third-party aggregators without obtaining user consent. The central constitutional question is whether the FCC’s in-house enforcement proceedings, which assess wrongdoing and impose penalties before companies can challenge the determination in federal court, violate the Seventh Amendment right to a jury trial. Federal appellate courts have split on this issue: the Second Circuit upheld the FCC’s fine against Verizon, while the Fifth Circuit ruled in AT&T’s favor, finding the agency’s procedures unconstitutional. Both Verizon and the FCC have appealed their respective adverse rulings to the Supreme Court. This case follows the Court’s 2024 decision striking down the Securities and Exchange Commission’s (“SEC”) similar in-house enforcement scheme in securities fraud matters. Meanwhile, T-Mobile and Sprint recently lost their D.C. Circuit bid for en banc reconsideration of their $92 million in combined fines, leaving their challenge effectively exhausted at the circuit level. The Court is expected to issue its ruling by the end of June 2026. A decision affirming the Fifth Circuit’s reasoning could substantially limit federal agencies’ enforcement capabilities concerning consumer data privacy violations.

Supreme Court to Review Definition of Consumer Under VPPA: The U.S. Supreme Court has agreed to hear Salazar v. Paramount Global, a case that will clarify who qualifies as a “consumer” under the Video Privacy Protection Act (“VPPA”). The dispute centers on whether the VPPA, a statute enacted in 1988 following the publication of then-Supreme Court nominee Robert Bork’s video rental history, applies to individuals who subscribe to non-audiovisual content, such as digital newsletters, from providers that also offer video services. The case arises from allegations that Paramount Global illegally shared personal information of 24/7 Sports’ newsletter subscribers with Facebook through pixel tracking technology. The Sixth Circuit previously dismissed the proposed class action, holding that Salazar did not qualify as a “consumer” because he subscribed to a newsletter rather than directly to audiovisual materials. The Second and Seventh Circuits have adopted a broader interpretation, holding that the VPPA covers any person who subscribes to any of a business’s goods or services. 

Texas Court Lifts Injunction on TV Manufacturer Data Collection Technology: A Texas state judge vacated a temporary restraining order (“TRO”) against Samsung Electronics that had briefly blocked the company from collecting automatic content recognition (“ACR”) data from its smart TVs. The TRO, issued just one day earlier on January 5, 2026, had prohibited Samsung from collecting, using, selling, transferring, or sharing ACR data relating to Texas consumers. The underlying lawsuit, filed by Texas AG Ken Paxton in December 2025, alleges that Samsung violated the Texas Deceptive Trade Practices Act through deceptive ACR data collection practices. The initial TRO found “good cause to believe” that Samsung’s data collection system was unlawful because consumer consent was not informed, privacy choices were not meaningful, users could not reasonably understand the surveillance model, and the system defaulted to “maximal data extraction.” The Court also found that Samsung used “dark patterns,” including single-click enrollment during TV setup, while requiring over 200 clicks across multiple menus to access relevant privacy disclosures. The Samsung lawsuit is one of five similar actions filed against major TV manufacturers—including Sony, LG, Hisense, and TCL—all alleging unlawful consumer surveillance through ACR technology.

Federal Judiciary Committee Hears Testimony on Proposed Federal Evidence Standards for Machine-Generated Evidence: The Federal Judiciary’s Advisory Committee on Evidence Rules held a public hearing on proposed Rule 707, which addresses the admissibility of machine-generated evidence offered without an accompanying expert witness. The draft rule would require that such evidence satisfy the reliability requirements of Federal Rule of Evidence 702(a)-(d), which governs expert testimony. The rule aims to prevent parties from evading expert reliability standards by offering AI or machine outputs directly without the scrutiny typically applied to expert testimony. Witnesses representing diverse stakeholders—including Lawyers for Civil Justice, major corporations, and plaintiffs’ counsel—raised concerns that the proposed rule may inadvertently create an easier pathway for admitting AI evidence. Several witnesses urged decoupling Rule 707 from Rule 702 and advocated for more AI-specific terminology, such as “machine opinions,” to avoid confusion with routine technology outputs. Others testified that the effort, while commendable, may be premature given the evolving nature of AI. Notably, Advisory Committee Chair Judge Jesse M. Furman announced a nationwide survey of federal trial judges regarding their experiences with deepfakes, reflecting growing judicial interest in AI-manipulated evidence. The comment period will close on February 16, 2026.

Multinational Technology Company Wins Dismissal of Privacy Class Action Claims: Judge Edward J. Davila of the U.S. District Court for the Northern District of California granted Apple’s motion to dismiss in In re Apple Data Privacy Litigation, a putative class action alleging Apple improperly collected user data through its first-party applications even after users disabled the “Share Device Analytics” setting. The Court dismissed claims under the California Invasion of Privacy Act (“CIPA”), Pennsylvania’s Wiretapping and Electronic Surveillance Act, invasion of privacy under the California Constitution, and California’s Unfair Competition Law (“UCL”), among other claims. The Court held that plaintiffs’ CIPA claims failed in part because Apple’s first-party apps cannot constitute a “pen register”: they are part of the source of the transmitted communication, not a separate device or process intercepting it. The Court also found that plaintiffs’ assertion that users would assume that disabling “Share Device Analytics” would prevent all data collection was “objectively unreasonable.” According to the Court, internet communications are presumptively not confidential because they are by nature recorded. The Court granted plaintiffs leave to amend within 30 days but expressed doubt that the identified deficiencies could be cured.


U.S. ENFORCEMENT

FDA Issues New Guidance on Wearables and Clinical Decision Software: The U.S. Food and Drug Administration (“FDA”) issued two guidance documents clarifying its regulatory approach to digital health products, with a focus on reducing regulatory burdens for low-risk technologies. FDA Commissioner Marty Makary emphasized the FDA’s intent to “cut red tape and promote medical innovation.” The first guidance addresses “general wellness” wearable devices. Under this policy, the FDA will not scrutinize non-invasive, non-implanted devices intended solely to promote healthy lifestyles, such as wrist-worn products that track sleep quality, pulse rates, or physical activity, provided they do not make claims related to the diagnosis, treatment, or prevention of disease. Products meeting these criteria would be exempt from premarket review, registration, and other standard medical device requirements. The second guidance concerns clinical decision support (“CDS”) software used by healthcare professionals. Software that provides recommendations to clinicians may be excluded from the medical device definition under section 520(o)(1)(E) of the Federal Food, Drug, and Cosmetic (“FD&C”) Act. The guidance outlines criteria that CDS software must satisfy to be excluded. Software that provides specific diagnostic outputs or supports time-critical decision-making remains subject to FDA oversight. Although neither document explicitly references AI, Commissioner Makary stated the guidance will “promote more innovation with AI and medical devices.” For a more detailed analysis of the new FDA guidance, see our Blank Rome client alert on the topic.

CalPrivacy Issues New Data Broker Enforcement Actions: CalPrivacy announced two new enforcement decisions stemming from its Data Broker Enforcement Strike Force. The actions target companies that failed to comply with California’s Delete Act registration requirements. The first decision imposes a $45,000 fine on Rickenbacher Data LLC, d/b/a Datamasters, a Texas-based personal information reseller. According to CalPrivacy, Datamasters bought and resold names, addresses, phone numbers, and email addresses of millions of individuals with sensitive health conditions, including Alzheimer’s disease, drug addiction, and bladder incontinence, for targeted advertising purposes. The company also marketed lists based on age, perceived race, political views, and consumer behavior. As part of the decision, Datamasters must cease all sales of Californians’ personal information. The second decision requires S&P Global, Inc. to pay a $62,600 fine for failing to register as a data broker due to an administrative error, and mandates implementation of compliance auditing procedures. 

New York AG Questions Grocery Delivery App Company on Algorithmic Pricing: The New York AG sent a letter to Instacart requesting detailed information regarding the company’s compliance with New York’s Algorithmic Pricing Disclosure Act (the “Act”), which took effect on November 10, 2025. The inquiry was prompted by a December 2025 report from Groundwork Collaborative and Consumer Reports, which found that Instacart users were shown significantly different prices for identical products at the same stores at the same time. The Act requires entities using “personalized algorithmic pricing,” defined as dynamic pricing set by an algorithm using personal data, to provide clear and conspicuous disclosure to consumers. The AG’s letter raises concerns that Instacart’s disclosure practices, which relied on fine-print links to a general pricing policy page, may fail to meet the Act’s “clear and conspicuous” standard and do not appear on individual product pages where prices are displayed.

FTC Finalizes Order with Auto Manufacturer on Collection and Sale of Geolocation Data: The FTC announced the finalization of a consent order with General Motors LLC, General Motors Holdings LLC, and OnStar, LLC, resolving allegations that the companies unlawfully collected, used, and sold consumers’ precise geolocation and driving behavior data without obtaining adequate notice and affirmative consent. The FTC’s underlying complaint, first announced in January 2025, alleged that GM employed a misleading enrollment process to induce consumers to sign up for its OnStar connected vehicle service and the OnStar Smart Driver feature. The agency further alleged that GM failed to clearly disclose its collection of precise geolocation and driving behavior data through the Smart Driver feature and subsequently sold this data to third parties without consumer consent. The final order imposes a five-year prohibition on disclosing consumers’ geolocation and driver behavior data to consumer reporting agencies. For the full 20-year duration of the order, GM must obtain affirmative express consent before collecting, using, or sharing connected vehicle data (with limited exceptions, such as providing location data to emergency responders). The order also requires GM to provide consumers with mechanisms to request copies of their data, seek deletion, disable precise geolocation collection where technically feasible, and opt out of geolocation and driver behavior data collection.


INTERNATIONAL LAWS & REGULATIONS

OAIC Announces Privacy Compliance Sweep: The Office of the Australian Information Commissioner (“OAIC”) has announced its first-ever privacy compliance sweep. This targeted enforcement initiative will review the privacy policies of approximately 60 businesses across six sectors that collect personal information through in-person interactions. The sweep will focus on rental and property agencies, pharmacists, licensed venues, car rental companies, car dealerships, pawnbrokers, and second-hand dealers. These sectors were selected due to the particular privacy risks associated with in-person collection of personal information, especially identity documents. Privacy Commissioner Carly Kind emphasized that in-person data collection creates power and information asymmetries, leaving consumers vulnerable to overcollection and associated security risks. The Commissioner noted that consumers often lack the information necessary to make informed decisions when confronted with requests for personal details from retailers and service providers. Entities will be assessed for compliance with Australian Privacy Principle (“APP”) 1.4, which prescribes mandatory content requirements for privacy policies. Non-compliant businesses may face compliance notices, infringement notices, and penalties of up to $66,000.

Brazil ANPD Publishes Regulatory Priorities for 2026-27: Brazil’s National Data Protection Agency (“ANPD”) published its Map of Priority Topics for the 2026-2027 biennium and an updated 2025-2026 Regulatory Agenda. These publications lay out the ANPD’s priorities for regulating and enforcing obligations under both the General Law for the Protection of Personal Data and the newly enacted Digital Statute of the Child and Adolescent. The Priority Topics Map identifies four enforcement priorities: (i) rights of data subjects; (ii) protection of children and adolescents in the digital environment; (iii) government processing of personal data; and (iv) AI and emerging technologies in the context of personal data processing. Enforcement activities will include monitoring the secondary use of personal data for targeted advertising, verifying privacy-by-design and by-default implementations, and assessing measures to prevent minors from accessing prohibited content. 

UK ICO Releases Report on Agentic Commerce: The United Kingdom’s Information Commissioner’s Office (“ICO”) announced a new Tech Futures report examining the emergence of “agentic commerce,” AI-powered systems capable of autonomously making purchases, negotiating prices, sourcing financing, and managing household finances on behalf of consumers. According to the ICO, agentic AI refers to AI that automates decision-making, interacts with its environment, solves problems in real time, and mimics certain reasoning and planning functions. The ICO anticipates that personal shopping “AI-gents” could become a commercial reality within the next five years, with these systems proactively anticipating consumer needs based on learned preferences, behavioral patterns, and knowledge of upcoming events. While acknowledging the transformational potential of this technology, the ICO emphasized that technological advancements must not come at the cost of data privacy. The ICO announced that throughout 2026, it will actively monitor developments in agentic AI and engage with AI developers and deployers to ensure clarity regarding legal compliance obligations.

ICO Issues International Transfer Guidance: The UK ICO published updates to its guidance on international transfers of personal data under UK data protection law. The ICO expanded its guidance on what is and is not a restricted transfer, introducing a “three-step test” and providing more examples. The updated guidance includes new content to support organizations with their obligations, including information about who is responsible for the transfer rules and key responsibilities in the context of international transfers. The ICO also added a description of practical steps organizations can take to help compliance efforts.

EU Commission Proposes Cybersecurity Package: The European Commission (the “Commission”) proposed a comprehensive cybersecurity package aimed at strengthening the EU’s resilience against cyber and hybrid threats targeting essential services and democratic institutions. The package centers on a revised Cybersecurity Act with four key components: (1) Information and Communication Technologies (“ICT”) Supply Chain Security proposals that establish a trusted ICT supply chain security framework using a harmonized, risk-based approach to address third-country supplier risks across 18 critical sectors; (2) a revised European Cybersecurity Certification Framework (“ECCF”) that introduces streamlined procedures enabling certification schemes to be developed within 12 months by default; (3) simplified NIS2 compliance, including targeted amendments to the NIS2 Directive that aim to ease compliance for businesses meeting certain definitions of small and mid-sized enterprises; and (4) enhanced powers for the EU Agency for Cybersecurity (“ENISA”) to issue early threat alerts, support ransomware incident response in cooperation with Europol, and operate a single-entry point for incident reporting. The Cybersecurity Act will take effect immediately upon approval by the European Parliament and Council, with Member States having one year to transpose NIS2 amendments into national law.

South Korea AI Basic Act Takes Effect: South Korea implemented a comprehensive set of laws regulating AI through its Act on the Development of Artificial Intelligence and Establishment of Trust (“AI Basic Act”). The legislation, which took effect ahead of the European Union’s phased AI Act rollout, reflects South Korea’s ambition to become a world leader in AI innovation and regulation. The AI Basic Act mandates human oversight for “high impact” AI applications in sectors such as nuclear safety, drinking water production, transportation, healthcare, and financial services, including credit evaluation and loan screening. Additionally, companies must provide advance notice to users about products or services utilizing high-impact or generative AI, and implement clear labeling when AI-generated content could be mistaken for reality. Non-compliance can result in fines of up to 30 million won (approximately $20,400).

EDPB-EDPS Issue Joint Opinion on Digital Omnibus on AI: The European Data Protection Board (“EDPB”) and the European Data Protection Supervisor (“EDPS”) issued a joint opinion addressing the European Commission’s (the “Commission”) proposal to streamline implementation of the EU AI Act through the “Digital Omnibus on AI.” The joint opinion expresses concern over several proposed changes. First, they advise against removing the registration obligation for AI systems classified as “non-high risk” under the EU AI Act, arguing this could undermine accountability and incentivize improper use of exemptions. Second, they recommend limiting the proposed expansion of lawful bases for processing special categories of personal data—such as ethnicity or health data—for bias detection, restricting use to situations where adverse bias risks are sufficiently serious. The joint opinion supports new EU-level AI regulatory sandboxes to promote innovation. It also recommends maintaining AI literacy obligations for providers and deployers, rather than shifting responsibility solely to Member States and the Commission. Finally, the EDPB and EDPS raise concerns about delaying application of high-risk AI system provisions, urging co-legislators to minimize delays given the rapid evolution of AI technologies. 

Brazil EU Adequacy Agreement Finalized: The European Commission finalized an adequacy decision recognizing Brazil’s General Data Protection Law (“LGPD”) as providing protections equivalent to the EU General Data Protection Regulation (“GDPR”). This decision enables the free flow of personal data between the EU and Brazil, creating what officials describe as “the world’s largest area for safe, cross-border data flows, covering over 670 million people.” The agreement is part of a broader free trade partnership between the EU and Latin American countries. Consistent with its standard procedures for adequacy decisions, the Commission will conduct a review every four years. This adequacy decision follows other recent Commission activity on this front, including the renewal of EU-U.K. adequacy in December 2025.


RECENT PUBLICATIONS & MEDIA COVERAGE

Blank Rome partners Yelena M. Barychev and Sharon R. Klein and senior AI innovation and client solutions manager Bill Rueter will serve as panelists for the Association of Audit Committee Members Inc.’s (“AACMI”) Making AI Work: Identifying and Mitigating Liability, Managing Risks, and Mastering Prompts live webinar, being held on Thursday, February 19, 2026, from 12:00 to 1:00 p.m. EDT.

Blank Rome partners Jennifer J. Daniels and Philip N. Yanella were featured in this TechCrunch article discussing potential changes to TikTok’s privacy policies.

Blank Rome partner Sharon R. Klein was featured in this SecurityWeek article discussing increasing difficulties for companies to manage compliance demands.  

Blank Rome partner Sharon R. Klein was featured in this HealthCare Dive article discussing how artificial intelligence innovations may affect healthcare industry privacy laws. 


© 2026 Blank Rome LLP. All rights reserved. Please contact Blank Rome for permission to reprint. Notice: The purpose of this update is to identify select developments that may be of interest to readers. The information contained herein is abridged and summarized from various sources, the accuracy and completeness of which cannot be assured. This update should not be construed as legal advice or opinion, and is not a substitute for the advice of counsel.