Welcome to this month's issue of The BR Privacy & Security Download, the digital newsletter of Blank Rome’s Privacy, Security & Data Protection practice. We invite you to share this resource with your colleagues and visit Blank Rome’s Privacy, Security & Data Protection webpage for more information about our team.
STATE & LOCAL LAWS & REGULATIONS
California Court Tentatively Delays Enforcement of CPRA Regulations
The Sacramento Superior Court issued a tentative ruling in California Chamber of Commerce v. California Privacy Protection Agency to delay the enforcement of regulations issued by the California Privacy Protection Agency (“CPPA”), the rulemaking authority charged with promulgating final regulations under the California Privacy Rights Act (“CPRA”), until March 29, 2024. Although the CPRA specifies that final regulations issued by the CPPA by July 1, 2022, would be enforceable on July 1, 2023, the CPPA did not issue its final regulations until March 29, 2023, and several key rulemaking topics contemplated by the CPRA (e.g., cybersecurity audits, risk assessments, and automated decision-making technology) remain unaddressed. The court determined that, because the CPPA failed to issue final regulations by July 1, 2022, the agency should be prohibited from enforcing those regulations until businesses have had a similar one-year period in which to come into compliance. The ruling does not affect regulations issued prior to March 29, 2023.
Connecticut Passes Bill to Protect Health and Children’s Data
Connecticut passed S.B. 3 to amend the state’s comprehensive privacy law by adding “consumer health data,” defined as personal data used to identify a consumer’s physical or mental health condition or diagnosis, to the definition of “sensitive data.” S.B. 3 further requires opt-in consent before consumer health data is sold and prohibits the use of a geofence to establish a virtual boundary within 1,750 feet of any mental health or reproductive/sexual health facility. S.B. 3 also prohibits controllers that have actual knowledge, or willfully disregard, that they offer online services, products, or features to minors from: (i) processing any minor’s personal data for the purposes of targeted advertising, sale, or certain types of profiling; (ii) collecting a minor’s precise geolocation; or (iii) using any system design feature to significantly increase, sustain, or extend any minor’s use of such online service, product, or feature.
Nevada Passes Legislation Expanding Protections for Health Data
Nevada Governor Joe Lombardo signed SB370 into law (the “Act”). The Act expands protections for “consumer health data” of Nevada residents, which is defined as personally identifiable information that is linked or reasonably capable of being linked to a consumer and that a regulated entity uses to identify the past, present, or future health status of the consumer, including health conditions, medical interventions, use or acquisition of medications, bodily functions, vital signs or symptoms, and reproductive or sexual health care. Most provisions of the Act apply to “regulated entities,” meaning any person that conducts business in Nevada or provides products and services targeted to Nevada consumers and that alone or with others determines the purposes and means of processing consumer health data. The Act prohibits regulated entities from collecting or sharing consumer health data except with prior opt-in consent or to the extent necessary to provide a product or service that the consumer has requested. Regulated entities will be required to provide a detailed consumer health data privacy policy and to provide consumers with access, opt-out, and deletion rights with respect to their consumer health data. The Act also prohibits any person, whether or not a regulated entity, from selling consumer health data without written authorization or from implementing a geofence within 1,750 feet of a medical facility. The Act does not provide a private right of action. Violations are enforceable by the Nevada Attorney General under Nevada’s deceptive trade practice statute. The Act will be effective on March 31, 2024.
Colorado and Connecticut Comprehensive Privacy Laws Enter into Force
On July 1, 2023, the Colorado and Connecticut comprehensive privacy laws entered into force. The Colorado and Connecticut laws are the third and fourth state comprehensive privacy laws to become effective, following California and Virginia, and add to the complex patchwork of state laws companies must navigate. Ten states have now passed comprehensive privacy legislation, with Utah’s law set to become effective on December 31, 2023, and other states’ laws taking effect at various times in 2024, 2025, and 2026.
Connecticut Attorney General Issues Guidance on Connecticut Data Privacy Act
Connecticut Attorney General William Tong released guidance on the Connecticut Data Privacy Act (“CTDPA”), which came into effect on July 1, 2023. Attorney General Tong highlighted that the CTDPA establishes new “baseline privacy rights” for Connecticut consumers and requires covered businesses to appropriately limit their collection of personal data, be transparent about how they use and secure that data, and obtain consumer consent before collecting sensitive information, such as precise location data, biometric data, and certain health information. He further emphasized that the CTDPA requires covered businesses to maintain a privacy notice that clearly describes how consumers may exercise their rights under the CTDPA, and that the CTDPA prohibits businesses from discriminating against consumers for exercising those rights. He also noted that the CTDPA requires covered businesses to obtain opt-in consent before selling the personal data of a consumer under 16 years old or sending that consumer targeted ads.
Florida Medical Information Data Localization Laws Take Effect
Amendments to the Florida Electronic Health Records Exchange Act limiting the geographic location of offsite storage of patient information became effective on July 1, 2023. The amendments require health care providers using certified electronic health record technology to ensure that any patient information stored in an offsite environment, including within a vendor’s cloud computing services, is physically maintained in the continental U.S., U.S. territories, or Canada. The law’s localization requirements apply to health care providers regulated by the Florida Agency for Health Care Administration, including health care clinics, hospitals, nursing homes, home health agencies, and pharmacies, as well as licensed providers such as physicians, pharmacists, dentists, chiropractors, podiatrists, naturopathic physicians, physician assistants, acupuncturists, optometrists, registered nurses, advanced practice registered nurses, midwives, speech-language pathologists, and occupational therapists, among other licensed entities and providers. Providers should review the requirements with their electronic health records vendors and other vendors to ensure compliance with the new localization requirements.
Oregon Legislature Passes Strong Consumer Privacy Act
Oregon became the eleventh state to pass comprehensive data privacy legislation with the passage of SB 619, the Oregon Consumer Privacy Act (“OCPA”). If signed into law, the OCPA will go into effect for businesses on July 1, 2024, with a thirty-day right to cure that sunsets on January 1, 2026. In comparison to other comprehensive state data privacy laws, the OCPA is modeled most closely on the privacy acts of Connecticut and Colorado (including Colorado’s rulemaking). However, Oregon’s legislature uniquely omits certain entity-level exemptions (e.g., “financial institutions” under the Gramm-Leach-Bliley Act, “covered entities” under the Health Insurance Portability and Accountability Act, and, beginning on July 1, 2025, non-profit organizations) in favor of data-level exemptions. The OCPA also adds a unique definition of “biometric data” and provides a new consumer right to obtain a list of the specific third parties (as opposed to categories of third parties) to which a business has disclosed the consumer’s personal data or any personal data.
Florida Amends Mini-TCPA
Florida Governor Ron DeSantis signed HB 761 into law, making immediate, business-friendly amendments to the Florida Telephone Solicitation Act (“FTSA”). The FTSA has been commonly referred to as Florida’s “mini” version of the federal Telephone Consumer Protection Act (“TCPA”), as both statutes protect consumers from receiving unwanted telemarketing calls and texts made using automated technology. However, prior to the passage of HB 761, the FTSA arguably applied to a more expansive scope of “automated systems” than its federal counterpart. As amended, the FTSA’s definitions and scope of applicability align more closely with the TCPA’s. Among other things, HB 761 also establishes a new pre-filing requirement for plaintiffs in text-messaging cases and applies retroactively to “any putative class action not certified on or before the date of this act.”
Texas Amends Its Data Breach Notification Law
Texas has passed an amendment (S.B. 768) to its data breach notification law, shortening the period in which notice to the Texas Attorney General must be provided and changing how such notice must be submitted. Under the Texas data breach notification statute, notice to the Texas Attorney General is required where a data breach involves at least 250 Texas residents. Prior to the amendment, entities subject to the law had 60 days after determining that a breach occurred to notify the Texas Attorney General; the amendment shortens this time frame to 30 days. The amendment also requires notification to the Texas Attorney General to be submitted electronically using a form accessed through the Texas Attorney General’s website. The amendment will take effect on September 1, 2023.
Connecticut Passes Law on Artificial Intelligence
Connecticut passed S.B. 1103, which regulates state agencies’ use of artificial intelligence (“AI”). S.B. 1103 requires the Department of Administrative Services to identify the systems used by state agencies that employ AI and to perform assessments to ensure that the use of such systems does not result in unlawful discrimination or disparate impacts. The initial system inventory must be completed by December 31, 2023, and the assessments by February 1, 2024; both must thereafter be conducted annually. S.B. 1103 also requires the Office of Policy and Management to develop policies and procedures by February 1, 2024, to govern the procurement, implementation, and ongoing assessment of AI systems used by state agencies, including how state agencies may procure systems that include AI from vendors. Additionally, beginning on October 1, 2023, state contracting agencies must include contract provisions requiring businesses working with them to comply with the CTDPA.
DAA Releases Best Practices for IoT Connected Devices
The Digital Advertising Alliance (“DAA”), a self-regulatory group that establishes and enforces privacy practices for digital advertising, has issued guidelines and best practices (the “Guidance”) intended to clarify how companies can apply the DAA’s privacy principles of transparency and control to the collection, use, and transfer of data from Internet of Things (“IoT”) connected devices (e.g., smart appliances, connected TVs, and smartwatches) for interest-based advertising and other covered purposes. Among other things, the Guidance recommends that “first parties,” meaning companies with which consumers intentionally interact to access a digital property through a connected device, provide consumers a “clear, meaningful, and prominent link” to a data collection disclosure notice that also links (or provides a similar control) to an opt-out choice mechanism. The DAA plans to convene a Working Group with key industry stakeholders to consider actionable plans to operationalize the best practices.
FEDERAL LAWS & REGULATIONS
U.S. and UK Announce UK Extension to the Data Privacy Framework
The U.S. and UK have announced a commitment to establish the UK Extension to the Data Privacy Framework (“Framework”), which would create a “data bridge” between the two countries. The Framework would act as a transfer mechanism for personal data transferred from the UK to the U.S., and U.S. companies approved to join the Framework would be able to receive UK personal data without entering into standard contractual clauses. Further technical work will need to be completed in the coming months before a decision is made on whether to establish the data bridge, and the decision will depend on the U.S.’s designation of the UK as a qualifying state under Executive Order 14086. The European Commission recently adopted an adequacy decision for the EU-U.S. Data Privacy Framework, which will allow personal data flows without the use of other legal mechanisms, such as standard contractual clauses.
Twenty-Four AGs Submit Comment Letter to NTIA Calling for Greater AI Transparency and Accountability
A bipartisan coalition of 24 state attorneys general (“AGs”) submitted a comment letter responding to the Request for Public Comment on AI Accountability Policy, Docket No. NTIA-2023-0005, issued on April 13, 2023, by the National Telecommunications and Information Administration (“NTIA”). In the letter, the AGs recommended independent standards for AI transparency and consumer disclosures; testing, assessments, and audits of AI systems (including regular third-party audits); and a concurrent enforcement model for state AGs within any federal regulatory regime governing AI.
Senator Hawley Announces Guiding Principles for American AI Legislation
Senator Hawley (R-Mo.) issued a press release announcing five guiding principles for American AI legislation aimed at ensuring the responsible development of AI while safeguarding Americans’ privacy and preventing harmful impacts, particularly risks of harm to children. The principles include: (i) creating private rights of action to hold AI companies accountable; (ii) protecting personal data by prohibiting the collection of sensitive personal data without consent, with stiff penalties for misuse; (iii) enforcing age limits to shield minors from harmful effects and proactively blocking companies from deploying or promoting generative AI models to children; (iv) blocking technology transfers to and from China to promote American AI independence; and (v) establishing a licensing system that protects consumers, promotes transparency, and requires developers of generative AI models to obtain a license.
Senators Introduce Bipartisan Bill That Would Restrict Exports of Personal Data
A bipartisan group of senators introduced the Protecting Americans’ Data from Foreign Surveillance Act of 2023 (“PADFSA”). If enacted, PADFSA would protect Americans’ data from exploitation by foreign nations. The bill would achieve this objective by imposing tougher criminal and civil penalties to prevent employees of foreign corporations from accessing U.S. data from abroad or sending data to unfriendly foreign nations. Specifically, the bill directs the Secretary of Commerce to (i) identify categories of personal data that could harm U.S. national security if exported, (ii) compile a list of low-risk countries to which data could be shared without regulation and a list of high-risk countries to which exports of sensitive data would be blocked, and (iii) create a licensing system for data exports to nations not included on either list. The bill would also impose export control penalties on senior executives who knew or should have known that their employees illegally exported Americans’ data.
Senators Reintroduce Bipartisan Bill to Provide Data Broker Deletion Mechanism
A bipartisan group of senators reintroduced the Data Elimination and Limiting Extensive Tracking and Exchange (“DELETE”) Act. If passed, the DELETE Act would create a system for Americans to request that all data brokers or companies that collect personal data for commercial use delete any personal data the broker or company may have collected. The bill would also allow Americans to request that all data brokers or companies refrain from collecting their personal data in the future. Under current law, Americans must request removal from each individual data broker or company to ensure their private data is protected. The DELETE Act would direct the Federal Trade Commission to create an online dashboard for Americans to submit a data deletion request, which would then be sent to all data brokers registered with the FTC. The bill would also create a “do not track” list, which would protect consumers from future data collection by brokers or companies.
FTC Provides Lessons in IoT Privacy and Security
Two recent FTC complaints, against Amazon and Ring, allege that both companies used voice recordings to train their algorithms while ignoring consumers’ rights. The FTC issued a business blog post intended to provide lessons for companies using AI, biometric data, and other sensitive information. Its lessons included that (i) the FTC will hold companies accountable for how they obtain, retain, and use consumer data in AI and machine learning algorithms, and algorithms built with improperly obtained data are subject to deletion; (ii) companies must ensure that consumers have meaningful control over their personal data; and (iii) companies should place special safeguards on human review of and employee access to personal data. The FTC also made clear that it will use all available tools to protect the privacy of children and biometric data, and will back up both policies with enforcement actions.
Interagency Guidance on Third-Party Risk Management Released
The Federal Deposit Insurance Corporation (“FDIC”), the Board of Governors of the Federal Reserve System (“FRB”), and the Office of the Comptroller of the Currency (“OCC”) issued their final Interagency Guidance on Third-Party Relationships: Risk Management (“Guidance”). The Guidance is not legally binding but provides principles to support a risk-based approach to third-party risk management that banking organizations may consider when developing and implementing risk management practices for all stages in the life cycle of third-party relationships, including in the planning, due diligence, contract negotiation, ongoing monitoring, and termination stages. The Guidance notes that sound third-party risk management considers the level of risk, complexity, and size of the banking organization and the nature of the specific third-party relationship. The Guidance replaces existing guidance regarding risk management practices for third-party relationships, including the FRB’s 2013 guidance, the FDIC’s 2008 guidance, and the OCC’s 2013 guidance and 2020 frequently asked questions.
White House Extends Deadline for Contractors’ Secure Software
On June 9, 2023, the White House released an update extending the deadline for federal contractors to provide self-attestations that their software complies with government-developed security requirements. The original deadlines for compliance, pursuant to a September 2022 memorandum, were June 12, 2023, for critical software and September 14, 2023, for other software. The new deadline for critical software is three months after the attestation form is finalized; for other software, the new deadline is six months after finalization of the form. The update also included various clarifications to the September 2022 memorandum, including that third-party software components incorporated into government-used software do not require separate attestations from federal contractors and that attestations are not required for open-source software or proprietary software that is freely obtained and publicly available.
U.S. ENFORCEMENT
HHS Settles with Health Care Provider for Disclosing Patient Information in Response to Negative Online Reviews
The U.S. Department of Health and Human Services’ (“HHS”) Office for Civil Rights (“OCR”) announced it reached a settlement with Manasa Health Center, LLC (“Manasa”), which provides adult and child psychiatric services in New Jersey, for alleged violations of the Health Insurance Portability and Accountability Act (“HIPAA”) Privacy Rule. The settlement resolves a complaint received by OCR in April 2020 alleging that Manasa impermissibly disclosed a patient’s protected health information, including information regarding the diagnosis and treatment of the patient’s mental health condition, when Manasa posted a response to the patient’s negative online review. Upon further investigation, OCR also found that Manasa potentially violated HIPAA by impermissibly disclosing the protected health information of three other patients in response to their negative online reviews and by failing to implement HIPAA Privacy Rule policies and procedures. Manasa has agreed to pay $30,000 to OCR and to implement a corrective action plan to resolve these potential violations.
Microsoft to Pay $20 Million to Settle FTC Action over Illegal Collection of Children’s Xbox Data
Microsoft has agreed to pay $20 million to settle an action filed by the Federal Trade Commission (“FTC”) in federal district court in Washington over allegations that Microsoft, through its Xbox Live gaming network service, violated the federal Children’s Online Privacy Protection Act (“COPPA”) by (i) collecting personal information (e.g., names, email addresses, phone numbers, and photos) from children under the age of 13 (“Children”) without parental consent; (ii) failing to provide parents with adequate notice of the information collected from Children and the purpose for collection; (iii) disclosing Children’s information to third parties; and (iv) retaining Children’s personal information for longer than reasonably necessary. In addition to the civil penalty and standard injunctive provisions to cure the alleged violations, Microsoft must implement a comprehensive privacy program to increase protections for Xbox users under the age of 13.
Business Associate Agrees to $75,000 Settlement with HHS OCR over HIPAA Data Breach Action
OCR announced that iHealth Solutions, LLC, d/b/a Advantum Health (“iHealth”), a “business associate” (as defined under HIPAA) that provides coding, billing, and onsite information technology services to health care providers, has agreed to settle potential violations of HIPAA arising from a data breach involving the unsecured protected health information (“PHI”) of 267 individuals. OCR launched an investigation of iHealth following the receipt of a breach report stating that iHealth had experienced an unauthorized transfer of PHI (including patient names, dates of birth, addresses, Social Security numbers, email addresses, diagnoses, and other medical information) from its unsecured server. In addition to the impermissible disclosure of PHI, OCR’s investigation found evidence that iHealth potentially failed to conduct an analysis of the risks and vulnerabilities to electronic protected health information across the organization. Pursuant to the settlement agreement, iHealth must pay $75,000 to OCR and implement a corrective action plan.
HIPAA Settlement Reached with Community Hospital for Impermissible Access of Medical Records
OCR announced a $240,000 settlement with Yakima Valley Memorial Hospital for violations of the HIPAA Privacy, Security, and Breach Notification Rules. OCR investigated and found that several security guards at Yakima Valley Memorial Hospital impermissibly accessed the medical records of 419 individuals. Along with the $240,000 payment, Yakima Valley Memorial Hospital agreed to implement a plan to update its policies and procedures to protect health information and to train its employees to prevent this behavior in the future. As a result of the settlement, the hospital will be monitored by OCR for two years to ensure compliance with HIPAA.
FTC Requires Publishers Clearing House to Pay $18.5 Million to Misled Consumers
The Federal Trade Commission announced an order requiring Publishers Clearing House (“PCH”) to pay $18.5 million to consumers. The FTC charged that PCH used “dark patterns” to mislead consumers about entering the company’s sweepstakes drawings, making consumers believe that a purchase was necessary to win or would increase their chances of winning. The FTC also charged that PCH added surprise shipping and handling fees to the costs of products, misrepresented “risk-free” ordering, used deceptive marketing emails, and misrepresented its policies on selling users’ personal data to third parties. PCH will also implement a number of key changes to its email and internet operations, including separating its sweepstakes from its sales operations; making clear, conspicuous, and unavoidable disclosures; destroying consumer data collected prior to January 1, 2019; and preserving records of any market, behavioral, or psychological research.
FTC and DOJ Settle COPPA Violation Allegations with Voice Assistant Service
The Federal Trade Commission (“FTC”) and the Department of Justice (“DOJ”) settled their complaint against Amazon relating to allegations that Amazon failed to honor parental deletion rights and retained data for years in violation of the Children’s Online Privacy Protection Act Rule (“COPPA Rule”). As part of the settlement, Amazon will be required to pay a $25 million civil penalty and be prohibited from using geolocation, voice information, and children’s voice information subject to consumers’ deletion requests for the creation or improvement of any data product. The settlement touches on several areas the FTC is currently emphasizing in privacy enforcement, including safeguarding children’s data and geolocation data and maintaining appropriate retention and deletion procedures.
FTC Settles with Home Security Camera Company over Unauthorized Employee Access to Videos and Lax Security
The FTC announced a settlement with Ring, a seller of internet-connected home security cameras, doorbells, and related accessories. The FTC alleged that Ring failed to restrict employee and contractor access to customer videos, used customer videos to train algorithms without consent, and failed to implement appropriate security safeguards. According to the FTC, prior to January 2018 Ring used customers’ private recordings to train algorithms without adequately notifying customers or obtaining their consent, and buried terms relating to the use of data for product improvement and development in its terms of service and privacy policy. The complaint also alleged that Ring failed to implement standard security measures to protect consumer information from credential stuffing and brute force attacks. The proposed FTC order requires Ring to implement a mandated privacy and security program, delete video and certain other data it collected prior to 2018, and delete any work product derived from that video. Ring will also pay $5.8 million, which will be used for consumer refunds.
INTERNATIONAL LAWS & REGULATIONS
China Guideline for Filing the Standard Contract for Cross-border Transfer of Personal Information Becomes Effective
The Guideline for Filing the Standard Contract (“SC”) for Cross-border Transfer of Personal Information (the “Guideline”) issued by the Cyberspace Administration of China (“CAC”) became effective on June 1, 2023, offering a new mechanism for cross-border transfers (including the provision of access from abroad) of personal data collected and generated in operations within China to an organization, entity, or individual outside of China. The Guideline offers further insight on how to use the SC, including the circumstances in which the SC may be used. However, it remains unclear whether a data handler subject to China’s Personal Information Protection Law (“PIPL”) by virtue of extraterritorial jurisdiction may use the SC for cross-border transfers of personal information.
European Parliament Agrees on Negotiation Position for AI Act
The European Parliament approved its negotiation position on the EU’s proposed AI Act, a significant step toward the passage of what could be the world’s first comprehensive artificial intelligence regulation. The AI Act will now be the subject of a three-way negotiation among the European Parliament, the EU Council of Ministers, and the European Commission, referred to as a trilogue. The current proposed text takes a risk-based approach to artificial intelligence regulation: banning certain uses deemed to pose too much risk, imposing significant requirements on high-risk systems, and placing relatively few restrictions on low-risk use cases. Trilogue negotiations could be complete, and a final text of the AI Act agreed upon, by the end of 2023.
European Parliament and EU Council Reach Agreement on Data Act
The EU Council presidency and European Parliament representatives reached a provisional agreement on the Data Act, a proposed EU regulation to harmonize rules relating to fair access to and use of data. The Data Act is intended to set rules for access to and use of data generated by connected devices. The agreed text aims to make switching providers easier for consumers, put in place safeguards against unlawful data transfer by cloud service providers, and provide for the development of interoperability standards so that data may be reused between sectors. The provisional agreement will now go to the EU Council and European Parliament for endorsement and formal adoption.
European Data Protection Board Adopts the Final Version of Guidelines to Calculate Administrative Fines
The European Data Protection Board (“EDPB”) has adopted the final version of its guidelines on the calculation of administrative fines under the EU General Data Protection Regulation (“GDPR”). The guidelines are intended to create a uniform methodology for use by supervisory authorities and set out a five-step process: (i) identify the processing operations in the case and evaluate the application of Article 83(3) of the GDPR; (ii) identify the starting point for further calculation of the fine amount; (iii) evaluate aggravating and mitigating circumstances related to past and present behavior of the controller; (iv) identify the legal maximum(s) for the infringement(s) and corporate liability; and (v) assess the effectiveness, proportionality, and dissuasiveness of the fine. The methodology requires an assessment of all relevant facts and circumstances in each instance. The guidelines also introduce changes in how the size of an organization impacts the starting amount for calculating fines.
Berlin Data Protection Authority Issues Fine for Lack of Transparency about Automated Decision-Making
The Berlin Commissioner for Data Protection and Freedom of Information (“Commissioner”) fined a bank €300,000 for failing to provide comprehensible information to a data subject about how its credit scoring algorithm works. In the case examined by the Commissioner, the bank accepted a digital credit card application that collected information about the data subject’s income, occupation, and other personal details. The application was rejected without any specific justification. The data subject asked the bank how his information was scored, but the bank provided only general information on the scoring procedure that was not related to the data subject’s particular case. The Commissioner found that the bank violated the GDPR because it failed to provide the data subject with information sufficiently detailed to allow him to understand why his application was scored in a certain manner and to challenge the automated decision.
RECENT PUBLICATIONS & MEDIA COVERAGE
- Data Matters—Risks and Best Practices for Use of Generative AI (The Legal Intelligencer)
- Maritime Ransomware (Pratt’s Privacy & Cybersecurity Law Report)
- Protecting Against Invasion of Privacy Chat Box Class Actions (Blank Rome Client Advisory)
- Florida Dials Back State’s Mini-TCPA (Blank Rome Client Advisory)