I’ve been waiting, and then it arrived, in a client’s updated Outside Counsel Guidelines:

The Firm and its personnel or subcontractors shall not use any External Generative AI Tools in the performance of any services in relation to a Matter … or use External Generative AI Tools to generate any deliverable, document or content for [Client], regardless of the intended end use.

No problem. We don’t use generative AI tools in our client work. 

Don’t get me wrong – we’re not Luddites here, and I’m not dreading a Skynet singularity anytime soon.  We’re a law firm focused on Information Governance, and so we need to remain up to speed on the impacts of a wide range of information technologies.  And the promise of AI is indeed breathtaking.  Sam Altman had a lot on his plate last week, but his Hard Fork interview of November 15, just two days before all the crazy erupted at OpenAI, is a must-listen on the transformative potential for AI, with due regard for attendant risks.

Yet we each choose for ourselves what tools we will use – or not use – when lawyering for clients.  I don’t presume to know whether other lawyers should use generative AI tools, but I do know my answer.  And it’s no for me and my law firm, for three reasons.

First, generative AI is not dependable enough for my taste in doing client work.  And that’s not just the idiocy of filing a brief generated by AI without checking its accuracy.  Simply put, the reliability, or as Altman prefers, the “controllability,” of generative AI is not there yet, or at least not for me.  Hallucination at three percent or closer to 27 percent is well beyond my comfort level – my professional goal remains to hallucinate less than one percent of the time. 

Second, after decades at a big firm, I designed my small firm to be simple, stripping away anything that has more risk or cost than value to me. For example, we don’t use Wi-Fi, because Wi-Fi is unnecessary for us, it has security vulnerabilities, and the requisite safeguards would complicate our firm’s annual ISO 27001 data security audits. For me, using generative AI tools would bring more risk and uncertainty than value to my law practice.

Last, I need to go to the source to ensure I know what I’m doing.  When advising clients on data retention and data security requirements under U.S. federal and state statutes and regulations, I need to personally read the laws and regulations, without any filter or intermediary. Otherwise, I’m not sure. And to be frank, it all sticks better in my head when I take the time to do my work at this deliberate pace, which puts the “sure” in “slowly but surely.”

I know that these are early days for generative AI.  In Gartner Hype Cycle terms, we are still on the Peak of Inflated Expectations, trending toward the Trough of Disillusionment.  I’ll be watching from the sidelines, as AI eventually reaches the Plateau of Productivity. For purely creative writing, including some blogging, I of course acknowledge AI’s kickstarting value. And Kevin O’Keefe nails it when he reminds us that “using AI in legal writing is like taking an open book exam—you still need to understand the concepts.”

So, while state ethics committees will continue to ponder the legal ethics of AI use (despite GPT-4 apparently scoring higher than most lawyers-to-be when taking the Multistate Professional Responsibility Exam), my choice will remain Organic Intelligence, not AI.

No snobbery about this – you do you, and I’ll do me. But in my own legal work for clients, I’m simply not content with using AI-generated content.

As we watch the tsunami of state comprehensive consumer privacy laws now spreading from California across the U.S., it’s time to revisit the flood zone of state-level PII breach notification statutes, which also flowed forth from California back in 2002. By 2018 that wave had reached every state, along with the District of Columbia, Puerto Rico, Guam, and the U.S. Virgin Islands.  Each state has its own unique approach. And the states continue to expand their requirements, especially their definitions of what constitutes PII and the timing and content of mandated notifications. Changes since 2018 are in bold below, reflecting how the tide continues to rise.

Remember, these laws are triggered by the affected individuals’ residency, not where the breach occurred. So, when businesses with employees and customers in many states suffer a data breach, they must stay above water with a wide variety of conflicting and evolving state-level PII breach notification laws. 

Scope of PII

State PII breach notification laws generally apply to a state resident’s name combined with another identifier useful for traditional identity theft, such as the individual’s Social Security number, driver’s license or state identification number, or financial account number with access credentials. But an ever-growing number of states include additional “name +” combination elements in their PII definition (again, bold indicates changes since 2018):

  • Medical information (Alabama, Arkansas, Arizona, California, Colorado, Connecticut, Delaware, D.C., Florida, Illinois, Maryland, Missouri, Montana, Nevada, North Dakota, Oregon, Pennsylvania, Puerto Rico, Rhode Island, South Dakota, Texas, Vermont, Washington, and Wyoming)
  • Health insurance information (Alabama, Arizona, California, Colorado, Connecticut, Delaware, D.C., Florida, Illinois, Maryland, Missouri, Montana, Nevada, North Dakota, Oregon, Pennsylvania, Rhode Island, Texas, Vermont, Washington, and Wyoming)
  • Unique biometric data (Arizona, Arkansas, California, Colorado, Connecticut, Delaware, D.C., Iowa, Illinois, Louisiana, Maryland, Nebraska, New Mexico, New York, North Carolina, Oregon, Vermont, Washington, Wisconsin, and Wyoming)
  • Genetic data (Arkansas (used for identification purposes), California, Delaware, D.C., Maryland, Vermont, and Wisconsin)
  • Shared secrets or security token for authentication (Wyoming)
  • Taxpayer ID or other taxpayer information (Alabama, Arizona, California, Connecticut, Delaware, D.C., Maryland, Montana, Puerto Rico, Vermont, Virginia (employee TIN plus withholding), and Wyoming)
  • IRS identity protection PIN (Arizona, Connecticut, and Montana)
  • Employee ID number with access code or password (North Dakota and South Dakota)
  • Email address or Internet account number, with security access information (Alabama, Delaware, Florida, Maryland, Nevada, New Jersey, Pennsylvania, Rhode Island, and Wyoming)
  • Digital or electronic signature (Arizona, North Carolina, North Dakota, and Washington)
  • Birthdate (North Dakota and Washington)
  • Birth or marriage certificate (Wyoming)
  • Mother’s maiden name (North Dakota)
  • Work-related evaluations (Puerto Rico)
  • Information collected by automated license plate recognition system (California)

And in Arizona, California, Colorado, Connecticut, the District of Columbia, Florida, Georgia, Illinois, Indiana, Maine, Maryland, Nebraska, New York, North Carolina, Oregon, South Dakota, Texas, Vermont, and Washington, notification requirements can attach to specified identification data even without the individual’s name (in some such states with the proviso that such information would sufficiently enable unauthorized account access or identity theft).

PII media & encryption/redaction safe harbors

All of the state breach notification laws apply to PII in electronic or computerized form. But in several states, including Alaska, Hawaii, Indiana, Iowa, Massachusetts, North Carolina, Rhode Island, Washington, and Wisconsin, a breach of PII in any medium, including paper records, can trigger notification requirements.

Effective encryption of PII is an explicit safe harbor from notification obligations in virtually every jurisdiction, but 22 states now add the condition that the encryption key must not have been compromised in the breach. Thirty-three states now explicitly provide “redaction” as a safe harbor (with six states adding the condition that the means to un-redact are uncompromised), and 23 states extend a safe harbor to other means of rendering the information unreadable or unusable.
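To make the interaction of these conditions concrete, here is a minimal sketch of the encryption safe-harbor logic described above (the function name and boolean inputs are hypothetical; the statutes themselves govern):

```python
# Hypothetical sketch: notification may be excused under the encryption
# safe harbor only if the PII was effectively encrypted AND, in states
# that add the condition, the encryption key was not also compromised.
def encryption_safe_harbor(encrypted: bool, key_compromised: bool,
                           state_requires_key_intact: bool) -> bool:
    if not encrypted:
        return False  # unencrypted PII never qualifies
    if state_requires_key_intact and key_compromised:
        return False  # key compromise defeats the safe harbor in these states
    return True

print(encryption_safe_harbor(True, True, True))   # False: key also breached
print(encryption_safe_harbor(False, False, False))  # False: no encryption at all
```

The same pattern would apply, with different inputs, to the redaction safe harbor noted above.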

Notification requirements  

The mandated time frame for notifying affected individuals has commonly been the most “expeditious” or “expedient” time possible, “without unreasonable delay,” considering such factors as the need to determine the scope of the breach, to restore system integrity, and to identify the affected individuals. But increasingly, states are imposing or tightening outside deadlines for notifications:

  • 60 days:  Connecticut (formerly 90 days), Delaware, Louisiana, South Dakota, and Texas (formerly no day limit)
  • 45 days:  Alabama, Arizona, Indiana (formerly no day limit), Maryland, New Mexico, Ohio, Oregon, Rhode Island, Tennessee, Vermont, and Wisconsin
  • 30 days:  Colorado, Florida, Maine (formerly no day limit), and Washington (formerly 45 days)
  • 10 days:  Puerto Rico
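To illustrate why these conflicting deadlines matter in a multistate breach, here is a minimal sketch using a hypothetical lookup of a few of the day limits listed above (in practice, each affected state’s current statute must be checked, and many states impose no fixed day limit at all):

```python
# Hypothetical sketch: outside notification deadlines (in days) for some
# of the jurisdictions listed above. Illustrative only, not legal advice.
OUTSIDE_DEADLINES_DAYS = {
    "Connecticut": 60, "Delaware": 60, "Louisiana": 60,
    "South Dakota": 60, "Texas": 60,
    "Alabama": 45, "Arizona": 45, "Indiana": 45,
    "Colorado": 30, "Florida": 30, "Maine": 30, "Washington": 30,
    "Puerto Rico": 10,
}

def earliest_deadline(states):
    """Return the tightest outside deadline among the affected residents'
    jurisdictions, or None if none of them imposes a fixed day limit."""
    limits = [OUTSIDE_DEADLINES_DAYS[s] for s in states
              if s in OUTSIDE_DEADLINES_DAYS]
    return min(limits) if limits else None

# A breach affecting Texas, Colorado, and Puerto Rico residents must be
# planned around the 10-day Puerto Rico deadline.
print(earliest_deadline(["Texas", "Colorado", "Puerto Rico"]))  # 10
```

The point of the sketch: the most restrictive state sets the operational tempo for the entire incident response.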

Twenty-nine jurisdictions’ statutes now contain prescribed minimum content for breach notifications to individuals, and various states have unique content requirements for such notices.

Thirty-seven of the jurisdictions now require breach reporting to the state’s Attorney General or other designated state agencies, triggered at various specified thresholds of affected individuals, ranging from one to over 1,000. And a similar majority of the states require breach reporting to credit agencies, triggered at differing thresholds, from one to over 10,000.

And all but eight jurisdictions’ statutes now contain some notion of a “risk of harm” exclusion from notification duties, either embedded in the statute’s breach definition or as an independent exception to the duty to notify.

… and the changes keep flowing

The floodplain of these state PII breach notification statutes remains in motion. Notable trends since 2018 include the ongoing rise in states that include biometric data, genetic data, medical information, health insurance information, and taxpayer information as PII, and the continuing increase of states establishing, or shortening, deadlines for making notifications.  States are also becoming yet more directive in specific content requirements for notifications, such as the manner in which credit monitoring and identity theft protection services are offered.

The life raft of a preempting federal law for PII breach notification remains slim, largely because of states’ concerns about such preemption.  So, businesses must continue to wade through the various compliance requirements in these state laws.  Yes, it continues to be a struggle to stay above water. But keeping up with the changes is crucial — both for security incident response readiness, and also for compliantly defining the scope of information subject to the organization’s security safeguards and controls.

Well, turns out I was both right and wrong in my prediction from two years ago: “For the 2020s, the dots already connect clearly – the new impetus for managing information retention and disposal will be data privacy and security compliance.  Buckle up.” That prediction is indeed playing out, but far faster than I expected.

Again, we’ve always known that managing data volumes is prudent for U.S. businesses.  But as a matter of pure legal compliance, U.S. federal and state laws have traditionally followed a “mandatory minimum” retention approach, requiring that businesses keep specified records for at least a required minimum retention period, but not compelling disposal.  With precious few exceptions, U.S. businesses have not been legally required to (1) manage data with retention schedules and (2) dispose of unnecessary data.  And U.S. privacy and data security laws have generally been silent on retention periods for protected information.

But that was then. As noted two years ago, a wide range of new data security and privacy laws are transforming retention schedules and data disposal from merely prudent practices into compliance requirements. And since then, as explored in this blog series, the pace has quickened.

Managing data with retention scheduling and disposing of unnecessary data are now compliance requirements for data privacy and security.

What should you do about this?

  • Clarify what constitutes protected information, based on your business’s geographic footprint and scope of operations.
  • Understand where protected information resides, both in your business’s data systems and through your relationships with service providers and contractors.
  • Update and legally validate your business’s data retention schedule, with particular attention to legally required retention periods, including retention maximums, for records and data sets containing protected information.
  • With that foundation in place, ensure that your business’s policies, contracts, privacy notices, training, and compliance systems foster compliant practices for the safeguarding, timely disposal, and other processing of protected information.
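As a rough illustration of the retention-schedule step above, a schedule line item can be thought of as a record category with a legally required minimum retention period and, increasingly, a retention maximum (the field and function names here are hypothetical, and real schedules are far more granular):

```python
# Hypothetical sketch of one line of a records retention schedule and a
# disposal check. Illustrative only; actual retention periods must be
# legally validated for the business's jurisdictions and record types.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RetentionRule:
    record_category: str
    contains_protected_info: bool
    min_retention_years: int            # legally required minimum
    max_retention_years: Optional[int]  # retention maximum, if any

def disposal_due(rule: RetentionRule, age_years: int) -> bool:
    """Data past its maximum retention period is due for disposal."""
    return (rule.max_retention_years is not None
            and age_years >= rule.max_retention_years)

payroll = RetentionRule("Payroll records", True, 4, 7)
print(disposal_due(payroll, 8))  # True: past the 7-year maximum
print(disposal_due(payroll, 5))  # False: within the retention window
```

A schedule structured this way gives the compliance systems in the last bullet something concrete to enforce.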

But aren’t these the same things that have always been good to do?  Yes indeed.  Managing records and information (more broadly, Information Governance) has been perennially prudent, particularly as our digital age has multiplied the volume and velocity of business data.

Redundant, obsolete, or trivial/transitory data (ROT) is still stubbornly pervasive. It’s not merely unhelpful – ROT escalates cost, risk, and exposure. Here’s my current favorite image for making elimination of ROT a business priority, from talented Canadian RIM professional Christine (CD) Delay:

Courtesy of Christine (CD) Delay

Yet something else remains true. In the real world, what to do has never been as impactful as why to do it.  In the 2000s, a powerful impetus for managing information retention and disposal was the rise of ediscovery, triggering concerns about (1) explosive litigation costs due to unnecessarily retained data and (2) the specter of spoliation sanctions if information is managed poorly.  In the 2010s, an additional, new impetus was the fear of data breaches, with their resulting reputational damage, business interruption, regulatory implications, and legal exposures, all multiplied by retaining unnecessary data. And now, for the 2020s, the newest impetus for managing information retention and disposal is crystal clear – data privacy and security compliance.

We’re witnessing a “rapid, unscheduled disassembly” (thanks SpaceX) of comprehensive consumer privacy laws across the United States. While these new state laws generally have a different, sleeker structure than California’s CCPA/CPRA, they share a similar impact – each such law compels or motivates covered businesses to delete unnecessary data.

Following California’s lead, comprehensive consumer privacy laws have now been enacted in Virginia (effective January 1, 2023), Colorado (effective July 1, 2023), Connecticut (effective July 1, 2023), Utah (effective December 31, 2023), and Iowa (effective January 1, 2025). Here’s how these new laws address data retention and the deletion of unnecessary data:

Data Minimization and Storage Limitation

  • Virginia Consumer Data Protection Act (VCDPA)
    Under the VCDPA, controllers must limit collection of personal data to what is adequate, relevant, and reasonably necessary in relation to the purposes for which such data is processed, as disclosed to the consumer. Va. Code Ann. § 59.1-578(A)(1). Controllers may only process personal data for purposes either reasonably necessary to or compatible with the purposes for which such personal data is processed, as disclosed to the consumer, unless the consumer’s consent is obtained or as otherwise provided in the VCDPA. Va. Code Ann. § 59.1-578(A)(2).
  • Colorado Privacy Act (CPA)
    The CPA requires that a controller’s collection of personal data be adequate, relevant, and limited to what is reasonably necessary in relation to the specified purposes for which the data are processed. Colo. Rev. Stat. § 6-1-1308(3). A controller must not process personal data for purposes that are not reasonably necessary to or compatible with the specified purposes for which the personal data are processed, unless the controller first obtains the consumer’s consent. Colo. Rev. Stat. § 6-1-1308(4).
  • Connecticut Data Privacy Act (CTDPA)
    Under the CTDPA, controllers must limit the collection of personal data to what is adequate, relevant, and reasonably necessary in relation to the purposes for which such data is processed, as disclosed to the consumer. CTDPA § 6(a)(1). Controllers must, except as otherwise provided in CTDPA, not process personal data for purposes that are neither reasonably necessary to, nor compatible with, the disclosed purposes for which such personal data is processed, as disclosed to the consumer, unless the controller obtains the consumer’s consent. CTDPA § 6(a)(2).

In each of these laws, the definition of “process” includes the storage and deletion of consumers’ personal information, and so their processing limitation includes an obligation to not unnecessarily retain consumer data. And as with California’s CCPA/CPRA, the obligation to provide consumers a compliant privacy policy on how personal data will be “processed” requires notice of retention practices, which as a practical matter are based upon the business’s records retention policies and records retention schedules.

De Facto Deletion Impact

All five of these new laws provide deletion rights to consumers, but covered businesses are not required to delete data for which retention is required by records retention laws or regulations:

  • Virginia: The VCDPA does not restrict a controller’s or processor’s ability to comply with federal, state, or local laws, rules, or regulations. Va. Code Ann. § 59.1-582(A).
  • Colorado: The CPA does not restrict a controller’s or processor’s ability to comply with federal, state, or local laws, rules, or regulations. Colo. Rev. Stat. § 6-1-1304(3)(a).
  • Connecticut: The CTDPA does not restrict a controller’s or processor’s ability to comply with federal, state, or municipal ordinances or regulations. CTDPA § 10(a).
  • Utah: The UCPA does not restrict a controller’s or processor’s ability to comply with a federal, state, or local law, rule, or regulation. Utah Code Ann. § 13-61-304(1).
  • Iowa: The ICDPA does not restrict a controller’s or processor’s ability to comply with federal, state, or local laws, rules, or regulations. Iowa Code § 715D.7(1).

Thus, similar to the original CCPA, these laws reward covered businesses that carefully manage their data with retention schedules and that delete unnecessary data. Covered businesses that manage personal data under a legally validated retention schedule and that dispose of such data once retention is no longer legally required can avoid uncertainty, inefficiency, and cost in handling consumer deletion requests.

And on and on…

And it ain’t over yet, not even close. Comprehensive consumer privacy bills are percolating in many more state legislatures across the country. In April alone, three new comprehensive consumer privacy acts passed in state legislatures and were sent to governors in Indiana, Montana, and Tennessee. If signed into law, these three additional states’ laws will have the same double impact on data deletion as those of Virginia, Colorado, and Connecticut, by both (1) explicitly requiring data minimization and storage limitation, and (2) incenting covered businesses to use legally validated retention schedules and data deletion to curb inefficiency and cost in handling customer deletion requests:

  • Indiana Consumer Data Privacy Act (ICDPA)(effective January 1, 2026): A controller must limit the collection of personal data to what is adequate, relevant, and reasonably necessary in relation to the purposes for which such data is processed, as disclosed to the consumer, and a controller must not process personal data for purposes that are neither reasonably necessary for nor compatible with the disclosed purposes for which the personal data is processed, unless the controller obtains the consumer’s consent. Ind. Code § 25-15-4-1(1)&(2). “Processing” includes storage and deletion of personal information. Ind. Code § 25-15-2-21. And in handling consumer data requests, the ICDPA does not restrict a controller’s or processor’s ability to comply with federal, state, or local laws, rules, or regulations, such as retention requirements. See Ind. Code § 25-15-8-1(a)(1).
  • Montana Consumer Data Privacy Act (MCDPA)(effective October 1, 2024): A controller must limit the collection of personal data to what is adequate, relevant, and reasonably necessary in relation to the purposes for which the personal data is processed, as disclosed to the consumer; and must not, except as otherwise provided in the MCDPA, process personal data for purposes that are not reasonably necessary to or compatible with the disclosed purposes for which the personal data is processed as disclosed to the consumer unless the controller obtains the consumer’s consent. MCDPA § 7(1)(a)&(2)(a). “Processing” includes storage and deletion of personal data. MCDPA § 2(17). And when handling consumer data requests, nothing in the MCDPA restricts a controller’s or processor’s ability to comply with federal, state, or municipal ordinances or regulations, such as retention requirements. See MCDPA § 11(1)(a).
  • Tennessee Information Privacy Act (TIPA)(effective July 1, 2025): A controller must limit the collection of personal information to what is adequate, relevant, and reasonably necessary in relation to the purposes for which the data is processed, as disclosed to the consumer; and, except as otherwise provided in TIPA, must not process personal information for purposes that are beyond what is reasonably necessary to and compatible with the disclosed purposes for which the personal information is processed, as disclosed to the consumer, unless the controller obtains the consumer’s consent. Tenn. Code Ann. § 47-18-3204(a)(1)&(2). “Processing” includes storage and deletion of personal information. Tenn. Code Ann. § 47-18-3201(18). And for responding to consumer data requests, TIPA does not restrict a controller’s or processor’s ability to comply with federal, state, or local laws, rules, or regulations, such as data retention requirements. See Tenn. Code Ann. § 47-18-3208(a)(1).

California set all of this in motion with the CCPA. But remember, this is not the first time that California has lit a match on data-related laws that then swept across the United States. Consider this: in 2003, only California had a state-level law requiring notification of individuals whose PII had been breached. By 2018, PII breach notifications were required by statute in all 50 states, the District of Columbia, Guam, Puerto Rico, and the U.S. Virgin Islands.

And so, as comprehensive data privacy legislation ignites across the states in 2023 and beyond, the imperative will only escalate for businesses to manage their data with retention schedules and to dispose of unnecessary data.

Last month California finalized its updated regulations under the California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act (CPRA). With the CPRA, California has upped the ante on requiring data retention schedules and disposal of unnecessary data.

As always, to fully appreciate where we are, we need to remember from where we’ve come. With but rare exceptions, U.S. data privacy laws have not explicitly required data retention schedules, or data minimization (only collect data we need), or storage limitation (dispose of data when no longer needed). But this began to change in January 2020 with the CCPA, the United States’ first state-level comprehensive data privacy law. 

CCPA incented retention schedules and data disposal

The CCPA mandated data minimization through the vehicle of notice, by prohibiting covered businesses from collecting additional categories of PI or using collected PI for additional purposes beyond the purposes noticed at collection, without such notice to the consumer.  Cal. Civ. Code § 1798.100(b). Yet as originally enacted, the CCPA did not explicitly require either retention scheduling or, absent a consumer’s verifiable deletion request, data disposal. 

But the CCPA quietly did make managing data retention and disposal a practical priority, due to the repercussions of the CCPA’s deletion right.   Consumers’ ability under the CCPA to request deletion of their PI shifted decision-making power for data retention from the covered business to its consumers, leaving the business at their unpredictable mercy – some consumers might be fine with, or oblivious to, lengthy data retention, while others could insist, through verifiable deletion requests, that their PI be disposed of promptly.  The result is a costly and inefficient predicament for such businesses. 

Yet the CCPA’s deletion right has safe harbors.  A covered business can compliantly refuse a deletion request if retaining the consumer’s PI is necessary for such matters as completing the transaction with the consumer, performing a contract with the consumer, or to “[c]omply with a legal obligation.”  Cal. Civ. Code § 1798.105(d).  And the CCPA does not restrict a covered business’s ability to “[c]omply with federal, state, or local laws,” such as legal retention requirements.  Cal. Civ. Code § 1798.145(a)(1).

These are precisely the kind of factors used to establish retention periods in a well-constructed data retention schedule. And so, covered businesses that manage personal data under a legally validated retention schedule and that dispose of such data once no longer required can avoid uncertainty, inefficiency, and cost in handling CCPA consumer deletion requests.
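The deletion-right analysis above reduces to a simple decision, sketched here with hypothetical inputs (this is an illustration of the logic described in the preceding paragraphs, not statutory language):

```python
# Hypothetical sketch of the CCPA deletion-request decision: a business
# may refuse deletion while the PI is still needed for the transaction,
# a contract with the consumer, or a legal retention obligation.
def may_refuse_deletion(needed_for_transaction: bool,
                        needed_for_contract: bool,
                        under_legal_retention: bool) -> bool:
    return (needed_for_transaction
            or needed_for_contract
            or under_legal_retention)

# PI past its legally required retention period and no longer needed for
# the consumer relationship must be deleted on a verifiable request.
print(may_refuse_deletion(False, False, False))  # False: must delete
print(may_refuse_deletion(False, False, True))   # True: legal retention safe harbor
```

A business that disposes of PI on schedule rarely reaches the first branch at all, because the data is already gone when the request arrives.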

CPRA explicitly requires retention schedules and data disposal

Effective January 1, 2023, the CPRA made sweeping changes to the CCPA. And regarding retention schedules and data disposal, while the CCPA was indirect, the CPRA says the quiet part out loud – loud and clear. Under the CPRA, covered businesses:

  • Must inform consumers how long the business intends to retain each category of PI the business collects, or if that is not possible, the criteria used to determine the retention period.
  • Must not retain PI for longer than is reasonably necessary and proportionate for the disclosed purpose(s) of collection or processing.

Cal. Civ. Code § 1798.100(a)(3) & (c).  Thus, for the first time under any U.S. comprehensive data privacy law, the CPRA explicitly and directly requires covered businesses to both (1) manage the CPRA’s broad range of PI under data retention schedule rules disclosed through notice to consumers, and (2) dispose of PI once it is no longer required for legal compliance or as reasonably necessary for the disclosed purposes for its collection and use.

The CPRA’s enactment also marked another important change in the impact of these requirements. Under the CCPA, the PI of covered businesses’ employees was exempt from the various consumer rights, including the deletion right. But coinciding with enactment of the CPRA, the employee PI exemption expired. So now, the CPRA’s retention schedule and data deletion requirements also apply to employee data.

The CPRA maintains consumers’ CCPA rights to request PI access and disposal, and it also adds additional consumer rights, such as to rectify inaccurate PI and to limit use and disclosure of sensitive PI.  As a result, the same practical incentives continue, as under the original CCPA, for covered businesses to carefully manage data retention and disposal.  Prudent businesses will still want to carefully manage retention of PI in light of the logistics, cost, and inefficiency involved in responding to verifiable requests.  And because of the deletion right’s safe harbors, covered businesses that dispose of PI under a legally-validated retention schedule once the PI is no longer needed to comply with legal retention requirements or the business’s needs for the consumer transaction or contract will be free of the cost, inefficiency, and unpredictability of selectively deleting the PI of individual consumers.

But because it also contains direct, explicit requirements for data minimization and storage limitation, the CPRA elevates data retention schedules and disposal of unnecessary data from prudent practice to direct, explicit compliance requirements.

The civil and administrative enforcement date for the CPRA and its newly finalized regulations is upon us – July 1, 2023. And California no longer stands alone in using comprehensive privacy laws to compel data minimization and data storage limitation. We’ll explore that next time, in Less Data #6.

There’s been a blogging blizzard about two recent cases interpreting the Illinois Biometric Information Privacy Act (BIPA). In early February 2023 the Illinois Supreme Court ruled in Tims v. Black Horse Carriers, Inc. that a five-year limitations period applies to all BIPA claims. And just two weeks later, in Cothron v. White Castle Sys., Inc., the Court held that “a claim accrues under [BIPA] with every scan or transmission of biometric identifiers or biometric information without prior informed consent.”

These are indeed important rulings. Because BIPA authorizes private lawsuits for statutory damages without a showing of actual injury, these decisions add more fuel to the fire of BIPA class action litigation.

But there’s yet another recent BIPA case that’s important, regarding privacy laws that require data retention schedules and disposal of unnecessary data.

First, some background. With rare exceptions, U.S. privacy laws traditionally have not required either data minimization (only collect the sensitive data actually needed) or storage limitation (only keep such data while needed for its collection purposes). One of those early outliers was BIPA, first effective in 2008.

Our focus here is on a particular provision in BIPA, in Section 15(a):

A private entity in possession of biometric identifiers or biometric information must develop a written policy, made available to the public, establishing a retention schedule and guidelines for permanently destroying biometric identifiers and biometric information when the initial purpose for collecting or obtaining such identifiers or information has been satisfied or within 3 years of the individual’s last interaction with the private entity, whichever occurs first. Absent a valid warrant or subpoena issued by a court of competent jurisdiction, a private entity in possession of biometric identifiers or biometric information must comply with its established retention schedule and destruction guidelines.

740 ILL. COMP. STAT. 14/15(a) (emphasis added). 

BIPA thus included elements of data privacy seldom seen in other U.S. data privacy laws at that time – data is tied to the purpose(s) for collecting it (data minimization), and the regulated data must be disposed of once it is no longer necessary, pursuant to a written data retention schedule (storage limitation).
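Section 15(a)’s “whichever occurs first” rule can be sketched as a date computation (illustrative only; this uses naive year arithmetic that ignores leap-day edge cases, and it is of course not legal advice):

```python
# Hypothetical sketch of 740 ILCS 14/15(a): destruction is due at the
# earlier of (1) satisfaction of the initial collection purpose, or
# (2) three years after the individual's last interaction.
from datetime import date
from typing import Optional

def bipa_destruction_deadline(purpose_satisfied: Optional[date],
                              last_interaction: date) -> date:
    three_years_out = last_interaction.replace(
        year=last_interaction.year + 3)  # naive; ignores Feb 29
    if purpose_satisfied is None:
        return three_years_out
    return min(purpose_satisfied, three_years_out)

# An employee terminated January 7, 2021, whose data's purpose ended at
# termination: destruction is due then, well before the 3-year backstop.
print(bipa_destruction_deadline(date(2021, 1, 7), date(2021, 1, 7)))
```

The Mora facts discussed below turn not on this computation but on the absence of any written schedule at the time of collection.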

And this brings us to the Illinois Court of Appeals decision in Mora v. J&M Plating, Inc., 2022 IL App (2d) 210692, 2022 WL 17335861 (Ill. App. Ct. November 30, 2022). Mora dispels any notion that BIPA’s data retention schedule requirement is inconsequential.

Plaintiff Mora was hired by the defendant employer in July 2014 and began clocking into work by fingerprint scan in September 2014.  In May 2018, Mora’s employer published its biometric data privacy policy, which contained a retention and destruction schedule for biometric data. Mora signed the policy notice and consented to the collection and use of his biometric data. Mora was terminated from employment on January 7, 2021, and his biometric information was destroyed approximately two weeks after his termination.  Nine days later, Mora filed a class action lawsuit against his former employer, alleging BIPA violations.  Id. at *3.

There was no dispute over whether the employer’s policy complied with BIPA. The “J&M PLATING BIOMETRIC INFORMATION PRIVACY POLICY” contained the BIPA-required retention and destruction schedule and provided the mandated privacy notice.  Id. at fn. 2.  Plaintiff Mora consented in 2018 to the employer’s collection and use of his biometric data, and the employer disposed of Mora’s data in compliance with its BIPA policy after plaintiff was terminated.  Id. at *3.

The problem, and the reason why the Illinois Appellate Court reversed the trial court’s dismissal of plaintiff’s lawsuit, was instead the lack of the BIPA-required retention schedule and privacy notice from 2014 to 2018.  According to the court, “defendant began collecting plaintiff’s biometric data in September 2014, and this triggered its obligation under [BIPA] section 15(a) to develop a retention-and-destruction schedule. Defendant did not have a schedule in place until May 2018, or nearly four years later. Thus, it violated section 15(a).”  And because a BIPA violation is sufficient to support an individual’s statutory cause of action, no showing of actual harm to plaintiff was required.  Id. at *8.

What does this mean? Certainly, BIPA-covered businesses are on notice that they must comply with BIPA’s data retention schedule requirement before collecting any covered data. Yet more broadly, this case signals that any privacy law requiring data retention schedules may result in enforcement consequences for companies that fail to establish and maintain such data retention scheduling.

And BIPA is no longer an outlier – there’s a growing wave of state-level comprehensive consumer privacy laws that require data minimization and storage limitation, covering a far broader range of personal information than under BIPA. We’ll take a closer look at that next time, in Less Data #5.

We’ve already seen how new FTC regulations for GLBA-regulated financial institutions require retention schedules and disposal of unnecessary data as essential data security controls. The FTC is now also taking that position for all businesses under Section 5 of the FTC Act, as seen in a slew of recent FTC data security enforcement actions.

Two years ago I summarized the history of FTC enforcement on this issue. For decades the FTC has enforced reasonable data security under the authority of Section 5 of the FTC Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce.” 15 U.S.C. § 45(a)(1).  The FTC has pursued both large, well-known businesses and small, obscure companies for inadequate security practices.  But the common theme is that the targeted business, according to the FTC, either deceptively or unfairly engaged in unreasonable data security practices for consumers’ personal information.

What was notable two years ago were the FTC’s Section 5 enforcement actions against InfoTrax Systems in late 2019 and SkyMed International in early 2021. In re InfoTrax Systems, L.C., No. C-4696 (F.T.C. December 30, 2019) (final complaint & consent order); In re SkyMed International, No. C-4732 (F.T.C. January 26, 2021) (final complaint & consent order). In each of these enforcement actions, the FTC alleged that the business “failed to have a policy, procedure, or practice for inventorying and deleting consumers’ personal information stored on [its] network that is no longer necessary….”  And in each consent order the FTC required “[p]olicies, procedures, and technical measures to systematically inventory Personal Information in [its] control and delete Personal Information that is no longer necessary….”

I ended that 2021 post by observing “[i]f the FTC’s position in SkyMed and InfoTrax takes hold more broadly, the repercussions for over-retention will be sweeping in scope.”

Sweeping indeed. In a flurry of 2022 and 2023 enforcement actions, the FTC has now doubled down on its position that reasonable data security requires data retention schedules and disposal of unnecessary data:

Residual Pumpkin (and later its purchaser PlanetArt) operated the platform CafePress.com, on which consumers purchased customized t-shirts, coffee mugs, and similar merchandise from other consumers or “shopkeepers.”  CafePress’s operators routinely collected information from consumers and shopkeepers – including names, email addresses, telephone numbers, birth dates, gender, photos, social media handles, security questions and answers, passwords, PayPal addresses, the last four digits and expiration dates of credit cards, and Social Security or tax identification numbers of shopkeepers – and stored this sensitive personal information in clear text, except for passwords, which were encrypted.

In its 2021 Section 5 enforcement action complaint, the FTC alleged that CafePress’s operators failed to protect the personal information of buyers and sellers stored on its network and to adequately respond to multiple security breaches.  Among other inadequate security practices,  CafePress’s operators “created unnecessary risks to Personal Information by storing it indefinitely on its network without a business need.”

The FTC approved a settlement and consent agreement with CafePress’s operators on June 23, 2022.  The consent order mandates that CafePress’s operators establish, implement, and maintain a comprehensive information security program that protects the privacy, security, confidentiality, and integrity of collected personal information, including “[p]olicies and procedures to minimize data collection, storage, and retention, including data deletion or retention policies and procedures….”  The FTC also assessed a civil penalty of $500,000.

Drizly, an Uber subsidiary, operates an e-commerce platform through which local retailers sell alcohol online to adult customers.  The Drizly platform collects and stores both personal information that consumers provide and information that it automatically obtains from consumers’ computers and mobile devices.

In its 2022 Section 5 enforcement action complaint against both Drizly and its cofounder and CEO Rellas, the FTC alleged that data security failures led to a data breach exposing personal information of 2.5 million consumers.  Among other alleged security failures, Drizly failed to “[h]ave a policy, procedure, or practice for inventorying and deleting consumers’ personal information stored on its network that was no longer necessary.”

The FTC finalized the settlement and consent agreement with Drizly and Rellas on January 10, 2023.  The consent order mandates that Drizly destroy any collected personal data not necessary to provide products or services to consumers, document and report to the Commission what data it destroyed, and refrain from collecting or storing personal information unless it is necessary for specific purposes outlined in a retention schedule. And to punctuate the FTC’s resolve, the consent order also requires Rellas to implement an information security program at any future business that collects consumer information from more than 25,000 individuals, where he is a majority owner, CEO, or senior officer with information security responsibilities.

  • In re Chegg, Inc., No. C-4782 (F.T.C. January 25, 2023) (complaint & consent order)

Chegg markets and sells direct-to-student educational products and services, primarily to high school and college students.  Chegg collects sensitive personal information from users, such as information about users’ religious denomination, heritage, birthdate, parents’ income range, sexual orientation, and disabilities for Chegg’s scholarship search service, and users’ images and voice in connection with Chegg’s online tutoring services.  As an employer, Chegg also collects such personal information as employees’ names, birth dates, Social Security numbers, and financial information.

In its Section 5 enforcement action complaint, the FTC alleged that Chegg’s poor data security practices resulted in four separate data breaches and the unauthorized publication of 40 million customers’ personal information.  Among other alleged security lapses, Chegg “failed to have a policy, process, or procedure for inventorying and deleting users’ and employees’ personal information stored on Chegg’s network after that information is no longer necessary….”

On January 25, 2023, the FTC approved a settlement and consent agreement with Chegg.  The consent order requires Chegg to establish, implement, and maintain a comprehensive information security program that protects the security, availability, confidentiality, and integrity of specified personal information of customers under Respondent’s control, including, among other security controls, “[p]olicies and procedures to minimize data collection, storage, and retention, including data deletion or retention policies and procedures….”  The consent order further requires Chegg to:

“Document and adhere to a retention schedule for Covered Information [meaning types of consumer personal information as defined in the consent order]. Such schedule shall set forth: (1) the purpose or purposes for which each type of Covered Information is collected; (2) the specific business needs for retaining each type of Covered Information; and (3) a set timeframe for deletion of each type of Covered Information (absent any intervening deletion requests from consumers) that precludes indefinite retention of any Covered Information….”
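
The consent order’s three required elements map naturally onto a structured inventory. Purely as an illustrative sketch – the data types, purposes, and timeframes below are invented for demonstration and are not from the order itself – such a schedule might be modeled as:

```python
from dataclasses import dataclass

# Hypothetical sketch of a retention schedule carrying the three elements
# the consent order requires. All data types, purposes, and timeframes are
# invented for illustration.

@dataclass
class ScheduleEntry:
    data_type: str       # the type of Covered Information
    purpose: str         # (1) purpose(s) for which it is collected
    business_need: str   # (2) specific business need for retaining it
    retention_days: int  # (3) set timeframe for deletion

schedule = [
    ScheduleEntry("scholarship_questionnaire",
                  "match students to scholarship opportunities",
                  "service delivery", 365),
    ScheduleEntry("employee_tax_records",
                  "payroll and tax reporting",
                  "legal compliance", 7 * 365),
]

# "Precludes indefinite retention": every entry must set a finite timeframe.
assert all(entry.retention_days > 0 for entry in schedule)
```

The closing assertion captures the design point: a schedule that “precludes indefinite retention” is one in which every entry carries a finite deletion timeframe.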

The FTC is also homing in on unnecessary data retention in its recent privacy enforcement actions under FTC Act Section 5, punctuated by millions of dollars in civil penalties:

GoodRx Holdings, Inc. is a “consumer-focused digital healthcare platform” that advertises, distributes, and sells health-related products and services directly to consumers.  The FTC investigated GoodRx’s sharing of customer personal and health information with third-party social media platforms and advertisers as violations of FTC Act Section 5 and of the FTC’s Health Breach Notification Rule.  The matter was resolved with a Stipulated Order for Permanent Injunction, Civil Penalty Judgment, and Other Relief filed in February 2023 in the United States District Court for the Northern District of California.

Among the order’s various requirements, GoodRx must identify and instruct all entities that received personal information of GoodRx’s customers to delete all such information wrongfully received from GoodRx and to confirm such deletion in writing.  GoodRx must also establish, implement, and maintain a comprehensive privacy program that protects the privacy, security, availability, confidentiality, and integrity of the consumers’ personal information.  One mandated safeguard for the privacy program is that GoodRx must establish and maintain a data retention policy that includes:

“a retention schedule that limits the retention of Covered Information for only as long as is reasonably necessary to fulfill the purpose for which the Covered Information was collected; provided, however, that such Covered Information need not be destroyed, and may be disclosed, to the extent requested by a government agency or required by law, regulation, or court order;” and

“a requirement that each Covered Business document, adhere to, and make publicly available … a retention schedule for Covered Information, setting forth: (1) the purposes for which such information is collected; (2) the specific business need for retaining each type of Covered Information; and (3) a set timeframe for Deletion of each type of Covered Information (absent any intervening Deletion requests from consumers) that precludes indefinite retention of any Covered Information.”

The Stipulated Order also assessed a civil penalty against GoodRx of $1,500,000.

BetterHelp offers online counseling services. Consumers fill out a questionnaire with sensitive mental health information and also provide their name, email address, birth date, and other personal information. BetterHelp promised consumers that it would not use or disclose their personal health data except for limited purposes, such as to provide counseling services. But according to the FTC, BetterHelp provided consumers’ email addresses, IP addresses, and health questionnaire information to such social media platforms as Facebook, Snapchat, Criteo, and Pinterest for advertising purposes, which, along with other alleged data security and privacy program shortcomings, violated Section 5 of the FTC Act.

On March 2, 2023, the FTC approved a consent order with BetterHelp, subject to a thirty-day public comment period.  The terms of the consent order mirror those in GoodRx summarized above, including the requirement that BetterHelp instruct entities to delete customer information wrongfully received from BetterHelp and to confirm such deletion, and also the same requirements to document, adhere to, and publish a retention schedule for consumers’ personal information “that precludes indefinite retention of any Covered Information.”

The FTC also assessed a civil penalty against BetterHelp of $7,800,000.

The FTC is not being subtle about this. In case the message hasn’t landed, a February 2023 FTC blog post laid out three key elements for systemically addressing the security and privacy risks of complex data systems. Beyond multi-factor authentication and encrypted/authenticated system connections, what is the third crucial element? You guessed it:

(3) Requiring companies to develop a data retention schedule, publish it, and then stick to it

A final provision is a requirement to develop a data retention schedule, publish it, and then stick to it. This embraces the premise that the most secure data is the data that’s not stored at all. Further, implementing this requirement inevitably requires companies to have a strong internal catalogue of all the data they store. This provides other benefits, such as ensuring that they’ll be able to comprehensively comply with requests from users to delete data and have the information needed to prioritize protections based on the types of data they’re storing. 

The FTC has updated its data security regulations for the financial institutions it regulates under the Gramm-Leach-Bliley Act (GLBA). The FTC’s revised requirements for information security programs, effective June 1, 2023, will now mandate data retention policies and disposal of unnecessary customer information.

To appreciate what this means, we must take a quick look at how we got here. GLBA, enacted back in 1999, required financial institution regulators to establish standards for safeguarding the security and confidentiality of customer data.  15 U.S.C. § 6801(b).  The regulators obliged, with varying approaches typical of our idiosyncratic U.S. financial regulatory ecosystem.  The federal banking agencies (FRB, OCC, & FDIC) promulgated the Interagency Guidelines Establishing Information Security Standards, see 12 C.F.R. Part 30, App. B, with detailed, granular security controls requirements.  The NCUA adopted similarly specific safeguards for credit unions.  12 C.F.R. Part 748, App. A.  In contrast, the SEC (Regulation S-P, 17 C.F.R. § 248.30(a)) and the FTC (16 C.F.R. Part 314) took a high-level approach with their respective standards, requiring safeguards reasonably designed to ensure security and confidentiality and to protect against anticipated threats and unauthorized access or use.  For the insurance industry, GLBA security standards were left to state departments of insurance, consistent with federal deference to state-level regulation of insurance.

The key point here is that no federal GLBA regulator established security standards that directly required either data retention scheduling or the disposal of customer data no longer required for legal compliance or business purposes.  The banking agencies’ and NCUA’s standards spoke only to the proper means of disposal, not when customer data must be disposed of. And the SEC and FTC standards were silent on these topics.

Until now.

In 2021 the FTC took a fresh look at its Safeguards Rule, 16 C.F.R. Part 314, which had remained essentially untouched since it was first promulgated back in 2003. The resulting amendments updated the Rule to better address the current cyber-risk environment. And the amended Rule is more specific and granular in its required elements for the mandated information security program.

The significant point here is that the updated FTC Safeguards Rule for the first time adds data retention schedules and disposal of unnecessary data as required elements of a compliant security program for customer information. Entities subject to the amended Safeguards Rule must, effective June 1, 2023:

  • Develop, implement, and maintain procedures for the secure disposal of customer information in any format no later than two years after the last date the information is used in connection with the provision of a product or service to the customer to which it relates, unless such information is necessary for business operations or for other legitimate business purposes, is otherwise required to be retained by law or regulation, or where targeted disposal is not reasonably feasible due to the manner in which the information is maintained; and
  • Periodically review your data retention policy to minimize the unnecessary retention of data. 16 C.F.R. § 314.4(c)(6).
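
Purely as an illustrative sketch (the function name, its parameters, and the collapsing of the Rule’s exceptions into a single boolean are all assumptions for demonstration), the default two-year disposal deadline might be expressed as:

```python
from datetime import date, timedelta

# Simplified illustration of the amended Safeguards Rule's default disposal
# deadline (16 C.F.R. § 314.4(c)(6)). The Rule's exceptions (ongoing business
# need, legal retention requirements, infeasibility of targeted disposal) are
# reduced here to one boolean for demonstration purposes.

TWO_YEARS = timedelta(days=730)

def disposal_due(last_used: date, today: date,
                 exception_applies: bool = False) -> bool:
    """True if the customer information should now be securely disposed of."""
    if exception_applies:
        return False
    return today - last_used > TWO_YEARS

# Information last used three years ago, with no exception applying:
print(disposal_due(date(2020, 6, 1), date(2023, 6, 1)))  # True
```

In practice each exception would of course be evaluated on its own terms rather than collapsed into one flag; the point of the sketch is that the Rule now supplies a concrete default clock for disposal.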

This focus on data retention schedules and data disposal as essential security controls for financial institutions echoes a similar recent trend in state-level insurance laws under GLBA, discussed here, and in the New York DFS cybersecurity regulations for financial institutions, mentioned in Less Data #1. Yet it also aligns with the FTC’s current view that retention schedules and data disposal are crucial to data security for all types of businesses. For example, the FTC’s 2016 guidance document Protecting Personal Information: A Guide for Business stressed the “Scale Down” principle, which is to keep only what you need for your business:

“If you don’t have a legitimate business need for sensitive personally identifying information, don’t keep it. In fact, don’t even collect it. If you have a legitimate business need for the information, keep it only as long as it’s necessary. …  If you must keep information for business reasons or to comply with the law, develop a written records retention policy to identify what information must be kept, how to secure it, how long to keep it, and how to dispose of it securely when you no longer need it.”

So for some time now the FTC has been moving toward the position that data retention schedules and data disposal are essential for reasonable data security. This position is clearly reflected in the FTC’s amended GLBA Safeguards Rule. But how deeply has this position permeated the FTC’s actual enforcement of reasonable data security beyond the GLBA financial institution setting? We’ll explore that in Less Data #3.

As mentioned in the initial post in this series, data security laws are emerging with explicit requirements to dispose of unnecessary data. But will regulators take this seriously? The 2022 enforcement actions against EyeMed Vision Care LLC provide $5.1 million reasons to conclude yes.

First, some context. Carefully managing data retention and disposal is one of the most effective security safeguards for any business. You can’t have a breach of data your business no longer retains, right? But U.S. state laws mandating reasonable data security for personally identifiable information (PII) traditionally have not required that PII be disposed of once no longer needed. And similarly, data safeguards rules for the financial services sector under the Gramm-Leach-Bliley Act (GLBA) traditionally have not required either data retention policies or disposal of customer data once no longer required for legal compliance or business purposes. 

But this began to change in recent years:

  • Several states’ PII security laws now specifically require disposal of PII once no longer needed for business purposes (I summarized these developments in a 2021 post). A good example is New York’s SHIELD Act. As of 2020, the SHIELD Act requires businesses that own or license computerized data with PII of a New York resident to “develop, implement and maintain reasonable safeguards to protect the security, confidentiality and integrity” of the PII.  To be deemed compliant, such businesses must “dispose of [PII] within a reasonable amount of time after it is no longer needed for business purposes by erasing electronic media so that the information cannot be read or reconstructed.”  N.Y. Gen. Bus. Law § 899-bb(2)(b)(ii)(C)(4) (emphasis added).
  • New York also established sweeping new data security rules specifically for the financial services sector. The Cybersecurity Requirements for Financial Services Companies of the New York State Department of Financial Services (NYDFS) apply broadly to financial services businesses licensed or registered under New York’s Banking Law, Insurance Law, or Financial Services Law.  23 NYCRR § 500.1(c).  The NYDFS Cybersecurity Rules broke new ground by requiring covered entities to have “policies and procedures for the secure disposal on a periodic basis of any nonpublic information … that is no longer necessary for business operations or for other legitimate business purposes of the covered entity, except where such information is otherwise required to be retained by law or regulation, or where targeted disposal is not reasonably feasible due to the manner in which the information is maintained.” 23 NYCRR § 500.13.

So fine, we now have new data security laws requiring that businesses dispose of unnecessary data. But are regulators actually serious about this? Yes indeed – which brings us to EyeMed Vision Care LLC (EyeMed).

In re EyeMed Vision Care LLC, No. 21-071 (N.Y. January 18, 2022). The New York Attorney General conducted a SHIELD Act investigation of EyeMed in the wake of a data breach involving a hacker’s access to an EyeMed email account. The hacked account contained six years of sensitive personal data provided by 2.1 million EyeMed customers for vision benefits enrollment and coverage purposes.  The matter was settled in early 2022. The Assurance of Discontinuance included the Attorney General’s finding that “[i]t was unreasonable to leave personal information in the affected email account for up to six years rather than to copy and store such information in more secure systems and delete the older messages from the affected email account, particularly in light of the unreasonable protections for the affected email account at the time of the breach….”  Among other mandates, the Assurance requires EyeMed to “permanently delete customer Personal Information when there is no reasonable business or legal purpose to retain it.”  EyeMed was also assessed a penalty of $600,000.

In re EyeMed Vision Care LLC (NYDFS October 18, 2022). EyeMed’s troubles were not over.  As an NYDFS licensee due to the insurance aspects of its business, EyeMed was also investigated by NYDFS under its cybersecurity regulations. The parties reached a settlement under an NYDFS consent order in October 2022.   Among other findings of cybersecurity failings, NYDFS found that “because EyeMed failed to implement a sufficient data minimization strategy and disposal process for the Mailbox, the compromised shared Mailbox contained old data that was accessible to the threat actor. Proper disposal processes minimize the amount of NPI accessible to an unauthorized third party during a Cyber Event.”  Thus, “[a]t the time of the Cyber Event, EyeMed did not have policies and procedures in place for the secure disposal on a periodic basis of NPI contained within the Mailbox that was no longer necessary for business operations or other legitimate business purpose, in violation of 23 NYCRR § 500.13.”  The NYDFS consent order required EyeMed to perform a compliant security risk assessment and establish compliant security controls.  NYDFS also assessed a civil penalty against EyeMed of $4,500,000, without recourse to tax treatment or insurance reimbursement.

EyeMed offers a cautionary tale. Not only do state-level data security laws increasingly require disposal of unnecessary data, but regulators appear willing and serious in enforcing retention schedule and data disposal mandates.

Two years ago I made a prediction: “For the 2020s, the dots already connect clearly – the new impetus for managing information retention and disposal will be data privacy and security compliance.  Buckle up.”

This was the last line of a 2021 blog series exploring then-recent developments in U.S. data privacy and security laws that had begun to transform retention schedules and data disposal from merely prudent practices into compliance requirements.

So, where do things stand now? The trend not only continues, it is accelerating – less data matters now even more than ever.

Managing data volumes has always been prudent for U.S. businesses.  But as a matter of pure legal compliance, U.S. federal and state laws have historically followed a “mandatory minimum” retention approach, requiring that businesses keep specified records for at least a required minimum retention period, but not compelling disposal.  With precious few exceptions, U.S. businesses have not been legally required to (1) manage data with retention schedules and (2) dispose of unnecessary data.  And U.S. privacy and data security laws have generally been silent on retention periods for protected information.

But that was then. Two years ago I mapped changes in U.S. data security and privacy laws that would now require data retention scheduling and disposal of unnecessary data, under:

But what I failed to anticipate was how rapidly the pace would quicken. Two years later, all of the changes noted above continue, but now with the accelerants of:

  • New state-level data security enforcement activity that compels data retention schedules and data disposal;
  • New GLBA data security rules requiring retention schedules and disposal of unnecessary data;
  • An upsurge in FTC data security enforcement actions that put data retention and disposal at center stage;
  • A new biometric privacy court ruling under BIPA on data retention schedule requirements; and
  • A growing wave of new comprehensive state consumer privacy laws mandating data minimization, data retention schedules, and disposal of unnecessary data.  

I’ll explore each of these in upcoming posts … stay tuned.