Selecting the right initial project(s), determining outcomes and measures, and preparing the business case are important groundwork for your Information Governance initiative, as discussed in Part 1.  But to secure resilient management support for an ongoing initiative, you’ll also want to tie the individual projects to strategic objectives for Information Governance at your organization.

Strategic IG Objectives

A single successful project is a fine start, but higher-level strategic objectives are needed to foster an ongoing information governance initiative.  The strategic objectives connect the dots among the benefits of individual projects, providing the 1 + 1 = 3.  Strategic IG objectives provide both a road map for next steps and a narrative of impact worthy of ongoing executive support.

Strategic IG objectives usually focus on one or more of (1) reducing unnecessary data volumes, (2) retaining and using valuable, reliable data, (3) safeguarding protected and confidential data, and (4) preserving data as required for litigation. Each of these strategic objectives usually also aligns with some combination of (a) ensuring information compliance, (b) controlling information risk, and (c) maximizing information value.

INFORMATION GOVERNANCE STRATEGIC OBJECTIVES

Reduce Unnecessary Data Volumes

  • Compliance: Comply with regulatory and contractual requirements for disposing of information.
  • Risk: Dispose of information not required for legal compliance or business need and reduce creation of unnecessary information, to mitigate data security exposures and data volume litigation exposures.
  • Value: Realize operational cost-savings and increased productivity and efficiency by decreasing the amounts of unnecessary information.

Retain and Use Valuable, Reliable Data

  • Compliance: Comply with regulatory and contractual requirements for retaining and managing information.
  • Risk: Avoid loss of valuable information and protect information vital for continuing operations and enforcing legal rights.
  • Value: Maintain reliable information to support analysis for decision-making and ensure accessibility of reliable information for productivity and efficiency.

Safeguard Protected and Confidential Data

  • Compliance: Comply with regulatory and contractual requirements for privacy and security of protected information and for safeguarding confidential information.
  • Risk: Avoid unauthorized use or compromise of protected and confidential information and detect and respond effectively to breaches and other privacy or security incidents, to minimize reputation damage and legal exposures.
  • Value: Enhance reputation as trusted custodian of protected and confidential information.

Preserve Data for Litigation

  • Compliance: Comply with legal requirements for preserving and collecting data relevant to litigation or regulatory proceedings.
  • Risk: Reduce costs and inefficiencies in preservation and collection and reduce exposures for preservation failures.
  • Value: Achieve more efficient, timely, and accurate case assessment and valuation.

Unlike building a quantified business case for specific projects, the value of attaining strategic IG objectives is usually best expressed qualitatively, highlighting the significant general benefits of improving compliance, mitigating risk, and maximizing information value.  But IG strategic objectives can easily be converted into SMART goals (Specific, Measurable, Achievable, Relevant, and Timely).  To do so, simply adopt the most compelling IG strategic objectives, which provide the strategic direction, and then graft onto them your SMART elements from the related, pending project(s).  For example:

“Reduce unnecessary data volumes [i.e., the strategic objective] by completing Phase 1 of Email Retention and Disposal Project by end of 3rd Q 2025, including implementation of (1) a going-forward storage and retention strategy for record-quality email, (2) a new retention policy for non-record email, and (3) related updates to the legal hold process [i.e., the initial project’s parameters, with incorporated project measures].”

Upon completion of the initial specific project, this same SMART goal can be updated with whatever is the next project to advance this strategic objective:

“Reduce unnecessary data volumes [i.e., the ongoing strategic objective] by completing Phase 2 of Email Retention and Disposal Project by end of 1st Q 2026, including processing of legacy email troves isolated in Phase 1 [i.e., the subsequent project’s parameters, with incorporated project measures].”

With this approach you have the clarity of one or more specific, concrete projects, each with outcomes, measures, and a business case, now tied to strategic objectives for governing compliance, cost, risk, and value for your organization’s information.

That’s powerful, but there’s still something missing – how is all of this relevant to what drives your organization?  To tap into relevance, you will want to align your IG initiative with your organization’s business model or brand, discussed next time.

Management support is crucial for success with Information Governance initiatives. This is not merely a question of initial project and budget approvals. Most Information Governance initiatives involve behavioral changes in how data is handled, and in many instances, aspects of organizational culture may be impacted. No matter the ultimate benefits, any initiative involving behavioral change will require committed support by management to overcome initial push-back. And because effective Information Governance is an ongoing business process, rather than a one-off project, continuing tone at the top is essential.

Attention is always in short supply in organizations – executive focus even more so. Given that reality, your IG initiative will more likely secure the ongoing support it needs if the initiative (1) focuses first on a concrete, measurable project; (2) advances higher-level, strategic objectives for governing the organization’s information; and (3) aligns with the organization’s business model. These three elements provide both the foundation for your initiative and the fuel for sustaining it.  They are also invaluable in demonstrating how the initiative will be relevant to the organization’s success.

The Project(s) at Hand

In most organizations, abstract notions alone are simply not compelling enough to secure resources and drive change. So, what do you specifically and concretely want to accomplish now, in the short run?  What would be a meaningful improvement in governing information compliance, cost, risk, and value, but not such a time-consuming, against-the-odds effort that it will squander momentum or risk early failure?  And what project will involve active participation by some or most of those you want involved in your ongoing initiative, to foster collaboration and ownership?

Common projects under Information Governance initiatives include one or more of the following: (a) reducing email volumes, (b) controlling unstructured data in file shares, (c) mitigating legacy troves of paper or digital records, (d) applying security controls to protected data and repositories, (e) controlling data compliance and risk with service providers, (f) preparing for data breach response scenarios, or (g) simplifying and improving legal hold processes.

Proper framing of a specific IG project clarifies who should be involved, when to start, what resources are needed, and what project success will look like.  Specific projects also tap into a sense of urgency, to get and keep things moving.

A quantified IG business case is best done in the context of specific projects, based on the particular project’s scope, expected outcomes, and the data targeted. Which measures are pertinent in the business case will depend upon the project’s nature and purpose.  For example, let’s say your initial project will focus upon gaining control of excessive, uncontrolled email volumes.  For that project, one can quantify measurable hard cost savings (such as from reduced storage costs and allocated system support costs) and soft cost savings (such as from faster information retrieval, improved productivity, and business process efficiencies).  Remember to consider the costs of expected growth in email volumes over time, comparing the status quo against the cost reductions to be achieved.
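The status-quo-versus-reduction comparison can be sketched with simple arithmetic. All of the figures below (starting volume, growth rate, per-gigabyte cost, and the disposal percentage) are hypothetical placeholders for illustration, not benchmarks from any actual project:

```python
# Hypothetical illustration: compare projected email storage costs under the
# status quo versus after a volume-reduction project. All inputs below are
# placeholder assumptions, not actual cost benchmarks.

def projected_storage_cost(start_gb, annual_growth, cost_per_gb_year, years, reduction=0.0):
    """Total storage cost over `years`, with volume compounding annually.

    `reduction` is the fraction of volume disposed of up front (0.0 = status quo).
    """
    volume = start_gb * (1 - reduction)
    total = 0.0
    for _ in range(years):
        total += volume * cost_per_gb_year
        volume *= 1 + annual_growth
    return total

# Status quo: 5 TB of email growing 20% per year at a hypothetical $0.50/GB-year
status_quo = projected_storage_cost(5000, 0.20, 0.50, 5)
# After project: 40% of volume disposed of up front, same growth rate thereafter
with_project = projected_storage_cost(5000, 0.20, 0.50, 5, reduction=0.40)

savings = status_quo - with_project
```

The point of the sketch is the comparison itself: because volumes compound, the cost of the status quo grows every year, so the savings from an up-front disposal project compound along with it.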

Risk mitigation can also be quantified, such as for an email volume reduction project.  The value of potential ediscovery costs and data security exposures can be estimated based on the data volumes within project scope.  For example, though there are many variables in calculating ediscovery costs, processing costs can range from $25 to $100 per gigabyte, before review fees and production costs. Considering that data volumes in IG project-targeted repositories may range from hundreds of gigabytes up to multiple terabytes, the ediscovery cost of unnecessarily retained data looms large indeed. 
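The $25-to-$100-per-gigabyte processing range cited above lends itself to a quick back-of-the-envelope estimate; the repository size used here is a hypothetical example, not a figure from the text:

```python
# Back-of-the-envelope ediscovery processing cost range, using the $25-$100
# per-gigabyte figure cited above (review fees and production costs excluded).

LOW_PER_GB = 25
HIGH_PER_GB = 100

def processing_cost_range(gigabytes):
    """Return (low, high) estimated processing cost for a repository size in GB."""
    return gigabytes * LOW_PER_GB, gigabytes * HIGH_PER_GB

# Hypothetical example: a 2 TB legacy email trove within project scope
low, high = processing_cost_range(2000)
# low = $50,000; high = $200,000 -- before any review or production costs
```

Even at the low end of the range, a few terabytes of unnecessarily retained data carries a six-figure processing exposure, which is the point of quantifying it in the business case.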

As for quantifying data breach costs, the 2024 IBM/Ponemon annual report Cost of a Data Breach documents the high cost of breaches, with significant variations per industry. Other sources indicate an average cost of $169 per compromised record, though the evolution in attack vectors, such as the rise in ransomware, has made it more difficult to reliably tie the overall costs of data breaches to the number of compromised records. But what remains true is that data breaches are expensive, and there cannot be a breach of data that has already been compliantly disposed of.

Selecting the right initial project(s), determining outcomes and measures, and preparing the business case are important groundwork for your IG initiative.  But to help secure resilient management support for an ongoing initiative, you’ll also want to tie the individual projects to strategic objectives, discussed next time.

Charging Elephant

Apparently, today is Global Information Governance Day. I frankly wasn’t paying attention, because every day is information governance day here. But no snark is meant by this – it’s good to turn such “occasions” into a nudge to revisit our perspectives and refocus on our priorities.

Our firm’s elephant icon is a nod to The Blind Men and the Elephant, the familiar, age-old parable for how we often do not see the big picture, but instead only the parts we directly encounter. And so it goes for organizations’ data. Individual company functions and departments often have their own, limited perspectives on information, seeing only the risks and opportunities with which they are directly familiar. And when considering a new business opportunity, hoped-for value tends to obscure resulting cost and potential exposures. Such limited perspectives yield limited perception – not a good thing for identifying, understanding, and controlling organizational risk.

I actually prefer a slightly different version, The Blind Elephants and the Man:

One day, six blind elephants were in a heated argument about what Man was like. To resolve their dispute, they sought out and found a man. The first elephant “felt” the man and then proclaimed “Man is flat.” Each of the other elephants, in turn, felt the man, and they all agreed.

The moral? Limited perspective not only yields limited perception – it can also lead to very bad results.

“Information Governance” has become an overused buzz-phrase, often trotted out as marketing mumbo-jumbo for selling technology tools.  In all the hype one can easily lose track of what it really means.  At its heart, Information Governance is no more – and no less – than making sure the organization sees the big picture of information compliance, cost, risk, and opportunity when making strategic decisions.

The Information Governance perspective is a ready-made, scalable resource. Any organization can make meaningful headway, right away, by simply adopting an inclusive IG perspective when addressing information matters, before investing in significant organizational changes and expensive technology tools.

What does this mean? Simply this – whenever any information-related issue is addressed or any decision is to be made by your organization, be sure to ask the following:

  • What information is involved, owned by whom, and in whose custody?
  • What privacy requirements and risks are involved?
  • What data security requirements and risks are involved?
  • What records & information management requirements and risks are involved?
  • What litigation preservation and discovery repercussions and risks are involved?

I can’t think of an organizational decision related to information that wouldn’t benefit from this exercise, be it adopting (or walking back) a remote work policy; or reconfiguring external access to IT systems; or pushing data, systems, or platforms out to the cloud; or selecting vendors that will have system access; or outsourcing business operations to a service provider; or hiring personnel from a competitor; or acquiring or divesting a business unit; or moving the corporate headquarters; or engaging in big data analytics; or monetizing data; or exploring use cases for AI; or adding Internet of Things elements to the business model; or … you name it.

Of course more questions can be asked, and the above questions should include consideration of legal requirements, contractual relationships, and business needs. But the point is to make ourselves look beyond our comfortable silos, to see the big picture of information compliance, cost, risk, and value.

Like many worthwhile things, this is simple, but not necessarily easy. We tend to shy away from questions we don’t readily know how to answer, which helps explain why we end up in silos. Answering such questions may require input from various stakeholders, with different perspectives, and their input will need to be synthesized to inform the ultimate decision. But asking these questions helps bring into the decision-making process those who have a stake in the decision and its repercussions.

The Information Governance perspective, in and of itself, will yield better decision-making by your organization on any information-related issue or opportunity. It’s worth the effort to see the entire elephant … before anyone gets stepped on.

I’ve been waiting, and then it arrived, in a client’s updated Outside Counsel Guidelines:

The Firm and its personnel or subcontractors shall not use any External Generative AI Tools in the performance of any services in relation to a Matter … or use External Generative AI Tools to generate any deliverable, document or content for [Client], regardless of the intended end use.

No problem. We don’t use generative AI tools in our client work. 

Don’t get me wrong – we’re not Luddites here, and I’m not dreading a Skynet singularity anytime soon.  We’re a law firm focused on Information Governance, and so we need to remain up to speed on the impacts of a wide range of information technologies.  And the promise of AI is indeed breathtaking.  Sam Altman had a lot on his plate last week, but his Hard Fork interview of November 15, just two days before all the crazy erupted at OpenAI, is a must-listen on the transformative potential for AI, with due regard for attendant risks.

Yet we each choose for ourselves what tools we will use – or not use – when lawyering for clients.  I don’t presume to know whether other lawyers should use generative AI tools, but I do know my answer.  And it’s no for me and my law firm, for three reasons.

First, generative AI is not dependable enough for my taste in doing client work.  And that’s not just the idiocy of filing a brief generated by AI without checking its accuracy.  Simply put, the reliability, or as Altman prefers, the “controllability,” of generative AI is not there yet, or at least not for me.  Hallucination at three percent or closer to 27 percent is well beyond my comfort level – my professional goal remains to hallucinate less than one percent of the time. 

Second, after decades at a big firm, I designed my small firm to be simple, stripping away anything that has more risk or cost than value to me. For example, we don’t use Wi-Fi, because Wi-Fi is unnecessary for us, it has security vulnerabilities, and the requisite safeguards would complicate our firm’s annual ISO 27001 data security audits. For me, using generative AI tools would bring more risk and uncertainty than value to my law practice.

Last, I need to go to the source to ensure I know what I’m doing.  When advising clients on data retention and data security requirements under U.S. federal and state statutes and regulations, I need to personally read the laws and regulations, without any filter or intermediary. Otherwise, I’m not sure. And to be frank, it all sticks better in my head when I take the time to do my work at this deliberate pace, which puts the “sure” in “slowly but surely.”

I know that these are early days for generative AI.  In Gartner Hype Cycle terms, we are still on the Peak of Inflated Expectations, trending toward the Trough of Disillusionment.  I’ll be watching from the sidelines, as AI eventually reaches the Plateau of Productivity. For purely creative writing, including some blogging, I of course acknowledge AI’s kickstarting value. And Kevin O’Keefe nails it when he reminds us that “using AI in legal writing is like taking an open book exam—you still need to understand the concepts.”

So, while state ethics committees will continue to ponder the legal ethics of AI use (despite GPT-4 apparently scoring higher than most lawyers-to-be when taking the Multistate Professional Responsibility Exam), my choice will remain Organic Intelligence, not AI.

No snobbery about this – you do you, and I’ll do me. But in my own legal work for clients, I’m simply not content with using AI-generated content.

As we watch the tsunami of state comprehensive consumer privacy laws now spreading from California across the U.S., it’s time to revisit the flood zone of state-level PII breach notification statutes, which also flowed forth from California back in 2002. By 2018 that wave had reached every state, along with the District of Columbia, Puerto Rico, Guam, and the U.S. Virgin Islands.  Each state has its own unique approach. And the states continue to expand their requirements, especially their definitions of what constitutes PII and the timing and content of mandated notifications. Changes since 2018 are in bold below, reflecting how the tide continues to rise.

Remember, these laws are triggered by the affected individuals’ residency, not where the breach occurred. So, when businesses with employees and customers in many states suffer a data breach, they must stay above water with a wide variety of conflicting and evolving state-level PII breach notification laws. 

Scope of PII

State PII breach notification laws generally apply to a state resident’s name combined with another identifier useful for traditional identity theft, such as the individual’s Social Security number, driver’s or state identification number, or financial account number with access credentials. But an ever-growing number of states include additional “name +” combination elements in their PII definition (again, bold indicates changes since 2018):

  • Medical information (Alabama, Arkansas, Arizona, California, Colorado, Connecticut, Delaware, D.C., Florida, Illinois, Maryland, Missouri, Montana, Nevada, North Dakota, Oregon, Pennsylvania, Puerto Rico, Rhode Island, South Dakota, Texas, Vermont, Washington, and Wyoming)
  • Health insurance information (Alabama, Arizona, California, Colorado, Connecticut, Delaware, D.C., Florida, Illinois, Maryland, Missouri, Montana, Nevada, North Dakota, Oregon, Pennsylvania, Rhode Island, Texas, Vermont, Washington, and Wyoming)
  • Unique biometric data (Arizona, Arkansas, California, Colorado, Connecticut, Delaware, D.C., Iowa, Illinois, Louisiana, Maryland, Nebraska, New Mexico, New York, North Carolina, Oregon, Vermont, Washington, Wisconsin, and Wyoming)
  • Genetic data (Arkansas (used for identification purposes), California, Delaware, D.C., Maryland, Vermont, and Wisconsin)
  • Shared secrets or security token for authentication (Wyoming)
  • Taxpayer ID or other taxpayer information (Alabama, Arizona, California, Connecticut, Delaware, D.C., Maryland, Montana, Puerto Rico, Vermont, Virginia (employee TIN plus withholding), and Wyoming)
  • IRS identity protection PIN (Arizona, Connecticut, and Montana)
  • Employee ID number with access code or password (North Dakota and South Dakota)
  • Email address or Internet account number, with security access information (Alabama, Delaware, Florida, Maryland, Nevada, New Jersey, Pennsylvania, Rhode Island, and Wyoming)
  • Digital or electronic signature (Arizona, North Carolina, North Dakota, and Washington)
  • Birthdate (North Dakota and Washington)
  • Birth or marriage certificate (Wyoming)
  • Mother’s maiden name (North Dakota)
  • Work-related evaluations (Puerto Rico)
  • Information collected by automated license plate recognition system (California)

And in Arizona, California, Colorado, Connecticut, the District of Columbia, Florida, Georgia, Illinois, Indiana, Maine, Maryland, Nebraska, New York, North Carolina, Oregon, South Dakota, Texas, Vermont, and Washington, notification requirements can attach to specified identification data even without the individual’s name (in some such states with the proviso that such information would sufficiently enable unauthorized account access or identity theft).

PII media & encryption/redaction safe harbors

All of the state breach notification laws apply to PII in electronic or computerized form. But in several states, including Alaska, Hawaii, Indiana, Iowa, Massachusetts, North Carolina, Rhode Island, Washington, and Wisconsin, a breach of PII in any medium, including paper records, can trigger notification requirements.

Effective encryption of PII is an explicit safe harbor from notification obligations in virtually every jurisdiction, but 22 states now add the condition that the encryption key must not have been compromised in the breach. Thirty-three states now explicitly provide “redaction” as a safe harbor (with six states adding the condition that the means to un-redact are uncompromised), as do 23 states if other means are used to render the information unreadable or unusable.

Notification requirements  

The mandated time frame for notifying affected individuals has commonly been the most “expeditious” or “expedient” time possible, “without unreasonable delay,” considering such factors as the need to determine the scope of the breach, to restore system integrity, and to identify the affected individuals. But increasingly, states are imposing or tightening outside deadlines for notifications:

  • 60 days: Connecticut (formerly 90 days), Delaware, Louisiana, South Dakota, and Texas (formerly no day limit)
  • 45 days: Alabama, Arizona, Indiana (formerly no day limit), Maryland, New Mexico, Ohio, Oregon, Rhode Island, Tennessee, Vermont, and Wisconsin
  • 30 days: Colorado, Florida, Maine (formerly no day limit), and Washington (formerly 45 days)
  • 10 days: Puerto Rico

Twenty-nine jurisdictions’ statutes now contain prescribed minimum content for breach notifications to individuals, and various states have unique content requirements for such notices.

Thirty-seven of the jurisdictions now require breach reporting to the state’s Attorney General or other designated state agencies, triggered at various specified thresholds of affected individuals, ranging from one to over 1,000. And a similar majority of the states require breach reporting to credit agencies, triggered at differing thresholds, from one to over 10,000.

And all but eight jurisdictions’ statutes now contain some notion of a “risk of harm” exclusion to notification duties, either embedded in the statute’s breach definition or as an independent exception to the duty to notify.

… and the changes keep flowing

The floodplain of these state PII breach notification statutes remains in motion. Notable trends since 2018 include the ongoing rise in states that include biometric data, genetic data, medical information, health insurance information, and taxpayer information as PII, and the continuing increase of states establishing, or shortening, deadlines for making notifications.  States are also becoming yet more directive in specific content requirements for notifications, such as the manner in which credit monitoring and identity theft protection services are offered.

Hopes for the life raft of a preempting federal PII breach notification law remain slim, largely because of states’ concerns about such preemption.  So, businesses must continue to wade through the various compliance requirements in these state laws.  Yes, it continues to be a struggle to stay above water. But keeping up with the changes is crucial — both for security incident response readiness, and also for compliantly defining the scope of information subject to the organization’s security safeguards and controls.

Well, turns out I was both right and wrong in my prediction from two years ago: “For the 2020s, the dots already connect clearly – the new impetus for managing information retention and disposal will be data privacy and security compliance.  Buckle up.” That prediction is indeed playing out, but far faster than I expected.

Again, we’ve always known that managing data volumes is prudent for U.S. businesses.  But as a matter of pure legal compliance, U.S. federal and state laws have traditionally followed a “mandatory minimum” retention approach, requiring that businesses keep specified records for at least a required minimum retention period, but not compelling disposal.  With precious few exceptions, U.S. businesses have not been legally required to (1) manage data with retention schedules and (2) dispose of unnecessary data.  And U.S. privacy and data security laws have generally been silent on retention periods for protected information.

But that was then. As noted two years ago, a wide range of new data security and privacy laws are transforming retention schedules and data disposal from merely prudent practices into compliance requirements. And since then, as explored in this blog series, the pace has quickened, with:

Managing data with retention scheduling and disposing of unnecessary data are now compliance requirements for data privacy and security.

What should you do about this?

  • Clarify what constitutes protected information, based on your business’s geographic footprint and scope of operations.
  • Understand where protected information resides, both in your business’s data systems and through your relationships with service providers and contractors.
  • Update and legally validate your business’s data retention schedule, with particular attention to legally required retention periods, including retention maximums, for records and data sets containing protected information.
  • With that foundation in place, ensure that your business’s policies, contracts, privacy notices, training, and compliance systems foster compliant practices for the safeguarding, timely disposal, and other processing of protected information.

But aren’t these the same things that have always been good to do?  Yes indeed.  Managing records and information (more broadly, Information Governance) has been perennially prudent, particularly as our digital age has multiplied the volume and velocity of business data.

Redundant, obsolete, or trivial/transitory data (ROT) is still stubbornly pervasive. It’s not merely unhelpful – ROT escalates cost, risk, and exposure. Here’s my current favorite image for making elimination of ROT a business priority, from talented Canadian RIM professional Christine (CD) Delay:

Courtesy of Christine (CD) Delay

Yet something else remains true. In the real world, what to do has never been as impactful as why to do it.  In the 2000s, a powerful impetus for managing information retention and disposal was the rise of ediscovery, triggering concerns about (1) explosive litigation costs due to unnecessarily retained data and (2) the specter of spoliation sanctions if information is managed poorly.  In the 2010s, an additional, new impetus was the fear of data breaches, with their resulting reputational damage, business interruption, regulatory implications, and legal exposures, all multiplied by retaining unnecessary data. And now, for the 2020s, the newest impetus for managing information retention and disposal is crystal clear – data privacy and security compliance.

We’re witnessing a “rapid, unscheduled disassembly” (thanks SpaceX) of comprehensive consumer privacy laws across the United States. While these new state laws generally have a different, sleeker structure than California’s CCPA/CPRA, they share a similar impact – each such law compels or motivates covered businesses to delete unnecessary data.

Following California’s lead, comprehensive consumer privacy laws have now been enacted in Virginia (effective January 1, 2023), Colorado (effective July 1, 2023), Connecticut (effective July 1, 2023), Utah (effective December 31, 2023), and Iowa (effective January 1, 2025). Here’s how these new laws address data retention and the deletion of unnecessary data:

Data Minimization and Storage Limitation

  • Virginia Consumer Data Protection Act (VCDPA)
    Under the VCDPA, controllers must limit collection of personal data to what is adequate, relevant, and reasonably necessary in relation to the purposes for which such data is processed, as disclosed to the consumer. Va. Code Ann. § 59.1-578(A)(1). Controllers may only process personal data for purposes either reasonably necessary to or compatible with the purposes for which such personal data is processed, as disclosed to the consumer, unless the consumer’s consent is obtained or as otherwise provided in the VCDPA. Va. Code Ann. § 59.1-578(A)(2).
  • Colorado Privacy Act (CPA)
    The CPA requires that a controller’s collection of personal data must be adequate, relevant, and limited to what is reasonably necessary in relation to the specified purposes for which the data are processed. Colo. Rev. Stat. § 6-1-1308(3). A controller must not process personal data for purposes that are not reasonably necessary to or compatible with the specified purposes for which the personal data are processed, unless the controller first obtains the consumer’s consent. Colo. Rev. Stat. § 6-1-1308(4).
  • Connecticut Data Privacy Act (CTDPA)
    Under the CTDPA, controllers must limit the collection of personal data to what is adequate, relevant, and reasonably necessary in relation to the purposes for which such data is processed, as disclosed to the consumer. CTDPA § 6(a)(1). Controllers must, except as otherwise provided in the CTDPA, not process personal data for purposes that are neither reasonably necessary to, nor compatible with, the purposes for which such personal data is processed, as disclosed to the consumer, unless the controller obtains the consumer’s consent. CTDPA § 6(a)(2).

In each of these laws, the definition of “process” includes the storage and deletion of consumers’ personal information, and so their processing limitation includes an obligation to not unnecessarily retain consumer data. And as with California’s CCPA/CPRA, the obligation to provide consumers a compliant privacy policy on how personal data will be “processed” requires notice of retention practices, which as a practical matter are based upon the business’s records retention policies and records retention schedules.

Defacto Deletion Impact

All five of these new laws provide deletion rights to consumers, but covered businesses are not required to delete data for which retention is required by records retention laws or regulations:

  • Virginia: The VCDPA does not restrict a controller’s or processor’s ability to comply with federal, state, or local laws, rules, or regulations. Va. Code Ann. § 59.1-582(A).
  • Colorado: The CPA does not restrict a controller’s or processor’s ability to comply with federal, state, or local laws, rules, or regulations. Colo. Rev. Stat. § 6-1-1304(3)(a).
  • Connecticut: The CTDPA does not restrict a controller’s or processor’s ability to comply with federal, state, or municipal ordinances or regulations. CTDPA § 10(a).
  • Utah: The UCPA does not restrict a controller’s or processor’s ability to comply with a federal, state, or local law, rule, or regulation. Utah Code Ann. § 13-61-304(1).
  • Iowa: The ICDPA does not restrict a controller’s or processor’s ability to comply with federal, state, or local laws, rules, or regulations. Iowa Code § 715D.7(1).

Thus, similar to the original CCPA, these laws reward covered businesses that carefully manage their data with retention schedules and that delete unnecessary data. Covered businesses that manage personal data under a legally validated retention schedule and that dispose of such data once retention is no longer legally required can avoid uncertainty, inefficiency, and cost in handling consumer deletion requests.
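The practical upshot of these carve-outs can be expressed as a simple decision rule: before honoring a deletion request, the business checks whether its retention schedule still legally requires the record. The following is a minimal sketch only; the record types, retention periods, and legal bases are hypothetical and not drawn from any actual statute or schedule:

```python
from datetime import date, timedelta

# Hypothetical retention schedule: record type -> (retention period, legal basis).
# All entries are illustrative placeholders, not real retention requirements.
RETENTION_SCHEDULE = {
    "tax_record": (timedelta(days=7 * 365), "tax recordkeeping rule (illustrative)"),
    "marketing_profile": (timedelta(days=365), "business need only"),
}

def handle_deletion_request(record_type: str, created: date, today: date) -> str:
    """Decide whether a consumer deletion request can be honored now."""
    period, basis = RETENTION_SCHEDULE[record_type]
    if today < created + period and basis != "business need only":
        # Retention still legally required: the exemptions above permit refusal.
        return f"retain (legally required: {basis})"
    return "delete"

print(handle_deletion_request("tax_record", date(2020, 1, 1), date(2023, 5, 1)))
print(handle_deletion_request("marketing_profile", date(2020, 1, 1), date(2023, 5, 1)))
```

A business with a schedule like this can answer every deletion request mechanically, rather than deciding record by record.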

And on and on…

And it ain’t over yet, not even close. Comprehensive consumer privacy bills are percolating in many more state legislatures across the country. In April alone, three new comprehensive consumer privacy acts passed in state legislatures and were sent to governors in Indiana, Montana, and Tennessee. If signed into law, these three additional states’ laws will have the same double impact on data deletion as those of Virginia, Colorado, and Connecticut, by both (1) explicitly requiring data minimization and storage limitation, and (2) incenting covered businesses to use legally validated retention schedules and data deletion to curb inefficiency and cost in handling customer deletion requests:

  • Indiana Consumer Data Protection Act (INCDPA) (effective January 1, 2026): A controller must limit the collection of personal data to what is adequate, relevant, and reasonably necessary in relation to the purposes for which such data is processed, as disclosed to the consumer, and must not process personal data for purposes that are neither reasonably necessary for nor compatible with the disclosed purposes for which the personal data is processed, unless the controller obtains the consumer’s consent. Ind. Code § 25-15-4-1(1)&(2). “Processing” includes storage and deletion of personal information. Ind. Code § 25-15-2-21. And in handling consumer data requests, the INCDPA does not restrict a controller’s or processor’s ability to comply with federal, state, or local laws, rules, or regulations, such as retention requirements. See Ind. Code § 25-15-8-1(a)(1).
  • Montana Consumer Data Privacy Act (MCDPA) (effective October 1, 2024): A controller must limit the collection of personal data to what is adequate, relevant, and reasonably necessary in relation to the purposes for which the personal data is processed, as disclosed to the consumer; and must not, except as otherwise provided in the MCDPA, process personal data for purposes that are not reasonably necessary to or compatible with the disclosed purposes for which the personal data is processed, unless the controller obtains the consumer’s consent. MCDPA § 7(1)(a)&(2)(a). “Processing” includes storage and deletion of personal data. MCDPA § 2(17). And when handling consumer data requests, nothing in the MCDPA restricts a controller’s or processor’s ability to comply with federal, state, or municipal ordinances or regulations, such as retention requirements. See MCDPA § 11(1)(a).
  • Tennessee Information Privacy Act (TIPA) (effective July 1, 2025): A controller must limit the collection of personal information to what is adequate, relevant, and reasonably necessary in relation to the purposes for which the data is processed, as disclosed to the consumer; and, except as otherwise provided in TIPA, must not process personal information for purposes that are beyond what is reasonably necessary to and compatible with the disclosed purposes for which the personal information is processed, unless the controller obtains the consumer’s consent. Tenn. Code Ann. § 47-18-3204(a)(1)&(2). “Processing” includes storage and deletion of personal information. Tenn. Code Ann. § 47-18-3201(18). And for responding to consumer data requests, TIPA does not restrict a controller’s or processor’s ability to comply with federal, state, or local laws, rules, or regulations, such as data retention requirements. See Tenn. Code Ann. § 47-18-3208(a)(1).

California set all of this in motion with the CCPA. But remember, this is not the first time that California has lit a match on data-related laws that then swept across the United States. Consider this: in 2003, only California had a state-level law requiring notification of individuals whose PII had been breached. By 2018, PII breach notifications were required by statute in all 50 states, the District of Columbia, Guam, Puerto Rico, and the U.S. Virgin Islands.

And so, as comprehensive data privacy legislation ignites across the states in 2023 and beyond, the imperative will only escalate for businesses to manage their data with retention schedules and to dispose of unnecessary data.

Last month California finalized its updated regulations under the California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act (CPRA). With the CPRA, California has upped the ante on requiring data retention schedules and disposal of unnecessary data.

As always, to fully appreciate where we are, we need to remember from where we’ve come. With but rare exceptions, U.S. data privacy laws have not explicitly required data retention schedules, or data minimization (only collect data we need), or storage limitation (dispose of data when no longer needed). But this began to change in January 2020 with the CCPA, the United States’ first state-level comprehensive data privacy law. 

CCPA incented retention schedules and data disposal

The CCPA mandated data minimization through the vehicle of notice, by prohibiting covered businesses from collecting additional categories of PI, or using collected PI for purposes beyond those disclosed at collection, without first notifying the consumer. Cal. Civ. Code § 1798.100(b). Yet as originally enacted, the CCPA did not explicitly require either retention scheduling or, absent a consumer’s verifiable deletion request, data disposal.

But the CCPA quietly made managing data retention and disposal a practical priority, due to the repercussions of the CCPA’s deletion right. Consumers’ ability under the CCPA to request deletion of their PI shifted decision-making power over data retention from the covered business to its consumers, leaving the business at their unpredictable mercy: some consumers might be fine with, or oblivious to, lengthy data retention, while others could insist, through verifiable deletion requests, that their PI be disposed of promptly. The result was a costly and inefficient predicament for such businesses.

Yet the CCPA’s deletion right has safe harbors. A covered business can compliantly refuse a deletion request if retaining the consumer’s PI is necessary for such matters as completing the transaction with the consumer, performing a contract with the consumer, or to “[c]omply with a legal obligation.” Cal. Civ. Code § 1798.105(d). And the CCPA does not restrict a covered business’s ability to “[c]omply with federal, state, or local laws,” such as legal retention requirements. Cal. Civ. Code § 1798.145(a)(1).

These are precisely the kind of factors used to establish retention periods in a well-constructed data retention schedule. And so, covered businesses that manage personal data under a legally validated retention schedule and that dispose of such data once no longer required can avoid uncertainty, inefficiency, and cost in handling CCPA consumer deletion requests.

CPRA explicitly requires retention schedules and data disposal

Effective January 1, 2023, the CPRA made sweeping changes to the CCPA. And regarding retention schedules and data disposal, while the CCPA was indirect, the CPRA says the quiet part out loud – loud and clear. Under the CPRA, covered businesses:

  • Must inform consumers how long the business intends to retain each category of PI the business collects, or if that is not possible, the criteria used to determine the retention period.
  • Must not retain PI for longer than is reasonably necessary and proportionate for the disclosed purpose(s) of collection or processing.

Cal. Civ. Code § 1798.100(a)(3) & (c).  Thus, for the first time under any U.S. comprehensive data privacy law, the CPRA explicitly and directly requires covered businesses to both (1) manage the CPRA’s broad range of PI under data retention schedule rules disclosed through notice to consumers, and (2) dispose of PI once it is no longer required for legal compliance or as reasonably necessary for the disclosed purposes for its collection and use.

The CPRA’s enactment also marked another important change in the impact of these requirements. Under the CCPA, the PI of covered businesses’ employees was exempt from the various consumer rights, including the deletion right. But coinciding with enactment of the CPRA, the employee PI exemption expired. So now, the CPRA’s retention schedule and data deletion requirements also apply to employee data.

The CPRA maintains consumers’ CCPA rights to request PI access and disposal, and it adds new consumer rights, such as the right to correct inaccurate PI and to limit use and disclosure of sensitive PI. As a result, the same practical incentives continue, as under the original CCPA, for covered businesses to carefully manage data retention and disposal. Prudent businesses will still want to manage retention of PI carefully, given the logistics, cost, and inefficiency of responding to verifiable requests. And because of the deletion right’s safe harbors, covered businesses that dispose of PI under a legally validated retention schedule, once the PI is no longer needed to comply with legal retention requirements or to serve the consumer transaction or contract, will be free of the cost, inefficiency, and unpredictability of selectively deleting the PI of individual consumers.

But because it also contains direct, explicit requirements for data minimization and storage limitation, the CPRA elevates data retention schedules and disposal of unnecessary data from prudent practice to direct, explicit compliance requirements.

The civil and administrative enforcement date for the CPRA and its newly finalized regulations is upon us – July 1, 2023. And California no longer stands alone in using comprehensive privacy laws to compel data minimization and data storage limitation. We’ll explore that next time, in Less Data #6.

There’s been a blogging blizzard about two recent cases interpreting the Illinois Biometric Information Privacy Act (BIPA). In early February 2023, the Illinois Supreme Court ruled in Tims v. Black Horse Carriers, Inc. that a five-year limitations period applies to all BIPA claims. And just two weeks later, in Cothron v. White Castle Sys., Inc., the Court held that “a claim accrues under [BIPA] with every scan or transmission of biometric identifiers or biometric information without prior informed consent.”

These are indeed important rulings. Because BIPA authorizes private lawsuits for statutory damages without a showing of actual injury, these decisions add more fuel to the fire of BIPA class action litigation.

But there’s yet another recent BIPA case that’s important for what it signals about privacy laws that require data retention schedules and disposal of unnecessary data.

First, some background. With rare exceptions, U.S. privacy laws traditionally have not required either data minimization (only collect the sensitive data actually needed) or storage limitation (only keep such data while needed for its collection purposes). One of those early outliers was BIPA, first effective in 2008.

Our focus here is on a particular provision in BIPA, in Section 15(a):

A private entity in possession of biometric identifiers or biometric information must develop a written policy, made available to the public, establishing a retention schedule and guidelines for permanently destroying biometric identifiers and biometric information when the initial purpose for collecting or obtaining such identifiers or information has been satisfied or within 3 years of the individual’s last interaction with the private entity, whichever occurs first. Absent a valid warrant or subpoena issued by a court of competent jurisdiction, a private entity in possession of biometric identifiers or biometric information must comply with its established retention schedule and destruction guidelines.

740 ILL. COMP. STAT. 14/15(a) (emphasis added).

BIPA thus included elements of data privacy seldom seen in other U.S. data privacy laws at that time – data is tied to the purpose(s) for collecting it (data minimization), and the regulated data must be disposed of after no longer necessary, pursuant to a written data retention schedule (storage limitation).
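Section 15(a)’s destruction trigger is concrete enough to compute: data must be destroyed at the earlier of purpose satisfaction or three years after the individual’s last interaction. A sketch of that “whichever occurs first” rule, with all dates hypothetical and three years approximated for illustration:

```python
from datetime import date, timedelta
from typing import Optional

def bipa_destruction_deadline(purpose_satisfied: Optional[date],
                              last_interaction: date) -> date:
    """Earlier of purpose satisfaction or ~3 years after the individual's
    last interaction ("whichever occurs first"), per 740 ILCS 14/15(a).
    Three years is approximated here as 3 * 365 days."""
    three_years_out = last_interaction + timedelta(days=3 * 365)
    if purpose_satisfied is None:  # collection purpose not yet satisfied
        return three_years_out
    return min(purpose_satisfied, three_years_out)

# Hypothetical: employment (the collection purpose) ended Jan 7, 2021,
# which was also the last fingerprint scan -> destroy by that date.
print(bipa_destruction_deadline(date(2021, 1, 7), date(2021, 1, 7)))
```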

And this brings us to the Illinois Court of Appeals decision in Mora v. J&M Plating, Inc., 2022 IL App (2d) 210692, 2022 WL 17335861 (Ill. App. Ct. November 30, 2022). Mora dispels any notion that BIPA’s data retention schedule requirement is inconsequential.

Plaintiff Mora was hired by the defendant employer in July 2014 and began clocking into work by fingerprint scan in September 2014.  In May 2018, Mora’s employer published its biometric data privacy policy, which contained a retention and destruction schedule for biometric data. Mora signed the policy notice and consented to the collection and use of his biometric data. Mora was terminated from employment on January 7, 2021, and his biometric information was destroyed approximately two weeks after his termination.  Nine days later, Mora filed a class action lawsuit against his former employer, alleging BIPA violations.  Id. at *3.

There was no dispute over whether the employer’s policy complied with BIPA. The “J&M PLATING BIOMETRIC INFORMATION PRIVACY POLICY” contained the BIPA-required retention and destruction schedule and provided the mandated privacy notice.  Id. at fn. 2.  Plaintiff Mora consented in 2018 to the employer’s collection and use of his biometric data, and the employer disposed of Mora’s data in compliance with its BIPA policy after plaintiff was terminated.  Id. at *3.

The problem, and the reason why the Illinois Court of Appeals reversed the trial court’s dismissal of plaintiff’s lawsuit, was instead the lack of the BIPA-required retention schedule and privacy notice from 2014 to 2018.  According to the court, “defendant began collecting plaintiff’s biometric data in September 2014, and this triggered its obligation under [BIPA] section 15(a) to develop a retention-and-destruction schedule. Defendant did not have a schedule in place until May 2018, or nearly four years later. Thus, it violated section 15(a).”  And because a BIPA violation is sufficient to support an individual’s statutory cause of action, no showing of actual harm to Plaintiff was required.  Id. at *8.

What does this mean? Certainly, BIPA-covered businesses are on notice that they must comply with BIPA’s data retention schedule requirement before collecting any covered data. Yet more broadly, this case signals that any privacy law requiring data retention schedules may result in enforcement consequences for companies that fail to establish and maintain such data retention scheduling.

And BIPA is no longer an outlier – there’s a growing wave of state-level comprehensive consumer privacy laws that require data minimization and storage limitation, covering a far broader range of personal information than under BIPA. We’ll take a closer look at that next time, in Less Data #5.

We’ve already seen how new FTC regulations for GLBA-regulated financial institutions require retention schedules and disposal of unnecessary data as essential data security controls. The FTC is now also taking that position for all businesses under Section 5 of the FTC Act, as seen in a slew of recent FTC data security enforcement actions.

Two years ago I summarized the history of FTC enforcement on this issue. For decades the FTC has enforced reasonable data security under the authority of Section 5 of the FTC Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce.” 15 U.S.C. § 45(a)(1).  The FTC has pursued inadequate security practices of both large and well-known businesses and of small and obscure companies.  But the common theme is that the targeted business, according to the FTC, either deceptively or unfairly engaged in unreasonable data security practices for consumers’ personal information.

What was notable two years ago were the FTC’s Section 5 enforcement actions against InfoTrax Systems in late 2019 and SkyMed International in early 2021. In re InfoTrax Systems, L.C., No. C-4696 (F.T.C. December 30, 2019) (final complaint & consent order); In re SkyMed International, No. C-4732 (F.T.C. January 26, 2021) (final complaint & consent order). In each of these enforcement actions, the FTC alleged that the business “failed to have a policy, procedure, or practice for inventorying and deleting consumers’ personal information stored on [its] network that is no longer necessary….”  And in each consent order the FTC required “[p]olicies, procedures, and technical measures to systematically inventory Personal Information in [its] control and delete Personal Information that is no longer necessary….”

I ended that 2021 post by observing “[i]f the FTC’s position in SkyMed and Infotrax takes hold more broadly, the repercussions for over-retention will be sweeping in scope.”

Sweeping indeed. In a flurry of 2022 and 2023 enforcement actions, the FTC has now doubled down on its position that reasonable data security requires data retention schedules and disposal of unnecessary data:

Residual Pumpkin (and later its purchaser PlanetArt) operated the platform CafePress.com, on which consumers purchased customized t-shirts, coffee mugs, and similar merchandise from other consumers or “shopkeepers.” CafePress’s operators routinely collected information from consumers and shopkeepers, including names, email addresses, telephone numbers, birth dates, gender, photos, social media handles, security questions and answers, passwords, PayPal addresses, the last four digits and expiration dates of credit cards, and shopkeepers’ Social Security or tax identification numbers. They stored this sensitive personal information in clear text, except for passwords, which were encrypted.

In its 2021 Section 5 enforcement action complaint, the FTC alleged that CafePress’s operators failed to protect the personal information of buyers and sellers stored on its network and to adequately respond to multiple security breaches.  Among other inadequate security practices,  CafePress’s operators “created unnecessary risks to Personal Information by storing it indefinitely on its network without a business need.”

The FTC approved a settlement and consent agreement with CafePress’s operators on June 23, 2022. The consent order mandates that CafePress’s operators establish, implement, and maintain a comprehensive information security program that protects the privacy, security, confidentiality, and integrity of collected personal information, including “[p]olicies and procedures to minimize data collection, storage, and retention, including data deletion or retention policies and procedures….” The FTC also assessed a civil penalty of $500,000.

Drizly, an Uber subsidiary, operates an e-commerce platform through which local retailers sell alcohol online to adult customers.  The Drizly platform collects and stores both personal information that consumers provide and information that it automatically obtains from consumers’ computers and mobile devices.

In its 2022 Section 5 enforcement action complaint against both Drizly and its cofounder and CEO Rellas, the FTC alleged that data security failures led to a data breach exposing personal information of 2.5 million consumers.  Among other alleged security failures, Drizly failed to “[h]ave a policy, procedure, or practice for inventorying and deleting consumers’ personal information stored on its network that was no longer necessary.”

The FTC finalized the settlement and consent agreement with Drizly and Rellas on January 10, 2023. The consent order mandates that Drizly destroy any collected personal data not necessary to provide products or services to consumers, document and report to the Commission what data it destroyed, and refrain from collecting or storing personal information unless it is necessary for specific purposes outlined in a retention schedule. And to punctuate the FTC’s resolve, the consent order also requires Rellas to implement an information security program at future companies if he moves to a business that collects consumer information from more than 25,000 individuals, or where he is a majority owner, CEO, or senior officer with information security responsibilities.

  • In re Chegg, Inc., No. C-4782 (F.T.C. January 25, 2023) (complaint & consent order)

Chegg markets and sells direct-to-student educational products and services, primarily to high school and college students.  Chegg collects sensitive personal information from users, such as information about users’ religious denomination, heritage, birthdate, parents’ income range, sexual orientation, and disabilities for Chegg’s scholarship search service, and users’ images and voice in connection with Chegg’s online tutoring services.  As an employer, Chegg also collects such personal information as employees’ names, birth dates, Social Security numbers, and financial information.

In its Section 5 enforcement action complaint, the FTC alleged that Chegg’s poor data security practices resulted in four separate data breaches and the unauthorized publication of 40 million customers’ personal information. Among other alleged security lapses, Chegg “failed to have a policy, process, or procedure for inventorying and deleting users’ and employees’ personal information stored on Chegg’s network after that information is no longer necessary….”

On January 25, 2023, the FTC approved a settlement and consent agreement with Chegg. The consent order requires Chegg to establish, implement, and maintain a comprehensive information security program that protects the security, availability, confidentiality, and integrity of specified personal information of customers under Respondent’s control, including, among other security controls, “[p]olicies and procedures to minimize data collection, storage, and retention, including data deletion or retention policies and procedures….” The consent order further requires Chegg to:

“Document and adhere to a retention schedule for Covered Information [meaning types of consumer personal information as defined in the consent order]. Such schedule shall set forth: (1) the purpose or purposes for which each type of Covered Information is collected; (2) the specific business needs for retaining each type of Covered Information; and (3) a set timeframe for deletion of each type of Covered Information (absent any intervening deletion requests from consumers) that precludes indefinite retention of any Covered Information….”
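The three elements this order mandates map naturally onto a per-category schedule entry. Here is one way such a schedule might be represented, with wholly hypothetical categories, purposes, and timeframes (nothing below is drawn from the actual consent order or Chegg’s practices):

```python
from dataclasses import dataclass

@dataclass
class ScheduleEntry:
    """One row of a retention schedule, mirroring the consent order's three
    required elements. All values used below are hypothetical examples."""
    category: str        # type of Covered Information
    purpose: str         # (1) purpose(s) for which it is collected
    business_need: str   # (2) specific business need for retaining it
    retention_days: int  # (3) set timeframe precluding indefinite retention

schedule = [
    ScheduleEntry("account_email", "account login and support",
                  "maintain active accounts", 730),
    ScheduleEntry("tutoring_recordings", "deliver tutoring sessions",
                  "quality review", 90),
]

# Every entry must carry a finite timeframe -- no indefinite retention.
assert all(entry.retention_days > 0 for entry in schedule)
```

Whatever form the schedule takes, the point of the order is that each category carries all three elements and a finite deletion timeframe.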

The FTC is also homing in on unnecessary data retention in its recent privacy enforcement actions under FTC Act Section 5, punctuated by millions of dollars in civil penalties:

GoodRx Holdings, Inc. is a “consumer-focused digital healthcare platform” that advertises, distributes, and sells health-related products and services directly to consumers. The FTC investigated GoodRx’s sharing of customer personal and health information with third-party social media platforms and advertisers, as violations of FTC Act Section 5 and also of the FTC’s Health Breach Notification Rule. The matter was resolved with a Stipulated Order for Permanent Injunction, Civil Penalty Judgment, and Other Relief filed in February 2023 in the United States District Court for the Northern District of California.

Among the order’s various requirements, GoodRx must identify and instruct all entities that received personal information of GoodRx’s customers to delete all such information wrongfully received from GoodRx and to confirm such deletion in writing.  GoodRx must also establish, implement, and maintain a comprehensive privacy program that protects the privacy, security, availability, confidentiality, and integrity of the consumers’ personal information.  One mandated safeguard for the privacy program is that GoodRx must establish and maintain a data retention policy that includes:

“a retention schedule that limits the retention of Covered Information for only as long as is reasonably necessary to fulfill the purpose for which the Covered Information was collected; provided, however, that such Covered Information need not be destroyed, and may be disclosed, to the extent requested by a government agency or required by law, regulation, or court order;” and

“a requirement that each Covered Business document, adhere to, and make publicly available … a retention schedule for Covered Information, setting forth: (1) the purposes for which such information is collected; (2) the specific business need for retaining each type of Covered Information; and (3) a set timeframe for Deletion of each type of Covered Information (absent any intervening Deletion requests from consumers) that precludes indefinite retention of any Covered Information.”

The Stipulated Order also assessed a civil penalty against GoodRx of $1,500,000.

BetterHelp offers online counseling services. Consumers fill out a questionnaire with sensitive mental health information and also provide their name, email address, birth date, and other personal information. BetterHelp promised consumers that it would not use or disclose their personal health data except for limited purposes, such as to provide counseling services. But according to the FTC, BetterHelp provided consumers’ email addresses, IP addresses, and health questionnaire information to such social media platforms as Facebook, Snapchat, Criteo, and Pinterest for advertising purposes, which, along with other alleged data security and privacy program shortcomings, violated Section 5 of the FTC Act.

On March 2, 2023, the FTC approved a consent order with BetterHelp, subject to a thirty-day public comment period. The terms of the consent order mirror those in GoodRx summarized above, including the requirement that BetterHelp instruct entities to delete customer information wrongfully received from BetterHelp and to confirm such deletion, and also the same requirements to document, adhere to, and publish a retention schedule for consumers’ personal information “that precludes indefinite retention of any Covered Information.”

The FTC also assessed a civil penalty against BetterHelp of $7,800,000.

The FTC is not being subtle about this. In case the message hasn’t landed, a February 2023 FTC blog post laid out three key elements for systemically addressing the security and privacy risks of complex data systems. Beyond multi-factor authentication and encrypted/authenticated system connections, what is the third crucial element? You guessed it:

(3) Requiring companies to develop a data retention schedule, publish it, and then stick to it

A final provision is a requirement to develop a data retention schedule, publish it, and then stick to it. This embraces the premise that the most secure data is the data that’s not stored at all. Further, implementing this requirement inevitably requires companies to have a strong internal catalogue of all the data they store. This provides other benefits, such as ensuring that they’ll be able to comprehensively comply with requests from users to delete data and have the information needed to prioritize protections based on the types of data they’re storing.