This blog post is co-authored by Seyfarth Shaw and The Chertoff Group and has been cross-posted with permission.

What Happened

On July 26, the U.S. Securities and Exchange Commission (SEC) adopted its Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure final rule on a 3-2 vote. The final rule is a modified version of the SEC’s earlier Notice of Proposed Rulemaking (NPRM) released in March 2022. The final rule formalizes and expands on existing interpretive guidance requiring disclosure of “material” cybersecurity incidents.

Continue Reading SEC Publishes Public Company Cybersecurity Disclosure Final Rule

This post was originally published to Seyfarth’s Global Privacy Watch blog.

On July 10th, the European Commission issued its Implementing Decision regarding the adequacy of the EU-US Data Privacy Framework (“DPF”). The Decision has been eagerly awaited by US- and Europe-based businesses hoping it will help streamline trans-Atlantic data transfers, and by activists who have vowed to scrutinize the next framework arrangement (thereby maintaining their relevance). Regardless of the legal resiliency of the decision, it poses an interesting set of considerations for US businesses, not the least of which is whether or not to participate in the Framework.

For those who followed the development and demise of the Privacy Shield program and the Schrems II case, it has been apparent for some time that the fundamental objection of the activists and the Court of Justice of the EU (“CJEU”) to the original Privacy Shield was the perception that the US intelligence community had an ability to engage in disproportionate data collection without any possibility of recourse by EU residents whose personal information may be swept into an investigation. The actual functioning of the program for certifying businesses was much less controversial.

Since the structure of the program wasn’t the primary reason for Privacy Shield’s revocation, from a business perspective, the current DPF looks a lot like the old Privacy Shield. For businesses who made the decision to participate in the Privacy Shield program in the past, the operational burden shouldn’t be much different under the new DPF, if they have already taken steps to operationalize the requirements.

What is interesting about the new DPF is how it may impact a company’s decision to choose between the Standard Contractual Clauses (“SCCs”) and the alternative adequacy mechanism for transfers. There is also some interest vis-à-vis the DPF and its interactions with state privacy laws.

DPF v. SCCs

One of the components of the new SCCs that were adopted in 2021 (which did not exist in the prior version of the SCCs) is the requirement for all SCCs to be accompanied by a transfer impact assessment (“TIA”)[1]. A TIA is designed to assess whether there are legal barriers to the enforcement of the SCCs in the relevant importing jurisdiction – in this case, the US. Many commentators, and some courts, have applied the Schrems II reasoning to argue that use of the SCCs as a transfer mechanism to the US is not effective in all circumstances, because the Foreign Intelligence Surveillance Act (“FISA”) authorizes US intelligence to engage in bulk collection under section 702, and such programs, the argument goes, are not proportionate and lack the reasonable safeguards required under EU law.

Although the SCCs are still used to transfer European data to the US (mostly because after Privacy Shield was invalidated, practically speaking, they had been the only remaining transfer mechanism for many businesses), several commentators have taken the position that, if Schrems II is taken to its logical conclusion, then any use of SCCs in the US is effectively impossible, because US companies cannot live up to their promises in the SCCs. This was noted in an expert report commissioned by the German Conference of Independent Data Protection Supervisors to assess the broad reach of FISA section 702 programs. Needless to say, companies who undertake a TIA as part of their deployment of SCCs also face some level of uncertainty as to its effectiveness, since a TIA is not the opinion of a supervisory authority, but rather the company’s own interpretation and that of its legal counsel – an interpretation the expert report may cast doubt on.

The DPF is not plagued by such uncertainty. Specifically, recital 200 of the Decision expressly states the legal protections surrounding FISA programs are adequate and “essentially equivalent” to EU protections related to intelligence and national security. This is a momentous declaration, in our estimation, because as a consequence, participation by a company in the DPF appears to remove the need for a TIA for a transfer mechanism. Put another way, recital 200 provides binding authority for the assertion that the primary motivation for a TIA (i.e., FISA section 702 programs) is now moot: DPF participants have sufficient safeguards, even in light of FISA 702, without undertaking a TIA. Note that the removal of the TIA requirement only works for participants in the DPF; TIAs are still required when relying on the SCCs as a transfer mechanism.

DPF v. State Law

Because the DPF establishes “essentially equivalent” controls for participants, the differences between the scope and requirements of EU privacy law and US state privacy law are brought into more apparent contrast. Looking across the two general frameworks, the differences in concepts, protective requirements, and other controls may actually motivate businesses who are already subject to the various state omnibus privacy laws to skip participation in the DPF. This is mostly because the DPF is a bit more reasonable to businesses with respect to the exercise of individual rights than some state laws.

For example, the GDPR does not require the controller to comply in full with an access request if the response would “adversely affect the rights” of others, including a business’s trade secrets or intellectual property[2]. The California Consumer Privacy Act has no such express limitation related to a business’s data. That being said, there are a number of possible arguments available under other laws (trade secret, confidentiality obligations, etc.) that could justify limiting a response to an access request. However, those limitations are not express in the California law – and they are in the GDPR and the DPF.

Similarly, the principles in the GDPR and DPF allow for a denial of an access request where responding to such request triggers an undue burden on the business. The California law’s limitation is a bit narrower than the GDPR/DPF limitation in this instance. California requires responsive disclosures to access requests unless the request is “manifestly unfounded or excessive.”[3] This standard is narrower than the DPF standard of “…where the burden or expense of providing access would be disproportionate to the risks to the individual’s privacy…”[4].

Conclusion

This lack of alignment between DPF requirements and state law may lead to operational confusion and uncertainty for US businesses interested or actively involved in the transfer of personal information from the EU. Regardless of the confusion related to the overlapping US and EU privacy laws, businesses who have previously participated in and are familiar with the Privacy Shield program may find it useful to also participate in the DPF. Additionally, for some business models, participation in the DPF can mean reduced administrative and legal costs as compared to putting in place and maintaining SCCs. However, it must be remembered that the DPF is not the same as compliance with US state privacy laws – even though some omnibus state privacy laws echo GDPR concepts. There are significant distinctions which have to be managed between the tactical implementation of a privacy program for US state law and a DPF compliance program.

Finally, even though some have committed to challenging the DPF at the CJEU, the Commission’s approval of the DPF does not necessarily call for a “wait and see” approach. It is instead a time for companies to carefully evaluate and review their transfer activities, their regulatory obligations, and the most appropriate path forward. All these years after Schrems II, it is at least nice to have a potential alternative to SCCs, in the right business conditions.


[1] Commission Implementing Decision (EU) 2021/914, Recitals 19 through 21.

[2] GDPR, Article 15(4) and Recital 63.

[3] Cal. Civ. Code 1798.145(h)(3).

[4] Commission Implementing Decision for DPF Annex I, Section III.4; 8.b, c, e; 14.e, f and 15.d.

2023 has brought several states into the privacy limelight. As of Sunday, June 18, and with the signature of Texas governor Greg Abbott, the Texas Data Privacy and Security Act (“TDPSA”) was enacted, making the Lone Star state the tenth in the U.S. to pass a comprehensive data privacy and security law. The Act provides Texas consumers the ability to submit requests to exercise privacy rights, and extends to parents the ability to exercise rights on behalf of their minor children.

The Texas Act provides the usual complement of data subject rights relating to access, corrections, data portability, and to opt out of data being processed for purposes of targeted advertising, the sale of personal information, and profiling where a consumer may be significantly or legally affected. It also requires that covered businesses provide a privacy notice and other disclosures relevant to how they use consumer data.

Application Standard

Among the ten state-level comprehensive privacy laws, the TDPSA is the first to remove processing and profit thresholds from the applicability standard. Instead of using these thresholds to determine whether an entity is covered, the TDPSA applies to persons that: (1) conduct business in Texas or produce products or services consumed by Texas residents; (2) process or engage in the sale of personal data; and (3) are not a small business as defined by the United States Small Business Administration.[i]

Definitions and Obligations of Controllers and Processors

The TDPSA’s definition of “sale of personal data” aligns closely with that of the California Consumer Privacy Act (“CCPA”). It refers to “the sharing, disclosing, or transferring of personal data for monetary or other valuable consideration by the controller to a third party.” The Act defines “process” as “an operation or set of operations performed,” whether manually or automatically, on Texas consumers’ personal data or sets of data. This includes the “collection, use, storage, disclosure, analysis, deletion, or modification of personal data.” Unlike the CCPA, the law exempts data processed for business-to-business and employment purposes.

Covered businesses who are data controllers must provide consumers with “a reasonably accessible and clear” privacy notice. These notices include the categories of personal data processed by the controller, the purpose of processing personal data, the categories of data shared with third parties, and methods by which consumers can exercise their rights under the law. If a controller’s use of sensitive personal data or biometric data constitutes a sale, one or both of the following must be included:

  • “NOTICE: This website may sell your sensitive personal data.”
  • “NOTICE: This website may sell your biometric personal data.”

Processors, which are akin to “service providers” under the CCPA, are those people or businesses that process personal data on behalf of the controller. Processors have a number of obligations under the TDPSA, including assisting controllers in responding to consumer rights requests and data security compliance. All processors will need to have a written data protection agreement (“DPA”) in place with a controller, which will include:

  1. “clear instructions for processing data;
  2. the nature and purpose of processing;
  3. the type of data subject to processing;
  4. the duration of processing;
  5. the rights and obligations of both parties; and
  6. a number of requirements that the processor shall follow under the agreement.”

Processors will be required to ensure confidentiality of data, and at the controller’s direction, a processor must delete or return all of the information at the conclusion of the service agreement. However, this deletion requirement excludes data that must be retained pursuant to the processor’s records retention obligations.

Processors must also certify that they will make available to the controller all information necessary to demonstrate the processor’s compliance with the requirements of this chapter, and that they will comply with a controller’s assessment of their security practices. Lastly, should a processor engage any subcontractor, they will need another written contract that meets all of the written requirements set forth by the controller’s DPA.

Yet another state law DPA requirement brings into question whether businesses, particularly those on the national and multi-national level, are going to need separate addenda to service agreements that recite a la carte provisions that include separate definitions and commitments to comply with each state-level privacy law, as well as international data privacy laws such as the EU’s GDPR. Based on the new and upcoming privacy laws in the U.S., businesses can probably still operate using some form of uniform DPA that accounts for each of the different requirements. However, we may soon reach a point where the pages of all the appendices and addenda to comply with separate state requirements are greater than the typical service agreement contract.

Data Protection Assessment Requirement

The TDPSA mandates that controllers conduct and document data protection assessments. These assessments, which mirror those required by the Connecticut Data Privacy Act, require businesses to identify the purposes for which they are collecting and processing data, as well as the associated risks and benefits of that processing. Businesses will need to assess these benefits and risks for the following categories of processing activities involving personal data:

  1. Processing of personal data for targeted advertising;
  2. The sale of personal data;
  3. Processing for purposes of profiling if there is a reasonably foreseeable risk of unfair or deceptive treatment, injury to consumers (including financial and reputational risk), or physical intrusion to an individual’s solitude or seclusion;
  4. Processing of sensitive data; and
  5. Any processing activities that generally may pose a heightened risk of harm to consumers.

Enforcement

The TDPSA expressly states that it does not provide a private right of action. The Texas Attorney General holds exclusive power to field consumer complaints, investigate, issue written warnings, and ultimately enforce violations of the law. The AG may seek both injunctive relief and civil penalties of up to $7,500 for each statutory violation.

The Texas AG also has the authority to investigate controllers when it has reasonable cause to believe that a business has engaged in, is engaging in, or, interestingly – is about to engage in – a violation of the TDPSA. While the Senate version suggested revising this authority to remove pre-violation (thought crime?) investigations, the language withstood its scrutiny and remained in the signed act. This was one of many suggested changes by the Senate prior to the bill’s passing.

Back and Forth Between Texas Legislative Houses

As the TDPSA was drafted, a few notable revisions were made between the House and the Senate versions of the bill. However, most of the Senate’s proposed additions were not ultimately accepted. To start, the Senate added language that expressly excluded from its definition of biometric data “a physical or digital photograph or data generated from” photographs. Further, under the definition of “sensitive data,” the Senate removed the House’s language that included sexual orientation data. Notably, the sexual orientation language has since been re-added to the finalized version of the law.

The House and Senate also went back and forth on some of the other definitions in the Act, including which entities are exempt. Under the “state agency” definition, the Senate broadened the language to include any branch of state government, rather than only those falling within the executive branch of government – which was the House’s version. Ultimately, the Senate language made it into the finalized version.

For the most part, the two legislative groups were in agreement as to which entities were exempt from the TDPSA. These include exemptions for institutions and data subject to Title V of the Gramm-Leach-Bliley Act,[ii] HIPAA,[iii] and HITECH,[iv] as well as non-profits and higher education institutions. The Senate sought to add electric utilities and power companies to this list, but the finalized version kept them off the exempt list.

The Senate’s revisions also proposed language allowing consumers to authorize a designated agent to submit rights requests (similar to what we see in the CCPA), but that language was not signed into law. The TDPSA does not let anyone act on another person’s behalf to exercise their privacy rights, other than parents acting on behalf of their minor children.

Conclusion

Section 541.055(e), which provides consumers the ability to have a third-party authorized agent submit opt-out requests on their behalf, goes into effect January 1, 2025. Other than that rather narrow delayed measure, the rest of the TDPSA is set to go into effect on July 1, 2024. It will be one of the broadest-reaching privacy laws in the U.S., particularly because of its unique applicability standard. Its mandatory data protection assessments and requirement for written contracts with all processors make the law slightly less business-friendly than the rest of the state privacy laws out there. But California is still in a league of its own on that score.


[i] The SBA defines small business generally as a privately-owned enterprise with 500 or fewer employees. Depending on the industry, however, the maximum number of employees may fluctuate and in some cases may not be factored in. Some businesses are defined as “small” according to their average annual revenue.

[ii] 15 U.S.C. Section 6801 et seq.

[iii] 42 U.S.C. Section 1320d et seq.

[iv] Division A, Title XIII, and Division B, Title IV, Pub. L. No. 111-5

Seyfarth Synopsis: The U.S. District Court for the Northern District of Illinois recently denied Plaintiff’s motion to reconsider a prior dismissal of his privacy action due to untimeliness.  In a case titled Bonilla, et al. v. Ancestry.com Operations Inc., et al., No. 20-cv-7390 (N.D. Ill.), Plaintiff alleged that consumer DNA network Ancestry DNA violated the Illinois Right of Publicity Act (“IRPA”) when it uploaded his high school yearbook photo to its website.  The Court initially granted Ancestry’s motion for summary judgment, finding Plaintiff’s claims to be time-barred under the applicable one-year limitations period.  Upon reconsideration, Plaintiff – unsuccessfully – made a first-of-its-kind argument that the Court should apply the Illinois Biometric Privacy Act’s five-year statute of limitations to the IRPA.

Background on the Bonilla Lawsuit

Ancestry DNA, most commonly known for its at-home DNA testing kits, also maintains a robust database of various historical information and images.  One subset of this online database is the company’s “Yearbook Database.”  This portion of the website collects yearbook records from throughout the country and uploads the yearbook contents – including students’ photos – to Ancestry.com.  On June 27, 2019, Ancestry DNA uploaded the 1995 yearbook from Central High School in Omaha, Nebraska to its Yearbook Database.

More than a year later, on December 14, 2020, Plaintiff Sergio Bonilla filed a lawsuit against Ancestry DNA over its publication of the Central High School yearbook.  Specifically, Plaintiff Bonilla – a current Illinois resident and former student of Central High School whose picture appeared in Ancestry’s database – alleged that Ancestry DNA improperly publicized his private information without obtaining his consent.  Plaintiff’s lawsuit asserted violations of the IRPA, as well as a cause of action for unjust enrichment.  Ancestry DNA filed a motion for summary judgment on the basis that Plaintiff’s action was not brought within the requisite one-year limitations period.  The Court agreed, thereby dismissing Plaintiff’s claims.

Court Denies Plaintiff’s Motion for Reconsideration

After the Illinois Supreme Court’s decision in Tims v. Black Horse Carriers (which held that BIPA is subject to a five-year statute of limitations – read our full summary HERE), Plaintiff filed a motion for reconsideration, contending that the Court should actually apply a five-year limitations period to IRPA actions, like it applies to BIPA.  To that end, Plaintiff emphasized that the IRPA (similar to BIPA) does not itself contain a statute of limitations.  Plaintiff also noted that both the IRPA and BIPA derived from legislative concerns centered on Illinois residents’ right to privacy.  Therefore, according to Plaintiff, the IRPA’s legislative purpose would be best served by applying the catch-all five-year limitations period of 735 ILCS 5/13-205. 

On reconsideration, the Court again rejected Plaintiff’s argument.  The Court first outlined relevant case law precedent, under which the only courts to address this issue previously held that the IRPA’s applicable statute of limitations is one year.  See Toth-Gray v. Lamp Liter, Inc., No. 19-cv-1327, 2019 WL 3555179, at *4 (N.D. Ill. July 31, 2019); see also Blair v. Nevada Landing P’ship, 859 N.E.2d 1188, 1192 (Ill. App. Ct. 2006). 

The Court then analyzed the Tims decision, which held that, “when the law does not specify a statute of limitations, ‘the five-year limitations period applies’ unless the suit is one for ‘slander, libel or for publication of a matter violating the right of privacy.’”  Here, the Court reasoned that an IRPA action squarely falls within the last category identified by the Court in Tims, as IRPA cases necessarily involve alleged violations of a party’s right to privacy.  Finally, the Court rejected Plaintiff’s contention that Tims controls this situation, instead holding that “[u]nlike the BIPA, the IRPA protects the publication of matters related to the right of privacy and, thus, falls under the one-year statute of limitations.”

Implications for Businesses

This decision establishes a welcome pro-business standard in the Illinois privacy law context.  Notably, the Illinois Supreme Court in Tims rejected the defense bar’s argument that BIPA violations were akin to privacy rights violations and subject to the one-year statute of limitations applicable to IRPA claims.  This Ancestry.com decision holds that the converse also is not true.  It is also the first court to reject expansion of the plaintiff-friendly five-year BIPA statute of limitations to claims beyond BIPA.

Though this decision was issued by an Illinois federal court – rather than the Illinois Supreme Court, which decided the recent Tims and Cothron v. White Castle System BIPA cases – it nonetheless offers some privacy protection for Illinois businesses that post or otherwise aggregate third parties’ content or information.  We will monitor whether defendants are able to expand the Bonilla decision into other related privacy law actions, or if Illinois courts will restrict its holding to actions brought under the IRPA.

For more information about the Illinois Right of Publicity Act, the Illinois Biometric Information Privacy Act, or how this decision may affect your business, contact the authors Danielle Kays and James Nasiri, your Seyfarth attorney, or Seyfarth’s Workplace Privacy & Biometrics Practice Group.

Seyfarth Synopsis: Federal judges are requiring attorneys to attest as to whether they have used generative artificial intelligence (AI) in court filings, and if so, how and in what manner it was used. These court orders come just days after two New York attorneys filed a motion in which ChatGPT provided citations to non-existent caselaw.[i]

There are many ways to leverage AI tools across the legal industry, including identifying issues in clients’ data management practices, efficiently reviewing immense quantities of electronically stored information, and guiding case strategy, but according to U.S. District Judge Brantley Starr of the Northern District of Texas, “legal briefing is not one of them.” Last Tuesday, May 30, Judge Starr became the first judge requiring all attorneys before his court to certify whether they used generative AI to prepare filings, and if so, to confirm any such language prepared by the generative AI was validated by a human for accuracy.[ii]

Judge Starr reasoned that:

These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up—even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth). Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle. Any party believing a platform has the requisite accuracy and reliability for legal briefing may move for leave and explain why.[iii]

Critically, the failure to submit such a certification would result in the court striking the filing and potentially imposing sanctions under Rule 11.

Accordingly, the Court will strike any filing from a party who fails to file a certificate on the docket attesting that they have read the Court’s judge-specific requirements and understand that they will be held responsible under Rule 11 for the contents of any filing that they sign and submit to the Court, regardless of whether generative artificial intelligence drafted any portion of that filing. A template Certificate Regarding Judge-Specific Requirements is provided here.[iv]

Shortly thereafter, on June 2, Magistrate Judge Gabriel Fuentes of the Northern District of Illinois followed suit with a revised standing order that not only requires all parties to disclose whether they used generative AI to draft filings, but also to disclose whether they used generative AI to conduct legal research. Judge Fuentes deemed the overreliance on AI tools a threat to the mission of federal courts, and stated that “[p]arties should not assume that mere reliance on an AI tool will be presumed to constitute reasonable inquiry.”[v]

Mirroring the reasoning of Judge Starr, Judge Fuentes further highlighted courts’ longstanding presumption that Rule 11 certifications are representations “by filers, as living, breathing, thinking human beings that they themselves have read and analyzed all cited authorities to ensure that such authorities actually exist and that the filings comply with Rule 11(b)(2).”[vi] Both judges have made clear that, in order to properly represent a party, attorneys must always be diligent in that representation, and that reliance on emerging technology, as convincing and tempting as it may be, requires validation and human involvement.

While federal courts in Texas and Illinois were first to the punch, we don’t expect other jurisdictions to be far behind with court orders mirroring those of Judge Starr and Judge Fuentes.


[i] See Mata v. Avianca, Inc., No. 22-cv-1461 (PKC), Order to Show Cause (S.D.N.Y. May 4, 2023); see also Use of ChatGPT in Federal Litigation Holds Lessons for Lawyers and Non-Lawyers Everywhere, https://www.seyfarth.com/news-insights/use-of-chatgpt-in-federal-litigation-holds-lessons-for-lawyers-and-non-lawyers-everywhere.html.

[ii] Id.

[iii] Hon. Brantley Starr, “Mandatory Certification Regarding Generative Artificial Intelligence [Standing Order],” (N.D. Tex.).

[iv] Id.

[v] Hon. Gabriel A. Fuentes, “Standing Order For Civil Cases Before Magistrate Judge Fuentes,” (N.D. Ill.).

[vi] Id.



You may have recently seen press reports about lawyers who filed and submitted papers to the federal district court for the Southern District of New York that included citations to cases and decisions that, as it turned out, were wholly made up; they did not exist.  The lawyers in that case used the generative artificial intelligence (AI) program ChatGPT to perform their legal research for the court submission, but did not realize that ChatGPT had fabricated the citations and decisions.  This case should serve as a cautionary tale for individuals seeking to use AI in connection with legal research, legal questions, or other legal issues, even outside of the litigation context.

In Mata v. Avianca, Inc.,[1] the plaintiff brought tort claims against an airline for injuries allegedly sustained when one of its employees hit him with a metal serving cart.  The airline filed a motion to dismiss the case. The plaintiff’s lawyer filed an opposition to that motion that included citations to several purported court decisions in its argument. On reply, the airline asserted that a number of the court decisions cited by the plaintiff’s attorney could not be found, and appeared not to exist, while two others were cited incorrectly and, more importantly, did not say what plaintiff’s counsel claimed. The Court directed plaintiff’s counsel to submit an affidavit attaching the problematic decisions identified by the airline.

Plaintiff’s lawyer filed the directed affidavit, and it stated that he could not locate one of the decisions, but claimed to attach the others, with the caveat that certain of the decisions “may not be inclusive of the entire opinions but only what is made available by online database [sic].”[2]  Many of the decisions annexed to this affidavit, however, were not in the format of decisions that are published by courts on their dockets or by legal research databases such as Westlaw and LexisNexis.[3]

In response, the Court stated that “[s]ix of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations”[4], using a non-existent decision purportedly from the Eleventh Circuit Court of Appeals as a demonstrative example.  The Court stated that it contacted the Clerk of the Eleventh Circuit and was told that “there has been no such case before the Eleventh Circuit” and that the docket number shown in the plaintiff’s submission was for a different case.[5] The Court noted that “five [other] decisions submitted by plaintiff’s counsel . . . appear to be fake as well.” The Court scheduled a hearing for June 8, 2023, and demanded that plaintiff’s counsel show cause as to why he should not be sanctioned for citing “fake” cases.[6]

At that point, plaintiff’s counsel revealed what happened.[7] The lawyer who had originally submitted the papers citing the non-existent cases filed an affidavit stating that another lawyer at his firm was the one who handled the research, which the first lawyer “had no reason to doubt.” The second lawyer, who conducted the research, also submitted an affidavit in which he explained that he performed legal research using ChatGPT. The second lawyer explained that ChatGPT “provided its legal source and assured the reliability of its content.” He explained that he had never used ChatGPT for legal research before and “was unaware of the possibility that its content could be false.” The second lawyer noted that the fault was his, rather than that of the first lawyer, and that he “had no intent to deceive this Court or the defendant.” The second lawyer annexed screenshots of his chats with ChatGPT, in which the second lawyer asked whether the cases cited were real. ChatGPT responded “[y]es,” one of the cases “is a real case,” and provided the case citation. ChatGPT even reported in the screenshots that the cases could be found on Westlaw and LexisNexis.[8]

This incident provides a number of important lessons. Some are age-old lessons about double-checking your work and the work of others, and owning up to mistakes immediately. There are also a number of lessons specific to AI, however, that are applicable to lawyers and non-lawyers alike.

This case demonstrates that although ChatGPT and similar programs can provide fluent responses that appear legitimate, the information they provide can be inaccurate or wholly fabricated. In this case, the AI software made up non-existent court decisions, even using the correct case citation format and stating that the cases could be found in commercial legal research databases. Similar issues can arise in non-litigation contexts as well. For example, a transactional lawyer drafting a contract, or a trusts and estates lawyer drafting a will, could ask AI software for common, court-approved contract or will language that, in fact, has never been used and has never been upheld by any court. A real estate lawyer could attempt to use AI software to identify the appropriate title insurance endorsements available in a particular state, only to receive a list of inapplicable or non-existent endorsements. Non-lawyers hoping to set up a limited liability company or similar business structure without hiring a lawyer could find themselves led astray by AI software as to the steps involved or the forms that need to be completed and filed. The list goes on and on.

The case also underscores the need to take care in how questions to AI software are phrased. Here, one of the questions asked by the lawyer was simply “Are the other cases you provided fake?”[9] Asking questions with greater specificity could provide users with the tools needed to double-check the information from other sources, but even the most artful prompt cannot change the fact that the AI’s response may be inaccurate. That said, there are also many potential benefits to using AI in connection with legal work, if used correctly and cautiously. Among other things, AI can assist in sifting through voluminous data and drafting portions of legal documents. But human supervision and review remain critical.

ChatGPT frequently warns users who ask legal questions that they should consult a lawyer, and it does so for good reason. AI software is a powerful and potentially revolutionary tool, but it has not yet reached the point where it can be relied upon for legal questions, whether in litigation, transactional work, or other legal contexts. Individuals who use AI software, whether lawyers or non-lawyers, should use the software understanding its limitations and realizing that they cannot rely solely on the AI software’s output.  Any output generated by AI software should be double-checked and verified through independent sources. When used correctly, however, it has the potential to assist lawyers and non-lawyers alike.


[1] Case No. 22-cv-1461 (S.D.N.Y.).

[2] Id. at Dkt. No. 29. 

[3] Id.

[4] Id. at Dkt. No. 31. 

[5] Id.

[6] Id.

[7] Id. at Dkt. No. 32.

[8] Id.

[9] Id.

Tennessee and Montana are now set to be the next two states with “omnibus” privacy legislation. “Omnibus” privacy legislation regulates personal information as a broad category, as opposed to data collected by a particular regulated business or for a specific purpose, such as health, financial, or payment card information. As far as omnibus laws go, Tennessee and Montana are two additional data points confirming the trend we are seeing at the state level regarding privacy and data protection. Fortunately (or unfortunately, depending on your point of view), these two states have followed the model initiated by Virginia and Colorado instead of the California model.

Is There Really Anything New?

While these two new laws may seem to be “more of the same,” the Tennessee law contains some interesting new approaches to the regulation of privacy and data protection. While we see the usual set of privacy obligations (notice requirements, rights of access and deletion, restrictions around targeted advertising and online behavioral advertising, et cetera) in both the Tennessee and Montana laws, Tennessee has taken the unusual step of building into its law, the Tennessee Information Protection Act (“TIPA”), specific guidance on how to actually develop and deploy a privacy program.

Previously, privacy compliance programs have been structured in a wide variety of ways, mostly as a result of the operational necessities of various businesses. With Tennessee’s new law, we now see a state attempting to standardize how businesses develop and implement privacy programs around clearly defined NIST standards, as opposed to the traditional, but nebulous, concepts of “reasonableness” and “adequacy.”

NIST Privacy Framework

Tennessee’s law incorporates standardized compliance concepts by requiring the use of the National Institute of Standards and Technology (“NIST”) privacy framework entitled “A Tool for Improving Privacy through Enterprise Risk Management, Version 1.0.” More specifically, the TIPA states that “…a controller or processor shall create…” its privacy program using this framework. Unfortunately, it is unclear for now whether failure to use the NIST framework would actually constitute a violation of the law. One could potentially argue that if a program fulfills all of the obligations of the TIPA, it should not matter what framework is used.

Part of the concern around “mandatory” use of the NIST framework is that the framework is somewhat complicated to implement and does not factor in the size, capabilities, and processing risk activity of a particular organization. Since NIST intended the framework to cover a wide range of use cases and operational complexities, the framework is inherently complex. As a consequence, smaller and less mature organizations will likely struggle to implement a privacy program under the NIST framework. This is particularly true because, while the NIST framework provides various levels of maturity for a privacy program, the TIPA doesn’t articulate what “tier” of program maturity a controller needs to reach within the NIST framework to be compliant.

The whole issue of “mandatory v. permissive” use of the NIST framework is further muddied by the TIPA giving an affirmative defense to controllers who use the NIST framework. If use of the NIST framework were an affirmative obligation, it would not be necessary to articulate it as an affirmative defense. In our opinion, Tennessee may have been better served by providing a safe harbor for privacy programs built under the NIST framework, as opposed to mandating that all programs use it. In any event, further clarity as to what constitutes “compliant” use of the NIST framework would be helpful.

Privacy Certification

Another useful concept introduced by the TIPA is participation in a certification program acting as evidence of compliance with the law. While not truly a “safe harbor,” controllers that participate in the Asia Pacific Economic Cooperation Forum (“APEC”) Cross-Border Privacy Rules (“CBPR”) system may have their certification under these rules operate as evidence of compliance with the TIPA. Outside of one specific federal privacy law (i.e., COPPA), neither federal nor state privacy laws have officially recognized certification schemes as providing evidence of compliance with the relevant law.

In the end, while some components of the TIPA may cause confusion, Tennessee can be commended for attempting to provide more commercially viable guidance on how to comply with the TIPA, at least from the perspective of building out a privacy program. Additionally, this is the first time in the United States we have seen privacy certification schemes used as legally relevant evidence of compliance. Privacy certification systems have been around for some time, but they have almost never been capable of demonstrating legal compliance.

On March 15, 2023 the Securities and Exchange Commission (“SEC”) proposed three new sets of rules (the “Proposed Rules”) which, if adopted, would require a variety of companies to beef up their cybersecurity policies and data breach notification procedures. As characterized by SEC Chair Gary Gensler, the Proposed Rules aim to promote “cyber resiliency” in furtherance of the SEC’s “responsibility to help protect for financial stability.”[1]

In particular, the SEC has proposed:

  • Amendments to Regulation S-P which would, among other things, require broker-dealers, investment companies, and registered investment advisers to adopt written policies and procedures for response to data breaches, and to provide notice to individuals “reasonably likely” to be impacted within thirty days after becoming aware that an incident was “reasonably likely” to have occurred (“Proposed Reg S-P Amendments”).[2]
  • New requirements for a number of “Market Entities” (including broker-dealers, clearing agencies, and national securities exchanges) to, among other things: (i) implement cybersecurity risk policies and procedures; (ii) annually assess the design and effectiveness of these policies and procedures; and (iii) notify the SEC and the public of any “significant cybersecurity incident” (“Proposed Cybersecurity Risk Management Rule”).[3]
  • Amendments to Regulation Systems Compliance and Integrity (“Reg SCI”) in order to expand the entities covered by Reg SCI (“SCI Entities”) and add additional data security and notification requirements to SCI Entities (“Proposed Reg SCI Amendments”).[4]

As Commissioner Hester Peirce observed, each Proposed Rule “overlaps and intersects with each of the others, as well as other existing and proposed regulations.”[5] Therefore, while each of the Proposed Rules relates to similar cybersecurity goals, each must be considered in turn to determine whether a particular company is covered and what steps the company would need to undertake should the Proposed Rules become final.

Below we discuss each set of Proposed Rules in more detail and provide some takeaways and tips for cybersecurity preparedness regardless of industry.

Proposed Reg S-P Amendments

Reg S-P, adopted in 2000, requires that brokers, dealers, investment companies, and registered investment advisers adopt written policies and procedures regarding the protection and disposal of customer records and information.[6] But, as Chair Gensler explained in a statement in support of the Proposed Reg S-P Amendments, “[t]hough the current rule requires covered firms to notify customers about how they use their financial information, these firms have no requirement to notify customers about breaches,” and the Proposed Reg S-P Amendments look to “close this gap.”[7]

In particular, “[w]hile all 50 states have enacted laws in recent years requiring firms to notify individuals of data breaches, standards differ by state, with some states imposing heightened notification requirements relative to other states,” and the SEC seeks, through the Proposed Reg S-P Amendments, to provide “a Federal minimum standard for customer notification” for covered entities.[8] This includes a definition of “sensitive customer information” which is broader than that used in at least 12 states; a 30-day notification deadline, which is shorter than timing currently mandated by 15 states (plus 32 states which do not include a notification deadline or permit delayed notifications for law enforcement purposes); and required notification unless the covered institution finds no risk of harm, unlike 21 states which only require notice if, after investigation, the covered institution does find risk of harm.[9]

Furthermore, while Reg S-P currently applies to broker-dealers, investment companies, and registered investment advisers, the Proposed Reg S-P Amendments would expand its scope to transfer agents.[10] The amendments also would apply the customer information safeguarding and disposal rules to customer information that a covered institution receives from other financial institutions, and to a broader set of information, by newly defining the term “customer information.” For non-transfer agents, that term would “encompass any record containing ‘nonpublic personal information’ (as defined in Regulation S-P) about ‘a customer of a financial institution,’ whether in paper, electronic or other form that is handled or maintained by the covered institution or on its behalf.” For transfer agents, which “typically do not have consumers or customers” for purposes of Reg S-P, the definition would be similar but would cover information about “any natural person, who is a securityholder of an issuer for which the transfer agent acts or has acted as transfer agent, that is handled or maintained by the transfer agent or on its behalf.”[11]

Proposed Cybersecurity Risk Management Rule

The Proposed Cybersecurity Risk Management Rule would impact a variety of “different types of entities performing various functions” in the financial markets, defined as “Market Entities,” including “broker-dealers, broker-dealers that operate an alternative trading system, clearing agencies, major security-based swap participants, the Municipal Securities Rulemaking Board, national securities associations, national securities exchanges, security-based swap data repositories, security-based swap dealers, and transfer agents.”[12]

As Chair Gensler explained, the Proposed Cybersecurity Risk Management Rule is designed to “address financial sector market entities’ cybersecurity,” by, among other things, requiring Market Entities to adopt written policies and procedures to address their cybersecurity risks, to notify the SEC of significant cyber incidents, and, with the exception of smaller broker-dealers, to disclose to the public a summary description of cybersecurity risks that could materially affect the entity and significant cybersecurity incidents in the current or previous calendar year.[13]

According to the SEC, these policies and procedures are “not intended to impose a one-size-fits-all approach to addressing cybersecurity risks,” and are designed to provide Market Entities “with the flexibility to update and modify their policies and procedures as needed[.]”[14] However, there are certain minimum policies and procedures that would be required, such as periodic assessments of cybersecurity risks,[15] controls designed to minimize user-related risks and prevent unauthorized system access,[16] periodic assessment of information systems,[17] oversight of service providers that receive, maintain, or process the entity’s information (including written contracts between the entity and its service providers),[18] measures designed to detect, mitigate, and remediate cybersecurity threats and vulnerabilities,[19] measures designed to detect, respond to, and recover from cybersecurity incidents,[20] and an annual review of the design and effectiveness of cybersecurity policies and procedures (with a written report).[21] For most regulated entities, such measures are already in place.

Proposed Reg SCI Amendments

Finally, the SEC has proposed amendments to Reg SCI, a 2014 rule adopted to “strengthen the technology infrastructure of the U.S. securities markets, reduce the occurrence of systems issues in those markets, improve their resiliency when technological issues arise, and establish an updated and formalized regulatory framework” for the SEC’s oversight of these systems.[22]  Reg SCI applies to “SCI Entities,” which include self-regulatory organizations, certain large Alternative Trading Systems, and certain other market participants deemed to have “potential to impact investors, the overall market, or the trading of individual securities in the event of certain types of systems problems.”[23]

The Proposed Reg SCI Amendments would expand the definition of SCI Entity to include registered Security-Based Swap Data Repositories, registered broker-dealers exceeding a size threshold, and additional clearing agencies exempt from registration.[24] They also would broaden the requirements to which SCI Entities are subject, including required notice to the SEC and affected persons of any “systems intrusions,” which would include a “range of cybersecurity events.”[25]

Takeaways

While the Proposed Rules have not yet been adopted, companies that could be covered should take the opportunity to reevaluate their cybersecurity practices and policies, both to mitigate as much as possible the risk of a cyber-attack and to be prepared to address an attack, including meeting all notification requirements, should one occur.

Among other things, best practices include:

  • A written cyber risk assessment which categorizes and prioritizes cyber risk based on an inventory of the information systems’ components, including the type of information residing on the network and the potential impact of a cybersecurity incident;
  • A cybersecurity vulnerability assessment to assess threats and vulnerabilities; determine deviations from acceptable configurations, enterprise or local policy; assess the level of risk; and develop and/or recommend appropriate mitigation countermeasures in both operational and nonoperational situations;
  • A written incident response plan that defines how the company will respond to and recover from a cybersecurity incident, including timing and method of reporting such incident to regulators, persons or other entities;
  • A business continuity plan designed to reasonably ensure continued operations when confronted with a cybersecurity incident and maintain access to information;
  • Tabletop exercises to review and test incident response and business continuity plans;
  • Annual review of policies and procedures.

As a next step, each of the Proposed Rules will be published in the Federal Register and open for comment for sixty days following publication. Regardless of whether the Proposed Rules are adopted, they represent the SEC’s increasing awareness of, and desire to mitigate, cybersecurity incidents, and companies should be prepared accordingly.


[1] Gensler, Gary, Opening Statement before the March 15 Commission Meeting (SEC, March 15, 2023).

[2] See Press Release, SEC Proposes Changes to Reg S-P to Enhance Protection of Customer Information (SEC, March 15, 2023). The full text of the Proposed Reg S-P Amendments can be found here.

[3] See Press Release, SEC Proposes New Requirements to Address Cybersecurity Risks to the U.S. Securities Markets (SEC March 15, 2023). The full text of the Proposed Cybersecurity Risk Management Rule can be found here.

[4] See Press Release, SEC Proposes to Expand and Update Regulation SCI (SEC, March 15, 2023). The full text of the Proposed Reg SCI Amendments can be found here.
In addition, on March 15, 2023 the SEC re-opened comments on proposed cybersecurity risk management rules for investment advisors until May 22, 2023. For our analysis of these proposed rules, see How Fund Industry Can Prepare For SEC’s Cyber Proposal (Law360, March 4, 2022). The SEC is also presently considering comments on a different proposed rule mandating certain cybersecurity disclosures by public companies. See Carlson, Scott and Riley, Danny, SEC Proposes Mandatory Cybersecurity Disclosures by Public Companies (Carpe Datum Blog, April 14, 2022).

[5] Peirce, Hester, Statement on Regulation SP: Privacy of Consumer Financial Information and Safeguarding Customer Information (SEC, March 15, 2023).

[6] Proposed Reg S-P Amendments, supra n.2 at 1.

[7] Gensler, Gary, Statement on Amendments to Regulation S-P (SEC, March 15, 2023).

[8] Proposed Reg S-P Amendments, supra n.2 at 4.

[9] Id. at 4-6.

[10] Proposed Reg S-P Amendments, supra n.2, at 6-7.

[11] Id. at 74-75, 82.

[12] Proposed Cybersecurity Risk Management Rule, supra n. 3 at 9-10 (internal definitions of terms omitted).

[13] Gensler, Gary, Statement on Enhanced Cybersecurity for Market Entities (SEC, March 15, 2023).

[14] Proposed Cybersecurity Risk Management Rule, supra n. 3 at 103.

[15] Id. at 103-108.

[16] Id. at 109-112.

[17] Id. at 113-115.

[18] Id. at 115-116.

[19] Id. at 116-118.

[20] Id. at 118-124.

[21] Id. at 124-126.

[22] Proposed Reg SCI Amendments, supra n.4 at 10.

[23] Id. at 13-14.

[24] Id. at 24.

[25] Id. at 24-25.

Introduction

Employers need to be aware of the significant changes that are on the horizon when the California Privacy Rights Act (CPRA) becomes operative on January 1, 2023.

By way of background, in November of 2020, California residents voted to pass the CPRA, which affords California consumers heightened rights and control over their personal information.  California residents already have a number of rights under the California Consumer Privacy Act (CCPA), and the CPRA will provide even more rights to individuals — including employees — in California.

Currently, the only obligations that covered employers have under the CCPA are to provide a notice of collection and to reasonably safeguard personal information, due to a partial exemption under the CCPA for information collected in the context of employment.  However, this will change on January 1, 2023, when the partial exemption for employers under the CCPA expires.  Although bills were proposed to extend the exemption for employers until at least January 1, 2026, the last day on which the California legislature could have passed those bills into law was August 31, 2022.

What’s New For Covered Employers In 2023 Under CPRA?

California employees of covered employers will have increased rights as of January 1, 2023, and accordingly, their employers will have increased compliance obligations.  The new rights for California employees will include, among others:

  1. the right to know: the employee’s right to notice regarding the type(s) of personal information that their employer collects, sells, shares, or discloses, as well as the right to request that the employer disclose the personal information it has collected about the employee;
  2. the right to rectification: the employee’s right to correct or rectify the personal information that their employer maintains;
  3. the right to deletion: the employee’s right to request that the employer delete the personal information that the employer has collected about them;
  4. the right to data portability: the employee’s right to request that their employer provide them with, or transmit to another entity, a copy of their personal information in a reasonable format;
  5. the right to limit use and disclosure of sensitive personal information: the employee’s right to request that their employer limit the use and disclosure of “sensitive personal information” to certain defined activities.

Employers will need to evaluate employee requests to exercise their rights to determine their obligations under the CPRA, as employers may have certain bases to deny employee rights requests.  For example, should an employee attempt to exercise their right to deletion, the employer could rightfully deny that request to the extent that certain personal information is required to carry out the employment relationship (to process payroll, provide benefits, etc.), or because of statutory requirements that dictate the retention of certain employment-related information.  Further, the right to rectification can also be significantly limited to certain personal information that can be verified.  However, in the wake of employee requests, covered employers must keep in mind that the CPRA prohibits discrimination against employees for exercising their rights under the CPRA.

What Organizations Can Do to Prepare

In the coming months, there are a number of steps that employers can and should take to prepare for their new obligations under the CPRA.  Organizations should consider the following when determining whether their processes and procedures are CPRA ready:

Data Inventory: Employers need to assess the locations of personal information, including employee personal information, and create a data inventory.  Data inventories are helpful when an employer needs to identify the location(s) of employee data in response to an employee request under the CPRA.  For example, an employer cannot delete data if it does not know where the data is.  Employers should inventory not just their own data, but also data held by third-party service providers and contractors, as this information must also be communicated when responding to access requests.

Records Retention: Employers should also assess their current records retention policies and schedules to ensure that they reflect retention periods appropriate for the states and/or jurisdictions in which they operate.  As privacy principles like data minimization and storage limitation continue to take hold, the importance of appropriate records retention grows in parallel.

Review of Existing Practices: Employers should also review their current CCPA notices of collection, as well as current policies and procedures related to privacy and cybersecurity, to determine any changes that should be made under CPRA to address the processing of new or sensitive personal information, the processing of information for new purposes, the length of time the personal information will be maintained, and the categories of third parties that will have employee personal information.

Vendor Assessment: Employers should review any contracts they maintain with any vendor that processes personal information about their employees and ensure that the contracts meet CPRA requirements.

Conclusion

This is a significant change for employers with employees in California.  For some, it will require a re-assessment of how personal data is handled and maintained, along with changes to current policies and procedures; for others, it will require a complete overhaul of current privacy and cybersecurity activities.  These compliance initiatives cannot be put into place overnight; employers should expect it to take anywhere from three to six months to stand up a compliant privacy and cybersecurity program.  That said, while compliance will not be enforced until July 1, 2023, employers can and should help themselves by beginning to make these changes now.

At the end of May 2022, the California Privacy Protection Agency (“Agency”) released a preliminary draft of proposed regulations for the California Privacy Rights Act (“CPRA”). The 66-page draft covers only a few of the topics the Agency intends to address. The issues covered in this draft of the regulations include data collection and processing restrictions, along with some detailed requirements on the sale and sharing of personal information. Several notable topics were left out of the proposed regulations and remain unresolved, including the specifics of the soon-to-be-required Privacy Risk and Impact Assessments, Automated Decision Making, Personal Data Retention, Cybersecurity Audits and Examinations, and the closely watched fate of the employee carve-out.

On June 8, 2022, after the draft release, the Agency conducted a board meeting where board members and authoring members of the California Attorney General’s Office discussed the proposed regulations as well as the upcoming formal rulemaking process. Deputy Attorney General Lisa Kim and Supervising Deputy Attorney General Stacey Schesser described at a high level what changes the proposed regulations brought to the CPRA. The Board also authorized the Agency’s Executive Director, Ashkan Soltani, to commence the formal rulemaking process.

As things look today, the Regulations are unlikely to be finished by the CPRA’s effective date of January 1, 2023, which will lead to other challenges, and many important issues remain open questions. Nonetheless, businesses and organizations operating in California should take notice that the train is beginning to leave the station on operationalizing the CPRA.

Generally, the proposed Regulations act as a roadmap for businesses ahead of the 2023 enforcement date. Deputy AG Kim highlighted the main purpose behind the draft, and directed businesses to read the CCPA’s Initial Statement of Reasons, or ISOR, for an in-depth look at the “why” behind the proposed Regulations. Kim and Supervising Deputy AG Schesser pointed out the primary goals of the regulations:

  1. To update existing CPRA amendments to the CCPA, provide harmonization and clarity to minimize any confusion;
  2. To operationalize the existing CPRA amendments, so businesses will have a better idea on how to implement policies and procedures to comply with the law; and
  3. To reorganize and consolidate certain aspects of the law, making it more digestible.

While the formal rulemaking process has not yet commenced, a few comments were taken into consideration at the Board meeting regarding the draft regulations. Many of the concerns came from small businesses, and the Board was asked to extend the CPRA’s January 1, 2023, effective date by anywhere from 6 to 12 months to allow businesses to prepare for the law. CPPA Board members urged the public, businesses and individual consumers alike, to participate in the formal comment period by sharing personal experiences and perceived challenges for rulemakers to take into account. Below is a more detailed walkthrough of the proposed Regulations, and some of the key takeaways we flagged in our review:

Article 1: General Provisions

Under Article 1, the proposed regulations purport to rework some of the existing regulations to focus on being understandable to both consumers and businesses. For example, the concept of data minimization, as restated in section 7002, requires that a business’s “collection, use, retention, and sharing of a consumer’s personal information” be “reasonably necessary and proportionate” to achieve the business’s purpose in collecting the data in the first place. Section 7003 sets forth all of the requirements for businesses regarding consumer disclosures and communications being plain and understandable. The main idea of these sections was already present under the CCPA, but the intention of the newly released drafts is to restate the regulations’ language in order to help businesses better understand their responsibilities.

Another notable section is 7004, which addresses the idea that consent obtained through so-called “dark patterns” is not considered consent. “Dark patterns” are defined as a user “interface [that] has the effect of substantially subverting or impairing user autonomy, decision-making, or choice, regardless of a business’s intent.” Dark patterns may appear as manipulative language, consumer shaming, or even bundling consent options. The draft regulations include examples of what is not acceptable, such as pairing “Yes” to accept and “No, I like paying the full price” as options for an offer. Once again, Section 7004 follows the ongoing theme of transparency for the consumer, requiring businesses to provide easy-to-understand methods of obtaining consent. Note that this is also consistent with the FTC’s treatment of online disclosures and the doctrine of “deception.”

Section 7001 defines the terms used throughout the proposed regulations, and according to the ISOR, “assists businesses in implementing the law” while helping consumers to “enjoy the benefits of the rights provided [to] them by the CCPA.” Some of the noteworthy additions include definitions for concepts such as “disproportionate effort”, “frictionless manner”, and “unstructured data.” These definitions may, in theory, help businesses with the burden of compliance under the CCPA, but they lack an objective standard for what falls into these categories. For example, “frictionless manner” is defined as “a business’s processing of an opt-out preference signal that complies with the requirements set forth in section 7025, subsection (f).” 11 CCR § 7001(m). While these definitions technically explain “how” a business should be compliant under the law, the draft’s somewhat circular language could be problematic when it comes to actual business operations.

Article 2: Required Disclosures to Consumers

Article 2 lays out a proposal for how businesses must make disclosures to consumers. When describing the proposals, Deputy AG Kim pointed out the new concept of an alternative opt-out link in Section 7015, which businesses could provide to consumers who want to opt out of the sale or sharing of their personal information or limit the business’s use of their sensitive personal information. The link would be embedded in a business’s website and would direct consumers to a page where they are further informed of these rights and given the opportunity to exercise them. The alternative opt-out link is an example of how the proposed regulations operationalize some of the CCPA’s legal requirements. Other notable Article 2 highlights from the proposed regulations include an updated notice for consumers’ opt-out rights, allowing them to opt out of the sharing of personal information as well as its sale. Businesses could also use the alternative opt-out link to comply with this requirement. Businesses will also need to update their privacy policies. Under Section 7011 of the draft regulations, businesses have additional requirements, such as:

  • Stating whether or not the business discloses sensitive personal information for purposes other than those authorized by the CPRA. If so, the business must provide that notice within its privacy policy. 11 CCR § 7011(e)(1)(K).
  • Providing an explanation of the new consumer rights added by the CPRA’s amendments to the CCPA, including the right to correct, the right to limit, and the right to opt out of the sale and sharing of personal information. 11 CCR § 7011(e)(2). It should be noted that the practical effect of adding “share” (at least the way “share” is defined in the law) to the opt-out obligation is quite limited. The CCPA’s “sale” definition has the same practical effect as the CPRA’s “share” definition.
  • Providing information about how the business responds to and processes opt-out preference signals. 11 CCR § 7011(e)(3)(F). This is a very new concept and has some interesting side effects from a practical implementation perspective, as noted below.
Article 3: Business Practices for Handling Consumer Requests

According to Deputy AG Kim, Article 3 updates how consumers may submit requests to exercise their rights. The Article clarifies that the right to know and right to delete no longer relate to household information, provides businesses with timelines and methods for responding to consumer requests, and consolidates the already established exceptions to the consumer right to limit.

One of the most notable updates under Article 3 relates to opt-out preference signals (Section 7025), which is likely to be subject to heavy debate once the formal rulemaking process commences. Opt-out preference signals are defined as a “signal that is sent by a platform, technology, or mechanism, on behalf of a consumer, that clearly communicates the consumer choice to opt-out of the sale and sharing of personal information.” 11 CCR § 7001(r). This clearly includes the browser configuration options around “Do Not Track” (“DNT”) signals.

The CPRA had previously given businesses the option to recognize opt-out preference signals as a method for consumer privacy requests, but the proposed regulations, as written, would require businesses to recognize them. At this point, the proposed regulations are missing technical specifications for opt-out preference signals.
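For illustration only: the most widely adopted opt-out preference signal in practice today is the Global Privacy Control (GPC), which participating browsers transmit as a `Sec-GPC: 1` HTTP request header. The draft regulations do not name GPC or specify any technical mechanism, so the following is a hedged sketch of how a business’s web server might detect such a signal; the function name and header handling are our own assumptions, not anything prescribed by the regulations.

```python
def consumer_opts_out(headers: dict) -> bool:
    """Return True if the request carries a Global Privacy Control
    opt-out preference signal (the Sec-GPC header set to "1")."""
    # HTTP header names are case-insensitive, so normalize keys first.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"

# A request from a browser with GPC enabled would look like:
print(consumer_opts_out({"Sec-GPC": "1"}))        # True
print(consumer_opts_out({"User-Agent": "demo"}))  # False
```

A business treating this return value as a valid request to opt out of sale/sharing, without any further confirmation step, is what the draft regulations appear to describe as processing the signal in a “frictionless manner.”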

Ironically, a side effect of the DNT recognition requirement is that if a business is only engaging in cross-contextual behavioral advertising via cookies or similar technology on its website (and there isn’t any other “sharing” going on), then recognition of DNT signals removes the need to post “Do Not Sell or Share my Information” links on the website. For businesses that only “sell” or “share” data by participating in an affiliate advertising network, this is a significant operational benefit. The draft regulations, as written, would effectively remove the requirement for “Do Not Sell” links on those businesses’ websites because the DNT signal is supposed to moot the need for such a link.

On top of the requirement to adhere to requests to delete, section 7022 of the draft regulations creates an obligation for businesses to notify third parties, service providers, and contractors of the consumer’s request to delete. If a business relies on a CCPA exception to refuse a consumer’s request to delete, it must still notify the applicable service providers, contractors, and third parties of the consumer’s request to delete any information not subject to a CCPA exception.

Section 1798.106 of the California Consumer Privacy Act (CCPA) provides consumers with the right to correct inaccurate information. Section 7023 of the proposed regulations operationalizes the right to correct by setting forth the procedures businesses must follow for consumer submissions and the handling of requests to correct. Other state laws also provide consumers the right to request corrections, so the operationalized methods of the draft regulations will assist the compliance efforts of businesses operating in other states. Regarding requests to opt out of sale or sharing, section 7026 of the proposed regulations states that a notification or pop-up for cookies is not by itself an acceptable method for submitting requests to opt out of sale/sharing. According to the ISOR, this section of the regulation has been restructured to be “easier to read and understandable for businesses and consumers.”

Section 1798.121 of the CCPA provides consumers the right to request that a business limit its use and/or disclosure of their sensitive personal information. The draft regulations add a new section 7027 aimed at giving consumers the ability to limit the use of sensitive personal information to instances where that information is necessary for the business to provide goods and services, and only for purposes that are reasonably expected by a consumer requesting those goods and services. According to the proposed regulations, businesses using or disclosing sensitive personal information must provide two or more designated methods for submitting requests to limit. At least one of the methods should reflect the manner in which the business primarily interacts with the consumer (online, brick-and-mortar store, etc.).

Article 4: Service Providers, Contractors, and Third Parties

Article 4 of the Draft Regulations highlights responsibilities for businesses regarding their relationship with third parties, service providers, and contractors. Section 7050 clarifies that a person who contracts with a business to provide cross-contextual behavioral advertising is a third party and not a service provider or contractor. 11 CCR § 7050(c). As a result, that transfer of personal information is subject to the right to opt-out of sharing.

Both sections 7051 and 7053 lay out the requirements that apply to vendor contracts. Notably, the draft proposals would create a new due diligence duty for businesses when working with contractors, service providers, and third parties. The regulation states that “[w]hether a business conducts due diligence of its service providers and contractors factors into whether the business has reason to believe that a service provider or contractor is using personal information in violation of the CCPA and these regulations.” 11 CCR § 7051. Furthermore, Section 7052 sets forth the duties of third parties, such as recognizing opt-out preference signals and complying with consumer requests. The ISOR states that the listed responsibilities for a third party “benefits businesses by sharing the burden of communicating online requests to opt-out of sale/sharing.”

According to Deputy AG Kim, Articles 5 through 8 are relatively unchanged. The differences mostly track where the statutory language now lies, and the draft regulations work to align the language of the CCPA and the CPRA amendments.

Article 9: Investigations and Enforcement

Supervising Deputy AG Schesser discussed the additions made to Article 9, stating that the proposed provisions outline requirements for complaints made to the Agency. The proposed regulations also provide what the Agency needs to start its own investigations. Schesser briefly covered probable cause hearings, stating that the Agency may conduct probable cause hearings if there is evidence to support a reasonable belief that the CPRA was violated. 11 CCR § 7303(a). Other sections of the proposed regulations cover requirements for Sworn Complaints (Section 7300), CCPA Investigations (Section 7301), Stipulated Orders (Section 7303), and Agency Audits (Section 7304).

What’s Next?

The Agency said during its February 17, 2022 board meeting that the regulations are unlikely to be finalized on time. Many of the public comments at the June 8 board meeting urged the Agency to push the enforcement date back by at least six months. This additional time would allow businesses, small and large, to adjust their privacy practices to be compliant ahead of the enforcement date. That said, Executive Director Soltani was just recently authorized to commence the final rulemaking proceedings. The proceedings will commence when the Agency publishes a notice of proposed action in the California Regulatory Notice Register. After the notice is published, the public will be welcome to comment on the proposed regulations for 45 days, a period that could be extended should the Agency seek to make substantial changes. With penalties that can reach up to $7,500 per violation, and both the California Attorney General’s Office and the California Privacy Protection Agency holding enforcement powers, businesses should keep a close eye on the Agency for further updates.

We do not recommend that organizations in California make any drastic compliance changes right now based on the current state of things. We do recommend that organizations subject to the CCPA/CPRA start looking at their vendor and service provider agreements. The draft regulations give pretty clear direction as to the kinds of provisions that will need to be included in these agreements, even if the actual text of the regulation isn’t final.

On compliance with the rest of the CPRA, there are simply too many unknowns at this point. However, this recent publication and initial public comment activity signals that the 2023 CPRA train is at least rumbling in the distance.