Corporations face unprecedented challenges in safeguarding sensitive data and mitigating privacy risks in an era marked by the rapid proliferation of Internet of Things, or IoT, devices.

Recent developments, including federal and state regulators’ heightened focus on privacy enforcement, highlight the importance of proactive risk management, compliance and data governance. As IoT and smart devices continue to hit the marketplace, heightened scrutiny of businesses’ data governance practices follows.

The Federal Trade Commission’s recent technology blog post, “Cars & Consumer Data: On Unlawful Collection & Use,”[1] underscores the agency’s commitment to enforcing consumer protection laws. Despite the post’s focus on the car industry, the FTC’s message extends to all businesses, emphasizing its vigilance against illegal — or “unfair and deceptive” — collection, use and disclosure of personal data.

Recent enforcement actions are a stark reminder of the FTC’s proactive stance in safeguarding consumer privacy.

Geolocation data is a prime example of sensitive information subject to enhanced protections under the Federal Trade Commission Act. Much like mobile phones, cars can reveal consumers’ persistent, precise locations, making them susceptible to privacy infringements.


On August 2, 2024, Illinois Governor J.B. Pritzker signed legislation reforming Illinois’ Biometric Information Privacy Act (BIPA). Senate Bill 2979 immediately amends BIPA to limit a private entity’s potential liability for collecting or sharing biometric data without consent.

The BIPA amendment followed a call for action directed at the legislature from the Illinois courts. Previously, the question of damages liability had wound its way through appellate review in Illinois courts. The amendment reverses the Illinois Supreme Court’s interpretation of BIPA claim accrual, under which each unlawful collection or disclosure constituted a new BIPA claim, though damages remained discretionary.

Now, with the passage of SB 2979, a private entity that collects or otherwise acquires biometric data in more than one instance for the same person commits only one violation of the Act. Additionally, a private entity that discloses biometric data from the same person to the same recipient commits only one violation of the Act, regardless of the number of times that data is disclosed. As a result, individuals are only entitled to a single recovery of statutory damages.

This reform has the potential to reduce the top-end liability private entities may face when it comes to BIPA claims. However, many BIPA litigators are of the opinion that a single instance of harm was already “built in” to settlement valuations in prior cases, and that this new legislation will not do much to alter the approximate average valuation of $1,500 per person that most plaintiff lawyers are putting on class settlement demands in BIPA lawsuits. Additionally, even a single instance of alleged harm involving tens of thousands of employees or customers can still amount to significant damage claims. Businesses are still well-advised to be wary before deploying any biometric collection device or mechanism in Illinois without legal advice about appropriate consent and legal compliance obligations.
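To put that top-end exposure in perspective, here is a minimal back-of-the-envelope sketch. The class size is invented for illustration; the only figure drawn from the discussion above is the approximate $1,500-per-person valuation.

```python
# Hypothetical illustration only: even a single statutory violation per
# person can produce a large aggregate demand. The class size below is
# invented; $1,500 is the approximate per-person valuation noted above.
PER_PERSON_VALUATION = 1_500
class_size = 20_000  # e.g., a large employer's workforce (hypothetical)

exposure = PER_PERSON_VALUATION * class_size
print(f"Approximate aggregate settlement demand: ${exposure:,}")
# Approximate aggregate settlement demand: $30,000,000
```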

The European Union (EU)’s government organizations are just like any other entity trying to function in a world where global companies and even government entities are reliant on digital platforms for messaging and collaboration. For years, there has been debate about how platforms like Microsoft 365, formerly Office 365, could be deployed in a way that complies with the GDPR processing and transfer restrictions. And it turns out that even the European Commission (EC) itself can apparently get it wrong. In a surprising turn of events earlier this month, the European Data Protection Supervisor (EDPS) concluded its nearly three-year investigation into the Commission’s own deployment and use of Microsoft 365, signaling a pivotal moment in the conversation about the GDPR privacy and security requirements for cloud-based messaging and document collaboration platforms.

The Catalyst for Change

The EDPS’s investigation, spurred by the well-known Irish DPC case initiated by Maximillian Schrems (C-311/18), widely referred to as the “Schrems II” case, and conducted as part of the 2022 Coordinated Enforcement Action of the European Data Protection Board (EDPB), unearthed several critical issues with the Commission’s deployment of Microsoft 365. These findings reportedly involve the EC’s failure to ensure that personal data transferred outside the EU/EEA is afforded protection equivalent to that within the EU/EEA, and a lack of specificity in contracts between the Commission and Microsoft regarding the types of personal data collected and the purposes of its collection. Those contractual terms, and the accompanying GDPR safeguards and commitments, are of course the same terms that every other global company is using with Microsoft – posted on the Microsoft website and generally not open for negotiation or discussion.

The EDPS’s Verdict

The resolution to the findings is as unprecedented as the investigation itself. The EDPS issued the EC a reprimand and imposed corrective measures demanding the suspension of all data flows from the use of Microsoft 365 to Microsoft and its affiliates and sub-processors outside the EU/EEA not covered by an adequacy decision, effective December 9, 2024. Additionally, the Commission must demonstrate its compliance with Regulation (EU) 2018/1725, specifically regarding purpose limitation, data transfers outside the EU/EEA, and unauthorized data disclosures by the same date.

This decision, the first of its kind, raises important questions about the future of data protection enforcement within the EU – and the use and deployment of any cloud-based platform like Microsoft 365 by any company established in the EU. What mechanisms will be or can be employed to ensure compliance? How will this affect the technical and logistical operations of the European Commission and potentially other EU institutions and bodies as they transition data flows to new servers?

A Roadmap for Compliance

Despite the challenges and short-term confusion this decision presents, it also offers a silver lining. The decision serves as a vital roadmap for compliance, setting a precedent for the level of transparency and security required in data processing and transfer activities. This move by the EDPS reinforces the EU’s stance on the importance of data protection, signaling to institutions, companies, and individuals alike that safeguarding personal data is paramount and non-negotiable.

The Path Forward

As we reflect on this decision, it is clear that the implications extend far beyond the confines of the European Commission and Microsoft 365. This decision serves as another wake-up call to all entities operating within the EU’s jurisdiction, emphasizing the need for stringent data protection measures and the importance of reevaluating current data handling practices. The fact that the EC itself is on the receiving end is a surprise, but plenty of other companies with operations in the EU know they are doing the exact same things that the EC just got reprimanded for doing.

Looking ahead, the decision by the EDPS is not just about compliance; it’s about setting a global standard for data protection that companies can understand and predictably follow. As we move forward, institutions and companies should take heed of the path to compliance outlined by the decision. Businesses can no longer assume things are safe by relying on the size and popularity of the Microsoft 365 ecosphere and the comfort that everyone else is doing the same thing. This decision rocked the boat for everyone. If the EC itself can get this wrong, what chance is there for the rest of us?

On Wednesday, March 20, Seyfarth attorneys Rebecca Woods, Owen Wolfe, Lauren Leipold, and Puya Partow-Navid will present, and Ken Wilton will moderate, the first session of the 2024 Commercial Litigation Outlook webinar series: Charting the Course: AI’s Influence on Legal Practice and IP Protection.

Time of the event:
1:00 p.m. to 2:00 p.m. Eastern

About the Program

Our esteemed panel of experts will explore the intricate intersections of AI technology, legal practice, and intellectual property rights. As businesses worldwide adapt to transformative advancements and evolving regulatory frameworks, this session promises invaluable insights into the future of law and IP protection amidst the AI revolution.

  • Explore the transformative impact of AI on legal practice in 2024 and beyond
  • Obtain insights into forthcoming AI regulations and their implications for businesses operating in the US and EU
  • Evaluate risk mitigation strategies to avoid potential liability when using AI platforms
  • Learn how to implement strategies for safeguarding intellectual property rights amidst advancing AI technology
  • Discuss the evolving role of legal education in preparing lawyers for an AI-enabled future and the shift towards human-centered AI

Register here.

This is the first webinar in the 2024 Commercial Litigation Outlook series. For more information on the other upcoming webinars in the series, please see below.



Part 2 – Navigating Legal Minefields: Insights on Restrictive Covenants, eDiscovery, and Privacy Compliance

Thursday, April 11, 2024
1:00 p.m. to 2:00 p.m. Eastern
12:00 p.m. to 1:00 p.m. Central
11:00 a.m. to 12:00 p.m. Mountain
10:00 a.m. to 11:00 a.m. Pacific

About the Program

The second webinar in the series will examine the regulatory landscape surrounding non-compete agreements and will also address critical aspects in the realm of eDiscovery and Privacy litigation, specifically covering the following:

  • Federal Attempts to Curb Non-Competes: Delve into the proposed FTC rule and the NLRB’s stance, analyzing their potential impacts and the legal challenges they may face.
  • State Initiatives: Uncover the latest legislative developments from states like California, Minnesota, and New York, examining how these changes could impact employers nationwide.
  • Judicial Scrutiny and Trends: Gain insights into recent court decisions regarding non-competes and confidentiality provisions, and understand their implications for businesses.
  • Regulatory Enforcement Surrounding Privacy Laws: Learn about the rising regulatory enforcement and litigation surrounding data privacy laws, including the impact of consumer awareness and state legislation on businesses.
  • Navigating the Risks of Privacy Litigation: Discover the latest developments in privacy litigation, including the surge in lawsuits related to website beacons, biometric data, and AI processing. Gain insights on compliance frameworks and preemptive risk assessments to mitigate litigation threats.
  • Advancements and Risks in eDiscovery Tools: Learn about the latest advancements in GenAI eDiscovery tools, including document summarization, subjective coding determinations, and GenAI syntax and querying. Understand the challenges and considerations of adopting GenAI in litigation, including defensible use of technology and negotiating discovery protocols.
  • Generative AI in eDiscovery Workflows: Hear about the potential of Generative AI in eDiscovery workflows to streamline your business, increase productivity, and reduce inefficiencies amidst rising regulatory enforcement and litigation surrounding data privacy laws.

Moderator:

Rebecca Woods, Partner, Seyfarth Shaw

Speakers: 

Dawn Mertineit, Partner, Seyfarth Shaw
James Yu, Senior Counsel, Seyfarth Shaw
Jason Priebe, Partner, Seyfarth Shaw
Matthew Christoff, Partner, Seyfarth Shaw


Part 3 – Commercial Litigation Outlook: Insights and Predictions for Litigation Trends in 2024

Thursday, May 2, 2024
1:00 p.m. to 2:30 p.m. Eastern

In the third session of the series, we will dive into the dynamic landscape of further litigation trends set to shape the coming year. Our panelists will provide invaluable insights and practical advice to navigate these complex legal arenas effectively. Don’t miss this opportunity to stay ahead of the curve and arm yourself with the knowledge needed to thrive in the ever-evolving world of litigation. Join us to explore trends, predictions and recommendations in the following areas:

  • Antitrust
  • Consumer Class Actions
  • ESG
  • Franchise
  • Health Care
  • Securities & Fiduciary Duty

Moderator:

Shawn Wood, Partner, Seyfarth Shaw

Speakers:

Brandon Bigelow, Partner, Seyfarth Shaw
Kristine Argentine, Partner, Seyfarth Shaw
Gina Ferrari, Partner, Seyfarth Shaw
John Skelton, Partner, Seyfarth Shaw
Jesse Coleman, Partner, Seyfarth Shaw
Greg Markel, Partner, Seyfarth Shaw


We are committed to providing you with actionable insights and strategic guidance to stay ahead in an ever-changing legal environment. Don’t miss this opportunity to gain invaluable knowledge and network with industry experts.

To register for the entire series, please click here. For more information and to access our full Commercial Litigation Outlook 2024 publication, please click here.

This blog post is co-authored by Seyfarth Shaw and The Chertoff Group and has been cross-posted with permission.

What Happened

On July 26, the U.S. Securities & Exchange Commission (SEC) adopted its Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure final rule on a 3-2 vote. The final rule is a modified version of the SEC’s earlier Notice of Proposed Rulemaking (NPRM) released in March 2022. The final rule formalizes and expands on existing interpretive guidance requiring disclosure of “material” cybersecurity incidents.


This post was originally published to Seyfarth’s Global Privacy Watch blog.

On July 10th, the European Commission issued its Implementing Decision regarding the adequacy of the EU-US Data Privacy Framework (“DPF”). The Decision has been eagerly awaited by US- and Europe-based commerce, hoping it will help businesses streamline cross-Atlantic data transfers, and by activists who have vowed to scrutinize the next framework arrangement (thereby maintaining their relevance). Regardless of the legal resiliency of the decision, it poses an interesting set of considerations for US businesses, not the least of which is whether or not to participate in the Framework.

For those who followed the development and demise of the Privacy Shield program and the Schrems II case, it has been apparent for some time that the fundamental objection of the activists and the Court of Justice of the EU (“CJEU”) to the original Privacy Shield was the perception that the US intelligence community had the ability to engage in disproportional data collection without any possibility of recourse by EU residents whose personal information may be swept into an investigation. The actual functioning of the program for the certifying businesses was much less controversial.

Since the structure of the program wasn’t the primary reason for Privacy Shield’s revocation, from a business perspective, the current DPF looks a lot like the old Privacy Shield. For businesses who made the decision to participate in the Privacy Shield program in the past, the operational burden shouldn’t be much different under the new DPF, if they have already taken steps to operationalize the requirements.

What is interesting about the new DPF is how it may impact a company’s decision to choose between the Standard Contractual Clauses (“SCCs”) and the alternative adequacy mechanism for transfers. There is also some interest vis-à-vis the DPF and its interactions with state privacy laws.

DPF v. SCCs

One of the components of the new SCCs that were adopted in 2021 (and which did not exist in the prior version of the SCCs) is the requirement for all SCCs to be accompanied by a transfer impact assessment (“TIA”).[1] A TIA is designed to assess whether there are legal barriers to the enforcement of the SCCs in the relevant importing jurisdiction – in this case, the US. Many commentators, and some courts, have applied the Schrems II reasoning to argue that use of the SCCs as a transfer mechanism to the US is not effective in all circumstances, because the Foreign Intelligence Surveillance Act (“FISA”) authorizes US intelligence to engage in bulk collection under section 702, and such programs are not proportional and lack the reasonable safeguards required under EU law.

Although the SCCs are still used to transfer European data to the US (mostly because, after Privacy Shield was invalidated, they were as a practical matter the only remaining transfer mechanism for many businesses), several commenters have taken the position that, if Schrems II is taken to its logical conclusion, any use of SCCs in the US is effectively impossible, because US companies cannot live up to their promises in the SCCs. This was noted in an expert report commissioned by the German Conference of Independent Data Protection Supervisors to assess the broad reach of FISA section 702 programs. Needless to say, companies that undertake a TIA as part of their deployment of SCCs also face some uncertainty as to its effectiveness, since a TIA is not the opinion of a supervisory authority but rather the company’s own interpretation, and that of its legal counsel – an interpretation that said expert report may cast doubt on.

The DPF is not plagued by such uncertainty. Specifically, recital 200 of the Decision expressly states the legal protections surrounding FISA programs are adequate and “essentially equivalent” to EU protections related to intelligence and national security. This is a momentous declaration, in our estimation, because as a consequence, participation by a company in the DPF seems to remove the need for a TIA as part of the transfer mechanism. Put another way, recital 200 provides binding authority for the assertion that the primary motivation for a TIA (i.e., FISA section 702 programs) is now moot, in that DPF participants have sufficient safeguards (even in light of FISA 702) regardless of undertaking a TIA. Note that this removal of the TIA requirement only works for participants in the DPF; TIAs are still required when relying on the SCCs as a transfer mechanism.
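Reduced to its simplest form, that distinction reads like a one-line decision rule. The sketch below is a loose, non-authoritative illustration of this point only; “DPF” and “SCC” are shorthand labels, and none of this is legal advice.

```python
# A loose illustration of the TIA point above; not legal advice.
# "DPF" and "SCC" are shorthand labels for the transfer mechanisms.

def tia_required(mechanism: str) -> bool:
    """Whether a transfer impact assessment is still expected."""
    if mechanism == "DPF":
        # Recital 200 treats FISA-related safeguards as adequate for
        # DPF participants, mooting the usual motivation for a TIA.
        return False
    if mechanism == "SCC":
        # The 2021 SCCs must still be accompanied by a TIA.
        return True
    raise ValueError(f"unknown transfer mechanism: {mechanism!r}")

print(tia_required("DPF"))  # False
print(tia_required("SCC"))  # True
```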

DPF v. State Law

Because the DPF establishes “essentially equivalent” controls for participants, the differences between the scope and requirements of EU privacy law and US state privacy law are brought into more apparent contrast. Looking across the two general frameworks, the differences in concepts, protective requirements, and other controls may actually motivate businesses that are already subject to the various state omnibus privacy laws to skip participation in the DPF. This is mostly because the DPF is a bit more reasonable to businesses with respect to the exercise of individual rights than some state laws.

For example, the GDPR does not require the controller to comply in full with an access request if the response would “adversely affect the rights” of others, including a business’ trade secrets or intellectual property.[2] The California Consumer Privacy Act has no such express limitation related to business’ data. That being said, there are a number of possible arguments available under other laws (trade secret, confidentiality obligations, etc.) that could justify limiting a response to an access request. However, those limitations are not express in the California law – and they are in the GDPR and the DPF.

Similarly, the principles in the GDPR and DPF allow for a denial of an access request where responding to such a request triggers an undue burden on the business. The California law’s limitation is a bit narrower than the GDPR/DPF limitation in this instance. California requires responsive disclosures to access requests unless the request is “manifestly unfounded or excessive.”[3] This standard is narrower than the DPF standard of “…where the burden or expense of providing access would be disproportionate to the risks to the individual’s privacy…”[4]

Conclusion

This lack of alignment between DPF requirements and state law may lead to operational confusion and uncertainty for US businesses interested or actively involved in the transfer of personal information from the EU. Regardless of the confusion related to the overlapping US and EU privacy laws, businesses that have previously participated in and are familiar with the Privacy Shield program may find it useful to also participate in the DPF. Additionally, for some business models, participation in the DPF can mean reduced administrative and legal costs as compared to putting in place and maintaining SCCs. However, it must be remembered that the DPF is not the same as compliance with US state privacy laws – even though some omnibus state privacy laws echo GDPR concepts. There are significant distinctions that have to be managed between the tactical implementation of a privacy program for US state law and a DPF compliance program.

Finally, even though some have committed to challenging the DPF at the CJEU, the Commission’s approval of the DPF does not call for a “wait and see” approach. It is instead a time for companies to carefully evaluate and review their transfer activities, their regulatory obligations, and the most appropriate path forward. All these years after Schrems II, it is at least nice to have a potential alternative to SCCs, under the right business conditions.


[1] Commission Implementing Decision (EU) 2021/914, Recitals 19 through 21.

[2] GDPR, Article 15(4) and Recital 63.

[3] Cal. Civ. Code 1798.145(h)(3).

[4] Commission Implementing Decision for DPF Annex I, Section III.4; 8.b, c, e; 14.e, f and 15.d.

2023 has brought several states into the privacy limelight. As of Sunday, June 18, and with the signature of Texas Governor Greg Abbott, the Texas Data Privacy and Security Act (“TDPSA”) was enacted, making the Lone Star State the tenth in the U.S. to pass a comprehensive data privacy and security law. The Act provides Texas consumers the ability to submit requests to exercise privacy rights, and extends to parents the ability to exercise rights on behalf of their minor children.

The Texas Act provides the usual complement of data subject rights relating to access, corrections, data portability, and opting out of data being processed for purposes of targeted advertising, the sale of personal information, and profiling where a consumer may be significantly or legally affected. It also requires that covered businesses provide a privacy notice and other disclosures relevant to how they use consumer data.

Application Standard

Among the ten state-level comprehensive privacy laws, the TDPSA is the first to remove processing and profit thresholds from the applicability standard. Instead of using these thresholds to determine whether an entity is covered, the TDPSA applies to persons that: (1) conduct business in Texas or produce products or services consumed by Texas residents; (2) process or engage in the sale of personal data; and (3) are not small businesses as defined by the United States Small Business Administration.[i]
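Because the three prongs are conjunctive, the applicability test can be illustrated compactly. The sketch below is a simplified, non-authoritative rendering of the standard described above; the parameter names are invented for readability and are not statutory language.

```python
# A simplified illustration of the TDPSA's three-prong applicability
# test described above. Parameter names are invented, not statutory.

def tdpsa_applies(does_business_in_texas_or_serves_texans: bool,
                  processes_or_sells_personal_data: bool,
                  is_sba_small_business: bool) -> bool:
    """All three prongs must be met for the TDPSA to apply."""
    return (does_business_in_texas_or_serves_texans
            and processes_or_sells_personal_data
            and not is_sba_small_business)

# Hypothetical example: a mid-sized retailer shipping to Texas residents
print(tdpsa_applies(True, True, False))  # True
```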

Definitions and Obligations of Controllers and Processors

The TDPSA’s definition of “sale of personal data” aligns closely with that of the California Consumer Privacy Act (“CCPA”). It refers to “the sharing, disclosing, or transferring of personal data for monetary or other valuable consideration by the controller to a third party.” The Act defines “process” as “an operation or set of operations performed,” whether manually or automatically, on Texas consumers’ personal data or sets of data. This includes the “collection, use, storage, disclosure, analysis, deletion, or modification of personal data.” Unlike the CCPA, the law exempts data processed for business-to-business and employment purposes.

Covered businesses that are data controllers must provide consumers with “a reasonably accessible and clear” privacy notice. These notices include the categories of personal data processed by the controller, the purpose of processing personal data, the categories of data shared with third parties, and methods by which consumers can exercise their rights under the law. If a controller’s use of sensitive personal data or biometric data constitutes a sale, one or both of the following must be included (a brief illustrative sketch follows the list):

  • “NOTICE: This website may sell your sensitive personal data.”
  • “NOTICE: This website may sell your biometric personal data.”
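As a rough illustration of how a site might assemble this language, consider the sketch below. The helper function and its flags are hypothetical conveniences; only the two notice strings, quoted verbatim above, come from the statute.

```python
# Hypothetical helper for surfacing the TDPSA's required notice
# language. The function and flags are invented for illustration;
# the strings are the statutory notices quoted above.

SENSITIVE_NOTICE = "NOTICE: This website may sell your sensitive personal data."
BIOMETRIC_NOTICE = "NOTICE: This website may sell your biometric personal data."

def required_notices(sells_sensitive: bool, sells_biometric: bool) -> list[str]:
    """Return the notice strings a privacy notice must include."""
    notices = []
    if sells_sensitive:
        notices.append(SENSITIVE_NOTICE)
    if sells_biometric:
        notices.append(BIOMETRIC_NOTICE)
    return notices

print(required_notices(sells_sensitive=True, sells_biometric=False))
# ['NOTICE: This website may sell your sensitive personal data.']
```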

Processors, which are akin to “service providers” under the CCPA, are those people or businesses that process personal data on behalf of the controller. Processors have a number of obligations under the TDPSA, including assisting controllers in responding to consumer rights requests and data security compliance. All processors will need to have a written data protection agreement (“DPA”) in place with a controller, which will include:

  1. “clear instructions for processing data;
  2. the nature and purpose of processing;
  3. the type of data subject to processing;
  4. the duration of processing;
  5. the rights and obligations of both parties; and
  6. a number of requirements that the processor shall follow under the agreement.”

Processors will be required to ensure confidentiality of data, and at the controller’s direction, a processor must delete or return all of the information at the conclusion of the service agreement. However, this deletion requirement excludes data that must be retained pursuant to the processor’s records retention obligations.

Processors must also certify that they will make available to the controller all information necessary to demonstrate the processor’s compliance with the requirements of this chapter, and that they will comply with a controller’s assessment of their security practices. Lastly, should a processor engage any subcontractor, it will need another written contract that meets all of the written requirements set forth by the controller’s DPA.

Yet another state-law DPA requirement raises the question whether businesses, particularly those operating at the national and multinational level, are going to need separate addenda to service agreements that recite a la carte provisions with separate definitions and commitments to comply with each state-level privacy law, as well as international data privacy laws such as the EU’s GDPR. Based on the new and upcoming privacy laws in the U.S., businesses can probably still operate using some form of uniform DPA that accounts for each of the different requirements. However, we may soon reach a point where the appendices and addenda needed to comply with separate state requirements run longer than the typical service agreement itself.

Data Protection Assessment Requirement

The TDPSA mandates that controllers conduct and document data protection assessments. These assessments, which mirror those required by the Connecticut Data Privacy Act, require businesses to identify the purposes for which they are collecting and processing data, as well as the associated risks and benefits of that processing. Businesses will need to assess these benefits and risks for the following categories of processing activities involving personal data:

  1. Processing of personal data for targeted advertising;
  2. The sale of personal data;
  3. Processing for purposes of profiling if there is a reasonably foreseeable risk of unfair or deceptive treatment, injury to consumers (including financial and reputational risk), or physical intrusion to an individual’s solitude or seclusion;
  4. Processing of sensitive data; and
  5. Any processing activities that generally may pose a heightened risk of harm to consumers.

Enforcement

The TDPSA expressly states that it does not provide a private right of action. The Texas Attorney General holds exclusive power to field consumer complaints, investigate, issue written warnings, and ultimately enforce violations of the law. The AG may seek both injunctive relief and civil penalties of up to $7,500 for each statutory violation.

The Texas AG also has the authority to investigate controllers when it has reasonable cause to believe that a business has engaged in, is engaging in, or, interestingly – is about to engage in – a violation of the TDPSA. While the Senate version suggested revising this authority to remove pre-violation (thought crime?) investigations, the language withstood that scrutiny and remained in the signed act. This was one of many changes the Senate suggested prior to the bill’s passage.

Back and Forth Between Texas Legislative Houses

As the TDPSA was drafted, a few notable revisions were made between the House and the Senate versions of the bill. However, most of the Senate’s proposed additions were not ultimately accepted. To start, the Senate added language that expressly excluded from the definition of biometric data “a physical or digital photograph or data generated from” photographs. Further, under the definition of “sensitive data,” the Senate removed the House’s language that included sexual orientation data. Notably, the sexual orientation language has since been re-added to the finalized version of the law.

The House and Senate also went back and forth on some of the other definitions in the Act, including which entities are exempt. Under the “state agency” definition, the Senate broadened the language to include any branch of state government, rather than only branches within the executive branch of government, which was one of the House versions. However, this Senate language made it into the finalized version.

For the most part, the two legislative chambers were in agreement as to which entities were exempt from the TDPSA. These include exemptions for institutions and data subject to Title V of the Gramm-Leach-Bliley Act,[ii] HIPAA[iii] and HITECH,[iv] as well as non-profits and higher education institutions. The Senate sought to add electric utilities and power companies to this list, but the finalized version kept them off the exempt list.

The Senate’s revisions also proposed language allowing consumers to authorize a designated agent to submit rights requests (similar to what we see in the CCPA), but that language was not signed into law. The TDPSA does not let anyone act on another person’s behalf to exercise their privacy rights, other than parents acting on behalf of their minor children.

Conclusion

Section 541.055(e), which provides consumers the ability to have a third-party authorized agent submit opt-out requests on their behalf, goes into effect January 1, 2025. Other than that rather narrow delayed measure, the rest of the TDPSA is set to go into effect on July 1, 2024. It will be one of the broadest-reaching privacy laws in the U.S., particularly because of its unique applicability standard. Its mandatory data protection assessments and requirement for written contracts with all processors make the law slightly less business-friendly than the rest of the state privacy laws out there. But California is still in a league of its own on that score.


[i] The SBA generally defines a small business as a privately owned enterprise with 500 or fewer employees. Depending on the industry, however, the maximum number of employees may fluctuate and in some cases may not be factored in. Some businesses are defined as “small” according to their average annual revenue.

[ii] 15 U.S.C. Section 6801 et seq.

[iii] 42 U.S.C. Section 1320d et seq.

[iv] Division A, Title XIII, and Division B, Title IV, Pub. L. No. 111-5

Seyfarth Synopsis: The U.S. District Court for the Northern District of Illinois recently denied Plaintiff’s motion to reconsider a prior dismissal of his privacy action due to untimeliness.  In a case titled Bonilla, et al. v. Ancestry.com Operations Inc., et al., No. 20-cv-7390 (N.D. Ill.), Plaintiff alleged that consumer DNA network Ancestry DNA violated the Illinois Right of Publicity Act (“IRPA”) when it uploaded his high school yearbook photo to its website.  The Court initially granted Ancestry’s motion for summary judgment, finding Plaintiff’s claims to be time-barred under the applicable one-year limitations period.  Upon reconsideration, Plaintiff – unsuccessfully – made a first-of-its-kind argument that the Court should apply the Illinois Biometric Information Privacy Act’s five-year statute of limitations to the IRPA.

Background on the Bonilla Lawsuit

Ancestry DNA, most commonly known for its at-home DNA testing kits, also maintains a robust database of various historical information and images.  One subset of this online database is the company’s “Yearbook Database.”  This portion of the website collects yearbook records from throughout the country and uploads the yearbook contents – including students’ photos – to Ancestry.com.  On June 27, 2019, Ancestry DNA uploaded the 1995 yearbook from Central High School in Omaha, Nebraska to its Yearbook Database.

More than a year later, on December 14, 2020, Plaintiff Sergio Bonilla filed a lawsuit against Ancestry DNA over its publication of the Central High School yearbook.  Specifically, Plaintiff Bonilla – a current Illinois resident and former student of Central High School whose picture appeared in Ancestry’s database – alleged that Ancestry DNA improperly publicized his private information without obtaining his consent.  Plaintiff’s lawsuit asserted violations of the IRPA, as well as a cause of action for unjust enrichment.  Ancestry DNA filed a motion for summary judgment on the basis that Plaintiff’s action was not brought within the requisite one-year limitations period.  The Court agreed, thereby dismissing Plaintiff’s claims.

Court Denies Plaintiff’s Motion for Reconsideration

After the Illinois Supreme Court’s decision in Tims v. Black Horse Carriers (which held that BIPA is subject to a five-year statute of limitations – read our full summary HERE), Plaintiff filed a motion for reconsideration, contending that the Court should actually apply a five-year limitations period to IRPA actions, like it applies to BIPA.  To that end, Plaintiff emphasized that the IRPA (similar to BIPA) does not itself contain a statute of limitations.  Plaintiff also noted that both the IRPA and BIPA derived from legislative concerns centered on Illinois residents’ right to privacy.  Therefore, according to Plaintiff, the IRPA’s legislative purpose would be best served by applying the catch-all five-year limitations period of 735 ILCS 5/13-205. 

On reconsideration, the Court again rejected Plaintiff’s argument.  The Court first outlined relevant case law precedent, under which the only courts to address this issue previously held that the IRPA’s applicable statute of limitations is one year.  See Toth-Gray v. Lamp Liter, Inc., No. 19-cv-1327, 2019 WL 3555179, at *4 (N.D. Ill. July 31, 2019); see also Blair v. Nevada Landing P’ship, 859 N.E.2d 1188, 1192 (Ill. App. Ct. 2006). 

The Court then analyzed the Tims decision, which held that, “when the law does not specify a statute of limitations, ‘the five-year limitations period applies’ unless the suit is one for ‘slander, libel or for publication of a matter violating the right of privacy.’”  Here, the Court reasoned that an IRPA action squarely falls within the last category identified by the Court in Tims, as IRPA cases necessarily involve alleged violations of a party’s right to privacy.  Finally, the Court rejected Plaintiff’s contention that Tims controls this situation, instead holding that “[u]nlike the BIPA, the IRPA protects the publication of matters related to the right of privacy and, thus, falls under the one-year statute of limitations.”

Implications for Businesses

This decision establishes a welcome pro-business standard in the Illinois privacy law context.  Notably, the Illinois Supreme Court in Tims rejected the defense bar’s argument that BIPA violations were akin to privacy rights violations and subject to the one-year statute of limitations applicable to IRPA claims.  This Ancestry.com decision holds that the converse also is not true.  It is also the first court to reject expansion of the plaintiff-friendly five-year BIPA statute of limitations to claims beyond BIPA.

Though this decision was issued by an Illinois federal court – rather than the Illinois Supreme Court, which decided the recent Tims and Cothron v. White Castle System BIPA cases – it nonetheless offers some privacy protection for Illinois businesses that post or otherwise aggregate third parties’ content or information.  We will monitor whether defendants are able to expand the Bonilla decision into other related privacy law actions, or if Illinois courts will restrict its holding to actions brought under the IRPA.

For more information about the Illinois Right of Publicity Act, the Illinois Biometric Information Privacy Act, or how this decision may affect your business, contact the authors Danielle Kays and James Nasiri, your Seyfarth attorney, or Seyfarth’s Workplace Privacy & Biometrics Practice Group.

Seyfarth Synopsis: Federal judges are requiring attorneys to attest as to whether they have used generative artificial intelligence (AI) in court filings, and if so, how and in what manner it was used. These court orders come just days after two New York attorneys filed a motion in which ChatGPT provided citations to non-existent caselaw.[i]

There are many ways to leverage AI tools across the legal industry, including identifying issues in clients’ data management practices, efficiently reviewing immense quantities of electronically stored information, and guiding case strategy, but according to U.S. District Judge Brantley Starr of the Northern District of Texas, “legal briefing is not one of them.” Last Tuesday, May 30, Judge Starr became the first judge requiring all attorneys before his court to certify whether they used generative AI to prepare filings, and if so, to confirm any such language prepared by the generative AI was validated by a human for accuracy.[ii]

Judge Starr reasoned that:

These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up—even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth). Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle. Any party believing a platform has the requisite accuracy and reliability for legal briefing may move for leave and explain why.[iii]

Critically, the failure to submit such a certification would result in the court striking the filing and potentially imposing sanctions under Rule 11.

Accordingly, the Court will strike any filing from a party who fails to file a certificate on the docket attesting that they have read the Court’s judge-specific requirements and understand that they will be held responsible under Rule 11 for the contents of any filing that they sign and submit to the Court, regardless of whether generative artificial intelligence drafted any portion of that filing. A template Certificate Regarding Judge-Specific Requirements is provided here.[iv]

Shortly thereafter, on June 2, Magistrate Judge Gabriel Fuentes of the Northern District of Illinois followed suit with a revised standing order that not only requires all parties to disclose whether they used generative AI to draft filings, but also to disclose whether they used generative AI to conduct legal research. Judge Fuentes deemed the overreliance on AI tools a threat to the mission of federal courts, and stated that “[p]arties should not assume that mere reliance on an AI tool will be presumed to constitute reasonable inquiry.”[v]

Mirroring the reasoning of Judge Starr, Judge Fuentes further highlighted courts’ longstanding presumption that Rule 11 certifications are representations “by filers, as living, breathing, thinking human beings that they themselves have read and analyzed all cited authorities to ensure that such authorities actually exist and that the filings comply with Rule 11(b)(2).”[vi] Both judges have made clear that, to represent a party properly, attorneys must be diligent in that representation, and that reliance on emerging technology, as convincing and tempting as it may be, requires validation and human involvement.

While federal courts in Texas and Illinois were first to the punch, we don’t expect other jurisdictions to be far behind with court orders mirroring those of Judge Starr and Judge Fuentes.


[i] See Mata v. Avianca, Inc., No. 22-cv-1461 (PKC), Order to Show Cause (S.D.N.Y. May 4, 2023); see also Use of ChatGPT in Federal Litigation Holds Lessons for Lawyers and Non-Lawyers Everywhere, https://www.seyfarth.com/news-insights/use-of-chatgpt-in-federal-litigation-holds-lessons-for-lawyers-and-non-lawyers-everywhere.html.

[ii] Id.

[iii] Hon. Brantley Starr, “Mandatory Certification Regarding Generative Artificial Intelligence [Standing Order],” (N.D. Tex.).

[iv] Id.

[v] Hon. Gabriel A. Fuentes, “Standing Order For Civil Cases Before Magistrate Judge Fuentes” (N.D. Ill.).

[vi] Id.



You may have recently seen press reports about lawyers who filed and submitted papers to the federal district court for the Southern District of New York that included citations to cases and decisions that, as it turned out, were wholly made up; they did not exist.  The lawyers in that case used the generative artificial intelligence (AI) program ChatGPT to perform their legal research for the court submission, but did not realize that ChatGPT had fabricated the citations and decisions.  This case should serve as a cautionary tale for individuals seeking to use AI in connection with legal research, legal questions, or other legal issues, even outside of the litigation context.

In Mata v. Avianca, Inc.,[1] the plaintiff brought tort claims against an airline for injuries allegedly sustained when one of its employees hit him with a metal serving cart.  The airline filed a motion to dismiss the case. The plaintiff’s lawyer filed an opposition to that motion that included citations to several purported court decisions in its argument. On reply, the airline asserted that a number of the court decisions cited by the plaintiff’s attorney could not be found, and appeared not to exist, while two others were cited incorrectly and, more importantly, did not say what plaintiff’s counsel claimed. The Court directed plaintiff’s counsel to submit an affidavit attaching the problematic decisions identified by the airline.

Plaintiff’s lawyer filed the affidavit as directed, stating that he could not locate one of the decisions but claiming to attach the others, with the caveat that certain of the decisions “may not be inclusive of the entire opinions but only what is made available by online database [sic].”[2]  Many of the decisions annexed to this affidavit, however, were not in the format of decisions that are published by courts on their dockets or by legal research databases such as Westlaw and LexisNexis.[3]

In response, the Court stated that “[s]ix of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations”[4], using a non-existent decision purportedly from the Eleventh Circuit Court of Appeals as a demonstrative example.  The Court stated that it contacted the Clerk of the Eleventh Circuit and was told that “there has been no such case before the Eleventh Circuit” and that the docket number shown in the plaintiff’s submission was for a different case.[5] The Court noted that “five [other] decisions submitted by plaintiff’s counsel . . . appear to be fake as well.” The Court scheduled a hearing for June 8, 2023, and demanded that plaintiff’s counsel show cause as to why he should not be sanctioned for citing “fake” cases.[6]

At that point, plaintiff’s counsel revealed what happened.[7] The lawyer who had originally submitted the papers citing the non-existent cases filed an affidavit stating that another lawyer at his firm was the one who handled the research, which the first lawyer “had no reason to doubt.” The second lawyer, who conducted the research, also submitted an affidavit in which he explained that he performed legal research using ChatGPT. The second lawyer explained that ChatGPT “provided its legal source and assured the reliability of its content.” He explained that he had never used ChatGPT for legal research before and “was unaware of the possibility that its content could be false.” The second lawyer noted that the fault was his, rather than that of the first lawyer, and that he “had no intent to deceive this Court or the defendant.” The second lawyer annexed screenshots of his chats with ChatGPT, in which the second lawyer asked whether the cases cited were real. ChatGPT responded “[y]es,” one of the cases “is a real case,” and provided the case citation. ChatGPT even reported in the screenshots that the cases could be found on Westlaw and LexisNexis.[8]

This incident provides a number of important lessons. Some are age-old lessons about double-checking your work and the work of others, and owning up to mistakes immediately. There are also a number of lessons specific to AI, however, that are applicable to lawyers and non-lawyers alike.

This case demonstrates that although ChatGPT and similar programs can provide fluent responses that appear legitimate, the information they provide can be inaccurate or wholly fabricated. In this case, the AI software made up non-existent court decisions, even using the correct case citation format and stating that the cases could be found in commercial legal research databases. Similar issues can arise in non-litigation contexts as well.  For example, a transactional lawyer drafting a contract, or a trusts and estates lawyer drafting a will, could ask AI software for common, court-approved contract or will language that, in fact, has never been used and has never been upheld by any court. A real estate lawyer could attempt to use AI software to identify the appropriate title insurance endorsements available in a particular state, only to receive a list of inapplicable or non-existent endorsements. Non-lawyers hoping to set up a limited liability company or similar business structure without hiring a lawyer could find themselves led astray by AI software as to the steps involved or the forms needed to be completed and/or filed. The list goes on and on.

The case also underscores the need to take care in how questions to AI software are phrased. Here, one of the questions asked by the lawyer was simply “Are the other cases you provided fake?”[9] Asking questions with greater specificity – for example, requesting the full citation, the issuing court, and the docket number so a case can be located independently – could provide users with the tools needed to double-check the information from other sources, but even the most artful prompt cannot change the fact that the AI’s response may be inaccurate. That said, there are also many potential benefits to using AI in connection with legal work, if used correctly and cautiously. Among other things, AI can assist in sifting through voluminous data and drafting portions of legal documents.  But human supervision and review remain critical.

ChatGPT frequently warns users who ask legal questions that they should consult a lawyer, and it does so for good reason. AI software is a powerful and potentially revolutionary tool, but it has not yet reached the point where it can be relied upon for legal questions, whether in litigation, transactional work, or other legal contexts. Individuals who use AI software, whether lawyers or non-lawyers, should use the software understanding its limitations and realizing that they cannot rely solely on the AI software’s output.  Any output generated by AI software should be double-checked and verified through independent sources. When used correctly, however, it has the potential to assist lawyers and non-lawyers alike.


[1] Case No. 22-cv-1461 (S.D.N.Y.).

[2] Id. at Dkt. No. 29. 

[3] Id.

[4] Id. at Dkt. No. 31. 

[5] Id.

[6] Id.

[7] Id. at Dkt. No. 32.

[8] Id.

[9] Id.