Now in its sixth year, Seyfarth’s Commercial Litigation Outlook provides a clear view into the forces reshaping business disputes in 2026. This year’s analysis highlights a risk landscape defined by accelerating technological change, an increasingly fragmented regulatory environment, and growing economic pressures across multiple industries.

According to the Outlook, artificial intelligence is creating new categories of legal risk, from the challenges of authenticating AI‑generated content to navigating the use of algorithmic tools while courts and regulators rapidly reset expectations around emerging technology. At the same time, state‑level regulation continues to expand, particularly around non‑competes, privacy, and biometrics, creating a compliance patchwork that requires businesses to adapt strategies by jurisdiction. Coupled with elevated interest rates, rising debt, and post‑pandemic strain, especially in real estate, health care, and franchise sectors, the commercial litigation environment remains fluid, fast‑moving, and resistant to neat predictions. Against this backdrop, eDiscovery, information governance, and cybersecurity response functions play increasingly central roles in managing litigation risk and staying ahead of shifting expectations.


Authored by Jay Carle, Matthew Christoff, and Danny Riley, this year’s eDiscovery & Innovation article spotlights one of the most significant and fast‑moving risks in the discovery landscape: the rise of AI‑enabled notetaking and meeting‑summarization tools. As generative AI capabilities become embedded directly into videoconferencing platforms, these tools now routinely record meetings, create transcripts with speaker attribution, and auto‑generate summaries—often by default. The result is a sudden proliferation of new, unvetted records that can capture sensitive, strategic, or privileged conversations. The article warns that these tools exponentially increase the risk of inadvertent disclosure, while also creating evidentiary challenges when transcripts or summaries are later used to establish what was said, by whom, and with what intent.

The article also highlights that litigation risk is expanding beyond the developers of these tools to the organizations deploying them. AI notetakers raise overlapping consent, privacy, wiretap, and biometric concerns, and courts will increasingly scrutinize whether companies can demonstrate how meeting data was captured, stored, and controlled. As with prior waves of privacy litigation, the differentiator will be operational discipline: organizations that implement clear governance around meeting recording, restrict distribution of AI‑generated outputs, and define authoritative versions of records will be far better positioned to defend against disclosure missteps, authenticity disputes, and statutory claims.

Click here to download the 2026 Commercial Litigation Outlook.

Continue Reading The Changing Discovery Landscape: Takeaways from Seyfarth’s 2026 Commercial Litigation Outlook

When Judge Jed Rakoff ruled in United States v. Heppner (S.D.N.Y. Feb. 17, 2026)  that documents a criminal defendant created through exchanges with Anthropic’s Claude platform weren’t protected by attorney-client privilege or the work product doctrine, the decision generated significant attention across the legal community. Many practitioners read that ruling as a sweeping statement: using AI tools waives privilege. While great for headlines, that is an overstatement of what Heppner actually holds, and the Warner case, which was decided a week earlier in the Eastern District of Michigan, shows why the distinction matters.

The Heppner Decision: Narrower Than It Appears
In Heppner, the trial judge ruled that documents a criminal defendant created through his own exchanges with Anthropic’s Claude platform and sent to his attorney afterwards were protected by neither attorney-client privilege nor the work product doctrine. The ruling rested on several specific facts. Heppner used a public consumer AI tool that explicitly disclaims providing legal advice and whose privacy policy authorizes data collection, model training, and disclosure to third parties including government authorities. He did so on his own initiative, without direction from his counsel. And the government had already seized the documents pursuant to a search warrant before the privilege question even arose.

On privilege, the court identified three independent deficiencies: Claude is not a lawyer, so there was no attorney-client communication; the platform’s terms defeated any reasonable expectation of confidentiality; and Heppner’s purpose was not to obtain legal advice from Claude, which disclaims that capacity. On work product, the court found the documents were not prepared by or at the direction of counsel and did not reflect counsel’s strategy. Judge Rakoff noted the analysis might differ if counsel had directed the AI use because the platform could then arguably function as an agent of counsel.

Most importantly, Heppner doesn’t hold that using AI tools automatically waives privilege. It holds that a non-lawyer querying a public AI tool that is not a lawyer and offers no confidentiality never satisfies the foundational requirements for attorney-client privilege in the first place. Privilege requires a confidential communication with a lawyer for the purpose of obtaining legal advice. Heppner is important and worthy of attention, but it is not the final word on lawyers (and those acting at the direction of lawyers) and the content of AI prompts and results; there is still much to analyze on an application-by-application, case-by-case basis. The bottom line, though, is that if a party or witness is talking to a machine and not a lawyer, the privilege analysis doesn’t even get off the ground.

Warner: The Civil Counterweight
Look back one week. In Warner, a federal magistrate judge reached a different result in a civil case. A pro se party had used ChatGPT to prepare legal briefs in anticipation of litigation. When opposing counsel sought discovery of those materials, the court denied the request, holding the materials were not discoverable work product under Rule 26(b)(3) and independently not relevant or proportional under Rule 26(b)(1). Critically, the court also held that using AI didn’t waive work product protection, because AI tools are “tools, not persons,” and waiver requires disclosure to an adversary or in a way likely to reach one – a standard that AI use alone doesn’t meet. The court didn’t mince words with defense counsel either, stating that their “preoccupation with Plaintiff’s use of AI needs to abate” and agreeing with the plaintiff that the request was a “fishing expedition” that, if endorsed, “would nullify work-product protection in nearly every modern drafting environment, a result no court has endorsed.”

One key difference here involves civil procedure vs. criminal procedure rules. Rule 26(b)(3) protects materials prepared in anticipation of litigation by a party or its representative – it doesn’t require that a lawyer prepare the materials, only that they were created in anticipation of litigation. The pro se litigant’s use of AI fell squarely within that protection, and the court saw no reason to treat AI-assisted drafting differently from any other tool a litigant might use to prepare her case.

The Real Distinction: It’s Not the AI, It’s How You Use It
This is the critical point most commentary misses. Heppner and Warner reach opposite conclusions not because one case says AI can never be privileged while the other says it always is. They reach opposite conclusions because of the specific circumstances in which the AI tools were used and the materials were sought. In Heppner, a represented defendant used a public AI platform on his own initiative, without counsel’s direction, through a service whose terms disclaimed both legal advice and confidentiality. Those materials were then seized by the FBI pursuant to a search warrant. In Warner, a pro se litigant used AI as part of her own litigation preparation, and opposing counsel tried to compel production through a discovery request.

Mr. Heppner’s computer and AI activity information had already been seized and was in the hands of the government, while Ms. Warner was resisting a written discovery request for information in her possession, custody, or control. The procedural context matters enormously, and lawyers discussing AI privilege need to know the circumstances under which the materials were created and how they ended up in dispute.

Extrapolating from Warner: Lawyers Using AI Tools
If a pro se party’s use of ChatGPT to prepare litigation materials qualified for work product protection in a civil case, the same logic should apply – and arguably applies even more strongly – when a lawyer uses AI tools. A lawyer directing the use of an AI tool as part of a legal representation exercises more deliberation and control than a pro se litigant. As long as the materials are created in anticipation of litigation and not disclosed to an adversary, they should receive the same protection Warner afforded.

The use of an AI tool itself doesn’t waive privilege or work product protection. What matters is whether the materials are created in anticipation of litigation and kept confidential. This is where practitioners need to focus, because waiver is a genuine concern.

The Real Risk: Public and Commercial AI Tools
There is genuine waiver exposure when using public or commercial-level AI tools. That’s because, as the Heppner decision took pains to explain, these platforms’ terms make clear that user information is neither private nor secured, and users have no guarantee of confidentiality. Essentially, when you input confidential client information into ChatGPT or similar consumer tools, you are disclosing that information to a third party without any contractual protection or confidentiality agreement. If that information is later exposed through a data breach, logging, or litigation (like the ongoing OpenAI class action litigation in New York, which has resulted in preservation obligations covering massive volumes of ChatGPT prompts and results for millions of users), you’ve potentially waived privilege through disclosure, not through the mere act of using an AI tool.

The distinction is crucial: using an AI tool doesn’t waive privilege. Disclosing confidential client information through an unsecured channel does.

Practical Implications
Lawyers and businesses using AI in their practice should focus on:

  1. Using enterprise AI tools or tools with explicit confidentiality agreements rather than public consumer tools.
  2. Implementing siloed or secure instances where AI interactions involving legal matters are segregated from general business operations.
  3. Directing AI use through counsel when AI is part of the litigation workflow, and maintaining clear documentation that materials were created in anticipation of litigation, especially in civil matters where work product protections are broader.
  4. Not assuming that sharing AI outputs with counsel after the fact creates privilege. Heppner held that non-privileged materials don’t become privileged merely because they are later shared with an attorney. The time to protect information is before it enters the AI platform, not after.
  5. Avoiding disclosure of confidential client information to public AI platforms where you cannot control downstream use or exposure.
  6. Updating AI governance and acceptable use policies to specify which platforms are approved, what information may be entered, and what protocols apply when AI-generated materials touch on litigation, investigations, or regulatory matters; a hypothetical sketch of such a policy appears below.
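Taking that last point a step further, an acceptable use policy is easier to enforce and audit when its key terms are captured in machine-readable form rather than only in a PDF. The snippet below is a purely hypothetical sketch in Python: the platform names, data categories, and the is_permitted helper are illustrative assumptions, not Seyfarth guidance or any particular vendor's terms.

```python
# Hypothetical illustration only: a machine-readable acceptable use policy for
# AI tools that could back an intake form, a network proxy rule, or an audit
# script. Platform names, data categories, and the helper are placeholders.
AI_USE_POLICY = {
    "approved_platforms": {
        # Enterprise tool with a confidentiality agreement; vendor does not train on inputs.
        "enterprise-llm": {"confidentiality_agreement": True, "trains_on_inputs": False},
        # Public consumer tool; no confidentiality terms, inputs may be used for training.
        "public-chatbot": {"confidentiality_agreement": False, "trains_on_inputs": True},
    },
    "sensitive_categories": {"client_confidential", "privileged", "pii", "trade_secret"},
    "litigation_protocol": "counsel_directed",  # AI use in litigation must be directed by counsel
}

def is_permitted(platform: str, data_category: str) -> bool:
    """Return True if the policy allows entering this category of data into the platform."""
    terms = AI_USE_POLICY["approved_platforms"].get(platform)
    if terms is None:
        return False  # unapproved platform: never permitted
    if data_category in AI_USE_POLICY["sensitive_categories"]:
        # Sensitive data only where the vendor contractually protects confidentiality
        # and does not train on user inputs.
        return terms["confidentiality_agreement"] and not terms["trains_on_inputs"]
    return True

# Examples: privileged material into a public consumer tool is not permitted,
# but the same material into the contracted enterprise tool is.
assert is_permitted("public-chatbot", "privileged") is False
assert is_permitted("enterprise-llm", "privileged") is True
assert is_permitted("unknown-tool", "marketing_copy") is False
```

A structure like this can sit behind an intake questionnaire or a request gateway so the policy is applied consistently rather than left to individual judgment.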

The one undeniable takeaway from both decisions is that AI prompts and results are ESI, and therefore subject to preservation, civil discovery, criminal search, and subpoena production.

Neither case ends the conversation about whether AI use is categorically safe or unsafe for privilege. The lesson is that the privilege analysis turns on the same factors it always has: whether there is a confidential communication with a lawyer for the purpose of obtaining legal advice, whether materials are created in anticipation of litigation, and whether confidentiality is maintained. The AI tool itself is neutral, and AI is not a lawyer – it is a powerful technology, but it is still a technology application like Westlaw or Google or an email or text messaging platform. How you use it, who is using it, and why determine whether privilege applies. Then, assuming the content IS privileged, the efforts you take to secure it from publication or disclosure determine whether that privilege is waived.

Introduction

Robotics and artificial intelligence are converging at an unprecedented pace. As robotics systems increasingly integrate AI-driven decision-making, businesses are unlocking new efficiencies and capabilities across industries from manufacturing and logistics to healthcare and real estate.

Yet this convergence introduces complex legal and regulatory challenges. Companies deploying AI-enabled robotics must navigate issues related to data privacy, intellectual property, workplace safety, liability, and compliance with emerging AI governance frameworks.

The Shift: Robotics as an AI Subset

Traditionally, robotics was viewed as a standalone discipline focused on mechanical automation. Today, robotics is increasingly powered by machine learning algorithms, natural language processing, and predictive analytics—hallmarks of AI technology.

This evolution raises critical questions for legal teams:

  • Who owns the data generated by AI-enabled robots?
  • How do we allocate liability when autonomous systems make decisions without human intervention?
  • What contractual safeguards should be in place when outsourcing robotics solutions to third-party vendors?

As robotics increasingly incorporates AI functionality, traditional contract structures for hardware procurement and service agreements require significant updates. This evolution introduces new risk categories that must be addressed through precise drafting and negotiation.

Continue Reading The AI-Driven Evolution of Robotics

On Friday, October 17, 2025, U.S. District Court Judge Vince Chhabria issued a biting Order granting defendant Eating Recovery Center, LLC’s (“ERC”) motion for summary judgment on plaintiff Jane Doe’s claims under the California Invasion of Privacy Act (CIPA), a law enacted in 1967 to address the increasing use of wiretapping to eavesdrop on private phone conversations. In particular, Judge Chhabria found it “undisputed” that the alleged Meta Pixel did not read, attempt to read, or attempt to learn the contents of Doe’s communications with ERC while those communications were in transit, as the statute requires, and thus Doe’s CIPA claims failed.

More notable were Judge Chhabria’s thoughts on the state of recent plaintiffs’ attempts to apply CIPA’s “already obtuse language” to website activity and online technologies. Calling the statute “a total mess,” Judge Chhabria opined that it “was a mess from the get-go, but the mess gets bigger and bigger as the world continues to change.” As a result, courts are now faced with the “borderline impossible” task of determining whether website operators’ conduct falls within the ambit of the CIPA statute.

He further noted that the CIPA language at issue is “ambiguous,” acknowledging that there is at least one interpretation under which ERC’s alleged online conduct violates CIPA. However, because CIPA is a criminal statute imposing criminal liability and punitive civil penalties, the “Rule of Lenity” applies, even when invoked in a civil action. Under the Rule of Lenity, courts must narrowly construe civil statutes that impose punitive civil penalties, and that narrower interpretation does not cover ERC’s alleged conduct.

In his final call to action, Judge Chhabria called on the California Legislature to “step up” and “bring CIPA into [the] modern age” by addressing whether such online activity should be covered by the statute. California courts are consistently issuing conflicting rulings in CIPA cases, which leaves businesses and practitioners equally confused. Judge Chhabria urged the Legislature not merely to go back to the drawing board, but to “erase the board entirely and start writing something new.”

Senate Bill 690, which failed to advance out of committee in the California State Assembly, would not have erased the drawing board entirely, but it did attempt to clarify that CIPA would not apply to technologies used for “a commercial business purpose.” The bill unanimously passed the Senate in June 2025; however, having stalled in the Assembly, it will not move forward until 2026 at the earliest (if at all).

Key Considerations

With the ongoing uncertainty surrounding CIPA exposure, companies should give careful thought to their cookie banner and consent management practices, including conducting regular testing to confirm that those tools operate as intended.
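As a practical matter, “regular testing” of a consent management setup often means confirming that no tracking requests fire and no third-party cookies are set before a visitor interacts with the banner. The sketch below is one minimal, hypothetical way to automate that check, assuming the Playwright browser-automation library; the site URL and tracker domain list are placeholders to adapt to your own site and tag inventory, and passing this check is not, by itself, a determination of CIPA or CCPA compliance.

```python
# Minimal, hypothetical pre-consent check (not legal advice): load the page,
# do NOT accept the cookie banner, and record any requests or cookies tied to
# known tracker domains. Requires `pip install playwright` and
# `playwright install chromium`; the URL and domains are placeholders.
from playwright.sync_api import sync_playwright

SITE_URL = "https://www.example.com"  # placeholder
TRACKER_DOMAINS = ("doubleclick.net", "facebook.com", "google-analytics.com")

def pre_consent_tracker_activity(url: str) -> list[str]:
    findings: list[str] = []
    with sync_playwright() as p:
        browser = p.chromium.launch()
        context = browser.new_context()
        page = context.new_page()
        # Log every outbound request that targets a known tracker domain.
        page.on(
            "request",
            lambda req: findings.append(f"request: {req.url}")
            if any(d in req.url for d in TRACKER_DOMAINS)
            else None,
        )
        page.goto(url, wait_until="networkidle")
        # Deliberately skip the banner: we only care about the pre-consent state.
        for cookie in context.cookies():
            if any(d in cookie.get("domain", "") for d in TRACKER_DOMAINS):
                findings.append(f"cookie: {cookie['name']} ({cookie['domain']})")
        browser.close()
    return findings

if __name__ == "__main__":
    hits = pre_consent_tracker_activity(SITE_URL)
    if hits:
        print("Pre-consent tracker activity detected:")
        for hit in hits:
            print(" -", hit)
    else:
        print("No pre-consent tracker activity detected.")
```

Run on a schedule or after site changes, this kind of check catches the most common drift: a new tag added outside the consent manager that quietly fires before consent.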

If you have any questions about this post, please contact the authors or another member of the Firm’s DATA Law practice. 

On July 24, 2025, the California Privacy Protection Agency (“CPPA”) unanimously voted to adopt a package of Proposed Regulations for the California Consumer Privacy Act (“CCPA”), marking a significant development in California privacy law. These cover Automated Decision-making Technology (“ADMT”), mandatory Cybersecurity Audits, Risk Assessments, and clarifications for the CCPA’s applicability to Insurance Companies. The package will move into its final review stage before formal enactment, once filed with the California Office of Administrative Law.

CCPA Steering Toward Operational Compliance

This is a clear signal that privacy compliance expectations in California are trending toward a more operational phase. The new rules are designed to give Californians greater control over how their personal information is used while pushing businesses toward higher levels of transparency and accountability, especially when automated decision-making and high-risk data processing are involved. For companies, this is more than just a theoretical update – it’s a clarion call to ensure these requirements are built into day-to-day governance, technology and process design, and vendor management practices.

Continue Reading California Privacy Protection Agency (CPPA) Finally Voted to Adopt Much Debated Update to CCPA Regulations: What Your Business Should Know

The UK’s Data (Use and Access) Act received Royal Assent last Thursday, June 19th, bringing into law some significant changes to the country’s post-Brexit data protection framework, among an array of other, related rules (on matters ranging from financial conduct to smart meters and “underground assets,” which is more to do with pipes than spies, unfortunately). The Act is more of a selective nip and tuck than a complete makeover, intended to foster innovation by reducing and simplifying compliance burdens while retaining the core principles and safeguards of UK GDPR and related regulations.

Implementation will be phased. If you read no further, the main takeaway is that it will be important to watch for further developments, as most of the changes do not come into force until there is further implementing rulemaking.

This week (June 24th), the European Commission officially extended its “adequacy decision” for the UK until 27 December 2025, as previously promised, in order to allow the Commission to carry out its assessment of the adequacy of the new framework. Given that further extension (to ensure continued free data flows between the EU and UK) necessarily depends on some parity between the rules in place in both markets, it’s nice to see both sides playing nicely together. Without renewal, there will be additional burdens for businesses that transfer personal data from the EU to the UK, including those that are headquartered in a third country like the US.

We round up some of the tweaks below:

  1. One Point Companies Should Immediately Evaluate: Complaints Handling. The Act specifies that controllers must facilitate complaints “by taking steps such as providing a complaint form which can be completed electronically and by other means.” Controllers must also acknowledge complaints within 30 days and act on them without undue delay. The Act also contemplates that controllers may later be required to notify the regulator of the number of complaints received in a given period.
  2. A new Trust Framework for digital verification services (DVS) is to be implemented. Although this is yet to be formalized, it will result in new enhanced rules to replace the current voluntary Digital Identity and Attributes Trust Framework overseen by the Department for Science, Innovation and Technology. A publicly available register of compliant DVS providers will be set up and a trust mark will be introduced to help users identify certified and trustworthy digital identity providers. Registered providers will be able to directly verify personal information with public authorities via an “information gateway.” For DVS providers, there will be some additional work required to get registered and stay compliant. For companies that want to utilize DVS providers, however, this will eventually be a welcome streamlining of certain verification processes, such as KYC, age verification and employer right to work checks, particularly when contrasted with undertaking these processes in-house. Happily, there is also recognition of overseas electronic signatures (provided certain criteria are met) which should help with related friction in international transacting (e.g., for overseas companies utilizing overseas signature products) – although globally speaking, the UK has always been relatively sensible on this front.
  3. Some additional welcome clarity and flexibility for essential aspects of the UK GDPR, including:
    • Introduction of a New Lawful Basis: “Recognised Legitimate Interests.” This will be significant for some specific use cases (e.g., detecting, investigating and preventing crime), because this basis does not require the controller to balance the legitimate interests being relied on by the controller against the interests of the data subject whose personal data is being used, if such legitimate interests are “recognized” at law.
    • New Examples of the Ever-Nebulous “Legitimate Interests”: including direct marketing and intra-group transmission of personal data of clients, employees or others where necessary for internal administrative purposes or for ensuring the security of network and information systems – which are particularly helpful for US multinationals where business processes and decision-making are heavily matrixed or centralized.
    • Flexibility as to Seeking Consent for Scientific Research Purposes: Data subjects can give broad consent, and organizations may not need to provide additional privacy notices or seek additional consent for the additional processing purpose of scientific research (any research that can reasonably be described as scientific, whether publicly or privately funded and whether carried out as a commercial or non-commercial activity). We can expect this to be a favorite of businesses engaging in any kind of data-heavy R&D.
    • Permitting Use of Tracking Technologies and Cookies without Consent: Consent is not required where strictly necessary to protect information related to the services requested, ensure security of the user terminal, prevent or detect fraud or technical faults and to enable automatic authentication of the user’s identity or maintain records of selections made or information provided by the user on the website. Note that fines related to unauthorized direct marketing activities have been increased to UK GDPR levels (from the relatively more modest levels set by PECR).
    • Increased Clarity with Regard to Automated Decision-Making (ADM): The Act provides for rules to clarify what activity is regulated as ADM (e.g., it defines a decision “based solely on automated processing” as one where there is no meaningful human involvement, etc.) and arguably lifts some limitations for business relying on such decisions (e.g., in AI applications and algorithmic processing).
    • Clarity as to Extent of Search Required in Response to a DSAR. The Act clarifies that the data subject is only entitled to information the controller is able to provide based on a reasonable and proportionate search. This was not previously addressed, leading to frequent consternation among data controllers.
    • Increased Clarity as to the Existing Requirements for Transfers of Personal Data to Third Countries.

There are a few points of less clarity as well. Notably, with regard to:

  1. Artificial Intelligence (AI). The Secretary of State has nine months to publish a Report on the Use of Copyright Works in AI Systems. We remain on tenterhooks.
  2. Access to and Portability of Customer and Business Data / Smart Data Schemes. The Secretary of State has been given authority to regulate access to and provision of customer and business data, including provision to third-party recipients, whether through standardized APIs or other means, in line with broader UK GDPR principles but with arguably broader coverage than under the corollary EU regulation that becomes applicable in the EU later this year (the EU Data Act). We will have to wait and see what these rules will actually look like.

Connect with your Seyfarth lawyer or a member of our global privacy team for guidance on these developments tailored to your business needs.

On June 3, 2025, the California Senate unanimously passed Senate Bill 690 (SB 690), a bill that seeks to add a “commercial business purposes” exception to the California Invasion of Privacy Act (CIPA).

After multiple readings on the Senate floor, SB 690 passed as amended, and will now proceed to the California State Assembly. SB 690, as originally drafted, was explicitly made retroactive to any cases pending as of January 1, 2026.  The most recent amendments on the Senate floor remove the retroactivity provisions, meaning the bill, if passed by the Assembly and signed by the Governor, will only apply prospectively.  The amendments to remove the retroactive provisions of SB 690 are not unexpected. Retroactive application provisions are traditionally frowned upon by the California legislature and may offend due process principles.

If passed, SB 690 would exempt the use of certain online tracking technologies from violating CIPA, provided they are used for a “commercial business purpose” and comply with existing privacy laws like the California Consumer Privacy Act (CCPA).  SB 690 could significantly impact prospective litigation under CIPA for online business activities.  Indeed, there may be the proverbial “rush to the courthouse” if plaintiffs and plaintiffs’ attorneys begin to feel that passage of SB 690 is forthcoming or likely, now that the bill will proceed to the State Assembly.

Businesses may want to consider engaging their government relations teams or contacting members of the California State Assembly to express their positions on the bill as it now passes to the other chamber of the California legislature.

On May 19, 2025, the California Senate Appropriations Committee, which handles budgetary and financial matters, held a hearing on California Senate Bill 690 (SB 690).  The proposed bill would amend the California Invasion of Privacy Act (CIPA) by adding an exception to the statute which has the effect of permitting use of tracking technologies for “commercial business purposes.”

The Appropriations Committee referred SB 690 to the Suspense File. Generally, if the cost of a bill meets certain fiscal thresholds, the Appropriations Committee will refer the bill to the Suspense File. Having met that threshold, SB 690 will now proceed to a vote-only Suspense Hearing to be held on May 23, 2025. No testimony will be heard during the May 23, 2025 hearing. SB 690 will then either move on to the Senate Floor or be held in committee. While referral to the Suspense File is not necessarily a death knell for SB 690, statistics show that a number of bills die quietly in the Suspense Hearing due, in part, to its non-public process.

If passed, SB 690 would exempt the use of such online tracking technologies from violating CIPA, provided they are used for a “commercial business purpose” and comply with existing privacy laws like the California Consumer Privacy Act (CCPA).  SB 690 could significantly impact current litigation under CIPA for online business activities. Not only will plaintiffs be far less likely to file new lawsuits alleging violations of CIPA, but SB 690’s provisions are explicitly made retroactive to any cases pending as of January 1, 2026, which could lead to dismissals of ongoing lawsuits, as well.

Businesses may want to consider engaging their government relations teams or contacting members of the Senate Appropriations Committee to express their positions on the bill. 

This post was originally published to Seyfarth’s Global Privacy Watch blog.

California Senate Bill 690 (SB 690), introduced by Senator Anna Caballero, is continuing to proceed through the California state legislative process. The proposed bill would amend the California Invasion of Privacy Act (CIPA) by adding an exception to the statute that has the effect of permitting use of tracking technologies for “commercial business purposes.” CIPA, enacted in 1967, was originally established to prohibit the unauthorized recording of or eavesdropping on confidential communications, including telephone calls and other forms of electronic communication. Over recent years, however, CIPA claims have been used to target businesses’ online use of cookies, pixels, trackers, chatbots, and session replay tools on their websites.

If passed, SB 690 would exempt the use of such online tracking technologies from violating CIPA, provided they are used for a “commercial business purpose” and comply with existing privacy laws like the California Consumer Privacy Act (CCPA).  SB 690 could significantly impact current litigation under CIPA for online business activities. Not only will plaintiffs be far less likely to file new lawsuits alleging violations of CIPA, but SB 690’s provisions are explicitly made retroactive to any cases pending as of January 1, 2026, which could lead to dismissals of ongoing lawsuits, as well.

On April 29, 2025, the Senate Public Safety Committee unanimously voted to advance SB 690, and it was subsequently re-referred to the Senate Appropriations Committee. A hearing before the Appropriations Committee is currently scheduled for May 19, 2025.

Seyfarth Shaw is proud to sponsor the 2025 Masters Conference, a premier boutique legal event hosted in cities across the U.S., as well as in Toronto and London. The conference will be held on Tuesday, May 20, 2025, at Seyfarth’s Chicago office and will feature keynote presentations, panel discussions, workshops, and networking opportunities.

Topics will include eDiscovery, Artificial Intelligence, Information and Data Governance, Legal Project Management, Forensics and Investigations, Knowledge Management, and Cybersecurity.

Seyfarth partners Jay Carle, Matthew Christoff, and Jason Priebe will share their insights as featured panelists throughout the day. Additional information about their panel topics is outlined below.

For more information and to register, click here.

Continue Reading Seyfarth to Sponsor and Present at 2025 Masters Conference