The European Union (EU)’s government organizations are just like any other entity trying to function in a world where global companies and even government entities are reliant on digital platforms for messaging and collaboration. For years, there has been debate about how platforms like Microsoft 365, formerly Office 365, could be deployed in a way that complies with the GDPR processing and transfer restrictions. And it turns out that even the European Commission (EC) itself can apparently get it wrong. In a surprising turn of events earlier this month, the European Data Protection Supervisor (EDPS) concluded its nearly three-year investigation into the Commission’s own deployment and use of Microsoft 365, signaling a pivotal moment in the conversation about the GDPR privacy and security requirements for cloud-based messaging and document collaboration platforms.

The Catalyst for Change

The EDPS’s investigation, spurred by the well-known Irish DPC case initiated by Maximillian Schrems (C-311/18), widely referred to as the “Schrems II” case, and undertaken as part of the 2022 Coordinated Enforcement Action of the European Data Protection Board (EDPB), unearthed several critical issues with the Commission’s deployment of Microsoft 365. These findings reportedly involve the EC’s failure to ensure that personal data transferred outside the EU/EEA is afforded protection equivalent to that within the EU/EEA, as well as a lack of specificity in contracts between the Commission and Microsoft regarding the types of personal data collected and the purposes of its collection. Those contractual terms, and the accompanying GDPR safeguards and commitments, are of course the same terms that every other global company uses with Microsoft – posted on the Microsoft website and generally not open for negotiation or discussion.

The EDPS’s Verdict

The resolution to the findings is as unprecedented as the investigation itself. The EDPS issued the EC a reprimand and imposed corrective measures demanding the suspension of all data flows from the use of Microsoft 365 to Microsoft and its affiliates and sub-processors outside the EU/EEA not covered by an adequacy decision, effective December 9, 2024. Additionally, the Commission must demonstrate its compliance with Regulation (EU) 2018/1725, specifically regarding purpose limitation, data transfers outside the EU/EEA, and unauthorized data disclosures by the same date.

This decision, the first of its kind, raises important questions about the future of data protection enforcement within the EU – and the use and deployment of any cloud-based platform like Microsoft 365 by any company established in the EU. What mechanisms will be or can be employed to ensure compliance? How will this affect the technical and logistical operations of the European Commission and potentially other EU institutions and bodies as they transition data flows to new servers?

A Roadmap for Compliance

Despite the challenges and short-term confusion this decision presents, it also offers a silver lining. The decision serves as a vital roadmap for compliance, setting a precedent for the level of transparency and security required in data processing and transfer activities. This move by the EDPS reinforces the EU’s stance on the importance of data protection, signaling to institutions, companies, and individuals alike that safeguarding personal data is paramount and non-negotiable.

The Path Forward

As we reflect on this decision, it is clear that the implications extend far beyond the confines of the European Commission and Microsoft 365. This decision serves as another wake-up call to all entities operating within the EU’s jurisdiction, emphasizing the need for stringent data protection measures and the importance of reevaluating current data handling practices. The fact that the EC itself is on the receiving end is a surprise, but plenty of other companies with operations in the EU know they are doing the exact same things that the EC just got reprimanded for doing.

Looking ahead, the decision by the EDPS is not just about compliance; it’s about setting a global standard for data protection that companies can understand and predictably follow. As we move forward, institutions and companies should take heed of the path to compliance outlined by the decision. Businesses can no longer assume things are safe by relying on the size and popularity of the Microsoft 365 ecosystem, and the comfort that everyone else is doing the same thing. This decision rocked the boat for everyone. If the EC itself can get this wrong, what chance is there for the rest of us?

On Wednesday, March 20, Seyfarth attorneys Rebecca Woods, Owen Wolfe, Lauren Leipold, and Puya Partow-Navid will present, and Ken Wilton will moderate, the first session of the 2024 Commercial Litigation Outlook webinar series: Charting the Course: AI’s Influence on Legal Practice and IP Protection.

Time of the event:
1:00 p.m. to 2:00 p.m. Eastern

About the Program

Our esteemed panel of experts will explore the intricate intersections of AI technology, legal practice, and intellectual property rights. As businesses worldwide adapt to transformative advancements and evolving regulatory frameworks, this session promises invaluable insights into the future of law and IP protection amidst the AI revolution.

  • Explore the transformative impact of AI on legal practice in 2024 and beyond
  • Obtain insights into forthcoming AI regulations and their implications for businesses operating in the US and EU
  • Evaluate risk mitigation strategies to avoid potential liability when using AI platforms
  • Learn how to implement strategies for safeguarding intellectual property rights amidst advancing AI technology
  • Discuss the evolving role of legal education in preparing lawyers for an AI-enabled future and the shift towards human-centered AI

Register here.

This is the first webinar as part of the 2024 Commercial Litigation Outlook series. For more information on the other upcoming webinars in the series, please see below.



Part 2 – Navigating Legal Minefields: Insights on Restrictive Covenants, eDiscovery, and Privacy Compliance

Thursday, April 11, 2024
1:00 p.m. to 2:00 p.m. Eastern
12:00 p.m. to 1:00 p.m. Central
11:00 a.m. to 12:00 p.m. Mountain
10:00 a.m. to 11:00 a.m. Pacific

About the Program

The second webinar in the series will examine the regulatory landscape surrounding non-compete agreements and will also address critical aspects of eDiscovery and privacy litigation, specifically covering the following:

  • Federal Attempts to Curb Non-Competes: Delve into the proposed FTC rule and the NLRB’s stance, analyzing their potential impacts and the legal challenges they may face.
  • State Initiatives: Uncover the latest legislative developments from states like California, Minnesota, and New York, examining how these changes could impact employers nationwide.
  • Judicial Scrutiny and Trends: Gain insights into recent court decisions regarding non-competes and confidentiality provisions, and understand their implications for businesses.
  • Regulatory Enforcement Surrounding Privacy Laws: Learn about the rising regulatory enforcement and litigation surrounding data privacy laws, including the impact of consumer awareness and state legislation on businesses.
  • Navigating the Risks of Privacy Litigation: Discover the latest developments in privacy litigation, including the surge in lawsuits related to website beacons, biometric data, and AI processing. Gain insights on compliance frameworks and preemptive risk assessments to mitigate litigation threats.
  • Advancements and Risks in eDiscovery Tools: Learn about the latest advancements in GenAI eDiscovery tools, including document summarization, subjective coding determinations, and GenAI syntax and querying. Understand the challenges and considerations of adopting GenAI in litigation, including defensible use of technology and negotiating discovery protocols.
  • Generative AI in eDiscovery Workflows: Hear about the potential of Generative AI in eDiscovery workflows to streamline your business, increase productivity, and reduce inefficiencies amidst rising regulatory enforcement and litigation surrounding data privacy laws.

Moderator:

Rebecca Woods, Partner, Seyfarth Shaw

Speakers: 

Dawn Mertineit, Partner, Seyfarth Shaw
James Yu, Senior Counsel, Seyfarth Shaw
Jason Priebe, Partner, Seyfarth Shaw
Matthew Christoff, Partner, Seyfarth Shaw


Part 3 – Commercial Litigation Outlook: Insights and Predictions for Litigation Trends in 2024

Thursday, May 2, 2024
1:00 p.m. to 2:30 p.m. Eastern

In the third session of the series, we will dive into the dynamic landscape of further litigation trends set to shape the coming year. Our panelists will provide invaluable insights and practical advice to navigate these complex legal arenas effectively. Don’t miss this opportunity to stay ahead of the curve and arm yourself with the knowledge needed to thrive in the ever-evolving world of litigation. Join us to explore trends, predictions and recommendations in the following areas:

  • Antitrust
  • Consumer Class Actions
  • ESG
  • Franchise
  • Health Care
  • Securities & Fiduciary Duty

Moderator:

Shawn Wood, Partner, Seyfarth Shaw

Speakers:

Brandon Bigelow, Partner, Seyfarth Shaw
Kristine Argentine, Partner, Seyfarth Shaw
Gina Ferrari, Partner, Seyfarth Shaw
John Skelton, Partner, Seyfarth Shaw
Jesse Coleman, Partner, Seyfarth Shaw
Greg Markel, Partner, Seyfarth Shaw


We are committed to providing you with actionable insights and strategic guidance to stay ahead in an ever-changing legal environment. Don’t miss this opportunity to gain invaluable knowledge and network with industry experts.

To register for the entire series, please click here. For more information and to access our full Commercial Litigation Outlook 2024 publication, please click here.

This blog post is co-authored by Seyfarth Shaw and The Chertoff Group and has been cross-posted with permission.

What Happened

On July 26, the U.S. Securities & Exchange Commission (SEC) adopted its Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure final rule on a 3-2 vote. The final rule is a modified version of the SEC’s earlier Notice of Proposed Rulemaking (NPRM) released in March 2022. The final rule formalizes and expands on existing interpretive guidance requiring disclosure of “material” cybersecurity incidents.

Continue Reading SEC Publishes Public Company Cybersecurity Disclosure Final Rule

This post was originally published to Seyfarth’s Global Privacy Watch blog.

On July 10th, the European Commission issued its Implementing Decision regarding the adequacy of the EU-US Data Privacy Framework (“DPF”). The Decision has been eagerly awaited by US- and Europe-based commerce, hoping it will help businesses streamline cross-Atlantic data transfers, and by activists who have vowed to scrutinize the next framework arrangement (thereby maintaining their relevance). Regardless of the legal resiliency of the decision, it poses an interesting set of considerations for US businesses, not the least of which is whether or not to participate in the Framework.

For those who followed the development and demise of the Privacy Shield program and the Schrems II case, it has been apparent for some time that the fundamental objection of the activists and the Court of Justice of the EU (“CJEU”) to the original Privacy Shield was the perception that the US intelligence community had an ability to engage in disproportional data collection without any possibility of recourse by EU residents whose personal information may be swept into an investigation. The actual functioning of the program for the certifying businesses was much less controversial.

Since the structure of the program wasn’t the primary reason for Privacy Shield’s revocation, from a business perspective, the current DPF looks a lot like the old Privacy Shield. For businesses who made the decision to participate in the Privacy Shield program in the past, the operational burden shouldn’t be much different under the new DPF, if they have already taken steps to operationalize the requirements.

What is interesting about the new DPF is how it may impact a company’s decision to choose between the Standard Contractual Clauses (“SCCs”) and the alternative adequacy mechanism for transfers. There is also some interest vis-à-vis the DPF and its interactions with state privacy laws.

DPF v. SCCs

One of the components of the new SCCs that were adopted in 2021 (which did not exist in the prior version of the SCCs) is the requirement for all SCCs to be accompanied by a transfer impact assessment (“TIA”)[1]. A TIA is designed to assess whether there are legal barriers to the enforcement of the SCCs in the relevant importing jurisdiction – in this case, the US. Many commentators, and some courts, have applied the Schrems II reasoning to argue that use of the SCCs as a transfer mechanism to the US is not effective in all circumstances, because the Foreign Intelligence Surveillance Act (“FISA”) authorizes US intelligence to engage in bulk collection under section 702 and such programs are not proportional and do not have reasonable safeguards required under EU law.

Although the SCCs are still used to transfer European data to the US (mostly because after Privacy Shield was invalidated, practically speaking, they had been the only remaining transfer mechanism for many businesses), several commenters have taken the position that, if Schrems II is taken to its logical conclusion, then any use of SCCs in the US is effectively impossible, because US companies cannot live up to their promises in the SCCs. This was noted in an expert report commissioned by the German Conference of Independent Data Protection Supervisors to assess the broad reach of FISA section 702 programs. Needless to say, companies that undertake a TIA as part of their deployment of SCCs also face some uncertainty as to its effectiveness, since a TIA is not the opinion of a supervisory authority but rather the company’s own interpretation, and that of its legal counsel – an interpretation said expert report may cast doubt on.

The DPF is not plagued by such uncertainty. Specifically, recital 200 of the Decision expressly states that the legal protections surrounding FISA programs are adequate and “essentially equivalent” to EU protections related to intelligence and national security. This is a momentous declaration, in our estimation, because as a consequence, participation by a company in the DPF seems to remove the need for a TIA as part of the transfer mechanism. Put another way, recital 200 provides binding authority for the assertion that the primary motivation for a TIA (i.e., FISA section 702 programs) is now moot: DPF participants have sufficient safeguards (even in light of FISA 702) regardless of whether they undertake a TIA. Note that the removal of the TIA requirement only works for participants in the DPF; TIAs are still required when relying on the SCCs as a transfer mechanism.

DPF v. State Law

Because the DPF establishes “essentially equivalent” controls for participants, the differences between the scope and requirements of EU privacy law and US state privacy law are brought into sharper contrast. Looking across the two general frameworks, the differences in concepts, protective requirements, and other controls may actually motivate businesses that are already subject to the various state omnibus privacy laws to skip participation in the DPF. This is mostly because the DPF is a bit more reasonable to businesses with respect to the exercise of individual rights than some state laws.

For example, the GDPR does not require the controller to comply in full with an access request if the response would “adversely affect the rights” of others, including a business’ trade secrets or intellectual property[2]. The California Consumer Privacy Act has no such express limitation related to business data. That being said, there are a number of possible arguments available under other laws (trade secret, confidentiality obligations, etc.) that could justify limiting a response to an access request. However, those limitations are not express in the California law – and they are in the GDPR and the DPF.

Similarly, the principles in the GDPR and DPF allow for a denial of an access request where responding to such request triggers an undue burden on the business. The California law’s limitation is a bit narrower than the GDPR/DPF limitation in this instance. California requires responsive disclosures to access requests unless the request is “manifestly unfounded or excessive” [3]. This standard is narrower than the DPF standard of “…where the burden or expense of providing access would be disproportionate to the risks to the individual’s privacy…”[4].

Conclusion

This lack of alignment between DPF requirements and state law may lead to operational confusion and uncertainty for US businesses interested or actively involved in the transfer of personal information from the EU. Regardless of the confusion related to the overlapping US and EU privacy laws, businesses that have previously participated in and are familiar with the Privacy Shield program may find it useful to also participate in the DPF. Additionally, for some business models, participation in the DPF can mean reduced administrative and legal costs as compared to putting in place and maintaining SCCs. However, it must be remembered that the DPF is not the same as compliance with US state privacy laws – even though some omnibus state privacy laws echo GDPR concepts. There are significant distinctions that have to be managed between the tactical implementation of a privacy program for US state law and a DPF compliance program.

Finally, even though there has been a commitment by some to challenge the DPF at the CJEU, the Commission’s approval of the DPF does not necessarily signal a “wait and see” approach. It is instead a time for companies to carefully evaluate and review their transfer activities, their regulatory obligations, and the most appropriate path forward.  All these years after Schrems II, it is at least nice to have a potential alternative to SCCs, in the right business conditions.


[1] Commission Implementing Decision (EU) 2021/914 Recitals 19 through 21

[2] GDPR, Article 15(4) and Recital 63.

[3] Cal. Civ. Code 1798.145(h)(3)

[4] Commission Implementing Decision for DPF Annex I, Section III.4; 8.b, c, e; 14.e, f and 15.d.

2023 has brought several states into the privacy limelight. As of Sunday, June 18, and with the signature of Texas governor Greg Abbott, the Texas Data Privacy and Security Act (“TDPSA”) was enacted, making the Lone Star state the tenth in the U.S. to pass a comprehensive data privacy and security law. The Act provides Texas consumers the ability to submit requests to exercise privacy rights, and extends to parents the ability to exercise rights on behalf of their minor children.

The Texas Act provides the usual complement of data subject rights relating to access, correction, data portability, and opting out of data being processed for purposes of targeted advertising, the sale of personal information, and profiling where a consumer may be significantly or legally affected. It also requires that covered businesses provide a privacy notice and other disclosures relevant to how they use consumer data.

Application Standard

Among the ten state-level comprehensive privacy bills, the TDPSA is the first to remove processing and profit thresholds from the applicability standard. Instead of using these thresholds to determine whether an entity is covered, the TDPSA applies to persons that: (1) Conduct business in Texas or produce products or services consumed by Texas residents; (2) Process or engage in the sale of personal data; and (3) Are not a small business as defined by the United States Small Business Administration.[i]
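Because the TDPSA drops the usual processing and revenue thresholds, its applicability standard reduces to a simple three-part conjunction. The sketch below is purely illustrative and is not legal advice; the function and parameter names are our own, and determining each prong in practice – particularly SBA small-business status – requires its own analysis:

```python
def tdpsa_applies(conducts_business_in_texas: bool,
                  processes_or_sells_personal_data: bool,
                  is_sba_small_business: bool) -> bool:
    """Illustrative only: the TDPSA covers an entity when all three
    statutory prongs are met - it does business in Texas (or serves
    Texas residents), it processes or sells personal data, and it is
    NOT a small business as defined by the SBA."""
    return (conducts_business_in_texas
            and processes_or_sells_personal_data
            and not is_sba_small_business)

# A mid-size vendor serving Texas residents and processing their data:
print(tdpsa_applies(True, True, False))  # True

# The same vendor, but qualifying as an SBA small business:
print(tdpsa_applies(True, True, True))   # False
```

Note the asymmetry with other state laws: there is no record-count or revenue threshold to compute; the small-business exemption does the gatekeeping instead.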

Definitions and Obligations of Controllers and Processors

The TDPSA’s definition of “sale of personal data” aligns closely with that of the California Consumer Privacy Act (“CCPA”). It refers to “the sharing, disclosing, or transferring of personal data for monetary or other valuable consideration by the controller to a third party.” The Act defines “process” as “an operation or set of operations performed,” whether manually or automatically, on Texas consumers’ personal data or sets of data. This includes the “collection, use, storage, disclosure, analysis, deletion, or modification of personal data.” Unlike the CCPA, the law exempts data processed for business-to-business and employment purposes.

Covered businesses who are data controllers must provide consumers with “a reasonably accessible and clear” privacy notice. These notices include the categories of personal data processed by the controller, the purpose of processing personal data, the categories of data shared with third parties, and methods by which consumers can exercise their rights under the law. If a controller’s use of sensitive personal data or biometric data constitutes a sale, one or both of the following must be included:

  • “NOTICE: This website may sell your sensitive personal data.”
  • “NOTICE: This website may sell your biometric personal data.”

Processors, which are akin to “service providers” under the CCPA, are those people or businesses that process personal data on the behalf of the controller. Processors have a number of obligations under the TDPSA, including assisting controllers in responding to consumer rights requests and data security compliance. All processors will need to have a written data protection agreement (“DPA”) in place with a controller, which will include:

  1. “clear instructions for processing data;
  2. the nature and purpose of processing;
  3. the type of data subject to processing;
  4. the duration of processing;
  5. the rights and obligations of both parties; and
  6. a number of requirements that the processor shall follow under the agreement.”

Processors will be required to ensure confidentiality of data, and at the controller’s direction, a processor must delete or return all of the information at the conclusion of the service agreement. However, this deletion requirement excludes data that must be retained pursuant to the processor’s records retention obligations.

Processors must also certify that they will make available to the controller all information necessary to demonstrate the processor’s compliance with the requirements of this chapter, and that they will comply with a controller’s assessment of their security practices. Lastly, should a processor engage any subcontractor, it will need another written contract that meets all of the written requirements set forth by the controller’s DPA.

Yet another state law DPA requirement brings into question whether businesses, particularly those on the national and multi-national level, are going to need separate addenda to service agreements that recite a la carte provisions that include separate definitions and commitments to comply with each state-level privacy law, as well as international data privacy laws such as the EU’s GDPR. Based on the new and upcoming privacy laws in the U.S., businesses can probably still operate using some form of uniform DPA that accounts for each of the different requirements. However, we may soon reach a point where the pages of all the appendices and addenda to comply with separate state requirements are greater than the typical service agreement contract.

Data Protection Assessment Requirement

The TDPSA mandates that controllers conduct and document data protection assessments. These assessments, which mirror those required by the Connecticut Data Privacy Act, require businesses to identify the purposes for which they are collecting and processing data, as well as the associated risks and benefits of that processing. Businesses will need to assess these benefits and risks for the following categories of processing activities involving personal data:

  1. Processing of personal data for targeted advertising;
  2. The sale of personal data;
  3. Processing for purposes of profiling if there is a reasonably foreseeable risk of unfair or deceptive treatment, injury to consumers (including financial and reputational risk), or physical intrusion to an individual’s solitude or seclusion;
  4. Processing of sensitive data; and
  5. Any processing activities that generally may pose a heightened risk of harm to consumers.

Enforcement

The TDPSA expressly states that it does not provide a private right of action. The Texas Attorney General holds exclusive power to field consumer complaints, investigate, issue written warnings, and ultimately enforce violations of the law. The AG may seek both injunctive relief and civil penalties of up to $7,500 for each statutory violation.

The Texas AG also has the authority to investigate controllers when it has reasonable cause to believe that a business has engaged in, is engaging in, or, interestingly – is about to engage in – a violation of the TDPSA. While the Senate version suggested revising this authority to remove pre-violation (thought crime?) investigations, the language withstood its scrutiny and remained in the signed act. This was one of many changes suggested by the Senate prior to the bill’s passing.

Back and Forth Between Texas Legislative Houses

As the TDPSA was drafted, a few notable revisions were made between the House and Senate versions of the bill. However, most of the Senate’s proposed additions were not ultimately accepted. To start, the Senate added language that expressly excluded from its definition of biometric data “a physical or digital photograph or data generated from” photographs. Further, under the definition of “sensitive data,” the Senate removed the House’s language that included sexual orientation data. Notably, the sexual orientation language has since been re-added to the finalized version of the law.

The House and Senate also went back and forth on some of the other definitions in the Act, including which entities are exempt. Under the “state agency” definition, the Senate broadened the language to include any branch of state government, rather than only branches within the executive branch – the House’s formulation. However, the Senate language made it into the finalized version.

For the most part, the two legislative chambers were in agreement as to which entities are exempt from the TDPSA. These include exemptions for institutions and data subject to Title V of the Gramm-Leach-Bliley Act,[ii] HIPAA,[iii] and HITECH,[iv] as well as non-profits and higher education institutions. The Senate sought to add electric utilities and power companies to this list, but the finalized version kept them off the exempt list.

The Senate’s revisions also proposed language allowing consumers to authorize a designated agent to submit rights requests (similar to what we see in the CCPA), but that language was not signed into law. The TDPSA does not let anyone act on another person’s behalf to exercise their privacy rights, other than parents acting on behalf of their minor children.

Conclusion

Section 541.055(e), which provides consumers the ability to have a third party authorized agent submit opt-out requests on their behalf, goes into effect January 1, 2025. Other than that rather narrow delayed measure, the rest of the TDPSA is set to go into effect on July 1, 2024. It will be one of the broadest reaching in the U.S., particularly because of its unique applicability standard. Its mandatory data protection assessments and requirement for written contracts with all processors make the law slightly less business friendly than the rest of the state privacy laws out there. But California is still in a league of its own on that score.


[i] The SBA defines small business generally as a privately-owned enterprise with 500 or fewer employees. Depending on the industry, however, the maximum number of employees may fluctuate and in some cases may not be factored in. Some businesses are defined as “small” according to their average annual revenue.

[ii] 15 U.S.C. Section 6801 et seq.

[iii] 42 U.S.C. Section 1320d et seq.

[iv] Division A, Title XIII, and Division B, Title IV, Pub. L. No. 111-5

Seyfarth Synopsis: The U.S. District Court for the Northern District of Illinois recently denied Plaintiff’s motion to reconsider a prior dismissal of his privacy action due to untimeliness.  In a case titled Bonilla, et al. v. Ancestry.com Operations Inc., et al., No. 20-cv-7390 (N.D. Ill.), Plaintiff alleged that consumer DNA network Ancestry DNA violated the Illinois Right of Publicity Act (“IRPA”) when it uploaded his high school yearbook photo to its website.  The Court initially granted Ancestry’s motion for summary judgment, finding Plaintiff’s claims to be time-barred under the applicable one-year limitations period.  Upon reconsideration, Plaintiff – unsuccessfully – made a first-of-its-kind argument that the Court should apply the Illinois Biometric Privacy Act’s five-year statute of limitations to the IRPA.

Background on the Bonilla Lawsuit

Ancestry DNA, most commonly known for its at-home DNA testing kits, also maintains a robust database of various historical information and images.  One subset of this online database is the company’s “Yearbook Database.”  This portion of the website collects yearbook records from throughout the country and uploads the yearbook contents – including students’ photos – to Ancestry.com.  On June 27, 2019, Ancestry DNA uploaded the 1995 yearbook from Central High School in Omaha, Nebraska to its Yearbook Database.

More than a year later, on December 14, 2020, Plaintiff Sergio Bonilla filed a lawsuit against Ancestry DNA over its publication of the Central High School yearbook.  Specifically, Plaintiff Bonilla – a current Illinois resident and former student of Central High School whose picture appeared in Ancestry’s database – alleged that Ancestry DNA improperly publicized his private information without obtaining his consent.  Plaintiff’s lawsuit asserted violations of the IRPA, as well as a cause of action for unjust enrichment.  Ancestry DNA filed a motion for summary judgment on the basis that Plaintiff’s action was not brought within the requisite one-year limitations period.  The Court agreed, thereby dismissing Plaintiff’s claims.

Court Denies Plaintiff’s Motion for Reconsideration

After the Illinois Supreme Court’s decision in Tims v. Black Horse Carriers (which held that BIPA is subject to a five-year statute of limitations – read our full summary HERE), Plaintiff filed a motion for reconsideration, contending that the Court should actually apply a five-year limitations period to IRPA actions, like it applies to BIPA.  To that end, Plaintiff emphasized that the IRPA (similar to BIPA) does not itself contain a statute of limitations.  Plaintiff also noted that both the IRPA and BIPA derived from legislative concerns centered on Illinois residents’ right to privacy.  Therefore, according to Plaintiff, the IRPA’s legislative purpose would be best served by applying the catch-all five-year limitations period of 735 ILCS 5/13-205. 

On reconsideration, the Court again rejected Plaintiff’s argument.  The Court first outlined relevant case law precedent, under which the only courts to address this issue previously held that the IRPA’s applicable statute of limitations is one year.  See Toth-Gray v. Lamp Liter, Inc., No. 19-cv-1327, 2019 WL 3555179, at *4 (N.D. Ill. July 31, 2019); see also Blair v. Nevada Landing P’ship, 859 N.E.2d 1188, 1192 (Ill. App. Ct. 2006). 

The Court then analyzed the Tims decision, which held that, “when the law does not specify a statute of limitations, ‘the five-year limitations period applies’ unless the suit is one for ‘slander, libel or for publication of a matter violating the right of privacy.’”  Here, the Court reasoned that an IRPA action squarely falls within the last category identified by the Court in Tims, as IRPA cases necessarily involve alleged violations of a party’s right to privacy.  Finally, the Court rejected Plaintiff’s contention that Tims controls this situation, instead holding that “[u]nlike the BIPA, the IRPA protects the publication of matters related to the right of privacy and, thus, falls under the one-year statute of limitations.”

Implications for Businesses

This decision establishes a welcome pro-business standard in the Illinois privacy law context.  Notably, the Illinois Supreme Court in Tims rejected the defense bar’s argument that BIPA violations were akin to privacy rights violations and subject to the one-year statute of limitations applicable to IRPA claims.  This Ancestry.com decision holds that the converse also is not true.  It is also the first decision to reject expansion of the plaintiff-friendly five-year BIPA statute of limitations to claims beyond BIPA.

Though this decision was issued by an Illinois federal court – rather than the Illinois Supreme Court, which decided the recent Tims and Cothron v. White Castle System BIPA cases – it nonetheless offers some privacy protection for Illinois businesses that post or otherwise aggregate third parties’ content or information.  We will monitor whether defendants are able to expand the Bonilla decision into other related privacy law actions, or if Illinois courts will restrict its holding to actions brought under the IRPA.

For more information about the Illinois Right of Publicity Act, the Illinois Biometric Information Privacy Act, or how this decision may affect your business, contact the authors Danielle Kays and James Nasiri, your Seyfarth attorney, or Seyfarth’s Workplace Privacy & Biometrics Practice Group.

Seyfarth Synopsis: Federal judges are requiring attorneys to attest as to whether they have used generative artificial intelligence (AI) in court filings, and if so, how and in what manner it was used. These court orders come just days after two New York attorneys filed a motion in which ChatGPT provided citations to non-existent caselaw.[i]

There are many ways to leverage AI tools across the legal industry, including identifying issues in clients’ data management practices, efficiently reviewing immense quantities of electronically stored information, and guiding case strategy, but according to U.S. District Judge Brantley Starr of the Northern District of Texas, “legal briefing is not one of them.” Last Tuesday, May 30, Judge Starr became the first judge to require all attorneys appearing before his court to certify whether they used generative AI to prepare filings, and if so, to confirm that any such language prepared by the generative AI was validated by a human for accuracy.[ii]

Judge Starr reasoned that:

These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up—even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth). Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle. Any party believing a platform has the requisite accuracy and reliability for legal briefing may move for leave and explain why.[iii]

Critically, the failure to submit such a certification would result in the court striking the filing and potentially imposing sanctions under Rule 11.

Accordingly, the Court will strike any filing from a party who fails to file a certificate on the docket attesting that they have read the Court’s judge-specific requirements and understand that they will be held responsible under Rule 11 for the contents of any filing that they sign and submit to the Court, regardless of whether generative artificial intelligence drafted any portion of that filing. A template Certificate Regarding Judge-Specific Requirements is provided here.[iv]

Shortly thereafter, on June 2, Magistrate Judge Gabriel Fuentes of the Northern District of Illinois followed suit with a revised standing order that requires all parties not only to disclose whether they used generative AI to draft filings, but also whether they used generative AI to conduct legal research. Judge Fuentes deemed overreliance on AI tools a threat to the mission of federal courts, stating that “[p]arties should not assume that mere reliance on an AI tool will be presumed to constitute reasonable inquiry.”[v]

Mirroring the reasoning of Judge Starr, Judge Fuentes further highlighted courts’ longstanding presumption that Rule 11 certifications are representations “by filers, as living, breathing, thinking human beings that they themselves have read and analyzed all cited authorities to ensure that such authorities actually exist and that the filings comply with Rule 11(b)(2).”[vi] Both judges have made clear that attorneys must remain diligent in representing their clients, and that reliance on emerging technology, as convincing and tempting as it may be, requires validation and human involvement.

While federal courts in Texas and Illinois were first to the punch, we don’t expect other jurisdictions to be far behind with court orders mirroring those of Judge Starr and Judge Fuentes.


[i] See Mata v. Avianca, Inc., No. 22-cv-1461 (PKC), Order to Show Cause (S.D.N.Y. May 4, 2023); see also Use of ChatGPT in Federal Litigation Holds Lessons for Lawyers and Non-Lawyers Everywhere, https://www.seyfarth.com/news-insights/use-of-chatgpt-in-federal-litigation-holds-lessons-for-lawyers-and-non-lawyers-everywhere.html.

[ii] Id.

[iii] Hon. Brantley Starr, “Mandatory Certification Regarding Generative Artificial Intelligence [Standing Order],” (N.D. Tex.).

[iv] Id.

[v] Hon. Gabriel A. Fuentes, “Standing Order For Civil Cases Before Magistrate Judge Fuentes,” (N.D. Ill.).

[vi] Id.



You may have recently seen press reports about lawyers who filed and submitted papers to the federal district court for the Southern District of New York that included citations to cases and decisions that, as it turned out, were wholly made up; they did not exist.  The lawyers in that case used the generative artificial intelligence (AI) program ChatGPT to perform their legal research for the court submission, but did not realize that ChatGPT had fabricated the citations and decisions.  This case should serve as a cautionary tale for individuals seeking to use AI in connection with legal research, legal questions, or other legal issues, even outside of the litigation context.

In Mata v. Avianca, Inc.,[1] the plaintiff brought tort claims against an airline for injuries allegedly sustained when one of its employees hit him with a metal serving cart.  The airline filed a motion to dismiss the case. The plaintiff’s lawyer filed an opposition to that motion that included citations to several purported court decisions in its argument. On reply, the airline asserted that a number of the court decisions cited by the plaintiff’s attorney could not be found, and appeared not to exist, while two others were cited incorrectly and, more importantly, did not say what plaintiff’s counsel claimed. The Court directed plaintiff’s counsel to submit an affidavit attaching the problematic decisions identified by the airline.

Plaintiff’s lawyer filed the directed affidavit, and it stated that he could not locate one of the decisions, but claimed to attach the others, with the caveat that certain of the decisions “may not be inclusive of the entire opinions but only what is made available by online database [sic].”[2]  Many of the decisions annexed to this affidavit, however, were not in the format of decisions that are published by courts on their dockets or by legal research databases such as Westlaw and LexisNexis.[3]

In response, the Court stated that “[s]ix of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations”[4], using a non-existent decision purportedly from the Eleventh Circuit Court of Appeals as a demonstrative example.  The Court stated that it contacted the Clerk of the Eleventh Circuit and was told that “there has been no such case before the Eleventh Circuit” and that the docket number shown in the plaintiff’s submission was for a different case.[5] The Court noted that “five [other] decisions submitted by plaintiff’s counsel . . . appear to be fake as well.” The Court scheduled a hearing for June 8, 2023, and demanded that plaintiff’s counsel show cause as to why he should not be sanctioned for citing “fake” cases.[6]

At that point, plaintiff’s counsel revealed what happened.[7] The lawyer who had originally submitted the papers citing the non-existent cases filed an affidavit stating that another lawyer at his firm was the one who handled the research, which the first lawyer “had no reason to doubt.” The second lawyer, who conducted the research, also submitted an affidavit in which he explained that he performed legal research using ChatGPT. The second lawyer explained that ChatGPT “provided its legal source and assured the reliability of its content.” He explained that he had never used ChatGPT for legal research before and “was unaware of the possibility that its content could be false.” The second lawyer noted that the fault was his, rather than that of the first lawyer, and that he “had no intent to deceive this Court or the defendant.” The second lawyer annexed screenshots of his chats with ChatGPT, in which the second lawyer asked whether the cases cited were real. ChatGPT responded “[y]es,” one of the cases “is a real case,” and provided the case citation. ChatGPT even reported in the screenshots that the cases could be found on Westlaw and LexisNexis.[8]

This incident provides a number of important lessons. Some are age-old lessons about double-checking your work and the work of others, and owning up to mistakes immediately. There are also a number of lessons specific to AI, however, that are applicable to lawyers and non-lawyers alike.

This case demonstrates that although ChatGPT and similar programs can provide fluent responses that appear legitimate, the information they provide can be inaccurate or wholly fabricated. In this case, the AI software made up non-existent court decisions, even using the correct case citation format and stating that the cases could be found in commercial legal research databases. Similar issues can arise in non-litigation contexts as well.  For example, a transactional lawyer drafting a contract, or a trusts and estates lawyer drafting a will, could ask AI software for common, court-approved contract or will language that, in fact, has never been used and has never been upheld by any court. A real estate lawyer could attempt to use AI software to identify the appropriate title insurance endorsements available in a particular state, only to receive a list of inapplicable or non-existent endorsements. Non-lawyers hoping to set up a limited liability company or similar business structure without hiring a lawyer could find themselves led astray by AI software as to the steps involved or the forms needed to be completed and/or filed. The list goes on and on.

The case also underscores the need to take care in how questions to AI software are phrased. Here, one of the questions asked by the lawyer was simply “Are the other cases you provided fake?”[9] Asking questions with greater specificity could provide users with the tools needed to double-check the information from other sources, but even the most artful prompt cannot change the fact that the AI’s response may be inaccurate. That said, there are also many potential benefits to using AI in connection with legal work, if used correctly and cautiously. Among other things, AI can assist in sifting through voluminous data and drafting portions of legal documents.  But human supervision and review remain critical.

ChatGPT frequently warns users who ask legal questions that they should consult a lawyer, and it does so for good reason. AI software is a powerful and potentially revolutionary tool, but it has not yet reached the point where it can be relied upon for legal questions, whether in litigation, transactional work, or other legal contexts. Individuals who use AI software, whether lawyers or non-lawyers, should use the software understanding its limitations and realizing that they cannot rely solely on the AI software’s output.  Any output generated by AI software should be double-checked and verified through independent sources. When used correctly, however, it has the potential to assist lawyers and non-lawyers alike.


[1] Case No. 22-cv-1461 (S.D.N.Y.).

[2] Id. at Dkt. No. 29. 

[3] Id.

[4] Id. at Dkt. No. 31. 

[5] Id.

[6] Id.

[7] Id. at Dkt. No. 32.

[8] Id.

[9] Id.

Tennessee and Montana are now set to be the next two states with “omnibus” privacy legislation. “Omnibus” privacy legislation regulates personal information as a broad category, as opposed to data collected by a particular regulated business or collected for a specific purpose, such as health, financial, or payment card information. As far as omnibus laws go, Tennessee and Montana are two additional data points informing the trend we are seeing at the state level regarding privacy and data protection. Fortunately (or unfortunately, depending on your point of view), these two states have followed the model initiated by Virginia and Colorado rather than the California model.

Is There Really Anything New?

While these two new laws may seem to be “more of the same,” the Tennessee law contains some interesting new approaches to the regulation of privacy and data protection. Both the Tennessee and Montana laws include the usual set of privacy obligations (notice requirements, rights of access and deletion, restrictions around targeted advertising and online behavioral advertising, et cetera), but Tennessee has taken the unusual step of building into the Tennessee Information Protection Act (“TIPA”) specific guidance on how to actually develop and deploy a privacy program.

Previously, privacy compliance programs have been structured in a wide variety of ways, mostly as a result of the operational necessities of various businesses. With Tennessee’s new law, we now see a state attempting to standardize how businesses develop and implement privacy programs against clearly defined NIST standards, as opposed to the traditional but nebulous concepts of “reasonableness” and “adequacy.”

NIST Privacy Framework

Tennessee’s law incorporates standardized compliance concepts by requiring the use of the National Institute of Standards and Technology (“NIST”) Privacy Framework, “A Tool for Improving Privacy Through Enterprise Risk Management, Version 1.0.” More specifically, the TIPA states that “…a controller or processor shall create…” its privacy program using this framework. Unfortunately, it is unclear for now whether failure to use the NIST framework would actually constitute a violation of the law. One could potentially argue that if a program fulfills all of the obligations of the TIPA, it should not matter what framework is used.

Part of the concern around “mandatory” use of the NIST framework is that the framework is somewhat complicated to implement and does not account for the size, capabilities, and processing risk activities of a particular organization. Since NIST intended the framework to cover a wide range of use cases and operational complexities, the framework is inherently complex. As a consequence, smaller and less mature organizations will likely struggle to implement a privacy program under the NIST framework. This is particularly true because, while the NIST framework defines various levels of maturity for a privacy program, the TIPA does not articulate what “tier” of program maturity a controller must reach within the NIST framework to be compliant.

The whole issue of “mandatory v. permissive” use of the NIST framework is further muddied by the TIPA’s grant of an affirmative defense to controllers who use the NIST framework. If use of the NIST framework were an affirmative obligation, it would not be necessary to frame it as an affirmative defense. In our opinion, Tennessee may have been better served by providing a safe harbor for privacy programs built under the NIST framework, as opposed to mandating that all programs use it. In any event, further clarity as to what constitutes “compliant” use of the NIST framework would be helpful.

Privacy Certification

Another useful concept introduced by the TIPA is participation in a certification program acting as evidence of compliance with the law. While not truly a “safe harbor,” controllers that participate in the Asia-Pacific Economic Cooperation (“APEC”) Cross-Border Privacy Rules (“CBPR”) system may have their certification under these rules operate as evidence of compliance with the TIPA. Outside of one specific federal privacy law (i.e., COPPA), neither federal nor state privacy laws have officially recognized certification schemes as providing evidence of compliance with the relevant law.

In the end, while there may be confusion in some of the components of the TIPA, Tennessee can be commended for attempting to provide more commercially viable guidance on how to comply with the TIPA, at least from the perspective of building out a privacy program. Additionally, this is the first time in the United States that we have seen privacy certification schemes used as legally relevant evidence of compliance. Privacy certification systems have been around for some time, but they have almost never been capable of demonstrating legal compliance.

On March 15, 2023, the Securities and Exchange Commission (“SEC”) proposed three new sets of rules (the “Proposed Rules”) which, if adopted, would require a variety of companies to beef up their cybersecurity policies and data breach notification procedures. As characterized by SEC Chair Gary Gensler, the Proposed Rules aim to promote “cyber resiliency” in furtherance of the SEC’s “responsibility to help protect for financial stability.”[1]

In particular, the SEC has proposed:

  • Amendments to Regulation S-P which would, among other things, require broker-dealers, investment companies, and registered investment advisers to adopt written policies and procedures for response to data breaches, and to provide notice to individuals “reasonably likely” to be impacted within thirty days after becoming aware that an incident was “reasonably likely” to have occurred (“Proposed Reg S-P Amendments”).[2]
  • New requirements for a number of “Market Entities” (including broker-dealers, clearing agencies, and national securities exchanges) to, among other things: (i) implement cybersecurity risk policies and procedures; (ii) annually assess the design and effectiveness of these policies and procedures; and (iii) notify the SEC and the public of any “significant cybersecurity incident” (“Proposed Cybersecurity Risk Management Rule”).[3]
  • Amendments to Regulation Systems Compliance and Integrity (“Reg SCI”) in order to expand the entities covered by Reg SCI (“SCI Entities”) and add additional data security and notification requirements to SCI Entities (“Proposed Reg SCI Amendments”).[4]

As Commissioner Hester Peirce observed, each Proposed Rule “overlaps and intersects with each of the others, as well as other existing and proposed regulations.”[5] Therefore, while each of the Proposed Rules relates to similar cybersecurity goals, each must be considered in turn to determine whether a particular company is covered and what steps the company would need to undertake should the Proposed Rules become final.

Below we discuss each set of Proposed Rules in more detail and provide some takeaways and tips for cybersecurity preparedness regardless of industry.

Proposed Reg S-P Amendments

Reg S-P, adopted in 2000, requires that brokers, dealers, investment companies, and registered investment advisers adopt written policies and procedures regarding the protection and disposal of customer records and information.[6] But, as Chair Gensler explained in a statement in support of the Proposed Reg S-P Amendments, “[t]hough the current rule requires covered firms to notify customers about how they use their financial information, these firms have no requirement to notify customers about breaches,” and the Proposed Reg S-P Amendments look to “close this gap.”[7]

In particular, “[w]hile all 50 states have enacted laws in recent years requiring firms to notify individuals of data breaches, standards differ by state, with some states imposing heightened notification requirements relative to other states,” and the SEC seeks, through the Proposed Reg S-P Amendments, to provide “a Federal minimum standard for customer notification” for covered entities.[8] This includes a definition of “sensitive customer information” which is broader than that used in at least 12 states; a 30-day notification deadline, which is shorter than timing currently mandated by 15 states (plus 32 states which do not include a notification deadline or permit delayed notifications for law enforcement purposes); and required notification unless the covered institution finds no risk of harm, unlike 21 states which only require notice if, after investigation, the covered institution does find risk of harm.[9]

Furthermore, while Reg S-P currently applies to broker-dealers, investment companies, and registered investment advisers, the Proposed Reg S-P Amendments would expand its scope to transfer agents.[10] The amendments also would apply customer information safeguarding and disposal rules to customer information that a covered institution receives from other financial institutions, and to a broader set of information, by newly defining the term “customer information.” For non-transfer agents, that term would “encompass any record containing ‘nonpublic personal information’ (as defined in Regulation S-P) about ‘a customer of a financial institution,’ whether in paper, electronic or other form that is handled or maintained by the covered institution or on its behalf.” For transfer agents, which “typically do not have consumers or customers” for purposes of Reg S-P, the term would have a similar definition with respect to “any natural person, who is a securityholder of an issuer for which the transfer agent acts or has acted as transfer agent, that is handled or maintained by the transfer agent or on its behalf.”[11]

Proposed Cybersecurity Risk Management Rule

The Proposed Cybersecurity Risk Management Rule would impact a variety of “different types of entities performing various functions” in the financial markets, defined as “Market Entities,” including “broker-dealers, broker-dealers that operate an alternative trading system, clearing agencies, major security-based swap participants, the Municipal Securities Rulemaking Board, national securities associations, national securities exchanges, security-based swap data repositories, security-based swap dealers, and transfer agents.”[12]

As Chair Gensler explained, the Proposed Cybersecurity Risk Management Rule is designed to “address financial sector market entities’ cybersecurity,” by, among other things, requiring Market Entities to adopt written policies and procedures to address their cybersecurity risks, to notify the SEC of significant cyber incidents, and, with the exception of smaller broker-dealers, to disclose to the public a summary description of cybersecurity risks that could materially affect the entity and significant cybersecurity incidents in the current or previous calendar year.[13]

According to the SEC, these policies and procedures are “not intended to impose a one-size-fits-all approach to addressing cybersecurity risks,” and are designed to provide Market Entities “with the flexibility to update and modify their policies and procedures as needed[.]”[14] However, there are certain minimum policies and procedures that would be required, such as periodic assessments of cybersecurity risks,[15] controls designed to minimize user-related risks and prevent unauthorized system access,[16] periodic assessment of information systems,[17] oversight of service providers that receive, maintain, or process the entity’s information (including written contracts between the entity and its service providers),[18] measures designed to detect, mitigate, and remediate cybersecurity threats and vulnerabilities,[19] measures designed to detect, respond to, and recover from cybersecurity incidents,[20] and an annual review of the design and effectiveness of cybersecurity policies and procedures (with a written report).[21] For most regulated entities, such measures are already in place.

Proposed Reg SCI Amendments

Finally, the SEC has proposed amendments to Reg SCI, a 2014 rule adopted to “strengthen the technology infrastructure of the U.S. securities markets, reduce the occurrence of systems issues in those markets, improve their resiliency when technological issues arise, and establish an updated and formalized regulatory framework” for the SEC’s oversight of these systems.[22]  Reg SCI applies to “SCI Entities,” which include self-regulatory organizations, certain large Alternative Trading Systems, and certain other market participants deemed to have “potential to impact investors, the overall market, or the trading of individual securities in the event of certain types of systems problems.”[23]

The Proposed Reg SCI Amendments would expand the definition of SCI Entity to include registered Security-Based Swap Data Repositories, registered broker-dealers exceeding a size threshold, and additional clearing agencies exempt from registration.[24] They also would broaden requirements to which SCI Entities are subject, including required notice to the SEC and affected persons of any “systems intrusions,” which would include a “range of cybersecurity events.”[25]

Takeaways

While the Proposed Rules have not yet been adopted, companies that could be covered should take the opportunity to reevaluate their cybersecurity practices and policies, both to mitigate as much as possible the risk of a cyber-attack and to be prepared to address an attack, including meeting all notification requirements, should one occur.

Among other things, best practices include:

  • A written cyber risk assessment which categorizes and prioritizes cyber risk based on an inventory of the information systems’ components, including the type of information residing on the network and the potential impact of a cybersecurity incident;
  • A cybersecurity vulnerability assessment to assess threats and vulnerabilities; determine deviations from acceptable configurations, enterprise or local policy; assess the level of risk; and develop and/or recommend appropriate mitigation countermeasures in both operational and nonoperational situations;
  • A written incident response plan that defines how the company will respond to and recover from a cybersecurity incident, including timing and method of reporting such incident to regulators, persons or other entities;
  • A business continuity plan designed to reasonably ensure continued operations when confronted with a cybersecurity incident and maintain access to information;
  • Tabletop exercises to review and test incident response and business continuity plans; and
  • Annual review of policies and procedures.

As a next step, each of the Proposed Rules will be published in the Federal Register and open for comment for sixty days following publication. Regardless of whether the Proposed Rules are adopted, they represent the SEC’s increasing awareness of, and desire to mitigate, cybersecurity incidents, and companies should be prepared accordingly.


[1] Gensler, Gary, Opening Statement before the March 15 Commission Meeting (SEC, March 15, 2023).

[2] See Press Release, SEC Proposes Changes to Reg S-P to Enhance Protection of Customer Information (SEC, March 15, 2023). The full text of the Proposed Reg S-P Amendments can be found here.

[3] See Press Release, SEC Proposes New Requirements to Address Cybersecurity Risks to the U.S. Securities Markets (SEC March 15, 2023). The full text of the Proposed Cybersecurity Risk Management Rule can be found here.

[4] See Press Release, SEC Proposes to Expand and Update Regulation SCI (SEC, March 15, 2023). The full text of the Proposed Reg SCI Amendments can be found here.
In addition, on March 15, 2023 the SEC re-opened comments on proposed cybersecurity risk management rules for investment advisors until May 22, 2023. For our analysis of these proposed rules, see How Fund Industry Can Prepare For SEC’s Cyber Proposal (Law360, March 4, 2022). The SEC is also presently considering comments on a different proposed rule mandating certain cybersecurity disclosures by public companies. See Carlson, Scott and Riley, Danny, SEC Proposes Mandatory Cybersecurity Disclosures by Public Companies (Carpe Datum Blog, April 14, 2022).

[5] Peirce, Hester, Statement on Regulation SP: Privacy of Consumer Financial Information and Safeguarding Customer Information (SEC, March 15, 2023).

[6] Proposed Reg S-P Amendments, supra n.2 at 1.

[7] Gensler, Gary, Statement on Amendments to Regulation S-P (SEC, March 15, 2023).

[8] Proposed Reg S-P Amendments, supra n.2 at 4.

[9] Id. at 4-6.

[10] Proposed Reg S-P Amendments, supra n.2, at 6-7.

[11] Id. at 74-75, 82.

[12] Proposed Cybersecurity Risk Management Rule, supra n. 3 at 9-10 (internal definitions of terms omitted).

[13] Gensler, Gary, Statement on Enhanced Cybersecurity for Market Entities (SEC, March 15, 2023).

[14] Proposed Cybersecurity Risk Management Rule, supra n. 3 at 103.

[15] Id. at 103-108.

[16] Id. at 109-112.

[17] Id. at 113-115.

[18] Id. at 115-116.

[19] Id. at 116-118.

[20] Id. at 118-124.

[21] Id. at 124-126.

[22] Proposed Reg SCI Amendments, supra n.4 at 10.

[23] Id. at 13-14.

[24] Id. at 24.

[25] Id. at 24-25.