
December 2023, Volume 54, Number 3

Artificial Intelligence Update

Since it was announced that artificial intelligence (“AI”) could not only take the multistate bar exam but pass it with a score in the 90th percentile, the legal profession has been on guard. Overnight, AI went from an amorphous concept that might impact other professions to a potential threat right at our back door.

While AI is not prosecuting cases yet, AI systems have progressed. With these technological advancements come additional concerns regarding accuracy of information, privacy, and the means to regulate the field. This article introduces various AI advancements made specifically within the legal field, as well as steps governments and private-sector groups have taken to protect against the above-referenced concerns. Finally, it explores ways the legal system can help regulate AI, specifically generative AI.

AI Specific to the Legal Profession

While ChatGPT garners the most media attention, legal-centric AI applications and programs exist, such as Harvey AI, Relativity, Logikcull, LawGeex, and Disco.1 Generally speaking, these vendors offer generative AI tools that tackle fundamental legal tasks such as legal research, e-discovery, and document review.2

LawGeex goes so far as to describe its services “as an extension of your legal team,” claiming that its AI goes beyond simply redlining mistakes and can actually understand “the contractual context as well as [your firm’s] position . . . [and negotiate] with the counterpart – just like an experienced attorney but with enhanced speed and accuracy.”3

Harvey AI is an AI startup, the brainchild of Winston Weinberg, a former securities and antitrust litigator, and Gabriel Pereyra, an AI research scientist.4 Self-described as a “co-pilot for lawyers,” the product is designed to automate legal research and some document drafting while simultaneously learning any given firm’s own work product and templates, so it becomes “smarter” as to that firm’s particular practice, tone, and content.5 The first law firm to subscribe to Harvey’s services was London-based Allen & Overy. David Wakeling, head of the firm’s Markets Innovation Group, commented, “. . . I have never seen anything like Harvey. . . . Harvey can work in multiple languages and across different practice areas, delivering unprecedented efficiency and intelligence.”6 Another interesting aspect of Harvey AI is its marketing and branding. Unlike other legal AI websites, Harvey’s website offers almost no information about its services and simply invites interested parties to join a waitlist.7
These are just two examples of the AI technologies geared toward serving attorneys and legal professionals.

There have also been cases where attorneys have gotten into trouble through their use of AI. One instance is Mata v. Avianca, Inc., No. 22-cv-1461 (PKC) (S.D.N.Y.), in which an Order to Show Cause was issued after plaintiff’s counsel’s response to a motion to dismiss was filled with fictitious case citations. Plaintiff’s counsel stated that these nonexistent cases were supplied by ChatGPT.

Concerns: Accuracy and Regulations

Technological advancements are not without their own challenges and concerns. General areas of concern include accuracy of information, privacy, and the lack of regulations governing this field.

  • Accuracy, Privacy, and Copyright: At this point, most have learned that not everything on the internet is real or even accurate. There are stories aplenty of attorneys utilizing AI only to find that cited cases do not actually exist. Concerns have grown beyond the accuracy of information to the veracity of photos and videos. In fact, Congress recently expressed concern regarding AI-generated election ads and deepfake videos.8 This opens the door to privacy and copyright concerns. The prevalence of social media makes images, voices, and content very accessible. Moreover, how will current laws apply to work created by generative AI? At a minimum, the general public must be told when they are viewing generative AI content, and there must be a way to ensure compliance.
  • Lack of Regulation: Recognizing the lack of AI regulation, governments as well as private-sector groups are trying to establish a legal framework to address AI. The United States, Canada, and the European Union have all taken such steps, with varying degrees of success. In addition, several private companies have formed their own watch groups to help protect against harmful AI.

In the United States, various initiatives, acts, and orders have been entered to address AI; however, as of today, no comprehensive federal AI legislation exists. As early as February 11, 2019, former President Donald Trump signed an Executive Order creating the “American AI Initiative” to help ensure an AI-ready workforce and to supposedly keep America at the forefront of this technology.9 Throughout 2021, federal agencies made various presentations regarding principles, ethical considerations, and ways to approach and use AI.10 Effective January 1, 2021, the National Artificial Intelligence Initiative Act of 2020 promulgated a definition of AI and introduced a “National Artificial Intelligence Initiative,” which again instructed the federal government to support AI initiatives.11 In October 2022, the White House rolled out its “Blueprint for an AI Bill of Rights,” and in early 2023, the Biden-Harris Administration promulgated three principles to help ensure the responsible creation of AI: safety, security, and trust.12 Since their announcement, 15 companies have voluntarily committed to act with these principles in mind. In July 2023, seven companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – voluntarily committed to take action ensuring their products adhere to the three principles.13 In September 2023, a second wave of companies – Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability – agreed to do the same.14 Notably, these are pledges with no enforcement measures, akin to a pinky promise.

In June 2023, Senate Majority Leader Chuck Schumer (“Schumer”) launched an effort to establish rules on AI addressing national security and education via his SAFE Innovation Framework for AI policy (“SAFE Framework”).15 The SAFE Framework focuses on Schumer’s policy objectives: Security, Accountability, protecting our Foundations, Explainability, and Innovation. In a bipartisan effort, Schumer asked industry leaders to work alongside his Senate peers in what he termed the “AI Insight Forum” to explore issues, answer questions, and develop the Senate’s policy response. Insight Forum topics include guarding against doomsday scenarios, AI’s role in our social world, copyright and intellectual property, the workforce, use cases and risk management, as well as privacy and liability.16 The first Insight Forum took place in September 2023. While senators and industry leaders met, one participant – Inioluwa Deborah Raji, a researcher at the University of California, Berkeley, a fellow at Mozilla, and one of MIT Technology Review’s 35 Innovators Under 35 in 2020 – described the gathering as “more of an informational rallying effort than it was a genuine opportunity for meaningful policy discourse and recommendations.”17

At the state level, there has been a bit more success. Some states have included AI regulations as part of broader consumer privacy laws, others have proposed or pending bills, and still others have created task forces to investigate AI and areas of harm.18 Specifically, in Illinois this past legislative session, acts were proposed to regulate AI, including bills addressing the collection of personal information through algorithms,19 preventing the misuse of AI in automated hiring decisions and the healthcare realm,20 and establishing a Generative AI and Natural Language Processing Task Force.21

In comparison, on June 16, 2022, Canadian legislators introduced the Digital Charter Implementation Act, 2022, which included Canada’s first proposed law dedicated solely to artificial intelligence, the Artificial Intelligence and Data Act (“AIDA”).22 While AIDA has not yet been adopted, it attempts to regulate AI design, development, and use in the private sector in connection with trade, with a focus on harm and bias. AIDA imposes requirements as well as monetary and criminal penalties for non-compliance.23 Additionally, in September 2023, the Minister of Innovation, Science and Industry announced the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems.24 This temporary code “provides Canadian companies with common standards” and allows them to demonstrate the responsible development and use of generative AI until formal regulation is in effect.25

On the other hand, in Europe, a legal framework specific to AI was proposed in April 2021, adopted by the European Commission in 2022, and accepted by the European Parliament in 2023.26 Final adoption of the Artificial Intelligence Act (“AI Act”) will result from negotiations between the European Parliament, the European Commission, and the Council of the European Union.27 Those negotiations have yet to conclude; however, the AI Act places the European Union light years ahead of the United States with respect to actual legislation.

The AI Act proposes a regulatory framework that categorizes AI into four different risk levels: unacceptable risk, high risk, generative AI, and limited risk.28 Unacceptable-risk systems are those considered to be a threat to people and, as such, will be banned unless they fall under an exception. High-risk AI systems are those that negatively affect safety or fundamental rights. Within this level, AI systems are divided into two categories: AI used in products such as toys, aviation equipment, and cars, and then “everything else.” The “everything else” category is further subdivided into eight specific areas: 1) biometric identification and categorization of natural persons, 2) management and operation of critical infrastructure, 3) education and vocational training, 4) employment, worker management, and access to self-employment, 5) access to and enjoyment of essential private services and public services and benefits, 6) law enforcement, 7) migration, asylum, and border control management, and 8) assistance in legal interpretation and application of the law. Generative AI would have to comply with transparency requirements such as disclosing that content was created by AI, publishing summaries of copyrighted data used for training, and preventing the generation of illegal content. A limited-risk system would have to comply with minimal transparency requirements that provide users with enough information to make informed decisions before using the AI.

Finally, China and Singapore have both announced their approaches to AI regulation.29 Lawmakers in Singapore are building on their 2019 Model AI Governance Framework and have introduced a National AI Strategy. In China, a draft of the Administrative Measures for Generative Artificial Intelligence Services was released in April 2023.

In addition to government approaches, the private sector has taken steps to address regulatory concerns. For example, in July 2023, Google, OpenAI, Microsoft, and Anthropic announced the formation of the Frontier Model Forum.30 Its mission is to “advance AI safety research and technical evaluations for the next generation of AI systems” and to work with industry leaders and governments to continually monitor AI risks.31 While this is a useful measure, there is still no clear regulation.

Suggested Steps: The Public Justice System’s Role in Regulating AI

Rather than passively waiting for legislators to take substantive action, perhaps it falls to attorneys, judges, and the public justice system as a whole to shape the role of generative AI, at least within the legal sphere. There is no shortage of stories of attorneys and even judges using AI to draft arguments and opinions.

Author S.I. Strong explores this possibility and analyzes the benefits of judicial action, legislative action, action by state licensing authorities, complementary action by national legal associations or professional organizations, and, finally, international options such as the Hague Conference on Private International Law.32

With regard to judicial action, Strong asserts that individual court rules or local rules of court can initially address the use of generative AI in litigation. While issues of consistency within and across different judicial systems may arise, this approach “has the benefit of speed . . . and accountability.”33 In addition, court sanctions can ensure accountability.34

Recently, a few U.S. court judges have issued standing orders to address AI-related issues. On May 30, 2023, Judge Brantley Starr of the U.S. District Court for the Northern District of Texas issued the first standing order on AI. The order requires attorneys and pro se litigants to “file a certificate declaring whether any portion of their filings will be drafted using generative AI tools.”35 Additionally, “attorneys must certify that either none of the content in any of their filings will be drafted using generative AI, or that generative AI content will be verified for accuracy by a human being.”36 Judge Starr, in his standing order, notes the “danger of generative AI making stuff up” and its “lack of a sense of duty, honor, or justice that binds practicing attorneys.”37

Similarly, on June 8, 2023, Magistrate Judge Gabriel A. Fuentes of the U.S. District Court for the Northern District of Illinois issued a revised standing order for civil cases. It requires parties to disclose the use of generative AI to conduct legal research or to draft documents for filing, as well as “the specific tool and the manner in which it was used.”38 Additionally, pursuant to Rule 11 of the Federal Rules of Civil Procedure, the court “continue[s] to construe all filings as a certification, by the person signing the filed document and after reasonable inquiry.”39 “Mere reliance on an AI tool” does not constitute reasonable inquiry.40 A Rule 11 certification means that “living, breathing, thinking human beings” have “read and analyzed all cited authorities to ensure that [they] . . . actually exist” and that the filing complies with the Rule.41

Strong identifies consistency across courts, and even across states, as a key problem, and suggests amendments to the federal rules as a possible means of achieving some level of consistency.

Conclusion

Laws and regulations at both the state and federal levels have a long way to go to adequately regulate AI. While bills have been introduced, it is hard to tell when they will be enacted and, more importantly, whether those bills will regulate AI as needed. As of now, no binding regulations have been passed by the federal government to bring uniformity and consistency to AI regulation. There are only guidance documents, frameworks, and pledges on using AI responsibly. Without the ability to enforce these guidance documents and frameworks and to hold accountable those who pledge, it is difficult to believe AI will be used responsibly.

Since the advancement of AI, we have seen its prominent impact on the legal profession. Lawyers and judges alike are using AI to complete everyday tasks such as writing arguments or rendering decisions. With this come concerns about accuracy, privacy, and copyright, as well as the lack of regulation of AI within the legal field itself. Judges have found briefs, motions, and other filings to contain nonexistent authorities. This has led some judges to issue standing orders requiring disclosure of AI use in any filing. Absent binding regulation, members of the legal community may have to take the initiative to shape the role of generative AI within the legal sphere.


1. Jennifer Steeve & Jeffrey W. Gordon, Generative AI is Here and Ready to Disrupt, Insight – Riley Safer Holmes & Cancilla (Aug. 24, 2023), https://www.rshc-law.com/docs/default-source/articles/alerts-articles---....
 

3. See LawGeex, https://www.lawgeex.com/.

4. Kyle Wiggers, Harvey, Which Uses AI to Answer Legal Questions, Lands Cash from OpenAI, TechCrunch (Nov. 23, 2022), https://techcrunch.com/2022/11/23/harvey-which-uses-ai-to-answer-legal-q... (last visited Oct. 2, 2023).

5. Jennifer Steeve & Jeffrey W. Gordon, Generative AI is Here and Ready to Disrupt, Insight – Riley Safer Holmes & Cancilla (Aug. 24, 2023), https://www.rshc-law.com/docs/default-source/articles/alerts-articles---....

6. Kate Rattray, Harvey AI: What We Know So Far, Clio (last updated July 24, 2023), https://www.clio.com/blog/harvey-ai-legal/ (last visited Oct. 5, 2023).

7. See Harvey AI, https://www.harvey.ai/ (last visited Oct. 5, 2023).

8. Matt O’Brien, Meta and X Questioned by Lawmakers Over Lack of Rules Against AI-Generated Political Deepfakes, AP News (Oct. 5, 2023), https://apnews.com/article/election-deepfakes-ai-x-twitter-facebook-meta... (last visited Oct. 5, 2023).

9. Artificial Intelligence – Regulatory Development – U.S., 1B Information Law § 1:19 (Sept. 2023).

10. Id.

11. Id.

12. Cecilia Kang, 8 More Companies Pledge to Make A.I. Safe, White House Says, N.Y. Times (Sept. 12, 2023), https://www.nytimes.com/2023/09/12/technology/white-house-ai-tech-pledge...

13. FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI, The White House (July 21, 2023) (last visited Sept. 27, 2023).

14. FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Eight Additional Artificial Intelligence Companies to Manage the Risks Posed by AI, The White House (Sept. 12, 2023) (last visited Sept. 27, 2023); FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI, The White House (July 21, 2023) (last visited Sept. 27, 2023).

15. Allison Pecorin, Schumer Unveils New AI Framework As Congress Wades Into Regulatory Spaces, ABC News (June 21, 2023), https://abcnews.go.com/Politics/schumer-unveils-new-ai-framework-congres...; Majority Leader Schumer Delivers Remarks to Launch SAFE Innovation Framework For Artificial Intelligence At CSIS, Senate Democrats (June 21, 2023), https://www.democrats.senate.gov/news/press-releases/majority-leader-sch....

16. E. Kenneth Wright, Jr., Artificial Intelligence, Artificial General Intelligence and Beyond: Humanity in a Lurch.

17. Katrina Zhu, The State of State AI Laws: 2023, EPIC (Aug. 3, 2023), https://epic.org/the-state-of-state-ai-laws-2023/ (last visited Oct. 6, 2023).

18. Id.

19. Id.

20. Id.

21. Id.

22. Christopher Ferguson and Heather Whiteside, The Regulation of Artificial Intelligence in Canada and Abroad: Comparing the Proposed AIDA and EU AI Act, Fasken (Oct. 18, 2022), https://www.fasken.com/en/knowledge/2022/10/18-the-regulation-of-artific...; Charles S. Morgan, et al., The Dawn of AI Law: The Canadian Government Introduces Legislation to Regulate Artificial Intelligence in Canada, McCarthy Tétrault (July 11, 2022), https://www.mccarthy.ca/en/insights/blogs/techlex/dawn-ai-law-canadian-g... (last visited Oct. 6, 2023).

23. Id.

24. Artificial Intelligence and Data Act, Government of Canada (last updated Sept. 27, 2023), https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-....

25. Id.

26. Shana Lynch, Analyzing the European Union AI Act: What Works, What Needs Improvement, Stanford University Human-Centered Artificial Intelligence (July 21, 2023), https://hai.stanford.edu/news/analyzing-european-union-ai-act-what-works...; Müge Fazlioglu, US Federal AI Governance: Laws, Policies and Strategies, IAPP (June 2023), https://iapp.org/resources/article/us-federal-ai-governance/.

27. Müge Fazlioglu, US Federal AI Governance: Laws, Policies and Strategies, IAPP (June 2023), https://iapp.org/resources/article/us-federal-ai-governance/.

28. EU AI Act: First Regulation on Artificial Intelligence, European Parliament News (June 14, 2023), https://www.europarl.europa.eu/news/en/headlines/society/20230601STO9380....

29. Müge Fazlioglu, US Federal AI Governance: Laws, Policies and Strategies, IAPP (June 2023), https://iapp.org/resources/article/us-federal-ai-governance/.

30. Cat Zakrzewski and Nitasha Tiku, AI Companies Form New Safety Body, While Congress Plays Catch-Up: The Frontier Model Forum Is the Latest Sign That Industry Is Racing Ahead of Policymakers Who Want to Rein in AI, Washington Post (July 26, 2023), https://www.washingtonpost.com/technology/2023/07/26/ai-regulation-creat....

31. Cat Zakrzewski and Nitasha Tiku, AI Companies Form New Safety Body, While Congress Plays Catch-Up: The Frontier Model Forum Is the Latest Sign That Industry Is Racing Ahead of Policymakers Who Want to Rein in AI, Washington Post (July 26, 2023), https://www.washingtonpost.com/technology/2023/07/26/ai-regulation-creat...; Karoun Demirjian, Schumer Lays Out Process to Tackle A.I., Without Endorsing Specific Plans, N.Y. Times (June 21, 2023), https://www.nytimes.com/2023/06/21/us/ai-regulation-schumer-congress.html.

32. S.I. Strong, Race Against The Machine: Who Is Responsible For Regulating Generative Artificial Intelligence in Domestic and Cross-Border Litigation?, 2023 U. Ill. L. Rev. Online 165, 169 (Fall 2023).

33. Id.

34. Id.

35. Shannon Capone Kirk, Emily A. Cobb, and Amy Jane Longo, Judges Guide Attorneys on AI Pitfalls with Standing Orders, Ropes & Gray (August 2, 2023), https://www.ropesgray.com/en/insights/alerts/2023/08/judges-guide-attorn....

36. Id.

37. Id.

38. Id.

39. Standing Order for Civil Cases Before Magistrate Judge Fuentes, United States District Court for the Northern District of Illinois (August 11, 2023).

40. Id.

41. Id.
