Frequently Asked Questions and Suggested Best Practices Related to Generative Artificial Intelligence in the Legal Profession
Ever since the launch of ChatGPT in November 2022 took generative artificial intelligence mainstream, lawyers have been scrambling to keep up with ever-changing legal requirements and guidance on matters like copyright infringement, privacy, ethics obligations and judicial practices. Because of the pace of change, frequently asked questions (FAQ) and proposed best practices offer a superior format to an exhaustively researched law review article for communicating and updating the state of the law, while providing lawyers with guidance for using generative AI responsibly to benefit clients.
I. Copyright, Plagiarism and Gen AI
Can I be sued for copyright infringement for use of output produced by generative AI?
Unlikely. Although copyright lawsuits by various artists, authors and, most recently, the New York Times have proliferated against companies like OpenAI, Microsoft and Google that offer generative AI services, individual users haven’t been targeted. The copyright lawsuits against AI companies rest on two main allegations: first, that GPT-4 and other models accessed and used large amounts of original content as training input without permission, and second, that the AI models produce content that is either identical or so similar to the original work that it infringes on copyright. Individual users wouldn’t face liability for training claims, and the chance of an individual user inadvertently generating infringing content is low. Moreover, many AI vendors have promised to indemnify customers sued for copyright infringement resulting from use of the product (albeit subject to loopholes and other limitations).
Even if I don’t violate copyright laws, is it plagiarism to use generative AI content?
Technically yes, in extreme cases. Plagiarism is defined as passing someone else’s work off as your own without attribution. So if you were to cut and paste AI-generated output, without any modification or attribution, into a blog post or brief, it would technically fit the definition of plagiarism. Plagiarism issues aside, cutting and pasting AI-generated content is a bad idea. Although Google doesn’t ban AI-generated content from search results, it may be treated as lower quality and ranked lower. Moreover, OpenAI’s Terms of Use prohibit users from representing that AI-generated content was human-generated when it was not.
Can I copyright blog posts, logos or other content created by generative AI?
Under current law, no. The U.S. Copyright Office still takes the position that to qualify as a work of authorship, a work must be created by a human being, though it is re-examining this position in a Notice of Inquiry issued in August 2023. But for now, AI-generated works won’t qualify for copyright protection, so if you use AI to create a logo or book, others will be able to copy it too.
Best practices to avoid infringement and plagiarism: Though the potential for copyright infringement and plagiarism claims is low, you can reduce the risk to near zero with best practices such as:
- Employ generative AI for less creative tasks like case summaries or chart creation, or for less-than-final work like first drafts and outlines;
- Avoid hacking genAI, e.g., intentionally asking it to produce an article in the style of Hemingway, or to replicate a photo of Joaquin Phoenix as the Joker;
- Never cut and paste AI-generated content; instead, make it your own by injecting your own commentary, spin and unique word choice;
- Where content can’t be easily modified (such as images) and will be widely visible, consider using an image-creation platform like Adobe Firefly, which trains only on licensed work;
- Disclose use of AI where it is used to produce substantial portions of a work;
- Use online tools like https://www.zerogpt.com/ to check AI-assisted content for plagiarism or telltale similarity to AI-generated text; and
- Require employees and contractors to disclose the extent to which they relied on AI, both to avoid liability for their work and to ensure any works-for-hire they produce are eligible for copyright protection.
II. Privacy and Confidentiality
Does using generative AI pose a threat to data privacy and attorney-client confidentiality?
Yes, if used carelessly. Privacy and confidentiality concerns arise when AI users input their data into AI models. In a highly publicized incident in early 2023, Samsung employees leaked proprietary data by entering it into ChatGPT, where it could be retained and used to train the model. Still, even with privacy concerns, banning or avoiding use of generative AI is overkill and deprives lawyers and their companies of access to beneficial tools.
Best practices to ensure privacy and confidentiality:
- Absolutely no use of personally identifiable information (PII) or trade secrets: Sharing highly protected information with most AI platforms (except for internally developed models) is asking for trouble, and generally unnecessary. Limit usage to anonymized and more general inquiries, along the lines of what you might ask on a listserv or when seeking advice from a colleague.
- Understand the TOS: It’s imperative to review the terms of service for AI platforms. For example, the TOS for Anthropic’s Claude state that the service does not train on non-public input, while OpenAI’s TOS provide for opt-outs.
- Use commercial products developed for legal: Where heightened protection is warranted, opt for commercial AI products developed specifically for lawyers, which will offer more robust protections (though you’ll still want to review the TOS). For example, Casetext’s TOS warrant at least a commercially reasonable standard of care to protect confidential information.
- Consult clients, as needed: If you adopt these best practices, routine disclosure of AI use for the majority of clients probably isn’t necessary. That said, if you represent corporate clients with their own internal AI use practices or defendants in highly sensitive matters, you may need to disclose and/or seek consent for AI use.
- Protect clients from third-party AI disclosures: Occasionally, clients may agree, or be compelled, to share trade secrets or other confidential information as part of deal negotiations or discovery subject to a non-disclosure agreement (NDA). Be sure that the NDA addresses whether, and what types of, generative AI products may be used for review and analysis of confidential information.
III. Duty to Supervise
Given that many AI products are known to produce inaccurate results or “hallu-citations,” isn’t it a waste of time to use them?
No, provided you check the work produced and understand the limitations. By now, most lawyers are familiar with the case of New York lawyer Steven Schwartz, who was sanctioned for citing fake cases that he admitted were generated by ChatGPT. More recently, former Trump counsel Michael Cohen passed along to his attorneys citations found with Google Bard that made their way into court filings. Since those initial incidents, there have been more, along with numerous court orders and rules requiring attorneys to disclose AI use and certify human review of all sources cited.
Though AI captured the headlines, these mishaps resulted not from unreliable modern tech but from old-fashioned, sloppy lawyering: the lawyers never read the cases that the AI generated. Just as competent lawyers read the cases referenced in a headnote summary or review a law clerk’s research, the same practices govern use of AI.
Best practices for supervision and use of generative AI:
- Understand limitations: To avoid inappropriate AI usage, lawyers should understand its limitations. Generally speaking, consumer-facing platforms like ChatGPT or Claude aren’t any more useful for legal research than a general Google search or a Wikipedia entry. But they’re still useful for summarizing complicated cases, translating legalese, issue-spotting for outlines, or drafting marketing and website content, correspondence and discovery requests. For “bet the company” litigation or any court filings, opt for commercial tools developed for legal use.
- Check for bias: Many AI tools have built-in gender and racial biases that, sadly, reflect current social norms. For example, an AI program prompted to describe an attorney will invariably use a masculine pronoun or generate an image of an attorney as a white man. Other responses may yield similar stereotypes, so it’s important to scrutinize results and repeat prompts for more suitable output.
- Don’t trust until verified: For now, treat any content generated by AI as presumptively untrustworthy until verified. This applies to all content, even summaries, because AI sometimes misses a subtle point or confuses a holding. That said, accuracy is rapidly improving, and results generated in 2024 are vastly more on-point than those from a year ago.
- Supervise AI use: Supervising AI use means not only reviewing and checking all sources produced and revising prompts, but also overseeing AI use by team members. Ensure that other attorneys and staff in your firm are properly trained on AI use, maintain a record of prompts for your review, and certify that they have verified the outputs.
IV. AI and Disclosure Requirements
Must AI be disclosed in court filings?
Not universally, but some courts are requiring disclosure through standing orders. And a Montana federal judge prohibited an attorney granted pro hac vice admission from using generative AI programs like ChatGPT to draft briefs.
Must AI use be disclosed to clients?
No, but disclosure may be advisable in some situations. Currently, no statutes or ethics rules require disclosure of generative AI use to clients. And traditionally, lawyers have not shared with clients the tools and products used to research or draft contracts or court filings. That said, the California State Bar’s Practical Guidance on AI (2023) suggests that lawyers might consider communicating about AI use to clients depending upon the risk involved.
Are there any other AI disclosure requirements I should be aware of?
Yes. As mentioned earlier, some AI platforms like OpenAI require disclosure of AI-created content. More recently, YouTube announced that creators who use AI to produce content must inform viewers through AI tags. While disclosure is important, accuracy matters too. The SEC recently brought enforcement actions and levied substantial civil penalties on two investment companies for falsely touting use of AI tools for investment services when they didn’t actually do so.
Best practices for disclosure:
- Be aware of court orders: Familiarize yourself with court rules or standing orders regarding compulsory AI disclosure so you can comply.
- Be accurate in disclosures: Don’t misrepresent the AI capabilities employed by your firm.
- Be aware of platform terms of service: As noted earlier, the terms of service for some generative AI platforms like OpenAI prohibit users from attributing ChatGPT-produced content to human authorship. So if you’re planning on cutting and pasting ChatGPT content verbatim into a brief or client communication, you may need to disclose under the TOS even if disclosure is not required by the court or a client.
- Be forthright about AI use when asked by a client: If a client expressly inquires about your firm’s AI use, you must disclose or, at the very least, explore why the client wants to know. In some cases, a client may have proprietary trade secrets or a corporate policy on AI use that it wants to make your firm aware of.
- Exercise your professional judgment: But for the exceptions above, disclosing AI use to clients is entirely your call. For some firms, AI is a unique selling proposition worth highlighting on their website and in their engagement agreement. For others, AI is a tool of the trade just like a computer or word-processing program and outside the realm of what clients need to know. And some firms come out in the middle, sharing use of AI in highly sensitive or complex cases but not for routine, low-risk matters. Any approach is fine so long as, in your judgment, it serves your firm and your clients.
V. Legal Ethics
Has the ABA or any state bar issued an ethics opinion on AI use by lawyers?
Currently, no rules have been formally adopted, but guidance is available from several states. In 2023, the State Bar of California published practical guidance on the use of generative AI in law practice, and Florida issued Proposed Advisory Opinion 24-1 on generative AI use by lawyers. In 2024, New Jersey issued Preliminary Guidance on the Use of AI, and in April, New York released a comprehensive, 80-page AI Guidance document. A summary and comparison of the Florida and California guidance, generated by Anthropic’s Claude, and a summary of the New York guidance, generated by ChatGPT, are provided at the end of this article.
If my bar hasn’t issued ethics guidance, does that mean that I can’t use generative AI?
Absolutely not. Although the California guidance and Florida proposed opinion are helpful, generative AI does not require special regulation. Instead, ethical use and implementation of AI can be accomplished by adhering to existing ethical obligations, such as the duty of tech competence, the duty of confidentiality, the duty to supervise and the duty to avoid harassment or bias. It’s likely that the ABA and other states will follow California and Florida, so if you adopt those practices, you can minimize risk.
This article was originally posted on LawOfficesofCarolynElefant.com in 2024. It is republished here with permission.