In the past year, the spam emails I used to get from tailors offering to visit my office to make bespoke shirts and suits have been replaced by pitches from vendors looking to demo their new AI-powered legal tools, which promise to transform my practice and make my life easier. Whether any of these tools actually work as advertised and justify their costs is hard to say. But the Eleventh Circuit Court of Appeals just gave attorneys a reason to include AI in their toolbox.
Last month, Circuit Judge Kevin Newsom wrote a concurring opinion in Snell v. United Specialty Insurance Company, No. 22-12581, 102 F.4th 1208 (11th Cir. 2024), admitting he used ChatGPT for an insurance case and suggesting attorneys consider using AI in their practice. In Snell, the plaintiff was allegedly injured while playing on a ground-level trampoline in an Alabama homeowner's backyard. The plaintiff sued the homeowner and the landscaper who installed the trampoline. The landscaper submitted a claim to his insurance carrier, but the claim was denied because the "injury did not 'arise from [the landscaper's] performance of landscaping.'" The carrier concluded the "accident stemmed from [the] '[a]ssembly and installation of a Trampoline,'" for which there was no coverage. The district court and the Eleventh Circuit agreed with the carrier.
The case ultimately turned on an atypical set of dispositive facts and a "quirk" of Alabama insurance law. But much of the argument still focused on "whether [the landscaper's] installation of an in-ground trampoline, an accompanying retaining wall, and a decorative wooden 'cap' fit within the common understanding of the term 'landscaping' as used in the insurance policy[.]" To wrestle with this question, Judge Newsom admitted he "wonder[ed] whether ChatGPT and other AI-powered large language models ('LLMs') might provide a helping hand." So he asked ChatGPT, "What is the ordinary meaning of 'landscaping'?" The response he received was "more sensible" than he expected:
“Landscaping” refers to the process of altering the visible features of an area of land, typically a yard, garden or outdoor space, for aesthetic or practical purposes. This can include activities such as planting trees, shrubs, flowers, or grass, as well as installing paths, fences, water features, and other elements to enhance the appearance and functionality of the outdoor space.
This experience led Judge Newsom to conclude that “maybe” LLMs could “be useful in the interpretation of legal texts,” especially when trying to understand the “ordinary meaning” of words in legal instruments, like statutes, contracts, and policies.
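For readers curious how this looks outside the chat window, below is a minimal sketch of posing the same question through the OpenAI Python SDK rather than the ChatGPT interface. The model name, settings, and setup are my own illustrative assumptions; the opinion says nothing about how such a query should be run.

```python
# Minimal sketch: posing Judge Newsom's question via the OpenAI Python SDK.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
# The model name is an illustrative choice, not anything the opinion endorses.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "user", "content": "What is the ordinary meaning of 'landscaping'?"}
    ],
)

# Print the model's answer, analogous to the response quoted above
print(response.choices[0].message.content)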
As the concurring opinion explained, LLMs are uniquely capable of understanding the “ordinary” meaning of words because they are “quite literally ‘taught’ using data that aim to reflect and capture how individuals use language in their everyday lives.” So, for example, LLMs can “offer meaningful insight into the ordinary meaning of the term ‘landscaping’ because the internet data on which they train contain so many uses of that term, from so many different sources—e.g., professional webpages, DIY sites, news stories, advertisements, government records, blog posts, and general online chatter about the topic.” The result is that LLMs can leverage “inputs from ordinary people” and, because LLMs are so readily accessible, be “available for use by ordinary people.”
There are, however, potential pitfalls attorneys need to be aware of, including the fact that LLMs can "hallucinate" (i.e., generate "facts" that aren't true) and can be manipulated with strategically drafted prompts. LLMs also may not capture language that is offline, underrepresented on the Internet, or tied to a different era (e.g., how a word was ordinarily used in 1776 versus 2024). This, Judge Newsom concludes, is why LLMs should be just one of many "datapoints to be used alongside dictionaries, canons, and syntactical context in the assessment of terms' ordinary meaning. That's all; that's it."
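One practical hedge against these pitfalls, sketched below under the same assumed setup as before, is to never treat a single generation as authoritative: re-ask the identical question several times and compare the answers. This is my own illustration, not a procedure the concurrence prescribes; materially different responses across runs would be a signal to lean harder on dictionaries and other traditional sources.

```python
# Sketch: treat an LLM's answer as one datapoint, not an oracle, by asking
# the same question several times and eyeballing how stable the answers are.
# Model name, temperature, and run count are all illustrative assumptions.
from openai import OpenAI

client = OpenAI()
PROMPT = "What is the ordinary meaning of 'landscaping'?"

answers = []
for _ in range(3):  # a handful of runs; more runs give a better sample
    resp = client.chat.completions.create(
        model="gpt-4o",      # illustrative
        temperature=1.0,     # default sampling, so run-to-run variation shows
        messages=[{"role": "user", "content": PROMPT}],
    )
    answers.append(resp.choices[0].message.content)

# Print each run side by side so divergent answers stand out
for i, answer in enumerate(answers, 1):
    print(f"--- Run {i} ---\n{answer}\n")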
All that said, Judge Newsom ultimately concluded: "At the very least, it no longer strikes me as ridiculous to think that an LLM like ChatGPT might have something useful to say about the common, everyday meaning of the words and phrases used in legal texts." And I think he's absolutely correct. AI and LLMs are new tools that belong in every attorney's arsenal, to be applied, when appropriate, to give additional context and perspective to the issues we face. The same principles and considerations Judge Newsom applied to the ordinary-meaning analysis could, for example, be extended to trademark and consumer protection cases, where whether a mark or an advertisement is likely to cause consumer confusion is often a significant issue. ChatGPT could be used to bolster or undermine the results of a market survey about what the general public thinks, as sketched below. As Judge Newsom said, LLMs may not be the be-all and end-all, but they certainly can provide helpful datapoints.
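To make the trademark idea concrete, here is a purely hypothetical sketch along those lines. The marks, the prompt wording, and the model choice are all invented for illustration; nothing like this could replace an actual consumer survey, but it might supply one more datapoint to weigh alongside survey evidence.

```python
# Purely hypothetical sketch: asking an LLM whether two invented marks are
# confusingly similar to an ordinary shopper. The marks and prompt wording
# are fabricated for this example; the model name is an illustrative choice.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Imagine you are an ordinary shopper. One company sells coffee under the "
    "mark 'MORNING RIDGE' and another sells coffee under 'MOURNING RIDGE'. "
    "Would you assume the two products come from the same company? "
    "Explain briefly."
)

resp = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)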
For more information on artificial intelligence (AI), please contact Jason Kelly, Esq., CIPP/US/E at [email protected].