Artificial Intelligence: A path to shorter workdays or the demise of human viewpoint?

Briana Combs
American Bar Association - Appellate Issues 2024 Winter Issue
02.13.2024

Appellate lawyer Mark Davies, Washington and Lee University School of Law Professor Joshua Fairfield, Thomson Reuters Senior Vice President for Product Development Emily Colbert, and the Honorable Herbert Dixon discussed artificial intelligence (“A.I.”) and its place in the legal arena in a breakout session titled “Why BOTher Writing” on November 3, 2023. Their discussion focused on “ChatGPT,” an A.I. program that responds in full text to questions posed by the user.

Retired Judge Herbert Dixon kicked off the panel discussion by commenting that artificial intelligence has become an important part of our lives. He noted that during the same week as the AJEI Summit, President Biden issued a new executive order addressing artificial intelligence and safety, highlighting A.I.’s growing presence throughout society.

This sparked commentary from the audience, particularly from Judge Boggs, who posed the following question to the ChatGPT bot: “Name three of Judge Boggs’s most famous cases.” After the bot responded with a full-text answer, Judge Boggs reviewed it, only to find that the response was not accurate. This prompted further remarks from Judge Dixon, who recounted his own unsuccessful interaction with ChatGPT: after he, too, posed a question to the chatbot, it “hallucinated” a response, providing incorrect information.

On the topic of the bot’s potential for “hallucination,” Attorney Mark Davies commented that, according to his conversations with certain technology-focused individuals, “hallucinations” by A.I. are fixable and will not be a problem over time. Davies then posed a dilemma: some lawyers use “hallucination” by A.I. systems such as ChatGPT as an excuse not to use the technology at all. This leads one to ponder: is the potential benefit really worth the risk?

Thomson Reuters’ Emily Colbert then added her take on ChatGPT. She explained that users need to be aware of both the input and the output (i.e., what is being presented to the system and what the system generates in response). Emily emphasized the importance of properly educating lawyers on how to use ChatGPT, explaining that several techniques exist for working with the technology. For that reason, she advocated that each user do his or her due diligence before engaging with ChatGPT’s advanced technology. And because ChatGPT was trained at a fixed moment in time, Emily alerted the audience to an inevitable reality: in the future, ChatGPT’s data will become “stale.”

According to Professor Fairfield, while the technology itself will improve over time, the hallucinations will not go away and in fact will become more persuasive. This is a frightening and risky proposition: in the future, the “hallucinations” by ChatGPT will become harder and harder to detect. The core problem with ChatGPT, in Professor Fairfield’s view, is that it is not grounded in common sense (i.e., the system does not recognize context). As an example of this potential for technological failure, Professor Fairfield pointed to Google Maps, noting that while many people were skeptical of it at first, it is now very widely used, even though it is not always accurate and can lead users in the wrong direction. Picking up on the Google Maps example, Mark Davies noted society’s over-reliance on technology.

Attorney Davies then fielded the following question: Would you use a chatbot at oral argument if permitted to do so? Davies responded that while he thinks the bot could be helpful, that help does not come without qualification. Specifically, he explained that whether he would use the bot to run a search would depend on the type of question posed. For a simple research question, Davies noted, the bot could be used to run a search (with the risk that the answer may be wrong). But for a more challenging question (for example, an inquiry going to the heart of a case), Davies explained that he would not use the chatbot.

The conversation then shifted to how to ensure that assistive technology such as ChatGPT is used properly by law students. Professor Fairfield responded first by emphasizing that the technology must be approached from an ethical perspective: if it is used to write, it should not become a substitute for one’s own skill and judgment. Fairfield further explained that users must remain on guard to ensure that their use of A.I. is helping, and not hurting, those the technology is meant to serve (e.g., clients). He also stressed the importance of making sure that A.I. does not take over the writing of the law, especially because A.I. bots are not part of human society and do not care about society’s welfare.

The next question posed to the panel was the following: How should courts and judges deal with practitioners’ use of ChatGPT, and must lawyers certify in their pleadings that they are using A.I.? Mark Davies responded first, opining that practitioners should not hide from the judge the technology they are using. Davies noted that this is a transition period and that, over time, the technology may simply become part of attorneys’ practices. Emily Colbert then expressed her opinion that the shift from books to technology is a good one. She explained that younger lawyers will start to use A.I. in law school. According to Emily, A.I. is a part of life, and lawyers should not turn their backs on it.

The next comments came from Professor Fairfield, who raised the possibility of an “under-disclosure problem.” Specifically, he explained that the use of A.I. is often undetectable, creating the potential for attorneys to refrain from disclosing their use of the technology. For this reason, Professor Fairfield expressed his opinion that judges need to be clear in their orders about exactly what they require lawyers to disclose. If judges are not clear, the result could be both under-disclosure problems (e.g., attorneys not disclosing to the court that they used A.I. to draft their briefs) and over-disclosure problems (e.g., attorneys disclosing too much, such as notifying the court that they “used Google to run a search”).

Emily Colbert then jumped in again, assuring the audience that Westlaw is working to ensure that all statements generated by A.I. can be validated (via footnotes that allow users to verify the bot’s statements). Emily explained that while users still need to verify the answers A.I. generates, the technology saves time even with this verification step. She then returned to the importance of due diligence: anyone using A.I. models should understand what he or she is using and how the technology works before putting it to practical use. Mark Davies then expressed the overall opinion that while the technology may make errors, it can still be a helpful tool.

Several additional considerations were then explored by the panel. First, Professor Fairfield turned to string cites, noting that A.I. will begin generating string cites of cases it believes are connected. According to Professor Fairfield, this is concerning because that “connection” will not be determined by humans and human thought. And within five years, he predicted, all connections between cases will be determined by A.I., which is even more concerning.

Emily Colbert then commented on the evolving nature of A.I. She explained that the smaller the A.I. language model, the less accurate it is likely to be. According to Emily, this makes building one’s own A.I. model a risky proposition, given the limited size of such an individual model.

Professor Fairfield then observed that it is uncomfortable, from a judge’s perspective, to think that attorneys may use A.I. to write briefs and to generate the kind of language a particular judge likes and uses. In that context, according to Professor Fairfield, the A.I. would be “massaging” the brief to please the judge. But in his opinion, the network of meaning needs to be human. Looking to the future, Professor Fairfield explained that years from now, the next generation will be reading connections between cases made by A.I. rather than by the human mind. This, according to Professor Fairfield, is problematic.

The panel then shifted to closing comments. The first came from Mark Davies, who expressed genuine concern about (1) how malicious actors may use A.I. technology and (2) whether A.I. may eventually outsmart its human users. Davies thus emphasized the importance of A.I. safety and of keeping the technology “under control.”

Emily Colbert then gave her final thoughts, expressing her opinion that it is important for more seasoned lawyers to help develop the regulations for A.I. and the rules for how it should be used. Professor Fairfield concluded the session by noting his concern about input and output: eventually, the output now generated by A.I. will become the input fed back into A.I., creating a “vicious cycle.” According to Professor Fairfield, this poses a “scary” future, one in which the “persuasive” statements A.I. generates will become part of the law and of the next generation’s understanding of that law.
