Why do lawyers hate AI?

Key learning

These are the three key problems that lawyers most often raise about Artificial Intelligence (AI):

  1. Client data confidentiality concerns (online and even offline)
  2. Accuracy concerns (aka “AI hallucinations”): missing critical details and plausible-sounding replies delivered with high confidence
  3. Most laws are local, yet most AI solutions are trained on generic, worldwide data

 

My personal reason for helping lawyers

A few years ago, someone threatened to physically harm me and my family if I didn’t pay a huge ransom (more money than I had earned in my entire life). It’s astonishing what the judiciary or law enforcement in some countries can do to harm innocent people when motivated by generous bribes in broken systems. Luckily, since I’m sitting here writing this article, I clearly (and narrowly) escaped that trap. But I was extremely fortunate to receive tremendous help from some very trustworthy lawyers. Since then, I’ve been slowly finding ways to help other innocent people stuck in similar scenarios.

I thought openly educating everyone about my hard times might make a dent towards preventing such situations, but to my surprise, no one cared. It’s like educating everyone about the severity of COVID before it ever happened. Most people can’t fathom severe trouble until it happens to them personally. Likewise, many innocent people die every day worldwide from extreme false accusations enabled by corrupt law enforcement or judicial systems. I don’t blame any single person or even the system, because pragmatically, if I were to lead a big country, I can’t imagine creating any single system that could guarantee zero corruption at the ground level.

Photo by Saúl Bucio / Unsplash

So, as an alternate approach to protecting such innocent people and empowering eminent lawyers, I began a journey to provide valuable technology support to the legal space. I personally love technology because, done properly once, it is the best leverage for automating almost any mundane task at massive scale. After trying and failing a few times on this journey, I started learning the key challenges lawyers face.

 

Learning lawyers’ key challenges

My interactions with many lawyers over the past few years have revealed some of their key professional challenges, including extensive time spent reviewing documents, difficulty delegating high-quality work, and accommodating unusual client requests. But these are only surface-level challenges and don’t really explain where exactly emerging technologies such as Artificial Intelligence (AI) fail lawyers in bolstering their support to society. After some brainstorming, I found a podcast that mentioned the best place to find such questions is Quora. And indeed, there it was . . .

Screenshot of the actual Quora page

Many people answered this question in great detail. There are also many similar questions readily available and well answered by experts from various domains. I’ve spent several days devouring all the deep information available there. Please excuse me for not citing every reference properly; I’ll keep that in mind for future blogs. The biggest challenge (or perhaps fear) is that user input data used to train the models might leak confidential client information. A valid concern indeed. However, there are several approaches to (relatively) easily overcome this challenge if a lawyer really intends to use an AI tool; one is sketched below.
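As an example, here is a minimal sketch (plain Python, with made-up client names and patterns) of one such approach: stripping obvious client identifiers from a draft before it ever reaches a hosted AI service. A real deployment would need a far more thorough redaction list per matter, or a model running entirely on the firm’s own hardware.

```python
import re

# A minimal sketch, assuming the firm keeps its own list of client-identifying
# terms per matter; the names and patterns below are made up for illustration.
CLIENT_TERMS = {
    "Jane Doe": "CLIENT_A",
    "Acme Holdings Ltd": "PARTY_B",
}

PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),   # email addresses
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),      # phone-like numbers
]

def redact(text: str) -> str:
    """Replace client names and obvious identifiers before any text
    leaves the firm for a third-party AI service."""
    for name, placeholder in CLIENT_TERMS.items():
        text = text.replace(name, placeholder)
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    draft = "Jane Doe (jane.doe@example.com, +44 20 7946 0958) retains Acme Holdings Ltd."
    print(redact(draft))
    # -> CLIENT_A ([EMAIL], [PHONE]) retains PARTY_B.
```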

Next, I learned that AI really messes up its replies to questions in the legal space. My deeper research revealed three major limitations that explain AI’s flawed responses in the legal field.

  1. AI hallucination – AI loves to create words and stories out of thin air, even in hypercritical sectors such as law, medicine, finance, and engineering. At best, these false claims give subject experts a good laugh, much as a fresh graduate might amuse industry leaders. In the worst cases, though, they can lead to heavy professional damage, if not outright loss of life.
  2. Missing critical details – Most AI tools love to summarize and, in doing so, drop critical information that may be needed to win the case. In fact, missing some basic keywords in court documents can lead to instant rejection of a case file even before the first hearing. Clearly, a tremendous impact (a simple safeguard against this is sketched after this list).
  3. Difficulty in detecting false claims – Ironically, if AI tools start performing quite well, say accurate replies in 99% of situations, it becomes even harder to detect their failures in the remaining ones. This is an even bigger problem because every tiny detail matters in a legal document. However, I believe that if AI tools ever reach this state, it is a “good” problem to have, because implicitly it means AI is already helping us tremendously in our mundane lives.
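The second problem, in particular, lends itself to a mechanical safety net. Below is a minimal sketch, in plain Python, that checks an AI-produced draft against a list of phrases a court requires; the phrases themselves are invented here purely for illustration, and a real checklist would come from the relevant court’s filing rules.

```python
# A minimal sketch, assuming each jurisdiction maintains its own checklist of
# phrases a filing must contain; the phrases below are invented for illustration.
REQUIRED_PHRASES = [
    "statement of truth",
    "relief sought",
    "jurisdiction",
]

def missing_phrases(draft: str, required: list[str]) -> list[str]:
    """Return every required phrase that does not appear in the draft."""
    lowered = draft.lower()
    return [phrase for phrase in required if phrase not in lowered]

if __name__ == "__main__":
    ai_draft = "The claimant seeks damages for breach of contract. Relief sought: damages."
    gaps = missing_phrases(ai_draft, REQUIRED_PHRASES)
    if gaps:
        print("Draft is missing:", ", ".join(gaps))
    # -> Draft is missing: statement of truth, jurisdiction
```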
Photo by Gerard Siderius / Unsplash

Apart from confidentiality and the various flavours of AI output garbage, some lawyers raised another very valid concern. Unlike most other information, such as software code or food-grade labels, deep legal documentation is typically valid only in a regional locality (sometimes even unique to each city). Conversely, most AI tools train on generic worldwide data that is biased toward sectors with more readily available information, such as software code. To further exacerbate the situation, these regional documents are often not even available digitally, which means most AI tools don’t have access to the information they would need to learn the regional requirements.

So, in total, I’ve now learned five main challenges of lawyers who seem very eager to use AI tools but are completely unsure how. Luckily, lawyers are not the only ones facing these challenges. All critical sectors face them, and hence many of the leading AI experts around the world are doing heavy research in these domains. Just to name a few, here’s a list (a tiny sketch of the first item follows it):

  1. Retrieval-Augmented Generation (RAG)
  2. Truthfulness Training & Alignment
  3. Fact-checking and Verification Models
  4. Probabilistic Modelling & Confidence Estimation
  5. Prompt Engineering & System Design
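To make the first item concrete: Retrieval-Augmented Generation simply means fetching the relevant local material first and handing it to the model alongside the question. Here is a deliberately tiny sketch in plain Python; the “regulations”, the word-overlap scoring, and the prompt wording are all stand-ins for what a real system would use (a proper document index and an actual model call).

```python
from collections import Counter

# A toy "library" of regional rules; a real system would index the actual local
# statutes and case law that generic models never saw during training. All
# documents, scores, and wording here are stand-ins for illustration only.
LOCAL_DOCUMENTS = {
    "filing-deadlines": "Civil claims in this district must be filed within 30 days of notice.",
    "fee-schedule": "Filing fees in this district are waived for claims under 5,000.",
    "service-rules": "Documents must be served in person unless the court permits email service.",
}

def tokenize(text: str) -> Counter:
    return Counter(text.lower().split())

def retrieve(question: str, documents: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank documents by crude word overlap with the question; return the best few."""
    q = tokenize(question)
    ranked = sorted(
        documents.values(),
        key=lambda doc: sum((q & tokenize(doc)).values()),
        reverse=True,
    )
    return ranked[:top_k]

def build_prompt(question: str) -> str:
    """Prepend the retrieved local rules so the model answers from them,
    not from its generic worldwide training data."""
    context = "\n".join(retrieve(question, LOCAL_DOCUMENTS))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"

if __name__ == "__main__":
    print(build_prompt("What is the deadline to file a civil claim in this district?"))
```

The point is the shape of the pipeline: retrieve locally relevant text, then constrain the model to it, which directly addresses the “generic worldwide data” problem described above.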
Photo by Aidin Geranrekab / Unsplash

The problem with these solutions is that they require a very good AI skill set to implement, at least as of now. Further, they are state-of-the-art AI research directions that are still experimental and may take years (if not decades) to be widely adopted by the masses who need them right now. Therefore, my aim with LegalOps AI is to distil this deep tech information so that lawyers (and perhaps other non-AI-experts) can easily bring such tools into their everyday work. If, like me, this is something you are excited about learning, hit the subscribe button to get the upcoming free newsletters delivered directly to your favourite inbox. Happy reading!

 

Call to action

If you are a proficient lawyer or a subject expert on this topic and believe there’s a critical error in this article, please let us know your thoughts in the comments below.