Artificial Intelligence Trying a Case:
It Can Be Done, But Should It Be?
This FBFK attorney put A.I. to the legal test
By Steven Darling, Attorney
The official tweet, from Mr. Browder’s Twitter account @jbrowder1, reads:
“On February 22nd at 1:30PM, history will be made. For the first time ever, a robot will represent someone in a US courtroom. DoNotPay A.I. will whisper in someone’s ear exactly what to say. We will release the results and share more after it happens. Wish us luck!”
We may never get to see DoNotPay’s A.I. in action, but as a former prosecutor and defense attorney with trial experience, I actually believe that it can be done – there are a number of features that could integrate well with an existing attorney’s practice habits. That said, would a defendant without an attorney really want artificial intelligence as a co-pilot during trial?
I Gave A.I. a Legal Try
An argument can be made that a successful lawyer already analyzes data and rules like a computer in the courtroom. A digital defense attorney would simply need to navigate the criminal justice system in the same way. As long as the human co-pilot effectively communicated the arguments of DoNotPay’s A.I., for example, the state might fail to prove each element of the offense, and the defendant could be found not guilty. But what is our computerized criminal defense attorney actually up against?
Let’s ask the now-famous free A.I. platform, ChatGPT. When prompted with the question “What concerns would artificial intelligence have when trying to operate as an attorney in the courtroom?” our electronic companion replied in mere seconds with the following list:
- Lack of Emotional Intelligence: AI lacks the ability to empathize with clients and jurors, which is an important aspect of being a successful attorney. AI may struggle to connect with jurors, read their emotions, and adapt to their changing moods.
- Limited Understanding of Context: AI may not have the same level of understanding of the context in which a case is being tried as a human attorney would. This could make it difficult for AI to interpret non-verbal cues and understand the nuances of a case.
- Limited Interpretation of Legal Precedents: AI may struggle to interpret legal precedents in the same way that a human attorney would. While AI can analyze vast amounts of data, it may not be able to discern the subtle differences between cases and apply legal principles in a nuanced way.
- Lack of Creativity: AI may struggle to come up with creative legal arguments and strategies that are necessary in a courtroom setting. While AI can analyze data and draw conclusions based on patterns, it may not be able to think outside the box and develop innovative approaches to legal problems.
- Concerns about Bias: There are concerns about bias in AI decision-making. If AI were to make legal decisions, it would need to be programmed in a way that minimizes the risk of bias based on factors such as race, gender, or socio-economic status.
At least the machine is aware of key limitations it may face in acting as an attorney – but what about the actual mechanics of the case it needs to interface with?
To be prepared for even a limited level of participation, an algorithm could be programmed to know each essential element of the offense, which means it would need to engage in machine learning akin to the years of law school every aspiring lawyer must complete. The advantage is that DoNotPay’s A.I. can likely do it in a much shorter period of time, and with far less reliance on human comforts like sleep or caffeine.
The A.I. would also need to understand the parameters of the Rules of Evidence for a given jurisdiction, which often prohibit certain questions from being asked or answered due to the varying reliability of human speech. Finally, the A.I. would need to be “good” at formulating those responses. Experienced prosecutors, defense attorneys, and other litigators know that having the right idea is one thing, but it’s another to put that right idea into well-delivered human speech.
Human Error When the Defendant is Relying on A.I.
We’ve heard from artificial intelligence about its concerns with serving as lead attorney, but what does human error look like in the context of a criminal trial when the defendant is relying on artificial intelligence alone for support?
- A failure to connect with the jurors through the process of voir dire (jury selection).
- A failure to decide whether to waive opening statements or other important moments in the trial for strategic purposes.
- A failure to make a timely objection read out by the A.I.
- A failure to understand the nature of an objection made by the A.I., inviting an inability to answer the judge’s follow-up questions if the A.I. cannot grasp their nuance.
- A failure to convey the sense or spirit of the A.I.’s legal instructions, which could cause a disconnect with the judge or jury ultimately deciding the case.
- A failure to determine whether the Defendant should testify in response to some statement made during the State’s direct examination that the Defendant should strategically clarify.
- A failure to understand unspoken communications among the humans in the courtroom, such as tone of voice, exchanged glances, or words used as double entendres or in other ways not readily understood by the machine.
Shortcuts Aren’t Always Better
As it stands, humans already have a hard enough time at the courthouse, and while, philosophically speaking, a machine could likely analyze everything it needs to do to succeed as an attorney, it still falls to a human to either convey its words or properly receive and understand them for their intended purposes. A person cannot merely repeat what a computer whispers in court and successfully navigate our legal system without understanding those words and their effect on the proceeding. While the prospects seem high for its use as a tool for licensed attorneys, only time will tell whether advances in artificial intelligence can overcome the challenges of nuanced human communication to the degree that DoNotPay’s A.I. can truly assist everyday, unrepresented litigants in the courtroom.