Could Artificial Intelligence increasingly limit the practice of Law?
The topic of Artificial Intelligence, given its enormous reach and how much we still do not know about it (in many cases we remain at an intuitive stage), has given rise to a veritable flood of studies, opinions, controversies, and heated debates, with new ones appearing almost daily.
Our Laboratory believes that one of the best services it can offer the individuals and organizations that follow our work is a carefully selected series of opinions, positions, and debates, published as close as possible to the date they occur, so that those paying attention stay genuinely informed about what is happening and about our perspective.
Meanwhile, the Laboratory is working on its Artificial Intelligence Microlab and will share its conclusions and insights in due course, but the urgency of the topic allows for little delay. That is why today we are launching a Series on Artificial Intelligence, which we hope will serve as a foundation for analysis, reflection, and conclusions about the implications that a topic of this magnitude forces us to address. No one, from governments and international bodies to regional organizations, think tanks, and individuals, can remain indifferent to its evolution.
As always, we hope that our service can be of use to you.
Artificial intelligence may not steal your job, but it could change it
Artificial Intelligence (AI) is already being used in the legal field. Is it really ready to be a lawyer?
Advances in Artificial Intelligence (AI) tend to generate anxiety about the future of jobs, and the latest wave of AI models, such as ChatGPT and OpenAI’s new GPT-4, is no different: first came the launch of the systems, and now come the predictions of job automation.
A report published by Goldman Sachs in early April predicted that advances in AI could automate the equivalent of 300 million jobs (about 18% of the global workforce). OpenAI also published its own study, in collaboration with the University of Pennsylvania (USA), which claimed that ChatGPT could affect more than 80% of jobs in the US.
The numbers sound overwhelming, but the language used in these reports can be frustratingly vague. “Affect” can mean many things, and the details are unclear.
People whose jobs revolve around language may, unsurprisingly, be particularly affected by large language models such as ChatGPT and GPT-4. Take lawyers as an example. In early April, I looked at the legal industry and how it is likely to be affected by the new AI models, and what I found is that there is as much cause for optimism as for concern.
The slow-moving, old-fashioned legal industry has been a candidate for technological disruption for some time. In an industry with a labor shortage and a need to deal with piles of complex documents, a technology that can quickly understand and summarize text could be immensely useful. So how should we think about the impact these AI models might have on the legal sector?
First, recent advances in AI are particularly well suited to legal work. In March, GPT-4 passed the Uniform Bar Exam (UBE), the standard test required to obtain a law license in the USA, comparable to the OAB exam in Brazil. That doesn’t mean AI is ready to be a lawyer, however.
The model may have been trained on thousands of practice tests, which would make it an impressive test-taker, but not necessarily a great lawyer. (We don’t know much about GPT-4’s training data because OpenAI didn’t publish that information.)
Still, the system is excellent at text analysis, which is extremely important for lawyers.
“Language is of the utmost importance in the legal industry and in the field of law. All roads lead to a document. That means you have to read, analyze, or write a document… and that is really the currency people trade in,” says Daniel Katz, a law professor at Chicago-Kent College of Law (USA), who oversaw the GPT-4 test.
Moreover, according to Katz, legal work involves many repetitive tasks that could be automated, such as searching for applicable laws and cases and gathering relevant evidence.
Pablo Arredondo, another researcher behind the UBE test, has been working in secret with OpenAI since the fall to use GPT-4 in his legal product, Casetext. According to the Casetext website, it uses AI for “document review, legal research memos, deposition preparation, and contract analysis.”
Arredondo says that the more he uses GPT-4, the more enthusiastic he becomes about the language model’s potential to help lawyers. He calls the technology “incredible” and “refined.”
AI in the legal field is not a new trend, however. It has already been used to review contracts and predict legal outcomes, and researchers have been exploring how AI could help get laws passed. Recently, the consumer-rights company DoNotPay considered arguing a case in court with an AI-written argument, using a so-called “robot lawyer” to whisper it into the defendant’s ear through an earpiece. (DoNotPay did not go through with it and is being sued for practicing law without a license.)
Despite these examples, this type of technology has not yet seen widespread adoption in law firms. Could that change with the new large language models?
Second, lawyers are accustomed to reviewing and editing work.
Large language models are far from perfect, and their output must be checked closely, which takes a lot of work. But lawyers are very used to reviewing documents, whether produced by people or by machines. Many are trained in document review, which means that broader use of AI, with a human in the loop, could be relatively easy and practical compared with adopting the technology in other sectors.
The big question is whether lawyers can be convinced to trust an AI system rather than a junior lawyer who has spent a few years in law school.
Finally, there are limitations and risks. GPT-4 can sometimes produce very convincing but incorrect text, and it can misuse source material. Once, says Arredondo, GPT-4 made him doubt the facts of a case he himself had worked on. “I told it, you’re wrong. I argued this case. And the AI said, you can brag about the cases you’ve worked on, Pablo, but I’m right, and here’s the proof.” It then gave a URL that led nowhere. Arredondo adds, “It’s a bit of a sociopath.”
Katz says it’s essential that humans constantly monitor the results generated by AI systems, emphasizing the professional obligation of lawyers to be thorough and meticulous: “You shouldn’t just take the results of these systems and hand them to people without reviewing them.”
Other professionals are even more skeptical. “This is not a tool I would trust to conduct important legal analysis and make sure it’s reliable and accurate,” says Ben Winters, who leads the AI and human rights projects at the Electronic Privacy Information Center. Winters characterizes the culture of generative AI in the legal field as “overconfident and irresponsible.” It has also been widely reported that AI suffers from racial and gender bias.
There are also long-term considerations and complex issues. If lawyers have less practice in legal research, what does that mean for their skill and knowledge in the field?
But we are still a bit far from that scenario. For now.
Original source and credits: MIT Technology Review, May 16, 2023.
Article published by MIT Technology Review. Original in Portuguese; translation by the Technical Team.