
AI in legal services - will lawyers or citizens win in this battle for the future?

By Chris Middleton

December 8, 2023

(Image of a gold gavel, by 3D Animation Production Company from Pixabay)

When it comes to AI in legal services, there are two sides to the coin. On the one side, generative tools, Large Language Models, and other forms of AI present significant – in some cases existential – transformational challenges to legal services providers. We explored some of the issues in our previous report.

But on the other, they could be a boon for citizens, offering expertly trained tools that carry out essential functions quickly, accurately, and (perhaps) inexpensively. Few would say that would be a bad thing. But this basic principle could challenge the business models of not only lawyers, but also other professional service industries, including finance, banking, accountancy, and insurance.

The AI age suggests that all these industries and more must be open to change, and to forging new approaches and skills paths for their staff. The core challenge for the legal sector is that, though complex, nuanced, and open to interpretation, the law is fundamentally a set of rules. That means it lends itself to automation – or at least, the many aspects of it that don’t require a human touch.

Thus, it seems likely that new generations of providers may emerge to offer accessible, AI-driven legal services directly to citizens. In turn, these would disintermediate the kinds of lawyers for whom some tasks are bread-and-butter income, but also a tiresome, repetitive grind. These might include conveyancing, lease searching, probate, or settling minor disputes; functions that are essential, but often frustratingly slow and expensive for clients. 

In this version of the future, small, local solicitors might start disappearing from the High Street and Main Street, as AI-based providers take over. Even so, friction and intransigence would make that transition complex in the medium term. 

But as previously explored, law firms themselves would benefit from the opportunities presented by AI, if they are nimble and far-sighted enough to seize them. AI could speed up many tasks, such as research, admin, and document discovery, allowing them to do more with less. The key will be ensuring human responsibility, as our last report explained.

But is it that simple?

In general, therefore, AI offers the promise of making legal services accessible to more and more people, many of whom never receive justice because they can’t afford to seek it. 

At least, that’s one theory. But there’s another possibility. Namely that, as AI begins to change or threaten the business models of many law firms, it may no longer be worth their while to offer some services. Instead, they may move upmarket to focus on higher-profile cases and higher-value clients. Why do the boring stuff anymore?

The result may be a ‘justice gap’, as wealthy tech-haves use AI, while have-nots are either shut out completely, or unable to afford professional-grade technology that becomes a premium, rather than commoditized, service. 

After all, AI companies will want to recoup their investments. Once vendors have captured a new market, there is no reason to assume they won’t jack up prices to whatever levels they believe are sustainable – given legal services’ historic premiums.

For the public, therefore, the justice gap may widen if local solicitors are pushed out of the market by AI, leaving many citizens – especially the digitally excluded – unable to seek help or receive essential legal services.

So, which vision is the most likely? 

Access to justice will be a key benefit of AI. That’s the view of Ellen Lefley, a lawyer for law reform and human rights charity JUSTICE. Speaking at a Westminster Legal Policy Forum on AI this week, she said:

I’m eternally an optimist, and I don't think that there's an inevitability of AI being integrated and the justice gap widening. I really do see the opportunity for the reverse. 

I think there is space for more targeted, shorter, and therefore cheaper, legal, professional, and personal interventions that are facilitated by AI. In turn, that could have a real capacity impact, not only on Legal Aid providers, but also on community law centres and volunteer organizations.

But she acknowledged that there will be challenges:

The question of how that's remunerated through state-funded Legal Aid is one that would obviously need serious consideration – and I don't have the answers. But I do have optimism.

Also energized by AI’s promise is Kriti Sharma, Chief Product Officer, Legal Tech, at Thomson Reuters. She told delegates:

Solving meaningful access to justice problems, and affordable access to justice, that's what I'm excited about. 

We have thousands of software developers, machine learning and data science experts, and product designers. But even more interestingly and importantly, hundreds of attorneys and subject matter experts. They have shown incredible resilience and forward thinking, in not only building content or trusted information assets, but also pivoting to a different role in the world of AI.

She explained:

It's the human language and expertise that the people in our profession bring which are really driving the difference between good AI products and great ones; it’s the subject matter expertise. 

One of the approaches that we've taken very early is not just training AI models, but training them with approaches to inject trust. We use a technique called retrieval augmented generation [RAG]. That means you're taking some of the broader models and grounding them in trusted and accurate legal information to bring inaccuracies down.
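
For the uninitiated, a minimal sketch of the RAG pattern Sharma describes might look like the code below: retrieve passages from a trusted corpus, then ground the model’s prompt in them. The corpus, function names, and prompt wording here are hypothetical illustrations, not Thomson Reuters’ actual system:

```python
# A minimal, illustrative sketch of retrieval augmented generation (RAG):
# ground a general-purpose model in a trusted corpus before it answers.
# All names and data (TRUSTED_CORPUS, search_corpus, build_grounded_prompt)
# are hypothetical stand-ins.

TRUSTED_CORPUS = {
    "limitation": "Limitation Act 1980: most contract claims must be brought "
                  "within six years of the breach.",
    "consumer": "Consumer Rights Act 2015: goods must be of satisfactory "
                "quality and fit for purpose.",
}

def search_corpus(question: str, corpus: dict, top_k: int = 2) -> list:
    """Naive keyword retrieval: rank documents by word overlap with the question."""
    q_words = set(question.lower().split())
    return sorted(
        corpus.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )[:top_k]

def build_grounded_prompt(question: str, corpus: dict) -> str:
    """Prepend retrieved, trusted passages so the model answers from them,
    not from whatever it absorbed during pre-training."""
    context = "\n".join(f"- {p}" for p in search_corpus(question, corpus))
    return (
        "Answer using ONLY the sources below. If they are insufficient, say so.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    # In a real system, this grounded prompt would be sent to the underlying model.
    print(build_grounded_prompt(
        "What is the limitation period for a contract claim?", TRUSTED_CORPUS
    ))
```

The point of the pattern is simply that the model’s answer is anchored to vetted legal text rather than to its general training data – which is what “bringing inaccuracies down” means in practice.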

Good news. But this vision seems oddly short-term and inward-facing. Once the models are properly trained on the law’s details, nuances, and differing interpretations, what then? A positive outcome for citizens, we hope. But what of the legal profession? And others like it?

Matthew Hill is Chief Executive of the Legal Services Board, an independent body overseeing regulation in England and Wales. He said that regulators will need to be open to – and facilitate – change, so that citizens are the ultimate beneficiaries:

We're looking at AI from the point of view of the consumer and the benefits it might bring in terms of access, cost, and convenience. 

I talk about regulators being ‘actively open’. We know that innovators can be deterred from the legal sector because of their perception – often only their perception – of the complexity of the regulatory landscape. But often, innovators are surprised by how receptive legal services providers – and, indeed, regulators – are, once the ice is broken.

So, we think that legal services regulators should be actively encouraging innovators into the space, rather than passively waiting for them to make contact. And flexing their regulation to accommodate new approaches to the delivery of legal services for the public good.

But he accepted that this, too, is the optimistic version:

It's fair to suggest that legal services have not been in the vanguard of innovation compared to other sectors. So, when the lightning pace of development in AI meets the historic innovation inertia of legal services, the risk is the sector ends up being ‘done to’ rather than leading the revolution.

Indeed. So, the key challenge will be reversing centuries of deep-seated culture and behaviour. Common sense suggests that this is unlikely to happen at scale, though LexisNexis data reveals that two-thirds of large legal practices are considering AI strategically (see our previous report). 

Following a long way behind are 10,000 or so smaller firms – the kind that citizens look to for local justice, but which work as slowly as possible because they bill by the hour. So, the implication is that some form of justice gap will emerge outside of big cities. A multibillion-dollar opportunity for AI companies, but a messy future for vulnerable citizens.

The technology gap

There is another gap, too: a technology one. At least in the medium term.

Hill explained:

We hear newsworthy examples of case references being fabricated out of nothing by the AI, and other stories we should be worried about, including the inherent bias in systems that are trained on data generated by humans. [More on that later.]

But we need to remember that tools such as ChatGPT are not designed to give legal advice. Just as I wouldn't expect good results if I cut my toenails with a chainsaw, we ought not to be surprised if ChatGPT throws out dodgy legal advice. It's just not designed for the job.

What we should be more surprised about is legal professionals relying on ChatGPT in the first place.

Quite. But as our previous report explained, lawyers are already using ChatGPT for critical case work, and some are being fined for it. At scale, such behaviour could result in cases collapsing, plus severe reputational damage for lawyers and their employers.

So, what else might the future hold? Hill posed the question himself:

What will it be like to be a lawyer if much of what we've traditionally thought of as lawyering – research, analysis, advice, the overlay of experience and judgement to synthesize arguments – will certainly be capable of being done faster, cheaper, and better by machines? What would be left for a human lawyer to do in this world?

A question all professional services industries should ask themselves right now. In fact, a question anyone with a career should ask – and anyone with a job. (To ask it is not pessimistic; this is essential critical and strategic thinking!) 

He continued:

There's the Elon Musk view of the future in which, unencumbered by the need to work, humans descend into an existence in which, shorn of purpose and the need to think, we devolve into a diminished version of humanity.

For what it's worth, we take a slightly more optimistic view. And it's based on the premise that there isn't enough lawyering to go around. Many people and businesses, even in our mature democracy, don't have access to the legal services that would help them get fair outcomes. 

So, the great opportunity of AI is to bring the law fully to the people, by drastically cutting unit labour costs, increasing convenience, and removing bottlenecks.

He added:

Maybe the great lawyers of the middle future are the ones who capitalize on their human communication skills to act as trusted navigators, the ones who understand both how the technology is best deployed, in any given case, and how to articulate the problems that people are living through.

An enticing vision, though Hill noted it was a “flight of fancy”. 

No doubt local champions will emerge, but a future vision in which both law firms and Big Tech providers act selflessly like James Stewart in a Frank Capra film is – sadly – about as likely to happen as Elon Musk donating his billions to a pro-trans activist group.

But law firms must ask themselves probing questions, said Hill – in other words, act as lawyers on their own behalf. These include how to get to grips with training, education, onboarding, competence, and standards.

He explained:

We currently train lawyers for the jobs of the last 50 years, not for the next 50. And what about business models? Might the writing be on the wall for the billable hour, when work will be done in seconds? 

Maybe this will mark the transition to value-based rather than time-based charging. And this might, in turn, have positive effects on diversity in the sector by reducing the reliance on long working hours and removing the pressure to be available 24/7.

Excellent points. 

Professor Richard Susskind is, as President of the Society for Computers and Law, both an eminent figure in legal-services technology and a rare combination of evangelist and critical thinker (one usually cancels out the other). 

He drew comparisons with how AI may transform the medical sector, shifting it from treatment to prevention:

In law, the challenge is to find the equivalent of non-invasive therapy – ways of using AI to deliver results that clients want, but working in different ways. Delivering some kind of preventative lawyering: putting a fence at the top of the cliff, rather than an ambulance at the bottom. Using AI to help people avoid legal problems as well as solve them.

Many people [in medicine] ask the wrong questions. When asked, ‘What's the future?’, neurosurgeons should ask, ‘How in the future will we solve problems to which neurosurgeons are our current best answer?’ 

So, ‘What's the future of law or lawyers?’ is the wrong question. We should be asking, ‘How in the future will we solve the problems to which traditional laws are our current best answer?’ And my best answer to that, at least in part, is through artificial intelligence.

But what about other AIs?

By this point, nearly all of the discussion about AI in legal services has been around generative AI and LLMs – the disruption brought about by epochal one-year-old ChatGPT. But what about other forms of AI? What about algorithms that use human-authored data to predict behaviour? 

This is a critical question, as it has already had deleterious effects in the justice system. The highest-profile example was the use of the COMPAS risk-assessment algorithm to inform sentencing in some US courts, which opened a Pandora’s box of problems. 

To summarize, it was found to recommend harsher sentences for black offenders and more lenient ones for white offenders, because it drew on decades of unequal treatment in some courts. In this way, systemic racism was both automated and given a veneer of computer-generated veracity and trust. 
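
How does that happen mechanically? A toy sketch – emphatically not COMPAS itself, whose internals are proprietary – shows how a model trained on skewed historical records simply inherits the skew. The data below is fabricated purely to illustrate the mechanism:

```python
# Toy illustration of bias laundering: a 'risk model' trained on skewed
# historical records reproduces the skew as an 'objective' score.
from collections import defaultdict

# Hypothetical records of (group, recorded_reoffence). Suppose group A was
# policed more heavily, so more of its members were *recorded* as
# reoffending, regardless of underlying behaviour.
history = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 30 + [("B", 0)] * 70

def train_rate_model(records):
    """'Training' here is just estimating P(recorded reoffence | group)."""
    counts, positives = defaultdict(int), defaultdict(int)
    for group, label in records:
        counts[group] += 1
        positives[group] += label
    return {group: positives[group] / counts[group] for group in counts}

model = train_rate_model(history)
print(model)  # {'A': 0.6, 'B': 0.3} – the skewed record becomes the 'risk score'
```

The model is doing exactly what it was asked to do; the injustice was already in the data it learned from.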

So, can the use of predictive AI ever, morally and ethically, be approved in the legal system? 

On this point, Susskind opted for ‘evangelist’ rather than critical thinker:

It's right to draw attention to these systems if they underperform and exhibit bias. Clearly, we have to be concerned. But imagine a world where systems perform at a very high level – at a level where they outperform us. In which case, the moral and ethical requirement is to use them, rather than neglect or ignore them.

He then made an excellent point:

What we're really lacking is the ability to evaluate such systems. With new medical procedures or pharmaceuticals, we have systematic ways of evaluating them. And they're not released to the market until the positive outcomes are confirmed. 

But we don't have this in law. We don't have a methodology for comparing proposed new developments with what we have today. And what we have today is a biased and a flawed system. So, one of the great advantages of AI is it is holding up a mirror to our current world.
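
In miniature, the kind of systematic evaluation Susskind calls for might look like the sketch below: score the status-quo process and a proposed AI system against the same benchmark of agreed-correct outcomes, so the comparison is with today’s flawed baseline rather than with perfection. All names and data are invented for illustration:

```python
# A minimal sketch of comparative evaluation: benchmark a proposed system
# against the incumbent process, not against an ideal. Everything here is
# hypothetical.

def evaluate(system, cases) -> float:
    """Fraction of benchmark cases where the system matches the agreed outcome."""
    correct = sum(1 for question, expected in cases if system(question) == expected)
    return correct / len(cases)

# Hypothetical benchmark of (question, agreed-correct outcome) pairs.
BENCHMARK = [
    ("claim_within_limitation_period", "actionable"),
    ("claim_out_of_time", "time-barred"),
    ("defective_goods", "actionable"),
]

# Stand-ins for the incumbent human process and a proposed AI system.
def status_quo(question):
    return {"claim_within_limitation_period": "actionable"}.get(question, "time-barred")

def proposed_ai(question):
    return "time-barred" if "out_of_time" in question else "actionable"

print(f"baseline: {evaluate(status_quo, BENCHMARK):.2f}")   # 0.67
print(f"proposed: {evaluate(proposed_ai, BENCHMARK):.2f}")  # 1.00
```

The hard part in law is not the arithmetic but agreeing what the benchmark’s ‘correct outcomes’ are – which is precisely the methodology Susskind says is missing.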

My take

No one doubts AI’s potential to transform society in positive ways. But reality is messy, complex, and flawed – ie, never exactly how evangelists envision it. 

The real danger is not the technologies themselves, but those who rush to adopt them uncritically. And, sometimes, those who ignore them until it is too late.

