The world of change, transformation, and IT is currently caught up in the storm of the AI boom dominating technical projects and programmes. With so many organisations looking to implement and adopt AI, we’re keen to see how it’s set to transform sectors and workplaces.
Jenny Simmons, our Legal and Professional Services Manager, recently hosted a breakfast roundtable on AI and Change in Legal with a number of legal experts and leaders from our network to see how these revolutionary tools could transform the space.
Supporting debates on the potential power and dangers of AI in the legal space, the roundtable generated some key conversations around approaches to AI technology and the importance of understanding “hallucinations” and how they affect the accuracy of AI-generated content.
Details and technicalities matter in the legal sector perhaps more than in any other. Because of this, AI has become a point of contention for many law firms, with widespread concern over the prevalence of AI errors and misunderstandings.
We delved into this problem, along with many other pertinent topics, during our session. In case you missed the roundtable, we’ve gathered the key points below, along with the information and guidance shared during the meeting.
It’s no exaggeration to say that AI has transformed the world of tech. In almost every sector and industry, clients and investors are calling on companies to harness the power of AI to deliver services and products at a faster rate and a fraction of the price.
The same is the case in the legal sector, according to those who attended our roundtable. A number of top firms are already looking to implement the latest in tech from organisations such as LexisNexis and Thomson Reuters due to growing pressure from shareholders. With the potential time and money savings of AI, many are hoping it can open up new opportunities for continuous improvement across the sector.
Many are already utilising the technologies to quickly find relevant cases and clauses in databases and beyond, helping to save consultants and archivists time and energy.
However, there are still a plethora of questions that need to be considered before organisations dive head-first into the trend. For one, the existence of frequent inaccuracies within generated responses could be catastrophic if not picked up before influencing legal research and, in the worst case, contracts.
The reliability and accuracy of AI-generated legal content was raised during our discussions, and the data and anecdotes brought forward made for some concerning conversations.
Research from Stanford University has revealed just how prevalent inaccuracies or “hallucinations” are within some of the legal sector’s leading AI software. According to a recent study, LexisNexis’ Lexis+ AI and Thomson Reuters’ Westlaw AI-Assisted Research and Ask Practical Law AI produce content containing “hallucinations” 17% to 33% of the time.
For legal applications, this error rate is not only concerning, but it’s also dangerous. If firms were to become complacent and rely on these tools without a sufficient proofing system, they’d risk handing over work to clients that is littered with errors that could impact crucial legal contracts or proceedings.
Thankfully, both organisations are looking to implement RAG (Retrieval-Augmented Generation) to minimise these errors and to improve customers’ trust in the systems.
RAG techniques allow companies to make use of their existing data to generate content rather than simply relying on the learning carried out by the technology itself.
Essentially, this technique allows AI tools to cross-reference documents and files within databases, ensuring the content generated is accurate and built on concrete technicalities, not AI assumptions.
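To make the idea concrete, here is a deliberately simplified sketch of the RAG pattern: retrieve the most relevant passages from a trusted document store, then place them in the prompt so the model answers from real source text rather than its own recall. This is an illustrative toy, not how LexisNexis or Westlaw work internally; production systems use vector embeddings and semantic search, whereas the word-overlap score, corpus, and function names below are invented for demonstration.

```python
import re

def _tokens(text: str) -> set[str]:
    """Lower-case word tokens with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (a crude stand-in
    for the semantic search a real RAG system would use)."""
    query_tokens = _tokens(query)
    ranked = sorted(documents,
                    key=lambda doc: len(query_tokens & _tokens(doc)),
                    reverse=True)
    return ranked[:top_k]

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Assemble a prompt that tells the model to answer only from the
    retrieved sources, reducing the room for hallucination."""
    sources = retrieve(query, documents)
    context = "\n".join(f"- {source}" for source in sources)
    return ("Answer using ONLY the sources below. "
            "If they do not contain the answer, say it is not stated.\n"
            f"Sources:\n{context}\n"
            f"Question: {query}")

# A tiny invented document store standing in for a firm's clause database.
corpus = [
    "Clause 4.2: either party may terminate this agreement with 30 days written notice.",
    "Clause 7.1: liability is capped at the total fees paid in the preceding year.",
]
print(build_grounded_prompt("What notice period applies to termination?", corpus))
```

The key point is that the final answer is constrained to verifiable source material, which is exactly what a legal workflow needs and what a bare language model cannot guarantee.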
However, the legal sector is home to so many complex situations that a single clear-cut answer often isn’t possible even for a seemingly simple question. Where context is key, RAG techniques may still struggle to prevent “hallucinations.”
Firms looking to adopt AI techniques, with or without RAG, still need to ensure that their AI-generated content is gone through with a fine-tooth comb before being utilised.
The data above can be incredibly frightening, but the reality is that if organisations take a measured approach when implementing AI technologies, many of the fears surrounding the trend can be almost entirely quashed.
Attendees spoke about how striking the right balance between enthusiasm and caution is crucial to getting the best out of AI. Investing time and money into software and tools also needs to be backed by considerable planning and, just as importantly, a willingness to challenge assumptions.
Putting all your eggs in one basket risks over-reliance on AI, and is likely to see you, or worse still your clients, catching mistakes in your work that would never have crept in had it been done by hand.
On the other end of the spectrum, companies that refuse to invest in and capitalise on AI risk falling behind competitors, who will be able to offer services and expertise far faster.
Finding the perfect spot between caution and action is only possible after rigorous internal conversations. Only then will organisations be able to implement AI technologies in an effective, sustainable and ethical way.
While it may seem that preparing for AI applications is something only organisations can do, the reality is that individuals can prepare too, by researching and testing how to write prompts that get the most out of AI.
Knowing what these tools need to create accurate content is key; that’s why we’d recommend getting to grips with some of the free AI platforms that are available, such as ChatGPT and Gemini.
Also, if your organisation is currently onboarding a new specific legal AI software, have a read through its information to see what it can and can’t do and what you need to provide to get the most out of it.
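As a starting point for that experimentation, here is a minimal sketch of one common prompt structure: a role, the source material, an explicit task, and guardrails. This four-part pattern is general prompting practice rather than a template from any vendor’s documentation, and the function name, clause, and wording are illustrative assumptions.

```python
def review_prompt(excerpt: str, question: str) -> str:
    """Combine role, grounding text, task, and guardrails into one prompt."""
    return "\n".join([
        "You are assisting a solicitor. Be precise and conservative.",       # role
        f"Contract excerpt:\n{excerpt}",                                     # grounding material
        f"Task: {question}",                                                 # explicit task
        "If the excerpt does not answer the question, reply 'not stated'.",  # guardrail
        "Cite the clause number for every claim.",                           # verifiability
    ])

print(review_prompt(
    "Clause 9.3: this agreement is governed by the laws of England and Wales.",
    "Which governing law applies?",
))
```

Prompts built this way give the tool the material it needs and an explicit way to decline, which is far safer than an open-ended question that invites the model to fill gaps with guesses.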
Our roundtable helped to confirm the point that AI should be used as an extension of services you already provide; it shouldn’t be something you reinvent your business around. Nor should it be abandoned and left to those who are willing to take the plunge.
With the right approach and mandate around AI, legal companies have the chance to open themselves to a world of opportunity.
If you’re interested in attending one of our legal and professional services events in the future, reach out to Jenny Simmons at jenny@deltragroup.com.
10th July
Events