By Doug Wallace and Kristen King
Published - SUMMER 2024 | The Litigator

The New York Times reported a story last year about a lawyer who used ChatGPT to do his legal research for a motion filed with the federal court. A problem arose when opposing counsel was unable to locate any of the decisions cited in the brief, and upon inquiry, the judge found that they were nonexistent. The lawyer ultimately “threw himself on the mercy of the court” and admitted that his research was from “a source that has revealed itself to be unreliable.” At a subsequent hearing into his conduct, he admitted that “I did not comprehend that ChatGPT could fabricate cases.” The lawyer was found to have acted in bad faith and was fined $5,000 by the judge. At the time of writing, it was not clear whether a separate disciplinary process had been initiated.
In December 2023, it was reported that Michael Cohen, Donald Trump’s former lawyer, had given his own lawyer fictitious legal citations concocted by Google Bard, an Artificial Intelligence (“AI”) program. These citations made their way into a brief filed in federal court, with similarly embarrassing results.
It is sobering to consider that these cautionary tales are only the ones we know about, where something went wrong and was widely reported. This raises the question: how often is AI used by insurers and the legal profession in Canada, and to what effect? This article explores the use of AI in the insurance industry and its role in claims processing and handling. It then turns to the role of AI in defence practice: the risks and benefits of its use in litigation, and how the courts are responding.
What is AI?
AI is an umbrella term that was first used in 1956. It is not a single technology or function, and there is no consensus on a specific definition. AI generally involves algorithms, machine learning, and natural language processing, and falls into two main types: narrow (weak) AI and general (strong) AI. “Narrow AI” can do one thing as well as or better than a human. For example, e-discovery AI technologies can find relevant documents and evidence faster and more efficiently than human lawyers. “General AI”, theoretically, would do most if not all things better than a human.
Current insurance AI technologies are based on narrow AI that uses machine learning, natural language processing (“NLP”), and computer vision. Machine learning is a branch of AI that uses algorithms trained on data sets to create self-learning models capable of predicting outcomes and classifying information. NLP is concerned with giving computers the ability to understand text and spoken words in the same way humans can. Computer vision is a field of AI focused on developing techniques to help computers automatically derive information from digital images, videos, and other visual inputs. Key players involved in developing these technologies include Applied Systems, Cape Analytics, IBM, Microsoft, OpenText, and Oracle, among others.
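To make the machine learning piece concrete, the sketch below trains a model on a small, invented set of past claims and asks it to classify a new one. It is a minimal illustration in Python using the scikit-learn library; the features, labels, and numbers are hypothetical and do not reflect any insurer’s actual model.

```python
# Minimal sketch of machine learning as described above: a model "learns"
# classification rules from historical examples rather than being
# programmed with them. All data here is invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row of hypothetical features:
# [claim amount ($000s), days since policy start, prior claims]
past_claims = [
    [5, 400, 0],
    [12, 30, 2],
    [3, 900, 0],
    [40, 10, 3],
    [8, 200, 1],
    [25, 15, 2],
]
# Labels from past adjudication: 1 = flagged for review, 0 = routine
labels = [0, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(past_claims, labels)

new_claim = [[30, 20, 2]]
print(model.predict(new_claim))        # predicted class for the new claim
print(model.predict_proba(new_claim))  # the model's estimated probabilities
```

NLP and computer vision tools follow the same pattern, but learn from text and images rather than structured numbers.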
AI and Claims Handling
Some insurers already use at least some narrow AI technologies in claims intake, processing and adjudication, underwriting, image and document analysis, claims fraud detection, loss estimation and valuation, property damage analysis, automated inspections, claims volume forecasting, and pricing and risk management. As a result, insurers can expect more consistent employee performance, reduced human error, better and quicker decisions driven by historical data, faster fraud detection, and better customer service.
Insurance technology, or “InsurTech” as it is known, involves the use of innovative technologies designed to enhance the effectiveness of the insurance industry, and it is on the rise. According to a 2017 Deloitte report, 95% of insurance executives intended to start or continue investing in AI technologies. Insurers know that by adopting or investing in these technologies they can gain a competitive advantage, increasing the speed at which they review and synthesize information and make decisions. These technologies will continue to change how insurers operate, from the first point of contact with the customer through to the way policies are underwritten and claims are handled, processed, adjudicated, and litigated.
The most mature AI application is the automation of the claims handling process. Traditionally, claims processing has been one of the more costly and labour-intensive aspects of an insurer’s business. Since much of the work in processing claims is standardized and repetitive, it is a good fit for AI automation.
For example, AI-powered chatbots or virtual assistants can be used to handle the initial stages of claims reporting for property damage or accident benefits. The chatbots can interact with the customer, collect relevant information about the loss, and guide them through the claims process. To increase efficiency, machine learning tools or NLP can then assess and extract information from the documents submitted to better understand the details of the claim. Computer vision technology can be used to analyse images and photographs, for example to assess the extent of damage to an automobile. Machine learning can also be applied to detect patterns or anomalies in the data submitted, which could indicate fraud. AI technologies can then compare the submitted damage and information with policy documents and generate automated coverage decisions and settlement offers in a fraction of the time a human would take. This would not be entirely without human oversight, however: claims adjusters must review the AI results in order to verify and settle the claim. For more straightforward claims, the use of these technologies can significantly reduce processing times, allowing adjusters to concentrate on more complicated claims.
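The sketch below lays out that pipeline in skeletal Python. The stage functions, thresholds, and values are hypothetical stand-ins for the trained NLP, vision, and fraud models a real system would call; the point is the flow, including the routing of anything unusual to a human adjuster.

```python
# Simplified sketch of an automated claims pipeline. Every helper below is
# a hypothetical stand-in for a trained model; names and thresholds are
# invented for illustration.

def extract_claim_details(documents: list[str]) -> dict:
    """Stand-in for an NLP step that pulls structured facts from documents."""
    return {"loss_type": "collision", "claimed_amount": 4_200}

def assess_damage(photos: list[str]) -> float:
    """Stand-in for a computer-vision damage estimate, in dollars."""
    return 3_900.0

def fraud_score(details: dict) -> float:
    """Stand-in for an anomaly-detection model: 0.0 (clean) to 1.0 (suspect)."""
    return 0.12

def process_claim(documents: list[str], photos: list[str]) -> dict:
    details = extract_claim_details(documents)
    estimate = assess_damage(photos)
    score = fraud_score(details)
    # Low-risk, internally consistent claims get a draft decision for an
    # adjuster to verify; anything unusual goes straight to a human.
    if score < 0.3 and abs(details["claimed_amount"] - estimate) < 1_000:
        return {"decision": "draft_settlement", "offer": estimate}
    return {"decision": "refer_to_adjuster"}

print(process_claim(["fnol_form.pdf"], ["bumper_photo.jpg"]))
```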
To put this into context, Lemonade Inc., a U.S. property and casualty peer-to-peer insurance company that uses AI-powered claim analysis, boasts a three-second AI claims review process. Eighteen anti-fraud algorithms are reported to run on the image and video claims information received from customers, and a response is rendered almost instantaneously.
While these InsurTech products undoubtedly offer significant cost and time savings, claims decisions rendered by AI must also be ethical and comply with applicable laws and regulations. Looking forward, concerns remain about the reliability and accuracy of AI-generated content. Insurers must be mindful of these risks, particularly if they wish to rely on AI-generated data to support decisions to approve or deny claims and/or to manage litigation files.
One problem is that we do not fully understand how these AI tools make decisions. The inability to see how a learning system reaches its conclusions, and therefore to trace its reasoning, is known as the “black box problem.” Researchers, including the Algorithmic Justice League, have pointed to systemically racist and discriminatory results produced by AI in different contexts, often stemming from the historical datasets used to train machine learning algorithms.
Processing speed is also a potential liability with these technologies because of the assumptions AI may make at scale. If an AI tool relies on an assumption that leads to a biased decision, it will reproduce that bias across every claim it touches. As a result, insurers must take steps to ensure the AI tools they use are not biased and do not discriminate.
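One basic safeguard is to audit a model’s output directly. The sketch below, run on invented decisions, computes approval rates by group; it is the simplest form of the disparity checks a real audit would build on with far more statistical rigour.

```python
# Minimal bias check: compare a model's approval rates across groups.
# The decisions below are invented for illustration.
from collections import defaultdict

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": True},
]

totals, approvals = defaultdict(int), defaultdict(int)
for d in decisions:
    totals[d["group"]] += 1
    approvals[d["group"]] += d["approved"]

rates = {g: approvals[g] / totals[g] for g in totals}
print(rates)  # roughly {'A': 0.67, 'B': 0.33}: a disparity worth investigating
```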
Without safeguards, the risk of bias and discrimination in claims processing and adjudication could increase insurers’ exposure to bad faith claims. While AI-related bad faith claims have yet to be litigated in Canada, it is foreseeable that insurers may be required to produce source code and/or answer questions about the algorithm or model that generated the data upon which a decision was based. The prospect that proprietary code may be producible, and that such production could underpin allegations of systemic bad faith and/or a possible class action, could encourage early cooperation from insurers and the possibility of early resolution.
AI and Litigation
In light of these developments, is there still a need for insurance defence counsel? There is some consensus that AI will not replace lawyers, at least not yet. AI-powered tools have not yet demonstrated an ability to grasp the personal and nuanced aspects of litigation. That said, the traditional, costly, and time-intensive tasks lawyers have performed—legal research, document review and management, and drafting—are evolving.
In their book “The Legal Singularity: How Artificial Intelligence Can Make Law Radically Better,” Abdi Aidid and Benjamin Alarie explore legal singularity, which they describe as “the idea that law will reach functional completeness, in the sense that practically any legal question will have an instantaneous and just resolution”. The authors state that new tools are emerging that allow much of the work lawyers traditionally performed to be computed. For example, the legal materials lawyers work with (cases, contracts, statutes, and regulations) can be used as training data for predictive algorithms. The models can synthesize the information to predict how the law would apply in various circumstances. There are already a number of commercially available AI-enabled legal tools that advertise their ability to predict how courts are likely to rule on legal issues. Westlaw Precedent and Lexis+ are two examples.
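The toy sketch below, in Python with scikit-learn, illustrates the shape of that approach: a handful of invented case summaries serve as training data for a model that predicts the likely outcome on similar facts. Commercial tools apply the same idea to vast corpora of real decisions; nothing here reflects any actual product.

```python
# Toy outcome predictor: invented past case summaries become training
# data; the model then predicts which side similar facts favour.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

summaries = [
    "plaintiff slipped on unmarked wet floor with no warning signage",
    "claimant ignored posted hazard warnings before the fall",
    "store failed to inspect the premises for several days",
    "incident occurred despite documented hourly inspections",
]
outcomes = ["plaintiff", "defendant", "plaintiff", "defendant"]

# Convert each summary to word-frequency features, then fit a classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(summaries, outcomes)

# Prints the model's predicted outcome for a new fact pattern
print(model.predict(["floor was wet and no signs were posted"]))
```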
With AI technologies, legal research that once required hours to complete can now take minutes. For example, some Canadian companies currently offer AI-generated legal research memos and arguments, boasting turnaround times of five minutes for instant memos and 20 to 30 seconds for legal arguments.
Arguably, near-instant access to legal research would allow lawyers to provide advice more quickly, at a lower cost, and with higher accuracy. As the technology improves and becomes more readily available, clients may come to expect this level of service from lawyers. Historically, insurers have imposed ever-decreasing time limits for research and document review, and developments in AI are unlikely to alter this trend. It is now possible, if not likely, that insurers will develop or subscribe to existing AI technologies to conduct their own legal research and document review. AI is therefore likely to reduce the time lawyers spend performing these tasks.
Beyond legal research and document review, AI technologies can also assist lawyers in preparing case chronologies and summaries, drafting pleadings and other court documents, streamlining the discovery process, and even generating factums. These technologies are unlikely to replace human lawyers anytime soon, however. There are several reasons why.
First, while AI can reduce the time spent on more mundane tasks, oversight by human lawyers is essential. This is partly because, to be successful, AI tools require detailed, specific, and relevant data. While AI may be theoretically capable of understanding certain facts, it is unlikely to know whether a witness is credible or how to interpret their evidence critically. The maxim “garbage in, garbage out” takes on greater meaning through the lens of AI: the desired output is achievable only if the training data and inputs are sufficiently robust.
Similarly, AI cannot replicate a lawyer’s ability to conduct meaningful examinations for discovery. While AI can be used to generate a list of potential questions, a lawyer should not rely on it (at least not yet) to generate follow-up questions and explore the rabbit holes as they arise. In these situations, a lawyer’s knowledge of the law, the facts, the documents, and the procedural rules, together with their advocacy skills, remains necessary. As yet, there is no substitute for years of experience interacting with expert witnesses, judges, mediators, arbitrators and opposing counsel. Any strategy missing this intangible element is unlikely to achieve a better outcome than one informed by human knowledge.
The legal profession also operates largely on the credibility and reliability of the lawyers who practice within it. The accuracy of a lawyer’s work product is critical for both clients and the courts. To use AI with confidence, lawyers will need tools trained on reliable, constantly updated sources of material that have been tested and proven. To do otherwise risks embarrassment or worse.
Canadian courts are alive to these issues and are beginning to consider and address the use of generative AI by lawyers. In Ontario, Rule 61 of the Rules of Civil Procedure, which governs appeals to an appellate court, was recently amended to require factums to include a certificate stating that the signatory is satisfied “as to the authenticity of every authority listed.”
Courts in other jurisdictions are addressing the issue more explicitly. Practice directions released in June 2023 by the Supreme Court of Yukon and the Court of King’s Bench of Manitoba provide that if any counsel or party relies on AI (such as ChatGPT or any other AI platform) for their legal research or submissions before the Court, they must advise the Court of the tool used and for what purpose.
On December 20, 2023, the Federal Court also published a Notice to the Parties and the Profession on the Use of Artificial Intelligence in Court Proceedings. The notice provides that the Court expects parties to inform it, and each other, if they have used AI to create or generate new content in preparing a document. If any AI-generated content is included in a document submitted to the Court, the first paragraph must disclose that AI was used to create or generate that content. This reflects the Federal Court’s recognition of both the risks and benefits of AI, including “hallucinations” and the potential for bias in AI programs, their underlying algorithms, and data sets.
Conclusion
Insurers, lawyers and the courts will need to think carefully about the legal and ethical challenges posed by AI technology as it continues to evolve at breakneck speed. As we move towards legal singularity, defence lawyers may be increasingly at the mercy of business decisions based on AI-generated data. Cost effectiveness will likely remain a priority as profitability continues to be top of mind for insurers. While lawyers will not be replaced by AI anytime soon, they must inform themselves about the risks and benefits before using it.