Would you hire the lawyer who just got sanctioned for using AI?

All over the country, lawyers are using artificial intelligence to write briefs and help them prepare for court. It is not going well.

A family in Alabama lost a trust dispute last month because their lawyer filed citations to cases that do not exist. The Alabama Supreme Court dismissed their appeal, calling the conduct egregious, and barred the lawyer from filing in that court again without co-counsel sign-off.

In the same month, a federal judge in Oregon sanctioned two lawyers $110,000, the largest AI hallucination penalty in American legal history, after they submitted 23 fabricated citations and eight invented quotations. The case was subsequently dismissed.

In Manhattan, a judge ruled recently that a defendant who used a general-purpose AI chatbot to help prepare his case had waived attorney-client privilege. If you type your defense strategy into a chatbot, the government can subpoena it, read it, and use it against you.

According to a database compiled by lawyer and data scientist Damien Charlotin, there have now been more than 1,300 cases globally where a court or tribunal has commented on AI-generated hallucinations in legal filings. Behind each of those cases is a client who paid a lawyer and trusted the system. Behind each, more often than not, sits a lawyer who placed blind trust in a technology that generates text with complete confidence and no capacity for self-verification. 

Not all AI is created equal. There is a real difference between general-purpose large language models like ChatGPT and Claude that have been trained on the open web, and industry-specific legal AI tools that are plugged into the same databases lawyers have been using for decades. Unfortunately, Wall Street has struggled to tell the difference.

When Anthropic released a legal plug-in for Claude recently, it contributed to a roughly $285 billion selloff in technology stocks. The chaos in courtrooms around the world tells a different story. Solving legal AI is harder than tweaking a standard large language model.

I have practiced law across three jurisdictions and now serve as General Counsel of one of the world’s largest legal technology companies, LexisNexis.

The question I am asked most often right now is, “Which AI is most capable?” My view is that this is the wrong question. The right question is, “Which AI can be trusted in a courtroom?” In law, those are not the same thing.

The obligation runs to the client and to the court simultaneously. The American Bar Association has identified four of its Model Rules of Professional Conduct that are directly implicated by AI use: competence, confidentiality, candor toward the tribunal, and supervisory responsibility.

When a lawyer submits a hallucinated citation, they fail their client. They also fail the court: a fabricated citation corrupts the record the entire system depends on.

The structural problem runs deeper than which model a lawyer uses. General-purpose AI is designed to produce text that looks like the right answer, which in most domains is most of the job. In law it is the wrong job. The model cannot verify that the case it cited exists, that the case says what the brief claims, or that the case remains controlling authority. The gap is architectural, not a capability problem to be solved by the next training run. The consequences are concrete: lawyers get sanctioned, claims get dismissed, and defendants see their case strategies handed to the prosecution.

The question to ask of any legal AI tool is not how it performs on a benchmark, but what it is built on, and whether a lawyer can trace, verify, and stand behind the output in open court.

There are two ways AI changes the practice of law. The first is compression: the same work, faster. The second is expansion: work that was never possible before. AI’s expansion potential in law is enormous, but it can only rest on a foundation that does not fabricate the underlying law. Litigation strategy built on decades of judge-specific outcome data is not a faster version of an existing task; it is work that did not previously exist. The same is true of regulatory monitoring built into deal documents that update the moment the law changes. There are many other examples, and the list grows weekly.

The market will eventually price what the profession has always known. In law, the cost of a wrong answer is paid in someone’s freedom, their assets, or their family’s future.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
