ChatGPT use could end the career of a US lawyer

The New York lawyer is facing a court hearing after his firm used the AI tool for legal research.

A New York lawyer’s career is on the line after his firm used ChatGPT to conduct legal research. The judge in the case said the court was facing “unprecedented circumstances” after the lawyer’s filing was found to reference legal cases that did not exist.

The lawyer told the court he was “unaware that its content could be false” when using the AI tool.

According to the lawyer, he didn’t know that along with creating requested original content, ChatGPT would also produce false content. That said, the AI tool does come with a warning that it may “produce inaccurate information.”

The case, handled by lawyer Peter LoDuca, involved a man who had filed a personal injury lawsuit against an airline. The legal team submitted a brief citing previous court cases in an effort to use precedent to show that the case should be allowed to move forward. However, the airline’s lawyers wrote to the judge saying they were unable to find several of the cases referenced in the brief.

The lawyer’s ChatGPT use led to six fictitious cases being cited in the brief.

“Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” wrote Judge Castel in an order demanding that the lawyer explain their brief.

It soon emerged that the research in a number of the filings had not been prepared by Peter LoDuca, the plaintiff’s lawyer, but by a colleague at the same firm, Steven A. Schwartz. An attorney with 30 years of experience, Schwartz had turned to ChatGPT believing it would make it easier to research similar cases.

Schwartz has since submitted a written statement confirming that LoDuca played no part in the research and was not aware of how it had been carried out. He added that he “greatly regrets” having relied on the AI chatbot, which he had never before used for legal research, saying he was “unaware that its content could be false.”

Schwartz stated that he would never again use artificial intelligence as a “supplement” for his legal research “without absolute verification of its authenticity.”
