New York Lawyer Lands in Hot Water After Using ChatGPT in Lawsuit
New York lawyer Steven A. Schwartz of the firm Levidow, Levidow & Oberman recently admitted to using ChatGPT to create an error-ridden legal brief, launching Schwartz and the Bar into uncharted ethical territory.
The suit began when a man named Roberto Mata sued the airline Avianca, claiming that one of the airline's metal serving carts struck and injured his knee on a flight to JFK. Predictably, Avianca filed for the suit to be dismissed before it even got off the ground. Schwartz and the rest of Mata's lawyers vehemently disagreed, filing an opposing, 10-page brief full of relevant prior decisions to convince the judge to keep the lawsuit alive. Cases included Varghese v. China Southern Airlines and Martinez v. Delta Airlines, along with several citations of relevant law.
After the brief went to the judge and the airline's lawyers, a problem quickly arose: no one could find the cases or the law that Schwartz was citing.
Because ChatGPT had invented it all.
Schwartz came clean on Thursday, admitting to the court that he had used the chatbot to help with research and that he realized only now that it was "a source that has revealed itself to be unreliable." According to the New York Times, Schwartz has been practicing law for three decades and told the judge he did not intend to deceive the court and that this was his first experience using ChatGPT.
As such, he argued he "therefore was unaware of the possibility that its content could be false." To bolster this point, Schwartz claims to have asked the chatbot whether the content it generated for the brief was true, to which ChatGPT gave a definitive yes.
It was not. According to the Times, Schwartz has said that he "greatly regrets" using ChatGPT and "will never do so in the future without absolute verification of its authenticity."
The presiding judge, Judge Castel, said in an order that he had been confronted with "an unprecedented circumstance" due to Schwartz's actions and the "bogus" nature of his brief. As such, Castel set a hearing for June 8th to address the matter and potential sanctions.
However, the judge would essentially be flying blind. While general rules about deception and legal obligations apply, the Bar Association has no regulations governing the use of this new AI technology. Evidently, it needs to produce some soon to set the rules of the road.
While this incident is troubling in many ways, it also provides a modicum of comfort to know that advanced AI isn't ready to replace white-collar jobs just yet.