A federal judge in New York is considering sanctions against an attorney who used the ChatGPT artificial intelligence tool to draft a legal brief that contained case citations that were completely untrue, fabricated by the AI tool.
In a court hearing yesterday, the New York Times reports, Judge P. Kevin Castel grilled attorney Steven A. Schwartz about why he used results produced by the chatbot without first personally checking that they were accurate.
According to the Times, Schwartz told the judge he "did not comprehend that ChatGPT could fabricate cases."
Meanwhile, a federal judge in Texas is requiring that attorneys who appear in his court formally attest that "no portion" of their filings was drafted by generative artificial intelligence, or, if it was, that it was first checked "by a human being."
So what does all this mean for the future, especially as some experts predict that AI will one day replace many jobs now filled by hardworking, highly educated attorneys?
We asked University of Akron Law Professor Emeritus J. Dean Carro, who specializes in topics including criminal law, appellate advocacy, trial advocacy, and criminal procedure.