Saturday, September 23, 2023

Lawyers say chatbot tricked them into citing fake cases

Two attorneys expressed their remorse Thursday before an angry Manhattan federal court judge, blaming ChatGPT for tricking them into including fictitious legal research in a court filing.

Attorneys Steven A. Schwartz and Peter LoDuca face possible punishment over a filing in a lawsuit against an airline that cited previous court cases Schwartz believed to be real, but which were actually invented by the AI-powered chatbot.

Schwartz explained that he used the groundbreaking program to search for legal precedents supporting a client’s case against the Colombian airline Avianca over an injury she suffered on a 2019 flight.

The chatbot, which has captivated the world with its essay-like responses to user prompts, suggested several aviation-incident cases that Schwartz was unable to find through the usual search methods used at his law firm.

The problem was that many of these cases either never happened or involved airlines that didn’t even exist.

Schwartz told Judge P. Kevin Castel that he “acted under the mistaken impression … that this website was getting cases from some source that I did not have access to.”

Schwartz said that he “failed miserably” at doing follow-up research to make sure the citations were correct.

“I did not comprehend that ChatGPT could fabricate cases,” he said.

Microsoft has invested nearly $1 billion in OpenAI, the company behind ChatGPT.

The success of ChatGPT, which suggests that artificial intelligence could change the way humans work and learn, has raised fears among some. Hundreds of industry leaders signed a letter in May warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Judge Castel seemed both baffled and troubled by the unusual occurrence, and disappointed that the attorneys had not acted quickly to correct the false legal citations when Avianca’s lawyers and the court first brought them to their attention. Avianca had flagged the false case law in a document filed with the court in March.

The judge confronted Schwartz with one legal case invented by the computer program. It was initially described as a wrongful-death case brought by a woman against an airline, only to morph into a lawsuit about a man who missed a flight to New York and was forced to incur additional expenses.

“Can we agree that’s legal gibberish?” Castel asked.

Schwartz said he had mistakenly believed that the confusing presentation resulted from excerpts drawn from different parts of the case.

When Castel finished questioning, he asked Schwartz if he had anything else to add.

“I want to sincerely apologize,” Schwartz said.

The lawyer said he had suffered personally and professionally as a result of the mistake and felt “embarrassed, humiliated and extremely remorseful.”

He said that he and the firm where he worked, Levidow, Levidow & Oberman, had put safeguards in place to ensure that something like this never happens again.

LoDuca, the other attorney working on the case, said he relied on Schwartz and failed to properly review what his colleague had compiled.

After the judge read aloud portions of one cited case to show how easily its inconsistencies could be spotted, LoDuca said, “It never dawned on me that this was a bogus case.”

LoDuca said the result “makes me very sad.”

Ronald Minkoff, an attorney for the law firm, told the judge that the submission “resulted from carelessness, not bad faith” and should not result in sanctions.

He said lawyers have historically struggled with technology, especially modern technology, “and it’s not getting any easier.”

“Mr. Schwartz, who barely does federal research, chose to use this new technology. He thought he was dealing with a standard search engine,” Minkoff said. “What he was doing was playing with live ammo.”

Daniel Shin, an adjunct professor and assistant director of research at the Center for Legal and Court Technology at William & Mary Law School, said he presented the Avianca case last week at a conference that drew dozens of in-person and online participants from state and federal courts across the United States, including Manhattan’s federal courthouse.

He said the case caused shock and bewilderment at the conference.

“We’re talking about the Southern District of New York, the federal district that handles big cases, from 9/11 to all the major financial crimes,” Shin said. “This was the first documented instance of potential professional misconduct by an attorney using generative AI.”

He said the case shows how the lawyers may not have understood how ChatGPT works, because it tends to hallucinate, talking about fictional things in a manner that sounds realistic but is not.

“It highlights the dangers of using promising AI technologies without being clear about the risks,” Shin said.

The judge said he would rule on sanctions at a later date.
