OpenAI has Little Legal Recourse against DeepSeek, Tech Law Experts Say


- OpenAI and the White House have accused DeepSeek of using ChatGPT to cheaply train its new chatbot.
- Experts in tech law say OpenAI has little recourse under copyright and contract law.
- OpenAI’s terms of use may apply but are largely unenforceable, they say.
Today, OpenAI and the White House accused DeepSeek of something akin to theft.

In a flurry of press statements, they said the Chinese upstart had bombarded OpenAI’s chatbots with queries and hoovered up the resulting data trove to quickly and cheaply train a model that’s now nearly as good.

The Trump administration’s top AI czar said this training process, called “distilling,” amounted to copyright theft. OpenAI, meanwhile, told Business Insider and other outlets that it’s investigating whether “DeepSeek may have inappropriately distilled our models.”
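For readers who want a concrete picture of what “distilling” means here, the minimal sketch below shows the data-collection step under simple assumptions: query a stronger “teacher” chatbot, record its answers, and save prompt-response pairs that could later be used to fine-tune a smaller “student” model. The `query_teacher` function and the example prompts are hypothetical stand-ins for illustration, not any real OpenAI or DeepSeek code.

```python
import json

def query_teacher(prompt: str) -> str:
    """Stand-in for a call to a stronger "teacher" chatbot's API (hypothetical)."""
    return f"(teacher model's answer to: {prompt})"

# A real distillation effort would use a very large, varied set of prompts.
prompts = [
    "Explain gradient descent in one paragraph.",
    "Summarize the plot of Hamlet in three sentences.",
]

# Collect prompt/response pairs and write them as supervised training
# examples; a smaller "student" model would later be fine-tuned on this file.
with open("distillation_data.jsonl", "w") as f:
    for prompt in prompts:
        response = query_teacher(prompt)
        f.write(json.dumps({"prompt": prompt, "response": response}) + "\n")
```

At scale, this kind of collection involves enormous numbers of automated queries and outputs used as training data, which is the behavior OpenAI’s terms of service, discussed below, aim to prohibit.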

OpenAI is not saying whether the company plans to pursue legal action, instead promising what a spokesperson described as “aggressive, proactive countermeasures to protect our technology.”

But could it? Could it sue DeepSeek on “you took our content” grounds, much like the grounds on which OpenAI was itself sued in an ongoing copyright lawsuit filed in 2023 by The New York Times and other news outlets?

BI posed this question to experts in technology law, who said challenging DeepSeek in the courts would be an uphill battle for OpenAI now that the content-appropriation shoe is on the other foot.

OpenAI would have a hard time proving an intellectual property or copyright claim, these lawyers said.

“The question is whether ChatGPT outputs” - meaning the answers it generates in response to queries - “are copyrightable at all,” Mason Kortz of Harvard Law School said.

That’s because it’s unclear whether the answers ChatGPT spits out qualify as “creativity,” he said.

“There’s a doctrine that says creative expression is copyrightable, but facts and ideas are not,” Kortz, who teaches at Harvard’s Cyberlaw Clinic, said.

“There’s a significant question in copyright law right now about whether the outputs of a generative AI can ever constitute creative expression or whether they are necessarily unprotected facts,” he added.

Could OpenAI roll those dice anyway and claim that its outputs are protected?

That’s unlikely, the lawyers said.

OpenAI is already on the record in The New York Times’ copyright case arguing that training AI is a permissible “fair use” exception to copyright protection.

If they do a 180 and tell DeepSeek that training is not fair use, “that might come back to sort of bite them,” Kortz said. “DeepSeek could say, ‘Hey, weren’t you just saying that training is fair use?’”

There may be a distinction between the Times and DeepSeek cases, though.

“Maybe it’s more transformative to turn news articles into a model” - as the Times accuses OpenAI of doing - “than it is to turn outputs of a model into another model,” as DeepSeek is said to have done, Kortz said.

“But this still puts OpenAI in a pretty predicament with regard to the line it’s been toeing regarding fair use,” he added.

A breach-of-contract claim is more likely

A breach-of-contract claim is much likelier than an IP-based claim, though it comes with its own set of issues, said Anupam Chander, who teaches technology law at Georgetown University.


The terms of service for Big Tech chatbots like those developed by OpenAI and Anthropic forbid using their content as training fodder for a competing AI model.

“So maybe that’s the lawsuit you might bring - a contract-based claim, not an IP-based claim,” Chander said.

“Not, ‘You copied something from me,’ but that you learned from my model to do something that you were not allowed to do under our contract.”

There might be a hitch, Chander and Kortz said. OpenAI’s terms of service require that most claims be resolved through arbitration, not lawsuits. There’s an exception for claims “to stop unauthorized use or abuse of the Services or intellectual property infringement or misappropriation.”

There’s a bigger hitch, though, experts said.

“You should know that the brilliant scholar Mark Lemley and a coauthor argue that AI terms of use are likely unenforceable,” Chander said. He was referring to a January 10 paper, “The Mirage of Artificial Intelligence Terms of Use Restrictions,” by Stanford Law’s Mark A. Lemley and Peter Henderson of Princeton University’s Center for Information Technology Policy.

To date, “no model developer has actually tried to enforce these terms with monetary penalties or injunctive relief,” the paper says.

“This is likely for good reason: we believe that the legal enforceability of these licenses is questionable,” it adds. That’s in part because model outputs “are largely not copyrightable” and because laws like the Digital Millennium Copyright Act and the Computer Fraud and Abuse Act “offer limited recourse,” it says.

“I think they are likely unenforceable,” Lemley told BI of OpenAI’s terms of service, “because DeepSeek didn’t take anything copyrighted by OpenAI and because courts generally won’t enforce agreements not to compete in the absence of an IP right that would prevent that competition.”

Lawsuits between parties in different countries, each with its own legal and enforcement systems, are always challenging, Kortz said.

Even if OpenAI cleared all the above hurdles and won a judgment from a US court or arbitrator, “in order to get DeepSeek to turn over money or stop doing what it’s doing, the enforcement would come down to the Chinese legal system,” he said.

Here, OpenAI would be at the mercy of another extremely complicated area of law - the enforcement of foreign judgments and the balancing of individual and corporate rights and national sovereignty - that goes back to before the founding of the US.

“So this is a long, complicated, fraught process,” Kortz added.

Could OpenAI have protected itself better from a distilling attack?

“They could have used technical measures to block repeated access to their site,” Lemley said. “But doing so would also inconvenience regular customers.”

He added: “I don’t think they could, or should, have a valid legal claim against the scraping of uncopyrightable information from a public site.”

Representatives for DeepSeek did not immediately respond to a request for comment.

“We know that groups in the PRC are actively working to use methods, including what’s known as distillation, to try to replicate advanced U.S. AI models,” Rhianna Donaldson, an OpenAI spokesperson, told BI in an emailed statement.