# How Can You Extract the Truth from an AI Model?

Since the introduction of ChatGPT two years ago, AI models have surged in popularity, and tech firms are racing to launch products intended to transform everyday life. This swift evolution has raised concerns among government officials and the public alike: while AI can boost productivity and foster creativity, it also carries notable risks, including job displacement, the spread of misinformation, reduced competition, and potential threats to national security.

As AI advances, a central challenge remains: understanding what is inside these models, how they function, what data they were trained on, and what dangers they may pose. But how do you get an AI model to “disclose” its inner workings? That question has become pivotal in ongoing lawsuits and regulatory actions aimed at holding AI firms responsible for the harms their models might cause.

## The Obstacle of AI Opacity

A major complication with AI models is their lack of transparency. These systems, particularly large language models like the one behind ChatGPT, are often described as “black boxes”: even their creators may not fully understand how a given output is produced. That opacity makes it hard to evaluate whether a model causes harm, such as infringing copyright, spreading misinformation, or producing biased results.
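What does probing a black-box model actually look like? One technique experts commonly use in copyright disputes is a memorization test: prompt the model with the opening of a protected work and measure how closely its continuation matches the real text. The sketch below is illustrative only; `query_model` is a hypothetical stand-in for whatever model-access interface a court-approved inspection protocol would provide, and the prefix length and similarity measure are arbitrary choices.

```python
import difflib
from typing import Callable

def memorization_score(query_model: Callable[[str], str],
                       work_text: str,
                       prefix_chars: int = 500) -> float:
    """Prompt the model with the opening of a work and compare its continuation
    against the true continuation. A ratio near 1.0 suggests near-verbatim recall."""
    prefix = work_text[:prefix_chars]
    true_continuation = work_text[prefix_chars:prefix_chars + 500]
    output = query_model(f"Continue this passage exactly:\n\n{prefix}")
    return difflib.SequenceMatcher(None, output[:500], true_continuation).ratio()

# Demo with a dummy "model" that happens to recall the text perfectly.
sample = "In the beginning, the newspaper published a long investigative series. " * 30
perfect_recall = lambda prompt: sample[500:1000]
print(memorization_score(perfect_recall, sample))  # prints 1.0
```

In practice an expert would run a test like this across many works and report the distribution of scores, which is exactly the kind of large-scale querying that a capped, retail-priced inspection protocol makes expensive.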

Regulators are rushing to construct frameworks to identify harmful AI applications before launch. Once a model is released, however, it becomes even more challenging to examine its inner workings. The difficulty is compounded by AI firms that, motivated by competition and profit, increasingly shield their models from outside scrutiny.

The more opaque these models are to the public, the harder it becomes to hold companies accountable for irresponsible AI deployments. This lack of transparency is central to several lawsuits, including one brought by *The New York Times* (NYT) against OpenAI, ChatGPT’s creator, over alleged copyright infringement.

## The Legal Struggle for AI Model Examination

In the suit filed by *The New York Times*, the plaintiffs sought to inspect OpenAI’s models to determine whether they had been trained on copyrighted content without authorization. OpenAI, however, proposed a highly restrictive inspection protocol that limited the number of queries the plaintiffs could make and imposed fees for exceeding a set cap.

Under this protocol, the NYT could hire an expert to review confidential OpenAI technical materials in a secure, isolated setting, with limited time and a capped number of queries to the model. Once the plaintiffs exhausted the cap, set at $15,000 worth of retail credits, OpenAI proposed that they split the cost of any further queries with the company.

The NYT contested this arrangement, arguing that gathering enough evidence to support its claims could require as much as $800,000 worth of retail credits. It accused OpenAI of trying to profit from discovery by charging retail prices for model inspection rather than disclosing the actual cost of providing access.
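To see why the pricing terms matter, here is a rough back-of-the-envelope calculation. The $15,000 cap and the NYT’s $800,000 estimate come from the dispute described above; the per-query retail price and the 50/50 cost split are purely illustrative assumptions, since the filings do not spell out either figure.

```python
# Illustrative cost arithmetic for the proposed inspection protocol.
# Only the cap and the NYT estimate come from the case; other values are assumptions.
CAP_USD = 15_000              # retail credits OpenAI proposed to cover before cost-sharing
ESTIMATED_NEED_USD = 800_000  # NYT estimate of credits needed to build its case
PRICE_PER_QUERY_USD = 0.50    # assumption: retail price per inspection query
PLAINTIFF_SHARE = 0.5         # assumption: plaintiffs' share of spending beyond the cap

queries_within_cap = CAP_USD / PRICE_PER_QUERY_USD
excess_spend = max(ESTIMATED_NEED_USD - CAP_USD, 0)
plaintiff_cost = excess_spend * PLAINTIFF_SHARE

print(f"Queries covered by the cap:            {queries_within_cap:,.0f}")
print(f"Retail spend beyond the cap:          ${excess_spend:,.0f}")
print(f"Plaintiffs' share at an assumed 50/50: ${plaintiff_cost:,.0f}")
```

Under these assumptions the cap covers only about 2% of the spending the NYT says it needs, leaving hundreds of thousands of dollars in contested inspection costs.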

This legal conflict underscores the challenges facing plaintiffs who seek to examine AI models for potential harm. If courts permit AI firms to charge retail rates for model inspection, it could deter future lawsuits from individuals or organizations unable to bear the heavy cost of discovery.

## The Wider Consequences for AI Responsibility

The resolution of the NYT lawsuit could have significant consequences for AI accountability. If AI firms are permitted to limit access to their models and charge exorbitant fees for examination, it will become increasingly difficult for the public to hold these companies accountable for any harm their models cause.

This situation is not exclusive to OpenAI. Other AI developers, particularly those building image generators and chatbots, also face litigation over alleged harms. Creators have sued AI firms for using their works without authorization to train image generators, for example, and chatbot makers have faced defamation claims over their bots’ outputs.

In light of these issues, some governments are taking steps to regulate AI and vet the safety of models before release. In the United States, the Biden administration launched the Artificial Intelligence Safety Institute (AISI), tasked with performing safety evaluations on AI models to identify risks related to privacy, discrimination, and other civil rights concerns.

Participation in the AISI’s safety evaluations is voluntary, however, and not all AI companies have agreed to take part. The AISI is also underfunded, which may limit its capacity to conduct thorough assessments of every AI model. That leaves the public largely reliant on AI firms’ internal evaluations, which may not always identify and address potential risks.

## The Horizon of AI Transparency

As AI technology progresses, demand for transparency and accountability will only grow. AI companies must balance protecting their proprietary information against the need to ensure that their models do not cause harm. Striking that balance will require new regulatory frameworks, as well as collaboration between governments, AI developers, and the public.