Elon Musk’s legal challenge to break up OpenAI may depend on whether its for-profit branch aligns with or detracts from the lab’s mission to ensure artificial general intelligence benefits humanity.
A former employee testified in a federal court in Oakland, California, that the company’s push to commercialize AI products compromised its AI safety commitment.
Rosie Campbell, who joined OpenAI’s AGI readiness team in 2021 and left in 2024 when the team was disbanded, testified that another safety-focused group, the Superalignment team, was also shut down.
“When I joined, it was research-focused with frequent discussions on AGI and safety, but it shifted towards a product focus over time,” she said.
Under questioning, Campbell acknowledged that pursuing AGI required significant funding, but argued that building super-intelligent models without proper safety measures contradicted the organization’s original mission.
Campbell cited an incident in which Microsoft deployed OpenAI’s GPT-4 model in India through Bing before OpenAI’s Deployment Safety Board had evaluated it. Though she said the risk in that case was minimal, she stressed the importance of setting strong safety precedents as the technology advances.
Under cross-examination by OpenAI’s attorneys, Campbell conceded that, in her opinion, OpenAI’s safety approach is better than that of xAI, the AI company Musk founded and that is now owned by SpaceX.
OpenAI publishes model evaluations and a safety framework but declined to comment on its AGI alignment approach. Dylan Scandinaro, head of preparedness, was hired from Anthropic, with CEO Sam Altman welcoming the hire on social media.
The GPT-4 deployment in India was among the factors behind OpenAI’s non-profit board briefly firing Altman in 2023, following complaints about his management style. Tasha McCauley, a board member at the time, testified about Altman’s lack of transparency with the board, describing a reported pattern of Altman misleading directors and withholding information, including ChatGPT’s launch.
“We, as a non-profit board, were meant to oversee the for-profit aspect, but our ability to do so was questionable,” McCauley said.
However, OpenAI reversed its decision to remove Altman after internal support for him grew, and Microsoft intervened.
The non-profit board’s failure to control the for-profit entity supports Musk’s claim that OpenAI’s shift from a research entity to a major private company violated the founders’ agreement.
David Schizer, a former Columbia Law School dean and Musk’s witness, echoed McCauley’s concerns.
“OpenAI emphasizes safety over profits, but if something requires a safety review, it must happen. The process is crucial,” Schizer said.
The issue of AI’s role in for-profit businesses extends beyond one lab, McCauley noted. The governance failures at OpenAI, she suggested, point to a need for stronger AI regulation, and the public interest should not depend on one CEO’s decisions.
