Elon Musk spent more than seven hours in court this week arguing that the future of artificial intelligence is no longer just about innovation, but about trust.
At the center of the dispute is OpenAI, the company behind ChatGPT. Musk, one of its co-founders, told a federal court that the organization had strayed from its original purpose as a non-profit serving humanity. Instead, he said, it had shifted toward commercial gain.
“This was meant to be a charity,” Musk testified, describing the early vision of OpenAI as a project that would not benefit any individual.
AI governance and ‘charity’ debate
Musk’s argument hinges on how OpenAI was founded in 2015. While the original announcement did not explicitly describe it as a charity, Musk repeatedly framed it that way in court. He accused OpenAI executives, including Sam Altman and Greg Brockman, of abandoning that principle.
The case now raises broader questions about AI safety regulation and whether organizations developing powerful technologies should prioritise public benefit over profit.
The role of funding and AI infrastructure
Musk told the court that OpenAI depended heavily on his early contributions. He said he provided initial funding, recruited key researchers, and leveraged relationships to secure critical resources.
That included access to AI infrastructure, such as computing power, as well as key partnerships. Musk highlighted his connections with Microsoft CEO Satya Nadella and Nvidia CEO Jensen Huang as central to OpenAI’s early growth.
He also pointed to a later Microsoft investment, reportedly worth billions, as a turning point. Musk described the deal as a “bait and switch,” suggesting it marked a shift away from OpenAI’s original mission.
Tensions over ownership and incentives
According to Musk, a 2022 exchange with Altman deepened his concerns. After questioning the structure of the Microsoft deal, Musk said he was offered the chance to buy equity in OpenAI.
“It felt like a bribe,” he told the court.
The question of OpenAI’s valuation and ownership is now central to the trial, with implications for how AI companies balance investor returns with public accountability.
AI safety warnings resurface
Musk also returned to a familiar theme: the risks of artificial intelligence. He described past conversations with Google co-founder Larry Page, claiming they disagreed sharply on the importance of safeguarding humanity.
“The reason OpenAI exists is because of AI safety,” Musk said, adding that concerns about existential risk were foundational to the company’s creation.
That argument resurfaced in court when Musk’s legal team pushed to include testimony about extinction-level threats. “We all could die,” his lawyer said, though the judge limited how far such arguments could go.
A clash of visions for AI’s future
Despite his warnings, Musk acknowledged that his own company, xAI, operates as a for-profit entity. He defended the structure, saying profit-driven firms can still deliver social benefits.
The contradiction did not go unnoticed in court. At one point, the judge remarked on the irony of Musk raising concerns about AI risks while actively building another AI company.
As the trial continues, the outcome could shape not just OpenAI’s future but the broader rules governing AI development, especially as companies compete for talent, funding, and dominance in a rapidly evolving field fueled by Nvidia hardware and global capital.