October 4, 2024

Before Sam Altman was ousted from OpenAI last week, he and the company’s board of directors had been bickering for more than a year. The tension got worse as OpenAI became a mainstream name thanks to its popular ChatGPT chatbot.

Mr. Altman, the chief executive, recently made a move to push out one of the board’s members because he thought a research paper she had co-written was critical of the company.

Another member, Ilya Sutskever, who is also OpenAI’s chief scientist, thought Mr. Altman was not always being honest when talking with the board. And board members worried that Mr. Altman was too focused on expansion while they wanted to balance that growth with A.I. safety.

The news that he was being pushed out came in a videoconference on Friday afternoon, when Mr. Sutskever, who had worked closely with Mr. Altman at OpenAI for eight years, read to him a statement from the board. Though the decision stunned OpenAI’s employees, exposing its board members to tough questions about their qualifications to manage such a high-profile company, it was the culmination of long-simmering boardroom tension.

The rift also showed how building new A.I. systems is testing whether businesspeople who want to make money from artificial intelligence can work in sync with researchers who worry that what they are building could eventually eliminate jobs or become a threat to humanity if things like autonomous weapons grow out of control.

OpenAI was started in 2015 with an ambitious plan to one day create a superintelligent automated system that could do everything a human brain can do. But friction has long plagued the OpenAI board, which hasn’t even been able to agree on replacements for members who have stepped down.

Now the company’s continued existence is in doubt, largely because of that dysfunction. Nearly all of OpenAI’s 800 employees have threatened to follow Mr. Altman to Microsoft, which asked him to lead an A.I. lab with Greg Brockman, who quit his roles as OpenAI’s president and board chairman in solidarity with Mr. Altman.

The board had told Mr. Brockman that he would no longer be OpenAI’s chairman but invited him to stay on at the company — though he was not invited to the meeting where the decision was made to push him off the board and Mr. Altman out of the company.

The board has not said what it thought Mr. Altman was not being honest about.

There were indications that the board was still open to his return, as it and Mr. Altman held discussions that extended into Tuesday, two people familiar with the talks said. But there was a sticking point: Mr. Altman rejected some of the guardrails that had been proposed to improve his communication with the board. It was not clear what exactly those guardrails would be.

Mr. Sutskever did not respond to a request for comment on Tuesday.

OpenAI’s board troubles can be traced to the start-up’s nonprofit beginnings. In 2015, Mr. Altman teamed with Elon Musk and others, including Mr. Sutskever, to create a nonprofit to build A.I. that was safe and beneficial to humanity. They planned to raise money from private donors for their mission. But within a few years, they realized that their computing needs required much more funding than they could raise from individuals.

After Mr. Musk left in 2018, they created a for-profit subsidiary that began raising billions of dollars from investors, including $1 billion from Microsoft. The subsidiary would be controlled by the nonprofit board, and each director’s fiduciary duty would be to “humanity, not OpenAI investors,” OpenAI said on its website.

With Mr. Altman forced out and Mr. Brockman gone, the four remaining board members are Mr. Sutskever; Adam D’Angelo, the chief executive of Quora, the question-and-answer site; Helen Toner, a director of strategy at Georgetown University’s Center for Security and Emerging Technology; and Tasha McCauley, an entrepreneur and computer scientist.

A few weeks before Mr. Altman’s ouster, he met with Ms. Toner to discuss a paper she had recently co-written for Georgetown University’s Center for Security and Emerging Technology.

Mr. Altman complained that the research paper seemed to criticize OpenAI’s efforts to keep its A.I. technologies safe while praising the approach taken by Anthropic, according to an email that Mr. Altman wrote to colleagues and that was viewed by The New York Times.

In the email, Mr. Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company, particularly at a time, he added, when the Federal Trade Commission was investigating OpenAI over the data used to build its technology.

Ms. Toner defended the paper as an academic work that analyzed the challenges the public faces when trying to understand the intentions of the countries and companies developing A.I. But Mr. Altman disagreed.

“I did not feel we’re on the same page on the damage of all this,” he wrote in the email. “Any amount of criticism from a board member carries a lot of weight.”

Senior OpenAI leaders, including Mr. Sutskever, who is deeply concerned that A.I. could one day destroy humanity, later discussed whether Ms. Toner should be removed, a person involved in the conversations said.

But shortly after those discussions, Mr. Sutskever did the unexpected: He sided with board members to oust Mr. Altman, according to two people familiar with the board’s deliberations. He read to Mr. Altman the board’s public statement explaining that Mr. Altman was fired because he wasn’t “consistently candid in his communications with the board.”

Mr. Sutskever’s frustration with Mr. Altman echoed what had happened in 2021, when another senior A.I. scientist left OpenAI to form the company Anthropic. That scientist and other researchers went to the board to try to push Mr. Altman out. After they failed, they gave up and departed, according to three people familiar with the attempt.

“After a series of reasonably amicable negotiations, the co-founders of Anthropic were able to negotiate their exit on mutually agreeable terms,” an Anthropic spokeswoman, Sally Aldous, said. In a second statement, Anthropic added that there was “no attempt to ‘oust’ Sam Altman at the time the founders of Anthropic left OpenAI.”

Vacancies exacerbated the board’s issues. This year, it disagreed over how to replace three departing directors: Reid Hoffman, the LinkedIn founder and a Microsoft board member; Shivon Zilis, director of operations at Neuralink, a company started by Mr. Musk to implant computer chips in people’s brains; and Will Hurd, a former Republican congressman from Texas.

After vetting four candidates for one position, the remaining directors couldn’t agree on who should fill it, said the two people familiar with the board’s deliberations. The stalemate hardened the divide between Mr. Altman and Mr. Brockman on one side and the other board members on the other.

Hours after Mr. Altman was ousted, OpenAI executives confronted the remaining board members during a video call, according to three people who were on the call.

During the call, Jason Kwon, OpenAI’s chief strategy officer, said the board was endangering the future of the company by pushing out Mr. Altman. This, he said, violated the members’ responsibilities.

Ms. Toner disagreed. The board’s mission is to ensure that the company creates artificial intelligence that “benefits all of humanity,” and if the company were destroyed, she said, that could be consistent with its mission. In the board’s view, OpenAI would be stronger without Mr. Altman.

On Sunday, at OpenAI’s office, Mr. Brockman’s wife, Anna, urged Mr. Sutskever to reverse course, according to two people familiar with the exchange. Hours later, he signed a letter with other employees demanding that the independent directors resign. The confrontation between Mr. Sutskever and Ms. Brockman was reported earlier by The Wall Street Journal.

At 5:15 a.m. on Monday, he posted on X, formerly Twitter: “I deeply regret my participation in the board’s actions.”