Insider Revelations: Altman Cuts the Safety Team's Computing Power and Chases Profitable Products; Departing Employees Warned Not to Complain or Forfeit Their Equity

Thirteen tweets in a row! Jan Leike, the head of OpenAI's Superalignment team who just followed Ilya out the door, has revealed the real reasons for his departure, along with more insider details.

First, there wasn't enough computing power: the 20% promised to the Superalignment team fell well short, leaving the team swimming against the current and finding its research harder and harder to carry out.


Second, safety isn't taken seriously: the safety and governance of AGI gets lower priority than launching "shiny products."

Soon afterwards, others dug up more details. For example, every departing OpenAI employee must sign an agreement promising not to speak ill of OpenAI after leaving; refusing to sign is treated as automatically forfeiting the company's equity.

Even so, some have refused to sign and come out with explosive claims anyway, saying the core leadership has long been divided over how much priority to give safety.

Since last year's boardroom showdown, the ideological conflict between the two camps has been building toward a breaking point, and now it seems to have come crashing down.


So even though Altman has assigned a co-founder to take over the Superalignment team, outside observers are not optimistic.

Twitter users following the story in real time thanked Jan for having the courage to drop this bombshell, and sighed:

Well, it looks like OpenAI really doesn't take safety that seriously!

Looking back at it all, Altman, now firmly in charge of OpenAI, has so far managed to keep his composure.

He stepped forward to thank Jan for his contributions to OpenAI's superalignment and safety work, and said he was actually very sad and sorry to see him go.

Of course, the key point is actually this sentence:

Hold on, in a couple of days I'll post a tweet even longer than this one.

The promised 20% of computing power turned out to be an empty promise

From last year's boardroom battle until now, Ilya, OpenAI's soul and former chief scientist, has all but stopped appearing or speaking in public.

Even before he publicly announced his resignation, speculation was already swirling. Many people believed Ilya had seen something terrifying, such as an AI system that might destroy humanity.

As one netizen put it: the first thing I do every morning is wonder what Ilya saw.

This time Jan laid it out: the core reason is that the technical camp and the market camp hold different views on how much priority safety should get.

The disagreement is serious, and the consequences... well, everyone has seen them.

According to Vox, sources familiar with OpenAI revealed that the more safety-minded employees have been losing faith in Altman: "It's a process of trust collapsing bit by bit."

But as you can see, few departing employees are willing to discuss this openly on public platforms.

Part of the reason is that OpenAI has long required departing employees to sign severance agreements containing non-disparagement clauses. Refusing to sign means forfeiting the OpenAI equity previously granted, so employees who speak out could lose a fortune.

Even so, the dominoes kept falling, one after another:

Ilya's resignation adds to OpenAI's recent wave of departures.

Since the resignation announcements, at least five members of the safety team have left, in addition to Jan, the Superalignment team lead.

Among them is Daniel Kokotajlo (hereinafter "Brother DK"), who has not signed the non-disparagement agreement.

Last year, Brother DK wrote that he put the probability of an existential catastrophe from AI at 70%.

Brother DK joined OpenAI in 2022 and worked on the governance team, where his main job was guiding OpenAI toward safe AI deployment.

But he also resigned recently and gave an interview:

OpenAI is training more powerful AI systems, with the goal of eventually surpassing human intelligence.

This could be the best thing that has ever happened to humanity, but if we are not careful, it could also be the worst thing.

Brother DK explained that he joined OpenAI full of hope about safety governance, expecting OpenAI to act more responsibly the closer it got to AGI. But many on the team slowly realized that OpenAI was never going to be like that.

"I gradually lost confidence in OpenAI's leadership and their ability to handle AGI responsibly." That is why Brother DK resigned.

Disillusionment about the future of AGI safety work is part of what is driving the wave of departures that Ilya's exit has intensified.

Another part of the reason is that the Superalignment team may not have had as many resources for its research as outsiders imagine.

Even working at full capacity, the team was only ever entitled to the 20% of computing power OpenAI had promised.

And even some of those requests were often denied.

Partly this is because compute is an extremely precious resource for an AI company and every bit of it has to be allocated carefully; but it is also because the Superalignment team's job is to "solve the different kinds of safety problems that will actually arise if the company succeeds in building AGI."

In other words, the Superalignment team deals with the safety problems OpenAI will face in the future; the emphasis is on the future, and it is not even certain those problems will materialize.

As of press time, Altman had not yet posted his tweet "longer than" Jan's revelations.

He did briefly note that Jan is right to be concerned about safety: "We still have a lot to do; and we're committed to doing it."

For now, everyone can pull up a stool and wait; we'll bring you the next installment of the drama as soon as it lands.

To sum up: the Superalignment team has lost many members, and with Ilya and Jan both gone, this storm-battered team is now leaderless.

As for what comes next: co-founder John Schulman takes over, but there will no longer be a dedicated team.

The new superalignment effort will be a looser group, with members scattered across departments throughout the company, which an OpenAI spokesperson described as "deeper integration."

Outside observers have questioned this too, since John's original full-time job was ensuring the safety of OpenAI's current products.

Can John handle the sudden extra responsibilities and lead both lines of work, the one focused on today's safety problems and the one focused on tomorrow's?

Ilya-Altman Controversy

Stretch the timeline out, and today's collapse is really a sequel to the Ilya-Altman dispute inside OpenAI.

Back in November last year, when Ilya was still there, he worked with the OpenAI board to try to fire Altman.

The reason given at the time was that Altman was not consistently candid in his communications. In other words: we don't trust him.

The outcome is well known: Altman threatened to take his "allies" to Microsoft, the board capitulated, and the removal failed. Ilya left the board, while Altman picked new board members more favorable to himself.

After that, Ilya vanished from social media until he officially announced his resignation a few days ago; reportedly he had not appeared at the OpenAI office for about six months.

He also left an intriguing tweet at the time, but it was quickly deleted.

I've learned many lessons over the past month. One of the lessons is that the phrase “the beatings will continue until morale improves” applies more often than it should.

But according to insiders, Ilya had been co-leading the Superalignment team remotely.

On Altman's side, the biggest accusation employees level against him is that his words and actions do not match: he claims he wants to prioritize safety, but his behavior contradicts that.

Beyond failing to deliver the promised computing resources, he has, for example, recently approached Saudi Arabia and other backers to raise money for chip manufacturing.

Safety-conscious employees were baffled.

If he really cared about building and deploying AI in the safest possible way, would he be this frantic about stockpiling chips to accelerate the technology?

Earlier, OpenAI placed a chip order with a startup Altman had invested in, worth as much as US$51 million (roughly RMB 360 million).

And in the letters of complaint written by former OpenAI employees during last year's boardroom drama, this description of Altman seemed to be confirmed once more.

It is precisely this pattern of saying one thing and doing another, from start to finish, that caused employees to gradually lose confidence in OpenAI and Altman.

That goes for Ilya, for Jan Leike, and for the Superalignment team.

Some thoughtful netizens have compiled the key events of the past few years. A quick note first: P(doom), mentioned below, refers to "the probability that AI causes a doomsday scenario."

  • In 2021, the leaders of the GPT-3 team left OpenAI over "safety" concerns and founded Anthropic; one of them put P(doom) at 10–25%;

  • In 2021, the head of RLHF safety research resigned, putting P(doom) at 50%;

  • In 2023, the OpenAI board fired Altman;

  • In 2024, OpenAI fired two safety researchers;

  • In 2024, an OpenAI researcher focused on safety resigned, believing P(doom) had already reached 70%;

  • In 2024, Ilya and Jan Leike left.

Technology or market?

Throughout the development of large models, the question "how do we get to AGI?" has really come down to two routes.

The technical camp wants the technology to be mature and controllable before it is deployed; the market camp believes in deploying as it goes, reaching the goal "progressively" through open application.

This is also the fundamental disagreement in the Ilya-Altman dispute, namely OpenAI's mission:

Should we focus on AGI and super alignment, or should we focus on expanding the ChatGPT service?

The bigger the ChatGPT service grows, the more compute it needs; and that compute also eats into the resources available for AGI safety research.

If OpenAI were a nonprofit dedicated to research, it would be spending more of its time on superalignment.

Judging from OpenAI's outward moves, it clearly is not. It simply wants to lead the large-model race and offer more services to businesses and consumers.

In Ilya's view, this is very dangerous. Even if we don't know exactly what will happen as we scale up, the best approach, to his mind, is to put safety first.

Be open and transparent, so that humanity can build AGI safely rather than in some covert way.

Under Altman's leadership, though, OpenAI seems to pursue neither open source nor superalignment. Instead it is bent on racing toward AGI while trying to dig a moat.

So in the end, will AI scientist Ilya turn out to have made the right call, or will Silicon Valley businessman Altman be the one laughing last?

There is no way of knowing yet. But OpenAI now faces a critical choice.

Industry insiders have summed up two key signals. One: ChatGPT is OpenAI's main source of income, and without a better model to back it, GPT-4 would not be offered to everyone for free.

The other: if the departing team members (Jan, Ilya, and the rest) weren't worried about far more powerful capabilities arriving soon, they wouldn't care so much about alignment... if AI stayed at its current level, it would hardly matter.

Yet OpenAI's fundamental contradiction remains unresolved. On one side are the worries of its AI scientists about developing AGI responsibly; on the other, the urgency of Silicon Valley's market players to keep the technology going through commercialization.

The two sides are irreconcilable. The science faction has been all but pushed out of OpenAI, and the outside world still doesn't know how far GPT has actually gotten.

People eager to know the answer are getting a little weary.

A sense of powerlessness sets in, much as Hinton, Ilya's mentor and one of the three Turing Award giants of deep learning, put it:

I'm old, I'm worried, but there's nothing I can do about it.

Reference links:

  • (1)https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence

  • (2)https://x.com/janleike/status/1791498174659715494

  • (3)https://twitter.com/sama/status/1791543264090472660
