Today, some netizens have posted red team invitation emails sent to them by OpenAI. It seems that GPT-5 has entered the red team test? Netizens have started to express their imagination and “urge” Sam Altman online. Other foreign media have revealed that a mini version of OpenAI’s multi-billion “Stargate” will be launched as soon as 2026.
GPT-5 has started red team testing?
Just in the past few days, many people on the Internet have posted the red team admission notice sent to them by OpenAI.
There were previous rumors that GPT-5 will be released in June this year. It looks like the timeline for red team testing and model release fits nicely.
Some netizens directly posted screenshots of themselves receiving email invitations from OpenAI.
This is consistent with what Sam Altman said before.
It is reported that GPT-5 has been ready for everyone, but the risk of release is too great, so it has to be postponed.
Can it be used in three months?
However, some people said, don’t worry yet. These people only received invitations from the red team for testing, and did not mention the specific model.
It is possible that they received the email after they filled in the following application information.
The reason why security testing is so important for the new version of GPT is that ChatGPT already has a very large number of users. If there is a problem with security, OpenAI may also face the same pressure from public opinion as Google.
On the other hand, To B business is OpenAI's main source of income, and customized ChatGPT can greatly enhance the business capabilities and efficiency of each enterprise.
Some say red team testing will last 90-120 days.
If this red team test is for GPT-5, then we should be able to use it within three months!
In the community, this rumor made the masses excited! They have long been unable to restrain their speculation and imagination about GPT-5.
For example, how big will the context window of GPT-5 be?
Currently, Gemini 1.5 Pro is 1M, Claude 3 is 200K, and GPT-4 is 128K. I wonder what amazing record GPT-5 will break.
Everyone has listed their wish list for GPT-5——
Such as 10Mtoken’s contextual window, lightning-fast interference, long-term strategic planning and reasoning, the ability to perform complex open operations, GUI/API navigation, long-term contextual memory, always-invisible RAG, multi-modality, and more.
Some people speculate that maybe GPT-5, like Claude 3, will provide several different models.
Someone summarized the latest rumors and rumors about GPT-5 and the red team. The general points are as follows——
-OpenAI is expected to release GPT-5 this summer, and some enterprise customers have received demonstrations of enhanced features;
-GPT-5 is “substantially better” and has undergone major upgrades compared to GPT-4. It requires more training data;
-Potential capabilities of GPT-5 include generating more realistic text, performing complex tasks such as translation and creative writing, processing video input, and improving reasoning;
-Sam Altman said that GPT-5 is still in training, there is no firm release date, and extensive security testing may take months. However, he confirmed that OpenAI will “release an amazing new model” this year.
On March 29, Runway CEO and AI investor Siqi Chen, who obtained internal information, said that GPT-5 has achieved unexpected step function gains in reasoning.
It even figured out on its own how to keep ChatGPT from having to log in every other day.
Maybe this is what Ilya saw?
Does this mean that AGI has been implemented within OpenAI? ! If true, this is amazing.
“I don't believe that only AGI can achieve such capabilities.”
In short, netizens said that according to the leaked to-do list, OpenAI’s next task is to release GPT-5!
Everyone is calling Altman, it’s time to release GPT-5, don’t be too picky, we don’t ask too much.
The red team tests to ensure the security of GPT-5
As early as September 23, OpenAI officially announced the recruitment of a group of red team testers (Red Teaming Network) and invited experts in different fields to evaluate the model.
A red team composed of experts in different fields to find system vulnerabilities has become the key to ensuring the security of the next-generation model GPT-5.
So, what do red team testers generally need to do?
The types of AI red team attacks mainly include prompt attacks, data poisoning, backdoor attacks, adversarial examples, data extraction, etc.
“Prompt attack” refers to injecting malicious instructions into the prompts controlling LLM, thereby causing the large model to perform unexpected operations.
For example, earlier this year, a college student used tips to obtain confidential information from a large company, including the code names of AI projects being developed, as well as some metadata that should not have been exposed.
The biggest challenge of “hint attacks” is to find new hints or sets of hints that threat actors have not yet discovered and exploited.
Another major attack that red teams need to test for is “data poisoning.”
In the case of data poisoning, threat actors will attempt to tamper with the data on which LLM is trained, creating new biases, vulnerabilities for others to exploit, and backdoors to corrupt the data.
“Data poisoning” can have a serious impact on the results provided by LLMs, because when LLMs are trained on poisoned data, they learn correlation patterns based on this information.
For example, misleading or inaccurate information about a certain brand or political figure can influence people's decision-making.
Or, models trained on contaminated data provide inaccurate medical information about how to treat common illnesses or ailments, leading to more serious consequences.
Therefore, red teamers need to simulate a series of data poisoning attacks to discover any vulnerabilities in the LLM training and deployment process.
In addition, there are multiple attack methods, and OpenAI invites experts to ensure that GPT-5 can complete security testing.
GPT-5 is really not far away
As netizens said, the opening of the red team test means that GPT-5 is really not far away.
Some time ago, Altman mentioned in a blog interview, “We will release an amazing new model this year, but we don't know what it will be called.”
Despite this, the entire network unanimously calls the next-generation model released by OpenAI GPT-5, and there are rumors that the project code-named Arrakis is the prototype of GPT-5.
According to FeltSteam's predictions, the performance of this Arrakis multi-modal model far exceeds that of GPT-4 and is very close to AGI.
In addition, the model parameters are said to be 125 trillion, approximately 100 times that of GPT-4, and training will be completed in October 2022.
Netizens also summarized the release schedule of previous GPT series models: GPT-1 was born in June 2018, GPT-2 in February 2019, GPT-3 in June 2020, GPT-3.5 in December 2022, GPT -4 will be released just three months later in March 2023.
Regarding the release time of GPT-5, it may come out this summer.
Recently, a picture circulated on the Internet shows that Y Combinator has launched a GPT-5 early access waiting list.
Netizens raised questions. We all know that the relationship between Ultraman and YC is unusual. Does this mean they can gain access to the model or information before it becomes public?
Last month, it was also reported that users have experienced GPT-5 and its performance is amazing.
Foreign media revealed that some enterprise users have already experienced the latest version of ChatGPT.
“It's really great, a qualitative leap forward,” said a CEO who recently saw the effects of GPT-5.
OpenAI showed how the new model worked based on the CEO's company's specific needs and data.
He also mentioned that OpenAI also hinted that the model has other undisclosed functions, including the ability to call the AI agent OpenAI is developing to complete tasks autonomously.
GPT-5, is it definitely the correct route?
However, among the highly anticipated calls for the release of GPT-5, there are also some different voices.
For example, some people think that GPT-5 cannot drive your car, GPT-5 cannot solve the nuclear fusion problem, GPT-5 cannot cure cancer…
In addition, does our pursuit of models have to be more intelligent?
A cheaper, faster, less energy-intensive model could be more revolutionary than GPT-5 alone.
Some people agree with this point of view, saying that too many people (especially developers) are too obsessed with GPT-5.
There's no need to be so fanatical, there's so much that can already be done and built using the current model.
Just choose the niche correctly, build an AI product that meets the needs of that niche, give users intuitive access to AI, and focus on better UI/UX.
The formula is simple. Do we really need to blindly pursue the power of big bricks?
Many people agreed and said that very valuable things can be created even with GPT-3.5.
The question is not how advanced the model is, but how to meet the needs of the niche market.
Intelligent computing center, start small
The $100 billion “Stargate” supercomputer that was exposed at the end of March and was used to train GPT-6 has been unearthed by foreign media today with more new content.
Last Friday, foreign media The Information revealed a surprising news: OpenAI and Microsoft are formulating an ambitious data center project that is expected to cost US$100 billion.
As soon as this news came to light, questions from people in the AI and cloud computing industries came like a snowflake——
Where exactly is the data center located in the United States?
What chip will be used?
Where does the staggering amount of electricity required to run data centers come from?
…
To this end, The Information has dug up more information. The specific details are as follows.
First of all, previous news said that Stargate will be launched as early as 2028, and the latest news shows that a smaller data center will be launched in Wisconsin as soon as 2026.
It's certainly worth less than a hundred billion dollars, but is still estimated to cost billions.
Other details are as follows –
Use NVIDIA chips, but no NVIDIA network cables
First of all, of course, most of the server racks in the data center this time mainly use NVIDIA chips.
However, what is interesting is that the network cables connecting various AI chip servers will not use NVIDIA products.
It is reported that OpenAI has informed Microsoft that it no longer wants to use Nvidia's InfiniBand network equipment. Instead, it will likely use an Ethernet-based cable.
OpenAI “abandoned” Nvidia InfiniBand for two reasons.
For one, InfiniBand is too expensive!
While it provides better performance, it is also more expensive than Ethernet cables.
Second, OpenAI does not want AI developers to rely too much on Nvidia.
You know, OpenAI is currently one of the largest consumers of NVIDIA server clusters in the world. Moreover, the performance of InifiniBand devices is sometimes unreliable.
So, will Nvidia lose a chunk of revenue?
No, you are overthinking.
The billions of dollars saved will be used by OpenAI to buy more Nvidia chips, and Nvidia will still make a lot of money.
It seems that OpenAI can accept the reduction in network performance, but the desire for stronger computing power remains unchanged.
The battle between InfiniBand and Ethernet has become a hot topic
In fact, the PK of InfiniBand and Ethernet has been a hot topic at recent conferences and dinners in Silicon Valley.
All cloud providers and data center operators are predicting: Will Ethernet catch up to InfiniBand?
The answer given by the vast majority of people is yes.
OpenAI's move to abandon the latter supports this argument.
How expensive are NVIDIA cables?
This number is quite astonishing——
Nvidia's sales of network cables have exceeded the money spent on selling GPUs!
Nvidia Chief Financial Officer Collete Kress revealed this astonishing data in February this year: the annualized revenue of the emerging cable business has exceeded $13 billion.
In other words, it generated about $1.1 billion in revenue in December, accounting for about 15% of Nvidia's total revenue for the month.
Network cables are so expensive, no wonder OpenAI chose not to play.
References:
https://www.reddit.com/r/singularity/comments/1bv8m4k/gpt5_red_teaming_underway/
https://www.theinformation.com/articles/openai-moves-to-lessen-reliance-on-some-nvidia-hardware