AI PC is just in its infancy, so you might as well wait and see

As public awareness of AIGC has grown, industries of all kinds have launched their own "AI revolutions". Almost overnight, AI has become the main direction everyone is chasing.

This is especially true for PCs: setting aside the laughable "AI PC" TV commercial from more than a decade ago, the concept of the AI PC has only been widely promoted since the second half of last year.


In just a few months, a large number of "AI PCs" built around Intel Core Ultra processors and AMD Ryzen 8000 series processors have been launched. Yet for all the fanfare around the name, AI PCs still face many problems.

1. Hardware first, but no “killer” application

The AIGC explosion caught upstream hardware manufacturers somewhat off guard. Everyone wanted to seize the market early in this wave and secure a leading voice in it, so starting in the second half of last year, Intel and AMD each announced a new generation of mobile processors.

Intel, AMD, and the device makers alike have all declared that "AI PCs" equipped with these processors represent another computing revolution. New products have been springing up like mushrooms after rain, as if personal electronic devices had ushered in an unprecedented wave of innovation.


At least so far, though, the AI PC looks like a lavishly decorated hall from the outside: it is not entirely empty inside, but compared with the gorgeous exterior, the interior is still quite bare.

The biggest reason is that today's AI PC lacks a "killer" application.

Think of what 4G did for short video and mobile gaming: its faster speeds and lower latency let us consume far more content than before and play far more engaging mobile games, and ever-cheaper data plans helped 4G spread rapidly.

Today's AI PC lacks an application like that, one that benefits the general public. Users' impression of the AI PC comes mostly from marketing hype rather than from actual hands-on experience.

AMD Chair and CEO Lisa Su has said as much: "We are all looking for the killer application that can drive PC upgrades and kick off a new cycle. We believe AI will be the capability that triggers growth in the PC market."

Hardware support matters, but now that the framework is in place it needs to be filled with content. If users are to embrace the "AI PC" of their own accord, upstream chip makers and device manufacturers need to accelerate the search for killer local AI applications, so that the interior matches the luxurious exterior and users are motivated, from the inside out, to upgrade.

2. First, put the "NPU" to good use

The biggest difference between an AI PC and earlier laptops is the addition of an "NPU" to the processor. Unlike the traditional CPU compute cores and the iGPU graphics cores, the NPU is a unit dedicated to AI computation. So, obviously, when you run AI-related features and applications, the NPU should at the very least be involved in the work.

For the NPU to take part in that work, simply "existing" is not enough; it also has to be invoked and optimized for at the system and software levels. Unfortunately, among the handful of AI-related applications available today, not all of them can call the NPU. If the NPU cannot be used to improve performance or energy efficiency, consumers will have little desire to upgrade to an AI PC.
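To make that software-level "calling" concrete, here is a minimal sketch of how an application might explicitly target the NPU, using Intel's OpenVINO runtime as one example; the model file name and the fallback order are illustrative assumptions, not something prescribed by any vendor.

```python
# Minimal sketch: run an ONNX model on the NPU with OpenVINO,
# falling back to GPU/CPU if no NPU device is exposed.
# "model.onnx" is a placeholder path for illustration only.
import openvino as ov

core = ov.Core()
available = core.available_devices  # e.g. ['CPU', 'GPU', 'NPU'] on a Core Ultra machine

# Prefer the NPU when present; otherwise fall back (assumed priority order).
device = next((d for d in ("NPU", "GPU", "CPU") if d in available), "CPU")

model = core.read_model("model.onnx")
compiled = core.compile_model(model, device)
print(f"Running on: {device}")
```

If the application never makes a choice like this, the work simply lands on the CPU or GPU and the NPU sits idle, which is exactly the situation described above.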

At AMD's recent AI PC Summit, AMD showed four AIGC-related demos on site: text-to-text, text-to-image, image generation, and code generation. Only the code-generation demo was actually running on the NPU, and during the official demonstration we noticed that the NPU's working status could not even be seen in Windows Task Manager.

On the hardware side, getting the NPU fully involved in AI applications is therefore the top priority for collaboration between hardware and software makers.

3. Energy efficiency may be more important than "computing power"

When Intel launched its first NPU-equipped Core Ultra processors last year, it said the NPU was there not only to increase AI computing power but also to improve the energy efficiency of AI workloads. In video conferencing, for example, if AI features such as background blur, background replacement, and eye-contact correction run on the NPU, CPU package power can be reduced without compromising the effect, extending battery life.

XSplit VCam, a video-conferencing and livestream enhancement tool, can call the NPU in Core Ultra processors for features such as background blur and background replacement. In our testing, running this load on the NPU instead of the GPU reduced CPU package power by roughly 2.5 to 3 W.


With the load placed on the NPU, the NPU can be seen going to work, and the battery's discharge power is around 21.5 to 22.5 W.
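As a back-of-the-envelope illustration of what those 2.5 to 3 W of savings mean for battery life (the 60 Wh battery capacity below is an assumed figure, not one from this test):

```python
# Rough battery-life estimate under an assumed 60 Wh battery.
# The ~22 W NPU discharge figure and ~3 W delta come from the test above;
# the battery capacity and GPU-path total are illustrative assumptions.
battery_wh = 60.0

power_npu_w = 22.0   # midpoint of the measured 21.5-22.5 W range
power_gpu_w = 25.0   # assumed: same workload with ~3 W more package power

hours_npu = battery_wh / power_npu_w   # ~2.7 h
hours_gpu = battery_wh / power_gpu_w   # ~2.4 h
gain = (hours_npu - hours_gpu) / hours_gpu

print(f"GPU path: {hours_gpu:.1f} h, NPU path: {hours_npu:.1f} h, ~{gain:.0%} longer")
```

A double-digit percentage gain in runtime from a single offloaded feature is the kind of benefit users can actually feel.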


For AI applications on the client side, we probably should not expect enormous computing power; consumer-grade processors simply cannot match server chips.

Offloading common, lightweight AI functions onto the more energy-efficient NPU, to extend battery life and reduce chassis heat and fan noise, is the more promising direction for AI PCs.

4. The "cloud" is indispensable, and AI PC support for mobile networks cannot wait

As noted above, with current technology consumer-grade processors cannot compete with server chips in computing power. In applications such as text-to-text and text-to-image generation, local AI does offer better privacy protection.

However, the models it can run have far fewer parameters than those in the cloud, and the features and output quality fall short of cloud services. For AI applications, then, both local and cloud are indispensable.

The local side mainly handles tasks with modest computing or quality requirements, such as searching a photo library by keyword, summarizing the main content of documents, or image matting.

Tasks that demand high computing power and high quality, such as text-to-image, image-to-image, and video generation, can run in the model operator's cloud. Enterprise users with strict privacy requirements and their own in-house large language models can instead use the company's own cloud servers to run the computation and generate results.

Freelancers who need large models and care about privacy can deploy one on a personal server at home, much like a NAS, and call it from anywhere without worrying about privacy leaks.
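As a minimal sketch of that division of labor (all task names, destinations, and thresholds here are illustrative assumptions, not part of any shipping AI PC software):

```python
# Illustrative sketch of routing AI tasks between the local NPU, a private
# server, and a public cloud service. Task categories are assumptions.
from dataclasses import dataclass

@dataclass
class Task:
    kind: str        # e.g. "photo_search", "doc_summary", "text_to_image", "video_gen"
    private: bool    # must the data stay off public clouds?

LIGHT_TASKS = {"photo_search", "doc_summary", "image_matting"}

def route(task: Task) -> str:
    """Pick a destination following the local/cloud split described above."""
    if task.kind in LIGHT_TASKS:
        return "local NPU"                        # low compute, low latency, best privacy
    if task.private:
        return "private server (home/enterprise)" # heavy but privacy-sensitive work
    return "public cloud model service"           # heavy text-to-image / video generation

print(route(Task("photo_search", private=True)))   # -> local NPU
print(route(Task("video_gen", private=False)))     # -> public cloud model service
```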

Future AI PCs will therefore need not only processors with more computing power and larger memory, but also stable, faster mobile network modules, which will be a key threshold for PCs entering the AI era. Cellular modules should gradually become common in mainstream notebooks.

5. Intel and AMD may accelerate the development of “unified memory”

Local large models are extremely demanding on graphics memory. Today, thin-and-light notebooks with Intel Core or AMD Ryzen processors, even those with discrete graphics, fall far short of Apple's MacBooks in the parameter counts of the large models they can run.

One major reason is the "unified memory" in Apple's M-series chips: put simply, the GPU in an M-series chip can directly use system memory as video memory as needed.

Apple has also raised both bandwidth and capacity, bringing memory bandwidth up to the level of a mainstream high-end graphics card and letting M-series chips run larger-parameter models.


M3 Max supports up to 128GB of unified memory, while the previous generation M2 Max supports 96GB.
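A rough rule of thumb shows why capacity is the ceiling here. The bytes-per-parameter figures below are the standard sizes for each precision; the 20% overhead factor is an illustrative assumption that ignores the KV cache and activations:

```python
# Rough estimate of the memory needed just to hold a model's weights.
# Bytes per parameter are standard for each precision; the overhead factor
# is an assumption covering runtime buffers, context, etc.
BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weights_gb(params_billion: float, precision: str, overhead: float = 1.2) -> float:
    return params_billion * 1e9 * BYTES_PER_PARAM[precision] * overhead / 1e9

for size in (7, 13, 70):
    print(f"{size}B fp16 ≈ {weights_gb(size, 'fp16'):.0f} GB, "
          f"int4 ≈ {weights_gb(size, 'int4'):.0f} GB")
```

By this estimate a 70B-parameter model at fp16 needs on the order of 170 GB, and even a 4-bit quantized version needs roughly 40 GB, which is why 96 to 128 GB of unified memory matters far more than the 8 to 16 GB on a typical laptop discrete GPU.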

Some readers will surely ask: if it's that complicated, why not just use a discrete graphics card?

A discrete graphics card draws a lot of power on its own, and although GDDR memory is fast, it runs hotter than DDR or LPDDR, so a notebook cannot carry a large amount of it. For notebooks that need portability and battery life, at least where AI is concerned, a discrete GPU is not a great way to "add graphics memory" compared with a unified memory design.

The integrated GPUs in Intel Core Ultra and AMD Ryzen processors are growing in scale and performance with every generation. If future iterations add "unified memory" technology, the ceiling on the parameter counts of locally runnable large models could rise substantially.

My personal guess is that if Intel and AMD bring processors with unified memory to market, they will be set apart from conventional models as a separate product line.

6. Closing thoughts

With AI booming, upstream manufacturers placing their bets, and downstream makers following eagerly, the AI PC, still in its infancy, has a long road ahead. Technology is evolving by the day; what forms the AI PC will take, and where the PC goes from here, we will wait and see.

