Terence Tao spearheads the United States’ AI “moonshot plan” with the unveiling of a 62-page report

[Xin Zhiyuan (New Wisdom) Introduction] A 62-page report led by Terence Tao has just been released, summarizing and predicting the sweeping changes AI will bring to semiconductors, superconductors, the fundamental physics of the universe, the life sciences, and other fields. If these predictions are realized over the coming decades, the AI “moonshot plan” brewing in the United States will become reality.

Just now, a technical report led by Terence Tao on the potential impact of AI technology on global research was released.


The 62-page report summarizes the changes AI has already brought to materials, semiconductor design, climate, physics, the life sciences, and other fields, and predicts the changes it may bring in the future.

Report address:


https://www.whitehouse.gov/wp-content/uploads/2024/04/AI-Report_Upload_29APRIL2024_SEND-2.pdf

In addition to summarizing vignettes of how AI tools have already changed scientific fields, Terence Tao and his co-authors also issued three appeals:

1. Human scientists must be empowered;

2. AI tools must be used responsibly by everyone;

3. Basic AI resources must be shared at the national level.

Once the necessary AI infrastructure is in place, new scientific “moonshots” will be possible

As is well known, AI can help researchers gain more insight from data and identify the most promising solutions; it can take over routine tasks so that researchers can focus on core research; it can automate laboratory processes; it can make simulations that were previously out of reach feasible; and it can bring together many forms of data through multimodal foundation models, creating new synergies between different branches of science.

Once resources are in place to provide access to computing power, secure data-sharing services, open-source AI models, and other critical infrastructure, we can begin planning highly complex, large-scale “moonshot” scientific research projects.

These projects may include:

– A foundation model that simulates the complexity of human cells, allowing disease and experimental treatments to be studied in silico rather than in vitro or in vivo;

– A detailed whole-Earth model that uses traditional and AI models to describe the components of the Earth system, while also being continuously updated with highly diverse real-time data;

– The discovery of practical room-temperature superconductors through systematic collection, processing, and AI-assisted analysis of existing data and literature, together with automated laboratory synthesis and testing of viable candidates.

With the emergence of shared AI resource infrastructure, new forms of collaboration will benefit substantially from economies of scale: as a project grows, unit costs fall and efficiency rises.

At the same time, this kind of cooperation can also reduce duplication of work between different teams and improve research efficiency.

Subject areas that AI is about to disrupt

Human science has reached a critical point in its development: in many areas we face huge obstacles, and once those obstacles are overcome, progress will enter a new stage.

What is exciting is that while these breakthroughs are difficult for us to achieve on our own, AI is likely to help us make them.

Of course, realizing these ideas also requires weighing some potential risks and securing the resources needed to achieve the goals.

AI designs semiconductors, keeping the U.S. number one

Today, the modern electronic equipment that supports the global economy and national security all relies on “chips” to run.

As the functionality of these chips increases, so does their complexity—currently the most advanced chips contain tens of billions of components.

Currently only the largest companies have the ability to manufacture these high-end chips due to the massive engineering resources and complex infrastructure required. AI can significantly improve the quality of chip design while reducing the time and number of people required.

Of course, these AI tools are not meant to replace designers, but to help alleviate the shortage of professional chip designers by improving designers’ work efficiency.

Now there are many AI-assisted tools developed specifically for chip designers, allowing junior designers to solve problems that would otherwise consume a great deal of senior designers’ time.

At the same time, some chip design AI agents can also summarize error reports and design documents, or generate scripts for other design automation tools based on simple English prompts.

Paper address: https://arxiv.org/pdf/2311.00176

AI systems still under development can even design circuits that are faster or smaller than those produced by traditional methods.

By leveraging reinforcement learning techniques, the AI receives positive “rewards” and negative “penalties” as it explores possible circuit configurations, allowing it to adjust its design strategy and ultimately converge on circuit designs with the desired characteristics.
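
To make this concrete, here is a minimal, purely illustrative sketch of such a reward-and-penalty loop in Python. The “circuit” is just a vector of toy knob settings, and evaluate_circuit is a hypothetical cost model standing in for a real EDA evaluation; nothing here reproduces the actual methods in the work cited.

```python
# Toy reinforcement-learning loop: an agent tries circuit configurations,
# receives a reward that favours small and fast designs, and gradually
# concentrates on the best settings. The "circuit" is a vector of knob
# settings; evaluate_circuit is a hypothetical stand-in for a slow EDA run.
import numpy as np

rng = np.random.default_rng(0)
N_KNOBS, N_LEVELS = 6, 4          # 6 design knobs, 4 settings each

def evaluate_circuit(config):
    """Hypothetical cost model: higher reward for lower (fake) area and delay."""
    area = np.sum((config - 1.5) ** 2)    # pretend settings near 1.5 give small area
    delay = np.sum(np.abs(config - 2.0))  # pretend settings near 2.0 give low delay
    return -(area + delay)                # penalties become negative reward

# Running preference score for each knob/level pair, updated from rewards.
prefs = np.zeros((N_KNOBS, N_LEVELS))

for step in range(2000):
    # Mostly exploit the current best settings, sometimes explore at random.
    if rng.random() < 0.1:
        config = rng.integers(0, N_LEVELS, size=N_KNOBS)
    else:
        config = prefs.argmax(axis=1)
    reward = evaluate_circuit(config)
    # Positive reward reinforces the chosen settings; negative reward discourages them.
    idx = np.arange(N_KNOBS)
    prefs[idx, config] += 0.05 * (reward - prefs[idx, config])

print("best configuration found:", prefs.argmax(axis=1))
```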

With the rapid advancement of semiconductor technology, each process change requires redesigning thousands of standard cells to fit the new manufacturing process. For many manufacturers, this can require up to 80 person-months of labor.

In contrast, a combination of generative AI for data clustering and reinforcement learning for correcting design-rule errors can automate this design process, cutting the work required by a factor of more than a thousand.

Paper address: https://dl.acm.org/doi/pdf/10.1145/3569052.3578920

At the same time, FPGAs enable rapid iteration on the latest AI-driven placement-and-routing technology, achieving an efficiency improvement of more than three times.

During the creation of a chip design, multiple analyses must be performed on the design to ensure that it complies with the specified standards and manufacturing-process constraints.

In the past, accurately capturing a circuit’s “parasitic” characteristics required first creating its layout, a step that often added several days of manual work to each iteration of the design cycle.

Now the entire design iteration can be completed in minutes, quickly yielding a circuit that meets the target specifications.
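
One way such a speedup can be achieved is with a surrogate model that predicts parasitics directly from pre-layout features, instead of waiting for a full layout extraction. The sketch below is only an illustration of that idea under invented assumptions: the features, the training data, and the “ground truth” values are synthetic placeholders rather than a real extraction flow.

```python
# Surrogate-model sketch: train a regressor on past (features -> parasitic value)
# pairs, then predict parasitics for new nets without generating a layout.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical pre-layout features: fan-out, estimated wire length, layer, width.
X = rng.random((5000, 4))
# Synthetic "extracted" parasitic capacitance values with noise, standing in
# for measurements taken from completed layouts of earlier designs.
y = 2.0 * X[:, 1] + 0.5 * X[:, 0] * X[:, 3] + 0.05 * rng.standard_normal(5000)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:4000], y[:4000])
pred = model.predict(X[4000:])
print("mean absolute error (toy units):", np.mean(np.abs(pred - y[4000:])))
```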

Soon, more powerful LLMs will be transformed into “chip design assistants” – they can not only answer questions, evaluate and verify designs, but also perform some routine design tasks.

In addition, AI technology will greatly improve designers’ productivity, perhaps tenfold or more. Designers will only need to focus on the algorithm and system level, leaving the more detailed design levels to AI.

Moreover, AI’s synthesis and analysis tools will drastically shorten the design cycle, allowing the process to go from high-level design description to completed verified layout in just a few hours, instead of the weeks it currently takes.

PCAST expects that by integrating these cutting-edge technologies into the chip manufacturing process, the United States will continue to maintain its leadership in semiconductor design and effectively alleviate the severe labor shortage problem in this field.

This could even realize the U.S. semiconductor industry’s ambitious goal of developing new platforms, methods, and tools so that producing chips requires only one-tenth of the manpower it takes today.

Revealing the fundamental physics of the universe: 1-minute simulation of 1 month of supercomputing

These mysteries about the universe have never been answered.

What “dark matter” holds galaxies together?

And what “dark energy” drives the accelerated expansion of the distance between all galaxies?

What is the significance of the recently observed ancient galaxies?

This kind of basic understanding of the universe can enable technological leaps.

For example, it is hard to imagine a fundamental theory more abstract and seemingly removed from everyday reality than general relativity. Yet it underpins the global positioning system GPS, solving positioning and navigation problems we could never have imagined before, with economic benefits in the hundreds of billions of dollars.

Today, AI has become an important tool in physicists’ and cosmologists’ experimental and observational work, used in most steps of design, implementation, and analysis.

Some applications of AI build on current methods of testing theories against data through computational simulation, for example by computing what the data would look like if a given theory were correct.

▲ Unsupervised in-distribution anomaly detection for new physics via conditional density estimation

These simulations are perhaps the most difficult tasks for supercomputers because they require calculating every step of the behavior of every particle, star or galaxy.

The benefit of AI is that it can learn fast emulator models from these simulations. Scientists can then shortcut the supercomputer runs, obtaining in under a minute an approximation of what would otherwise take a supercomputer a month to compute.
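
The sketch below illustrates this emulator idea under toy assumptions: expensive_simulation is a cheap analytic stand-in for a month-long supercomputer run, and a small neural network is trained on a limited number of its outputs so that new parameter choices can be approximated almost instantly.

```python
# Emulator sketch: learn a fast approximation of an "expensive" simulation
# from a limited set of (parameters -> summary statistic) training pairs.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def expensive_simulation(params):
    """Stand-in for a month-long supercomputer run: maps two toy parameters
    (e.g. a matter density and a fluctuation amplitude) to a toy statistic."""
    omega_m, sigma8 = params
    return sigma8 ** 2 * np.exp(-3.0 * (omega_m - 0.3) ** 2)

# Run the "simulator" a limited number of times to build training data.
params = rng.uniform([0.1, 0.6], [0.5, 1.0], size=(500, 2))
stats = np.array([expensive_simulation(p) for p in params])

emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                        random_state=0).fit(params, stats)

# The trained emulator now answers new parameter queries almost instantly.
query = np.array([[0.32, 0.81]])
print("emulated statistic:", emulator.predict(query)[0])
```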

Using AI, researchers can scan millions of possible theories, each with a different initial picture of our universe, and see which one best explains the data we actually observe with telescopes.

By the late 2030s, we will be able to use AI to analyze ten years of data from the Nancy Grace Roman Space Telescope.

▲ Nancy Grace Roman Space Telescope

Through AI analysis of such data, scientists might find startling evidence that our universe will not end in the cold silence of exponential expansion, but will instead undergo repeated big bangs, restarting the cycle.

AI has the ability to discover patterns in complex data sets, with far more variables than humans can keep track of.

If something in the data breaks the known rules, it will stand out as a potential new discovery.

Particle physicists have held competitions to find the best way to search for these “anomalies,” which could lead to new discoveries in physics. The winning entries were all based on AI methods.
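
As a hedged illustration of this anomaly-search idea, the sketch below trains a model on synthetic “ordinary” events and flags those that do not fit the learned pattern. An isolation forest is used purely as a simple stand-in for the more sophisticated density-estimation methods used in the actual competitions, and the events are random numbers, not collider data.

```python
# Anomaly-detection sketch: learn what "ordinary" events look like and flag
# events that do not fit. Background and signal here are synthetic Gaussians.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "background": 10,000 events, 5 features each (toy momenta, energies...).
background = rng.normal(0.0, 1.0, size=(10_000, 5))
# A handful of injected "new physics" events living far from the background.
signal = rng.normal(4.0, 0.5, size=(20, 5))
events = np.vstack([background, signal])

detector = IsolationForest(contamination=0.005, random_state=0).fit(events)
scores = detector.decision_function(events)   # lower score = more anomalous

# The injected events should dominate the most-anomalous tail.
most_anomalous = np.argsort(scores)[:20]
print("fraction of flagged events that are injected signal:",
      np.mean(most_anomalous >= len(background)))
```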

▲ Generate effective physical laws of cosmic fluid dynamics through Lagrangian deep learning, and predict dark matter overdensity, stellar mass, electron momentum density, etc. in hybrid simulations

These AI methods are likely to allow us to discover extremely rare and unexpected particles in the next generation of accelerator experiments at CERN and Fermilab, helping to build a unified theory that combines gravity with the other forces.

Basic physics and cosmology are both grounded in statistical analysis of data, and therefore require a deep understanding of probability when interpreting that data. This requirement has also driven the development of AI systems that handle probability rigorously.

This is because what we need from AI is not just the single most likely answer (“That’s a picture of a cat”), but systems that can provide a range of possible answers along with the likelihood that each is correct (“There’s a 69 percent chance it’s a cat, a 22 percent chance it’s an aardvark, an 8 percent chance it’s a balloon, and a 1 percent chance it’s a refrigerator”).

▲ Physicists are looking for a theory to unify quantum physics with general relativity

For a key measured quantity, such a system gives a range of possible values at stated confidence levels, such as 68%, 95%, or 99.9%.
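
The toy sketch below shows what such probabilistic output can look like in code: class probabilities for a single prediction, and probability intervals for a measured quantity. All numbers, labels, and “posterior samples” are illustrative inventions, not outputs of any real model or measurement.

```python
# Probabilistic-output sketch: a probability per candidate answer, and
# intervals for a measured quantity, instead of a single point answer.
import numpy as np

# Class scores from a hypothetical image classifier, turned into probabilities.
labels = ["cat", "aardvark", "balloon", "refrigerator"]
logits = np.array([3.1, 2.0, 1.0, -1.1])
probs = np.exp(logits) / np.exp(logits).sum()        # softmax
for label, p in zip(labels, probs):
    print(f"{label}: {100 * p:.0f}%")

# For a measured quantity, report intervals rather than a point value:
# percentiles of (synthetic) posterior samples from a hypothetical analysis.
samples = np.random.default_rng(0).normal(10.0, 0.5, size=10_000)
for level in (68, 95, 99.9):
    lo, hi = np.percentile(samples, [(100 - level) / 2, 100 - (100 - level) / 2])
    print(f"{level}% interval: [{lo:.2f}, {hi:.2f}]  (toy units)")
```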

Assessing uncertainty is crucial to basic physics, and AI that handles probability rigorously will also transform many other scientific fields and will matter greatly for unexpected applications of science.

Perhaps 20 years from now, scientists will use AI to see analogies between quantum computers and black holes, opening up a new benchtop method for testing general relativity, and a powerful new timing technology.

New materials: superconductors, cold atoms, topological insulators, superconducting qubits

In the past, major improvements in the quality of human life were driven by advances in material science such as bronze, iron, concrete, and steel.

Today we live in the age of silicon, hydrocarbons and nitrates. The near future may be the era of nanomaterials, biopolymers and quantum materials.

The assistance of AI will open up many possibilities that have only existed in imagination before, including room temperature superconductivity and large-scale quantum computer architecture.

▲ Robots are synthesizing materials in Lawrence Berkeley National Laboratory’s A-Lab

Today, scientists have successfully discovered a variety of materials using deep learning models.

For example, an interdisciplinary research team at a private company has used AI to design millions of new materials. Nearly half of the new materials predicted by AI are stable enough to be grown in the laboratory.

In addition, AI can also be used to improve existing materials, optimize material composition, and reduce environmentally harmful substances.

▲ Example of density functional theory used to predict topological properties of materials

The National Science Foundation (NSF) has invested $72.5 million to design and develop new materials to solve major social challenges.

Specifically, the following materials areas, which have run into obstacles, are ones where AI is expected to deliver breakthroughs.

Superconductors

Last summer’s room-temperature superconductivity frenzy gave the whole of society a taste of the excitement of an approaching singularity.

Superconductors are essential for magnetic resonance imagers, particle accelerators, some experimental quantum computing techniques, and the nation's power grid because they can transmit electrical energy without loss.

However, superconductors face three problems.

First, currently known superconductors must be cooled to extremely low temperatures, approaching absolute zero (minus 273 degrees Celsius), which requires cryogenic coolants such as liquid helium or liquid nitrogen and makes the equipment extremely expensive.

Second, unlike traditional conductors such as copper, existing superconductors are not malleable and will lose their superconductivity over time.

Third, the precursor materials, and the processing required to turn them into wires, are also very expensive.

In the past, our attempts have relied on combinatorial chemistry approaches, which required screening large numbers of material combinations.

▲ Human beings have been working hard for more than 100 years to obtain superconductivity at room temperature and pressure.

As a result, many crucial materials were discovered by accident, with a lot of trial and error.

With so many variables in play, and the added difficulty of making the materials cheaply, discovering new superconductors is nearly impossible with traditional methods.

AI will bring changes in three areas.

First, the predictive capabilities of AI models allow us to discover new materials by connecting and leveraging large amounts of data on existing materials, processing conditions and properties.

From such data, patterns spanning chemistry, physics, and engineering can be identified, giving researchers new approaches.

▲ Based on GNoME findings, showing how model-based filtering and DFT can act as a data flywheel to improve predictions

Second, AI models can predict properties (for example, the coherence time of a qubit, the efficiency of a thermoelectric material, or the critical temperature of a superconductor), reducing wasted experiments and focusing testing on viable candidate materials (a sketch of this idea follows the third point below).

Third, by combining process information with material composition, it is possible to set practical limits on material design and accelerate the commercialization process of new material applications.
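
Here is the sketch promised above for the property-prediction step. Everything in it is an assumption for illustration only: the composition descriptors and “critical temperatures” are synthetic stand-ins for a curated experimental dataset, and a generic gradient-boosting model screens unseen candidates before any laboratory work.

```python
# Property-prediction sketch: train on known materials, then rank new
# candidates by predicted critical temperature before going to the lab.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical descriptors: mean atomic mass, electronegativity spread,
# valence-electron count, lattice-parameter estimate.
X = rng.random((3000, 4))
# Synthetic critical temperatures (K) with noise, standing in for a
# curated experimental dataset.
y = 40 * X[:, 2] + 25 * X[:, 1] * X[:, 3] + 2 * rng.standard_normal(3000)

model = GradientBoostingRegressor(random_state=0).fit(X[:2500], y[:2500])

# Screen unseen candidates and keep the ones predicted to be most promising.
candidates = X[2500:]
predicted_tc = model.predict(candidates)
top = np.argsort(predicted_tc)[::-1][:10]
print("predicted Tc (K) of top candidates:", np.round(predicted_tc[top], 1))
```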

In addition to “hard” materials such as superconductors, “soft” materials such as polymers and fluids also demand vast amounts of data and strong predictive capability, because their structure-property relationships are so complex.

Moreover, the basic building blocks of quantum computers, such as cold atoms, topological insulators or superconducting qubits, can all be improved or generated by AI.

▲ Superconducting chip with 4 qubits

Life sciences

The U.S. National Science and Technology Council believes that the tools, analysis and results driven by AI will fundamentally change the way we explore and understand the basic components of life, and will also affect living systems including agriculture and medicine.

Uncovering the mysteries of cell function

Deciphering the complex inner workings of cells has been a problem that has puzzled biologists for centuries because cells are extremely complex and interconnected structures.

And AI provides powerful tools for this.

For example, AI provides new perspectives on proteins.

AI-based protein structure prediction systems use machine learning algorithms to predict the structures of millions of proteins.

▲ The scope of structural modeling based on large-scale deep learning extends from monomeric proteins to protein assemblies

These systems learn from data on known proteins and structures, as well as from basic chemical knowledge such as physical constraints on the distances between atoms.

Recently, researchers have also used AI to decipher the functions of proteins, including how proteins interact, thereby revealing molecular mechanisms such as cell signaling, metabolism, and gene regulation.

Artificial intelligence tools are also being used to design proteins to achieve specific binding to receptors and other targets.

AI-driven protein design has already led to successes in developing vaccines and new drugs. Some of these design methods use “diffusion models” and inpainting techniques borrowed from image-generation systems.

▲ Accurately predict protein structure and interactions using three-track neural networks

Building foundation models for the biological sciences

A promising path toward bioinformatics simulation tools is to build multimodal, multilevel foundation models for the biological sciences, aimed at whole-cell modeling.

AI methods enable scientists to compute multimodal representations, or “embeddings,” of many types of data, including protein sequences and structures, DNA, RNA expression data, clinical observations, imaging data, and data from electronic health records.
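
As a minimal sketch of this embedding idea, assuming nothing about any particular foundation model, the code below maps toy feature vectors from several modalities into one shared vector space using untrained, random linear encoders, so that they can be compared or fed jointly to downstream models.

```python
# Multimodal-embedding sketch: project each data type into a shared space.
# The "encoders" are random linear maps, standing in for learned encoders.
import numpy as np

rng = np.random.default_rng(0)
SHARED_DIM = 128

# Hypothetical raw feature sizes for each modality.
modality_dims = {"protein_seq": 1024, "rna_expression": 20000,
                 "imaging": 4096, "clinical_record": 300}

# One untrained, illustrative linear encoder per modality.
encoders = {name: rng.standard_normal((dim, SHARED_DIM)) / np.sqrt(dim)
            for name, dim in modality_dims.items()}

def embed(modality, features):
    """Project raw features of one modality into the shared space (unit norm)."""
    vec = features @ encoders[modality]
    return vec / np.linalg.norm(vec)

# Embed one toy sample per modality and compare two of them by cosine similarity.
sample = {name: rng.standard_normal(dim) for name, dim in modality_dims.items()}
embeddings = {name: embed(name, feats) for name, feats in sample.items()}
sim = embeddings["protein_seq"] @ embeddings["rna_expression"]
print("cosine similarity between two modality embeddings:", round(float(sim), 3))
```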

▲ Use RFAA for general biomolecular modeling

For example, Evo, a foundation model that integrates large data sets, was developed to combine DNA, RNA, and protein data to elucidate the interactions behind the overall function of the cell.

This multimodal, multilevel model can provide predictions of outcomes at various scales, from atoms to physiology, as well as the generation of molecules and behaviors.

Foundation models in the biological sciences are expected to let scientists explore the nature of health and disease, for example by building cancer models and probing, in simulation, the cellular interactions and networks behind cancer and how they might be disrupted or “cured.”

AI will guide drug development, reducing unnecessary waste by virtually screening potential therapeutic compounds before starting expensive and time-consuming experiments.
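
A minimal sketch of that virtual-screening step is shown below. The compound library is just random descriptor vectors, and predict_affinity is a hypothetical stand-in for a trained binding-affinity model; only the best-scoring candidates would be sent on for real experiments.

```python
# Virtual-screening sketch: score a large compound library with a predictive
# model and keep only the top candidates for experimental testing.
import numpy as np

rng = np.random.default_rng(0)

# 100,000 candidate compounds, each described by 64 toy descriptor values.
library = rng.random((100_000, 64))

def predict_affinity(descriptors):
    """Hypothetical stand-in for a trained model: scores each compound with
    a fixed random linear function (illustration only)."""
    weights = rng.standard_normal(descriptors.shape[1])
    return descriptors @ weights

scores = predict_affinity(library)
top_candidates = np.argsort(scores)[::-1][:50]   # best-scoring 0.05% of the library
print("compound indices selected for experimental testing:", top_candidates[:10])
```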

Five recommendations

To achieve the technological advances described above, the PCAST committee made the following five recommendations.

Recommendation 1: Share basic AI resources broadly and equitably

Broad support for easily accessible shared models, datasets, benchmarks, and computing power is critical to ensuring that academic researchers, national and federal laboratories, and smaller companies and nonprofits can use AI to create national benefit.

In the United States, one of the most promising pilot projects is the National Artificial Intelligence Research Resource (NAIRR). PCAST recommends that NAIRR be expanded to the scale envisioned by the task force as soon as possible and be fully funded.

A full-scale NAIRR, along with industry partnerships and other AI infrastructure at the federal and state levels, could serve as the cornerstone of AI infrastructure projects at the U.S. or international level, catalyzing high-impact research.

Recommendation 2: Expand secure access to federal data sets

PCAST strongly recommends expanding existing secure data access pilot programs and developing federal database management guidelines that incorporate state-of-the-art privacy protection technologies.

This includes allowing approved researchers limited, secure access to federal datasets and allowing the release of anonymized datasets to resource centers such as NAIRR.

In addition, PCAST hopes to extend such mandates further, including the sharing of AI models trained on federally funded research data, and to provide sufficient resources to support the required actions.

Recommendation 3: Support basic and applied research in AI, including collaboration among academia, industry, national and federal laboratories, and federal agencies

The lines between federally funded academic research and private-sector research are blurry, and many researchers move between academic institutions, nonprofit organizations, and private companies. Meanwhile, private companies currently support a significant share of AI R&D.

To fully realize AI’s potential advantages in science, research into a wide range of promising and fruitful hypotheses and methods must be supported.

Funding agencies therefore need to take a broader stance on how they work with industry and which researchers can be supported, in order to promote innovative research and collaboration between different sectors.

Recommendation 4: Adopt principles of responsible, transparent and trustworthy use of AI at all stages of the scientific research process

In scientific research, the use of AI may produce results that are inaccurate, biased, harmful, or irreproducible. Therefore, these risks should be managed from the initial stages of the project.

PCAST suggests that federal funding agencies update their codes of conduct for responsible research to require researchers to provide plans for the responsible use of AI. To minimize the additional administrative burden on researchers, institutions should enumerate the major risks and provide a model process for mitigating them.

At the same time, agencies such as the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST) should continue to support scientifically based research on responsible and trustworthy AI.

These include standard benchmarks that measure AI properties such as accuracy, repeatability, fairness, resilience, and explainability; AI algorithms that monitor these properties and adjust when benchmark results fall outside defined ranges; and tools to assess bias in data sets and to distinguish synthetic data from real-world data.

Recommendation 5: Encourage innovative ways to integrate AI assistance into scientific workflows

The scientific enterprise is an excellent “sandbox” in which to practice, study, and evaluate new paradigms of collaboration between humans and AI assistants.

However, the goal here is not to pursue maximum automation, but to enable human researchers to achieve high-quality scientific research while using AI assistance responsibly.

Funding agencies should watch for the emergence of these new workflows and design flexible procedures, evaluation metrics, funding models, and challenge problems that encourage strategic experimentation with new, AI-assisted ways of organizing and executing scientific projects.

Furthermore, the implementation of these workflows provides opportunities for researchers from various disciplines to advance knowledge in the field of human-machine collaboration.

More broadly, we also need to update the incentives of funding agencies, academia and the scholarly publishing industry to support a wider range of scientific contributions. For example, curating high-quality and widely available datasets is not adequately recognized through traditional research productivity metrics.

References:

  • https://mathstodon.xyz/@tao/112355788324104561
