Nature survey: Only 4% of scientists believe AI has become a “necessity” | 1,600+ people participated

AI papers are increasing dramatically, yet only 4% of researchers consider AI tools a "necessity" for their work?!

This conclusion comes from the latest survey by Nature.


More accurately, it is a survey of researchers who use AI tools in scientific research.

These were selected from more than 40,000 researchers worldwide, across disciplines, who published papers in the last four months of 2022.

In addition, "insiders" who develop AI tools and "outsiders" who do not use AI tools in their research were also included, bringing the total number of respondents to more than 1,600.

The relevant results have been published under the title “AI and science: what 1,600 researchers think”.


What exactly do scientific researchers think of AI tools? Let's take a look.

What 1,600 researchers think about AI

The survey focused on researchers’ views on machine learning and generative AI.

To ensure the objectivity and validity of the results, as mentioned above, Nature emailed more than 40,000 scientists from around the world who had published papers in the last four months of 2022, and also invited readers of Nature newsletters to participate.

Finally, 1,659 respondents were selected. The specific sample composition is as follows:

Most respondents were from Europe (nearly one-third), Asia (28%), and North America (20%).

Among them, 48% directly develop or research AI, 30% use AI in research, and 22% do not use AI in research.

Let’s look at the detailed results below.

According to the survey, among those who use AI in research, more than a quarter believe AI tools will become a "necessity" in their field within the next decade.

However, only 4% of people believe that AI tools are now a “necessity”, and 47% believe that artificial intelligence will be “very useful” in the future.

In contrast, researchers who do not use AI are less enthusiastic. Even so, 9% believe these technologies will become "essential" over the next decade, and a further 34% say they will be "very useful."

In the section on machine learning, respondents were asked to identify the positive effects of AI tools. Two-thirds said AI provides faster data processing, 58% said it accelerates computations that were previously infeasible, and 55% said it saves time and money.

The main negative effects respondents believe AI may bring are: greater reliance on pattern recognition rather than deep understanding (69%), reinforcement of bias or discrimination present in the data (58%), a higher likelihood of fraud (55%), and irreproducible research resulting from indiscriminate use (53%).

Let’s look at what researchers think about generative AI tools.

Most agree that one of the great advantages of generative AI tools is summarization and translation, which can help researchers whose first language is not English improve the grammar and style of their papers. Its ability to write code was also widely endorsed.

But generative AI also has its problems. Researchers are most concerned about the spread of inaccurate information (68%), plagiarism becoming easier to commit and harder to detect (68%), and errors or inaccuracies being introduced into papers and code (66%).

Respondents also added that they are concerned about the potential for falsified research, disinformation and perpetuating bias if AI tools used for medical diagnosis are trained on biased data.

In addition, according to usage-frequency statistics, even among researchers interested in AI, only a few regularly use large language models in their work.

Among all surveyed groups, the most common use researchers reported was creative entertainment unrelated to research, followed by using AI tools to write code, generate research ideas, and help write papers.

Some scientists are unimpressed by the output of large models. One researcher who used a large model to help edit a paper wrote:

It feels like ChatGPT has copied all the bad writing habits of humans.

Johannes Niskanen, a physicist at the University of Turku in Finland, said:

If we use AI to read and write articles, science will soon shift from “for humans by humans” to “for machines by machines.”

In this survey, Nature also delved into researchers’ views on the dilemmas facing the development of AI.

Dilemmas faced by AI development

About half of the researchers said they encountered obstacles in developing or using AI.

The biggest concerns for researchers working on developing AI are insufficient computing resources, insufficient research funding, and a lack of high-quality data for AI training.

Those who work in other fields but use AI in research are more concerned about the lack of sufficiently skilled scientists and training resources, as well as security and privacy.

Researchers who do not use AI say they do not need AI or think AI is not practical, or they lack the experience and time to study these AI tools.

It is worth mentioning that the dominance of commercial giants over AI computing resources, and the ownership of AI tools, are also issues of concern to respondents.

23% of AI tool developers said they work with, or work for, companies that develop AI tools (Google and Microsoft were mentioned most frequently), compared with only 7% of those who merely use AI in research.

Overall, more than half of respondents believe it is “very” or “somewhat” important for researchers using AI to collaborate with scientists from these companies.

In addition to development, there are also some problems in use.

Researchers have previously warned that the indiscriminate use of AI tools in scientific research can lead to erroneous, misleading, and irreproducible results.

According to Lior Shamir, a computer scientist at Kansas State University in Manhattan:

Machine learning can sometimes be useful, but AI raises more problems than it solves. Scientists who use AI without understanding what they are doing can arrive at false discoveries.

Respondents had mixed opinions on whether journal editors and peer reviewers can adequately review papers that use AI.

Among researchers who use AI in research but do not directly develop it, about half said they were unsure, a quarter believed the review is adequate, and a quarter believed it is insufficient. Researchers who directly develop AI tools tend to view the editing and review process more positively.

In addition, Nature also asked respondents how concerned they were about seven potential impacts of AI on society.

The spread of misinformation emerged as researchers' top concern, with two-thirds saying they were "very" or "somewhat" concerned about it.

The least-cited concern was that AI might pose an existential threat to humanity.

Reference link: https://www.nature.com/articles/d41586-023-02980-0
