Generative AI attracted $56 billion in venture capital worldwide in 2024, but most surveyed scientists don't expect the technology to lead to AGI. (Image courtesy of Yuichiro Chino)
Current artificial intelligence (AI) methods are unlikely to produce models comparable to human intelligence, according to a new survey of experts in the field.
Of the 475 researchers surveyed, 76% said that scaling up large language models (LLMs) is “unlikely” or “extremely unlikely” to achieve artificial general intelligence (AGI) — a hypothetical milestone at which machine learning systems can learn as well as or better than humans.
This is a striking rebuke of the tech industry's predictions: since the 2022 surge of generative AI, companies have argued that today's advanced AI models need only more data, computing resources, energy, and capital investment to surpass human intelligence.
Now, as adoption of the latest models appears to be slowing, most researchers surveyed by the Association for the Advancement of Artificial Intelligence believe tech companies are facing a dead end — and that financial injections won't solve it.
“I think it became obvious that shortly after GPT-4 came out, the benefits of scaling became incremental and expensive,” Stuart Russell, a computer scientist at the University of California, Berkeley, who helped produce the report, told Live Science. “[AI companies] have already invested too much and can’t afford to admit they were wrong [and] walk away from the market for a few years when they’re going to have to pay back investors who’ve invested hundreds of billions of dollars. So all they can do is double down.”
Diminishing returns
LLMs' astonishing achievements in recent years are partly due to their core transformer architecture. First developed by Google in 2017, this type of deep-learning architecture grows and learns by absorbing vast amounts of human-generated data.
This allows the models to build probabilistic patterns within their neural networks (layered machine-learning systems loosely modeled on the way the human brain learns) and draw on those patterns when prompted, with the accuracy of their responses generally improving as the amount of training data increases.
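As a toy illustration of this probabilistic pattern-matching (a minimal sketch, not any company's actual model), a language model assigns each candidate next token a score and converts those scores into probabilities with a softmax; the candidate words and numbers below are invented purely for illustration:

```python
import math

def softmax(scores):
    """Convert raw scores (logits) into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a trained model might assign to candidate
# next tokens after the prompt "The cat sat on the ..." —
# the values are invented for this sketch.
candidates = ["mat", "roof", "keyboard"]
logits = [3.1, 1.2, 0.4]

probs = softmax(logits)                      # probabilities sum to 1
prediction = candidates[probs.index(max(probs))]
```

In a real LLM the scores come from billions of learned parameters rather than a hand-written list, but the final step — sampling the next token from a probability distribution — works on this principle.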
But scaling these models consistently incurs huge financial and energy costs. In 2024, the generative AI industry attracted $56 billion in venture capital worldwide, much of it going toward building massive data centers, whose carbon emissions have tripled since 2018.
Projections also show that the finite supply of human-generated data needed for continued growth will likely be exhausted by the end of this decade. Once that happens, the alternatives will be to harvest users' personal data or to feed AI-generated "synthetic" data back into models — which risks model collapse, where errors compound as models train on their own output.
However, the experts who conducted the study argue that the limitations of current models are likely related not only to their resource intensity, but also to fundamental flaws in their architecture.
“I think the main problem with current approaches is that they all involve training large feedforward circuits,” Russell said. “Circuits have fundamental limitations as a means of representing concepts. This means that circuits have to be huge to represent such concepts even approximately — like a bloated lookup table, in fact — which leads to huge data requirements and a fragmented representation with gaps. This is why, for example, ordinary human players can easily defeat ‘superhuman’ Go programs.”
Source: www.livescience.com