
Division among big tech over the future of AI

On the cover of the business plan for DeepMind, the artificial intelligence (AI) lab they founded in 2010, Demis Hassabis, Mustafa Suleyman, and Shane Legg wrote a single sentence: "To create the world's first artificial general intelligence."

Their view, which they still hold today, was that traditional AI technologies were too "narrow." They could perform brilliantly, but only after humans had laboriously trained them on large datasets. This made AI excellent at tasks like analyzing spreadsheets or playing chess. But artificial general intelligence, known as AGI, had the potential to go much further.

Fifteen years later, tech CEOs are convinced that AGI is the next big thing and are full of praise for its potential. Among them is Sam Altman, CEO of OpenAI. According to him, AGI "could help humanity grow, increase abundance, accelerate the global economy, and contribute to the discovery of new scientific knowledge."

Hassabis, whose company DeepMind was acquired by Google and became one of the world's most influential AI labs, says AGI has the potential to solve global problems such as curing diseases, helping people live healthier and longer lives, and finding new sources of energy.

Dario Amodei, CEO of Anthropic, who prefers the phrase "powerful AI" to describe AGI, believes it will likely be "smarter than a Nobel Prize winner" across the most relevant fields, and has described it as "a country of geniuses in a data center."

Yann LeCun, Meta's chief AI scientist and considered one of the "godfathers" of the technology, prefers to use the term artificial superintelligence (ASI) , as human intelligence isn't really that general: "We are very specialized, and computers can solve certain tasks much better than we can."

Lack of consensus

Regardless of the term ultimately chosen, there's growing talk of a technology that was once mere science fiction and could now become a reality. But just as Silicon Valley can't agree on exactly what AGI or ASI is, there's no consensus on what it will look like if or when it becomes a reality.

When DeepMind coined the term, it declared that AGI was an "AI that is at least as capable as a skilled adult at most cognitive tasks." But this definition raises more questions: What is a skilled adult? How do we know when a system can perform most cognitive tasks? What are those tasks?

"For some, AGI is a scientific goal. For others, it's a religion. And for some, it's a marketing term," notes François Chollet, a former software engineer at Google. Consequently, there are a wide range of estimates about when it might arrive. Elon Musk believes AI technology that's smarter than humans will become a reality this year. Anthropic's Amodei puts the date at 2026. And Altman, who believes it will arrive during Donald Trump's presidency.

OpenAI and Anthropic have raised billions of dollars from investors to develop the technology and are backed by the White House's plans to halt AI regulation in order to stay ahead of China. OpenAI also has Trump's support for investing in data centers in the United States and the Middle East.

AGI was mentioned 53% more often in companies' earnings presentations in the first quarter of 2025 than in the same period of the previous year.

But defining it is vital to understanding its implications and whether or not it should be a priority.

The EU hasn't ruled out suspending its AI law, partly for fear of hindering the technology's development. The UK's AI Security Institute is trying to understand what AGI is in order to plan its security policy and research.

Even in its loosest definition, AGI would greatly accelerate computing, but at a very high financial and environmental cost. And if engineers do manage to create the technology, how can we ensure it will be used equitably and fairly?

What is AGI really?

For OpenAI, it's a technology that can be used to perform work that brings economic benefits. "We're trying to develop a highly autonomous system that can outperform humans at many tasks of economic value," says Mark Chen, the company's research director. According to him, a key feature is generality, the ability to perform tasks in a wide variety of fields: "It should be fairly autonomous and not need much help to accomplish its tasks. AI will be able to quickly bring what's in our heads to life and has the potential to help people create not just images or text, but entire applications."

But critics argue that this definition falls short of describing a truly intelligent system. "That's just automation , something we've been doing for decades," says Chollet, the former Google engineer.

DeepMind's Legg takes a different view: "I think typical human performance is the most natural, practical, and useful way to define the minimum requirements for an AI to be considered an AGI. A big problem with many AGI definitions is that they don't specify clearly enough what an AI system must be able to do to be considered an AGI."

For DeepMind, it must be "as capable as an experienced adult at performing most cognitive tasks. If people can routinely perform a cognitive task, then an artificial intelligence must be able to do it to be an AGI," Legg notes.

The Google-owned lab has established five levels of AGI capabilities. AI models like OpenAI's ChatGPT, Google's Gemini, and Meta's Llama would only reach level one, or "emerging AGI." So far, no general model has reached level two, which would require it to perform at least as well as the 50th percentile of skilled adults, says Allan Dafoe, director of frontier safety and governance at DeepMind.

Level three would require the model to be as good as at least the 90th percentile of skilled adults, level four would require the 99th percentile, and level five, superhuman AI or artificial superintelligence, would outperform 100% of humans.

What is the roadmap?

If there's no agreement on the goal, it's no wonder there are many theories about the best path to AGI. OpenAI and Anthropic argue that the language models they're creating represent the best route. Their idea is that the more data and computing power a model is fed, the "smarter" it will be.

The startup behind ChatGPT has just unveiled its new "reasoning" model, o3, which solves more complex coding, math, and image recognition tasks. Some experts, such as economist Tyler Cowen, believe this is the closest technology to AGI.

For Chen, the next step toward AGI would be to create models capable of acting independently and reliably. AI tools could then produce innovations and, ultimately, operate like organizations, similar to large structures of humans working together.

Another key feature is self-improvement. "It's a system that can improve itself, write its own code, and generate the next version of itself, which makes it even better," Chen adds. But critics say language models have countless weaknesses. They're still very inaccurate, they make things up, and they don't really "think"; they merely predict the next likely word in a sentence.

According to a much-discussed article by Apple researchers, the new generation of reasoning models merely create the illusion of thinking, and their accuracy declines significantly when presented with complex tasks. Some experts also argue that language alone cannot capture all dimensions of intelligence and that broader models need to be developed to incorporate more dimensions.

LeCun of Meta is creating "world models," which attempt to encapsulate the physics of our world by learning from video and robotic data, rather than language. He argues that we need a more holistic understanding of the world to create superior AI.

Possible problems

The AI industry is running out of data, having obtained most of it from the internet. Despite this, Altman stated in December that AGI "will become a reality sooner than most people think, and it will matter much less than people think. Our next goal is to prepare OpenAI for what comes next: superintelligence."

According to AGI critics, this diversity of opinions reveals the companies' true motivations. Nick Frosst, co-founder of the AI startup Cohere, believes that "AGI is mostly a bubble that is raising capital off of this idea." And Antoine Moyroud, partner at Lightspeed Ventures, a venture capital firm that has invested in companies such as Anthropic and Mistral, notes that "with AGI, investors not only have the hope of hundreds of millions of dollars in revenue, but also the prospect of transforming how we generate GDP, potentially generating trillions of dollars in output. That's why people are willing to take on the risk with AGI."

Other issues

More and more people are turning to AI chatbots for friendship, companionship, and even therapy . But this is only possible because of the immense amount of human labor that allows AI chatbots to appear smarter—or more responsive—than they actually are.

Some wonder if AGI will be a good thing. "Biology, psychology, and education have not yet fully understood intelligence," says Margaret Mitchell, ethics director at the open-source AI company Hugging Face and co-author of a paper arguing that AGI should not be considered a North Star. Experts say this drive to develop a certain type of technology concentrates power and wealth in a small minority of people and exploits artists and creators whose intellectual property ends up in massive data sets without their consent and without compensation.

The pursuit of AGI also has a huge environmental footprint, as increasingly powerful models require vast amounts of water and energy to train and operate in massive data centers. It also drives up consumption of highly polluting energy sources, such as oil and gas.

It also raises ethical questions and potential social harms. In the race to develop the technology and reap its economic advantages, governments are neglecting regulations that would provide basic protections against the harms of AI technologies, such as algorithmic bias and discrimination.

There is also an influential minority—including researchers considered the founding fathers of modern AI, such as Yoshua Bengio and Geoffrey Hinton—who warn that, if left unchecked, AGI could lead to human extinction.

One of the dangers of the "AGI at all costs" idea is that it can foster bad science , Mitchell says. Other, more established subjects, such as chemistry and physics, have scientific methods that allow for rigorous testing. But computer science is a much newer and more engineering-focused field, with a tendency to make "wonderful, sweeping claims that aren't actually supported by research." And Cohere's Frosst warns that "politicians and businesses have a responsibility to reflect on the real risks of powerful technologies."

But creating reliable ways to measure and evaluate AI technologies in the real world is hampered by the industry's obsession with AGI. "Until we achieve that, AGI will be nothing more than an illusion and a buzzword," Mitchell concludes.

© The Financial Times Limited [2025]. All rights reserved. FT and Financial Times are registered trademarks of Financial Times Limited. Redistribution, copying, or modification is prohibited. EXPANSIÓN is solely responsible for this translation, and Financial Times Limited is not responsible for its accuracy.
