OpenAI and Microsoft were partners until now: Artificial General Intelligence just changed that

There was a time not so long ago when Microsoft seemed to be the smartest player in Big Tech in the face of the advent of generative AI. Its investment in OpenAI appeared to give it almost parental control over the maker of ChatGPT. So much so that, during the soap-opera revolt that briefly pushed Sam Altman out of the company he founded, Microsoft was quick to announce it would hire him to continue his plans in-house.

Today, that collaboration has cooled. OpenAI's autonomous growth means Microsoft's paternal grip is no longer as strong. The "son" may now be on the verge of breaking away, removing itself from the "father's" authority.

This is the situation currently shaking the multi-billion-dollar alliance between Microsoft and OpenAI, one of the most ambitious and closely watched pacts in recent technological history. A small clause that seemed innocuous—and distant—has become the epicenter of a conflict that is redefining the balance of power in the age of artificial intelligence.

OpenAI has developed a five-level scale to classify the evolution of AI toward AGI. This is not just an academic exercise. The document, not yet officially published, raises the question of when and how OpenAI will be able to declare that it has achieved artificial general intelligence. If it does, the agreement with Microsoft changes drastically.

The clause would prevent the Redmond-based tech company from accessing any model or profit derived from said AGI. And this, in the context of an investment of more than $13 billion, is a major corporate earthquake.

The Endgame Clause: Microsoft and OpenAI's AGI

Within the contract governing the collaboration between Microsoft and OpenAI, there is a clause that until recently seemed harmless: if OpenAI declares it has achieved AGI, Microsoft would lose access to future developments based on that technology. According to sources close to the negotiations, the clause was drafted as an ethical and strategic safeguard. But now that the possibility seems more real, it has become a bargaining chip.

Microsoft wants to modify that clause. It has even hinted that it could withdraw from the agreement if these restrictions are not removed. Meanwhile, OpenAI sees this provision as its greatest advantage: it allows it to retain control over its most advanced technology without indefinitely sharing it with its financial partner.

The timing is delicate. Both parties are renegotiating the contract in parallel with a corporate restructuring of OpenAI that could include new governance frameworks. The differences are not just legal: what's at stake is who defines what AGI is, when it is achieved, and what economic, technological, and political consequences such an announcement entails.

The Five Levels of General Ability: A Scale for Classifying the Future

The internal document, titled "Five Levels of General AI Capabilities," establishes a phased classification for understanding progress toward AGI. Each level represents a qualitative leap in the autonomy and capabilities of AI systems.

  • Level 1: Systems that are fluent in language and perform basic tasks, at the level of a human beginner.
  • Level 2: Systems capable of performing complex tasks at the level of an expert, although with supervision.
  • Levels 3 to 5: Not publicly detailed, but understood to involve autonomous, adaptive AI able to reason across varied contexts, surpassing the average human in efficiency.

This scale doesn't seek to set absolute dates or milestones. Its approach is gradual, and it avoids a single, closed definition of the concept of AGI. But by placing current models at Level 1 or 2, and anticipating that Level 3 will arrive "faster than we think," it becomes an uncomfortable benchmark. Any claim of having achieved AGI would be questioned through that same internal lens.

OpenAI, in fact, avoided publishing this work, possibly due to its contractual implications. Although the company officially attributes this to technical issues, multiple sources indicate that the risk of triggering the clause with Microsoft was a key barrier.

Who decides when we reach AGI?

The debate over what AGI is and when it arrives has become a power struggle. According to the contract, there are two possible triggering definitions:

  1. Unilateral definition : OpenAI's board can declare it has achieved AGI if, according to its charter, its systems outperform humans on most economically valuable tasks. At that point, Microsoft would lose access to future technology.

  2. Sufficient AGI : A concept introduced in 2023 that links AGI to the level of economic profit generated. In this case, Microsoft would have the right to validate the declaration, which introduces shared control.

This isn't just semantics. If OpenAI uses the first option, Microsoft could be excluded without a veto. If it chooses the second, it would be accepting a slower, more consensual mechanism. At the same time, the contract prohibits Microsoft from pursuing its own AGI with OpenAI's intellectual property, which limits its scope for action even if the relationship breaks down.

OpenAI on the tightrope: internal tensions and external strategy

The publication of the five-level document also sparked debate within OpenAI. Although it was well received among the research teams, several employees pointed out that negotiations with Microsoft were a barrier to its release. The report was edited, visually prepared, and subjected to technical review, suggesting it was almost ready for publication.

Sam Altman himself has downplayed the importance of labels. "The question of what AGI is doesn't matter that much," he said. However, in the same speech, he mentioned that the o1 model could already be at Level 2, and that they will reach Level 3 sooner than expected. This dual narrative—downplaying the concept publicly while using it internally as a metric for progress—reflects the strategic dilemma facing the company.

Altman has also stated that he expects to see AGI during Donald Trump's current term. That time frame gives a clear clue: this is not a hypothetical or futuristic discussion. The moment is approaching, and the decisions made in the coming months could redefine the map of global technological power.

A breakup foretold or a new phase of the pact?

Are we facing the end of the most influential alliance in modern artificial intelligence? Not necessarily. But we are facing an inevitable redefinition. What began as a synergistic relationship now looks more like a corporate tug-of-war, with AGI serving as both a bargaining chip and an existential threat.

The history of major technology alliances has always oscillated between collaboration and competition. Google and Apple, IBM and Microsoft, even Amazon and its suppliers. Now it's the turn of OpenAI and Microsoft, two players who need each other, but are beginning to diverge in their goals and pace.

The final question is the most troubling: when AGI arrives—if it ever does—who will control it? A board of directors? A visionary CEO? An international committee? Or, perhaps, an algorithm that we don't even know how to fully interpret today.

eleconomista