The Paradox at the Heart of Elon Musk’s OpenAI Lawsuit



It would be easy to dismiss Elon Musk’s lawsuit against OpenAI as a case of sour grapes.

Mr. Musk sued OpenAI this week, accusing the company of violating the terms of its founding agreement and its founding principles. According to the suit, OpenAI was founded as a nonprofit organization that would build powerful AI systems for the benefit of humanity and make its research available to the public for free. But Mr. Musk argues that OpenAI broke that promise by creating a for-profit subsidiary that took billions of dollars in investment from Microsoft.

An OpenAI spokeswoman declined to comment on the lawsuit. In a memo sent to employees on Friday, a copy of which I viewed, Jason Kwon, the company’s chief strategy officer, dismissed Mr. Musk’s claims, writing: “We believe the claims in this suit may stem from Elon’s regrets about not being involved with the company today.”

On one level, the lawsuit reeks of personal anger. Mr. Musk, who founded OpenAI in 2015 with a group of other tech heavyweights and provided much of its initial funding but left in 2018 amid disputes with leadership, chafes at being left out of the AI conversation. His own AI projects haven’t gotten nearly as much traction as ChatGPT, OpenAI’s flagship chatbot. And the feud between Mr. Musk and Sam Altman, the chief executive of OpenAI, is well documented.

But amid all the antipathy, there’s one point worth highlighting, because it illustrates a paradox that’s at the heart of many AI discussions today – and a place where OpenAI has really been talking out of both sides of its mouth. The company insists both that its AI systems are incredibly powerful and that they are nowhere near matching human intelligence.

At the heart of the claim is a term known as AGI, or “artificial general intelligence.” Defining what constitutes AGI is notoriously difficult, although most people would agree that it means an AI system that can do most or all of the things the human brain can do. Mr. Altman has defined AGI as “the equivalent of a median human that you could hire as a co-worker,” while OpenAI itself defines AGI as “a highly autonomous system that outperforms humans at most economically valuable work.”

Most AI company executives claim that AGI is not only possible, but also imminent. Demis Hassabis, the CEO of Google DeepMind, told me in a recent podcast interview that he believes AGI could arrive as early as 2030. Mr. Altman has said AGI may be as little as four or five years away.

Building AGI is the explicit goal of OpenAI, and there are many reasons to want to reach that goal before anyone else. A true AGI would be an incredibly valuable resource, capable of automating vast amounts of human labor and making vast amounts of money for its creators. It’s also the kind of shiny, audacious goal that investors love to fund and that helps AI labs recruit top engineers and researchers.

But AGI could also be dangerous if it is able to outsmart humans, or if it becomes deceptive or misaligned with human values. The people who created OpenAI, including Mr. Musk, worried that an AGI would be too powerful to be owned by a single entity, and that if they ever came close to building one, they would need to change the control structure around it, to prevent it from causing harm or concentrating too much wealth and power in the hands of a single company.

Therefore, when OpenAI partnered with Microsoft, the company explicitly granted the tech giant a license that only applied to “pre-AGI” technologies. (The New York Times has sued Microsoft and OpenAI over their use of copyrighted works.)

Under the terms of the agreement, if OpenAI ever built something that met the definition of AGI – as determined by OpenAI’s nonprofit board – Microsoft’s license would no longer apply, and OpenAI’s board could decide to do whatever it wanted to ensure that OpenAI’s AGI benefited all of humanity. That could mean many things, including open-sourcing the technology or shutting it down entirely.

Most AI commentators believe that today’s cutting-edge AI models do not qualify as AGI, because they lack sophisticated reasoning skills and frequently make bone-headed errors.

But in his complaint, Mr. Musk makes an unusual argument. He argues that OpenAI has already achieved AGI with its GPT-4 language model, which was released last year, and that the company’s future technology will even more clearly qualify as AGI.

“Upon information and belief, GPT-4 is an AGI algorithm and is therefore expressly outside the scope of Microsoft’s September 2020 exclusive license with OpenAI,” the complaint states.

What Mr. Musk is arguing here is a bit complicated. Essentially, he’s saying that because OpenAI has achieved AGI with GPT-4, it can no longer license it to Microsoft and that its board is obligated to make the technology and research more freely available.

In his complaint, he cites a Microsoft research team’s now infamous “Sparks of AGI” paper from last year, which argued that GPT-4 showed early evidence of general intelligence, including signs of human reasoning.

However, the complaint also notes that OpenAI’s board is unlikely to decide that its AI systems actually qualify as AGI, because once it did, it would have to make big changes to the way it deploys and profits from that technology.

Furthermore, he notes that Microsoft – which now holds a nonvoting observer seat on OpenAI’s board, after an upheaval last year that led to Mr. Altman’s temporary firing – has a strong incentive to deny that OpenAI’s technology qualifies as AGI. That would end its license to use that technology in its products and jeopardize potentially huge profits.

“Given Microsoft’s enormous financial interest in keeping the gate closed to the public, OpenAI, Inc.’s new, conflicted and compliant board will have every reason to delay ever making a finding that OpenAI has attained AGI,” the complaint says. “To the contrary, OpenAI’s attainment of AGI, like ‘Tomorrow’ in ‘Annie,’ will always be a day away.”

Given his history of questionable litigation, it’s easy to doubt Mr. Musk’s motives here. And as the head of a rival AI startup, it’s no surprise that he wants to embroil OpenAI in messy legal battles. But his lawsuit points to a real conundrum for OpenAI.

OpenAI, like its competitors, is eager to be seen as a leader in the race to build AGI and has a vested interest in convincing investors, business partners and the public that its systems are improving rapidly.

But because of the terms of its deal with Microsoft, OpenAI’s investors and executives may not want to admit that its technology actually qualifies as AGI, if and when it actually does.

That has put Mr. Musk in the strange position of asking a jury to rule on what constitutes AGI, and to decide whether OpenAI’s technology has met that threshold.

The lawsuit has also put OpenAI in the strange position of downplaying the capabilities of its own systems while stoking expectations that a major AGI breakthrough is imminent.

“GPT-4 is not AGI,” OpenAI’s Mr. Kwon wrote in the memo to employees on Friday. “It is capable of solving small tasks in many professions, but the ratio of work done by a human to the work done by GPT-4 in the economy remains surprisingly high.”

The personal dispute fueling Mr. Musk’s complaint has led some people to view it as a frivolous suit – one commenter likened it to “suing your ex for remodeling the house after your divorce” – that will be quickly dismissed.

But even if it is dismissed, Mr. Musk’s lawsuit raises important questions: Who gets to decide when something qualifies as AGI? Are tech companies overstating or understating (or both) the capabilities of their systems? And what incentives lie behind the various claims about how close to or far from AGI we might be?

A lawsuit from a resentful billionaire is probably not the right way to resolve these issues. But it makes sense to ask them, especially as AI progress continues to accelerate.



2024-03-02 10:03:55

www.nytimes.com