Sam Altman, CEO of OpenAI, admits that the benefits of AI may not be widely distributed

In a new essay on his personal blog, OpenAI CEO Sam Altman stated that the company is open to a “compute budget,” among other “strange-sounding” ideas, to “enable everyone on Earth to use a lot of AI” and ensure that the benefits of the technology are widely distributed.

“The historical impact of technological progress suggests that most of the metrics we care about (health outcomes, economic prosperity, etc.) improve on average and over the long run, but the increase in equality does not seem to be technologically determined, and getting this right may require new ideas,” Altman wrote. “In particular, it seems that the balance of power between capital and labor could easily be disrupted, and this may require early intervention.”

The solutions to this problem, such as Altman’s “compute budget” concept, may be easier to propose than to execute. AI is already affecting the labor market, prompting job cuts across some departments. Experts have warned that the AI boom could lead to mass unemployment if it is not accompanied by appropriate government policies and training and upskilling programs.

This is not the first time Altman has claimed that Artificial General Intelligence (AGI), which he defines as “an AI system capable of tackling increasingly complex problems, at a human level, across many fields,” is close. Whatever form it takes, this AI will not be perfect, Altman warns, in the sense that it may “require a lot of human oversight and direction.”

“[AGI systems] will not have the most novel ideas,” wrote Altman, “and they will be excellent at some things but surprisingly bad at others.”

But the real value of AGI will come from deploying these systems at scale, Altman argued. Like Dario Amodei, CEO of OpenAI rival Anthropic, Altman envisions thousands or even millions of hyper-capable AI systems tackling tasks “in all areas of knowledge work.”

One might assume this will be an expensive vision to bring to life. Indeed, Altman noted that “arbitrary amounts of money can be spent and continuous, predictable returns obtained” in AI performance. Perhaps that is why, according to reports, OpenAI is in talks to raise up to $40 billion in a funding round and has committed to spending up to $500 billion with partners on a massive network of data centers.

However, Altman also maintains that the cost of using “a given level of AI” falls roughly tenfold every 12 months. In other words, pushing the frontier of AI technology will not get any cheaper, but users will gain access to increasingly capable systems along the way.

Inexpensive yet capable AI models from the Chinese startup DeepSeek and other companies seem to support this idea. There is evidence suggesting that training and development costs are also falling, but both Altman and Amodei have argued that massive investments will still be required to achieve AGI-level AI and beyond.

As for how OpenAI plans to roll out AGI-level systems (assuming it can indeed create them), Altman said the company will likely make “some important and unpopular decisions and restrictions related to AGI safety.” Out of concern for safety, OpenAI once pledged that it would stop competing with, and start assisting, any “value-aligned” and “safety-conscious” project that comes close to building AGI before it does.

Of course, that was when OpenAI intended to remain a non-profit organization. The company is in the process of converting its corporate structure to a more traditional for-profit organization. Reports say OpenAI aims to reach $100 billion in revenue by 2029, which is equivalent to the current annual sales of Target and Nestlé.

In light of this, Altman added that as OpenAI builds more powerful AI, its goal will be to “lean more toward individual empowerment” while avoiding a future in which “authoritarian governments use AI to control their population through mass surveillance and loss of autonomy.” Altman recently said that he believes OpenAI has been on the wrong side of history regarding the openness of its technologies. While OpenAI has open-sourced some of its technology in the past, the company has generally favored a proprietary, closed-source development approach.

“AI will infiltrate all areas of the economy and society; we will expect everything to be intelligent,” Altman stated. “Many of us expect to have to give people more control over technology than we have historically had, including more open access, and accept that there is a balance between security and individual empowerment that will require trade-offs.”

Altman’s blog post comes ahead of this week’s AI Action Summit in Paris, which has already prompted other tech figures to share their own visions for the future of AI.

In a footnote, Altman added that OpenAI does not, in fact, plan to use the term AGI to end its relationship with its close partner and investor Microsoft anytime soon. Reports suggest that Microsoft and OpenAI have a contractual definition of AGI (AI systems that can generate $100 billion in profits) which, once fulfilled, would allow OpenAI to negotiate more favorable investment terms. However, Altman stated that OpenAI “fully expects to partner with Microsoft long-term.”
