OpenAI says it’s charting a "path to AGI" with its next frontier AI model

OpenAI CEO Sam Altman speaks during the OpenAI DevDay event on November 06, 2023 in San Francisco, California.
(Image credit: Getty Images)

OpenAI has revealed that it recently started work on training its next frontier large language model (LLM).

The first version of OpenAI’s ChatGPT debuted back in November 2022 and became an unexpected breakthrough hit that launched generative AI into the public consciousness.

Since then, there have been a number of updates to the underlying model. The first version of ChatGPT was built on GPT-3.5, which finished training in early 2022, while GPT-4 arrived in March 2023. The most recent, GPT-4o, arrived in May this year.

Now OpenAI is working on a new LLM and said it anticipates the resulting system will “bring us to the next level of capabilities on our path to AGI [artificial general intelligence].”

AGI is a hotly contested concept whereby an AI would – like humans – be good at adapting to many different tasks, including ones it has never been trained on, rather than being designed for one particular use.

AI researchers are split on whether AGI could ever exist or whether the search for it may even be based on a misunderstanding of how intelligence works.

OpenAI provided no details of what the next model might do, but as its LLMs have evolved, the capabilities of the underlying models have expanded.

While GPT-3 could only deal with text, GPT-4 can accept images as well, and GPT-4o has been optimized for voice communication. Context windows have also increased markedly with each iteration, although the size of the models and other technical details remain secret.

Sam Altman, CEO at OpenAI, has stated that GPT-4 cost more than $100 million to train, per Wired, and the model is rumored to have more than one trillion parameters. This would make it one of the biggest, if not the biggest, LLMs currently in existence.

That doesn’t necessarily mean the next model will be even larger; Altman has previously suggested the race for ever bigger models may be coming to an end. 

Smaller models working together might be a more effective way of deploying generative AI, he has said.

And even though OpenAI has started training its next model, don’t expect to see the impact of it very soon. Training a model can take many months, and that can be just the first step: it took six months of testing after training was finished before OpenAI released GPT-4.

New OpenAI safety committee given the green light

The company also said it will create a new ‘Safety and Security Committee’ led by OpenAI directors Bret Taylor, Adam D’Angelo, Nicole Seligman, and Altman. This committee will be responsible for making recommendations to the board on critical safety and security decisions for OpenAI projects and operations.

One of its first tasks will be to evaluate and develop OpenAI’s processes and safeguards over the next 90 days. After that, the committee will share its recommendations with the board.

Some may raise eyebrows at the safety committee being made up of members of OpenAI’s existing board.

Dr Ilia Kolochenko, CEO at ImmuniWeb and adjunct professor of cybersecurity at Capitol Technology University, questioned whether the move will actually deliver positive outcomes as far as AI safety is concerned.


“Being safe does not necessarily imply being accurate, reliable, fair, transparent, explainable and non-discriminative – the absolutely crucial characteristics of GenAI solutions,” Kolochenko said. “In view of the past turbulence at OpenAI, I am not sure that the new committee will make a radical improvement.”

The launch of the safety committee comes amid growing calls for more rigorous regulation and oversight of LLM development. Most recently, a former OpenAI board member argued that self-governance isn’t the right approach for AI firms, and that a strong regulatory framework is needed instead.

OpenAI has made public efforts to calm AI safety fears in recent months. It was among a host of major industry players to sign up to a safe development pledge at the Seoul AI Summit that could see them pull the plug on their own models if they cannot be built or deployed safely.

But these commitments are voluntary and come with plenty of caveats, leading some experts to call for stronger legislation and requirements for tougher testing of LLMs.

Because of the potentially large risks associated with the technology, critics argue that AI companies should be subject to a regulatory framework similar to that governing pharmaceutical companies, where firms have to meet standards set by regulators who make the final decision on if and when a product can be released.

Steve Ranger

Steve Ranger is an award-winning reporter and editor who writes about technology and business. Previously he was the editorial director at ZDNET and the editor of silicon.com.