The Information: OpenAI shows ‘Strawberry’ to feds, races to launch it

Two new articles from The Information offer insider details on OpenAI’s next models and moves.

They are paywalled, but here are the new bits of information:

  • Strawberry is more expensive and slower at inference time, but it can solve complex problems on the first try without hallucinations. It seems to be an application or extension of process supervision (see the sketch after this list).

  • Its main purpose is to produce synthetic training data for Orion, their next big LLM.

  • But now they are also pushing to get a distillation of Strawberry into ChatGPT as soon as this fall.

  • They showed it to the feds.
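
For context on that first bullet: process supervision, as OpenAI described it publicly in its 2023 “Let’s Verify Step by Step” work, rewards each intermediate reasoning step instead of only the final answer. A minimal sketch of the distinction; the names (Step, outcome_reward, process_reward) are my illustration, not OpenAI’s implementation:

```python
from dataclasses import dataclass

@dataclass
class Step:
    text: str
    verified: bool  # did a human or automatic verifier mark this step correct?

def outcome_reward(steps: list[Step], final_answer_correct: bool) -> float:
    # Outcome supervision: one signal for the whole chain, so flawed
    # reasoning that lucks into the right answer is still rewarded.
    return 1.0 if final_answer_correct else 0.0

def process_reward(steps: list[Step]) -> float:
    # Process supervision: every intermediate step is judged, so a
    # single bad step zeroes out the reward for the whole chain.
    return 1.0 if steps and all(s.verified for s in steps) else 0.0
```

A model trained against the second signal is pushed toward chains in which every step checks out, which is one plausible reading of the “first try without hallucinations” claim.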

Some excerpts on these points:

Plus this summer, his [Altman’s] team demonstrated the technology [Strawberry] to American national security officials, said a person with direct knowledge of those meetings, which haven’t previously been reported.

One of the most important applications of Strawberry is to generate high-quality training data for Orion, OpenAI’s next flagship large language model that’s in development. The codename hasn’t previously been reported.

Using Strawberry could help Orion reduce the number of hallucinations, or errors, it produces, researchers tell me. That’s because AI models learn from their training data, so the more correct examples of complex reasoning they see, the better. But there’s also a push within OpenAI to simplify and shrink Strawberry through a process called distillation, so it can be used in a chat-based product before Orion is released. This shouldn’t come as a surprise, given the intensifying competition among the top AI developers. We’re not sure what a Strawberry-based product might look like, but we can make an educated guess.

One obvious idea would be incorporating Strawberry’s improved reasoning capabilities into ChatGPT. However, though these answers would likely be more accurate, they also might be slower.

Researchers have aimed to launch the new AI, code-named Strawberry (previously called Q*, pronounced Q Star), as part of a chatbot—possibly within ChatGPT—as soon as this fall, said two people who have been involved in the effort. Strawberry can solve math problems it hasn’t seen before—something today’s chatbots cannot reliably do—and also has been trained to solve problems involving programming. But it’s not limited to answering technical questions.

When given additional time to “think,” the Strawberry model can also answer customers’ questions about more subjective topics, such as product marketing strategies. To demonstrate Strawberry’s prowess with language-related tasks, OpenAI employees have shown their co-workers how Strawberry can, for example, solve New York Times Connections, a complex word puzzle.

But OpenAI’s prospects rest in part on the eventual launch of a new flagship LLM it is currently developing, code-named Orion.

It isn’t clear whether a chatbot version of Strawberry that can boost the performance of GPT-4 and ChatGPT will be good enough to launch this year. The chatbot version is a smaller, simplified version of the original Strawberry model, known as a distillation.
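
In its common public form, distillation of this kind is sequence-level: a large teacher model answers prompts, and a smaller student model is fine-tuned to imitate those answers. A minimal sketch under that assumption; Model, generate, and fine_tune are placeholder names, not OpenAI’s actual pipeline:

```python
from typing import Protocol

class Model(Protocol):
    def generate(self, prompt: str) -> str: ...
    def fine_tune(self, pairs: list[tuple[str, str]]) -> None: ...

def distill(teacher: Model, student: Model, prompts: list[str]) -> Model:
    # The teacher (the big Strawberry model) answers every prompt:
    # slow and expensive at inference time, but high quality.
    pairs = [(prompt, teacher.generate(prompt)) for prompt in prompts]
    # The student is fine-tuned to imitate those answers, yielding a
    # smaller, faster model that can be served inside a chatbot.
    student.fine_tune(pairs)
    return student
```

The article’s open question, whether the chatbot version is good enough to launch, is exactly the trade-off here: the student is far cheaper to serve, but it usually recovers only part of the teacher’s capability.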

However, OpenAI is also using the bigger version of Strawberry to generate data for training Orion, said a person with knowledge of the situation. That kind of AI-generated data is known as “synthetic.” It means that Strawberry could help OpenAI overcome limitations on obtaining enough high-quality data to train new models from real-world data such as text or images pulled from the internet.

In addition, Strawberry could aid upcoming OpenAI agents, this person said.

Using Strawberry to generate higher-quality training data could help OpenAI reduce the number of errors its models generate, otherwise known as hallucinations, said Alex Graveley, CEO of agent startup Minion AI and former chief architect of GitHub Copilot.

Imagine “a model without hallucinations, a model where you ask it a logic puzzle and it’s right on the first try,” Graveley said. The reason why the model is able to do that is because “there is less ambiguity in the training data, so it’s guessing less.”
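
Graveley’s “less ambiguity” point maps onto a simple generate-and-filter loop: have the strong model produce candidate reasoning traces, keep only those whose final answers pass a checker, and train the next model on the survivors. A sketch, assuming problems with machine-checkable answers; all function names here are hypothetical:

```python
from typing import Callable

def make_synthetic_dataset(
    generate: Callable[[str], str],        # strong model: problem -> worked solution
    extract_answer: Callable[[str], str],  # pull the final answer out of a solution
    check: Callable[[str, str], bool],     # verify an answer against its problem
    problems: list[str],
    tries_per_problem: int = 8,
) -> list[tuple[str, str]]:
    dataset = []
    for problem in problems:
        for _ in range(tries_per_problem):
            solution = generate(problem)
            if check(problem, extract_answer(solution)):
                # Keep only verified traces: less ambiguity in the
                # training data, so the next model "guesses less".
                dataset.append((problem, solution))
                break
    return dataset
```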

“We feel like we have enough [data] for this next model,” Altman said at an event in May, likely referring to Orion. “We have done all sorts of experiments including generating synthetic data.”

Strawberry has its roots in research. It was started years ago by Ilya Sutskever, then OpenAI’s chief scientist. He recently left to start a competing AI lab. Before he left, OpenAI researchers Jakub Pachocki and Szymon Sidor built on Sutskever’s work by developing a new math-solving model, Q*, alarming some researchers focused on AI safety.

The breakthrough and safety conflicts at OpenAI came just before OpenAI board directors—led by Sutskever—fired Altman before quickly rehiring him.

Last year, in the lead-up to Q*, OpenAI researchers developed a variation of a concept known as test-time computation, meant to boost LLMs’ problem-solving abilities. The method gives them the opportunity to spend more time considering all parts of a command or question someone has asked the model to execute. At the time, Sutskever published a blog post related to this work.
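
The article doesn’t say which variation OpenAI built, but the best-known public form of test-time computation is self-consistency: sample several independent reasoning chains for the same question and keep the consensus answer, trading extra inference time for accuracy. A minimal sketch; sample_answer stands in for one sampled LLM completion:

```python
from collections import Counter
from typing import Callable

def self_consistency(sample_answer: Callable[[str], str],
                     question: str, n: int = 16) -> str:
    # Spend more compute at inference: draw n independent answers
    # (e.g. n chains of thought sampled at nonzero temperature)...
    answers = [sample_answer(question) for _ in range(n)]
    # ...and return the most common final answer.
    return Counter(answers).most_common(1)[0][0]
```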