OpenAI researchers warned board of AI breakthrough ahead of CEO ouster: Sources

According to sources, some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the start-up’s search for what is known as artificial general intelligence. PHOTO: REUTERS

SAN FRANCISCO – Ahead of OpenAI chief executive Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence (AI) discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and AI algorithm were key developments before the board’s ouster of Mr Altman, the poster child of generative AI, the two sources said.

Prior to his triumphant return late on Nov 21, more than 700 employees had threatened to quit and join backer Microsoft in solidarity with their fired leader.

The sources, who spoke on condition of anonymity because they were not authorised to speak on behalf of the company, cited the letter as one factor in a longer list of the board's grievances leading to Mr Altman's firing, including concerns over commercialising advances before understanding the consequences.

Reuters was unable to review a copy of the letter. The researchers who wrote the letter did not respond to requests for comment.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to employees a project called Q* (pronounced Q-star) and a letter to the board before the weekend’s events, one of the sources said.

An OpenAI spokesperson said the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some at OpenAI believe Q* could be a breakthrough in the start-up’s search for what is known as artificial general intelligence (AGI), one of the sources told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the source said. Although the model performs mathematics only at the level of grade-school students, acing such tests made researchers very optimistic about Q*'s future success, the source added.

Reuters could not independently verify the capabilities of Q* as claimed.

Veil of ignorance

Researchers consider mathematics to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation because it statistically predicts the next word, so answers to the same question can vary widely. But mastering mathematics – where there is only one right answer – would imply that AI has greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.
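The contrast researchers draw can be sketched in a toy Python snippet; the probability table and function below are illustrative inventions, not anything from OpenAI's systems. Sampling from a probability distribution over next words legitimately yields different outputs on different runs, whereas an arithmetic question has exactly one correct answer:

```python
import random

# Toy next-word table: the words and probabilities are invented for illustration.
NEXT_WORD_PROBS = {
    "the cat sat on the": [("mat", 0.6), ("sofa", 0.3), ("roof", 0.1)],
}

def sample_next_word(context: str) -> str:
    """Pick the next word by sampling the toy model's probability distribution."""
    words, weights = zip(*NEXT_WORD_PROBS[context])
    return random.choices(words, weights=weights, k=1)[0]

# Language generation: repeated runs can legitimately produce different words.
samples = {sample_next_word("the cat sat on the") for _ in range(50)}
print(samples)

# Mathematics: exactly one right answer, so any deviation is simply an error.
assert 2 + 2 == 4
```

This is why varied phrasings of the same essay are acceptable model behaviour, while a model that sometimes answers a sum incorrectly has plainly failed.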

Unlike a calculator that can solve a limited number of operations, AGI can generalise, learn and comprehend. In their letter to the board, researchers flagged AI’s prowess and potential danger, the sources said, without specifying the exact safety concerns noted in the letter.

There has long been discussion among computer scientists about the danger posed by superintelligent machines, for instance, if the machines might decide that the destruction of humanity was in their interest.

Researchers have also flagged work by an “AI scientist” team, the existence of which multiple sources confirmed.

The group, formed by combining earlier “Code Gen” and “Math Gen” teams, was exploring how to optimise existing AI models to improve their reasoning and eventually perform scientific work, one of the sources said.

Mr Altman led efforts to make ChatGPT one of the fastest-growing software applications in history and drew investment – and computing resources – from Microsoft to get closer to AGI.

In addition to announcing a slew of new tools in a demonstration in November, Mr Altman last week teased at a gathering of world leaders in San Francisco that he believed AGI was in sight.

“Four times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I’ve gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honour of a lifetime,” he said at the Asia-Pacific Economic Cooperation summit.

A day later, the board fired Mr Altman. REUTERS

