OpenAI Creates Safety and Security Committee for Responsible AI Development

OpenAI, the prominent artificial intelligence (AI) research lab, has announced the creation of a new Safety and Security Committee to oversee the responsible development of its AI technology. The committee will be led by CEO Sam Altman together with members of the board, and its main objective is to advise the full board on safety and security practices for the company's projects and operations.

This move comes after the dissolution of OpenAI’s “Superalignment” team, which focused on long-term AI risks; its remaining members have been reallocated to other areas of the company, raising concerns about the company’s current trajectory. Ilya Sutskever, the OpenAI co-founder and chief scientist who co-led the team, resigned earlier this month after almost a decade with the company. His co-lead, Jan Leike, also departed, saying that safety culture and processes had taken a backseat to the development of shiny new products at OpenAI.

The committee will include Altman, OpenAI board chair Bret Taylor, and board members Nicole Seligman and Adam D’Angelo. “I would not anticipate that these other committee members would have anywhere close to an equal voice in any decisions,” says Sarah Kreps, a professor of government and director of the Tech Policy Institute at Cornell University.

The committee will play a crucial role in advising the board on safety and security matters, though, as Kreps suggests, Altman’s voice is likely to carry the most weight. OpenAI also plans to include its own technical and policy experts on the committee, including Jakub Pachocki, who succeeded Sutskever as chief scientist, and Aleksander Madry, who oversees OpenAI’s preparedness team.

OpenAI has also revealed plans to consult outside security experts, including John Carlin, a former Justice Department official, and Rob Joyce, a former cybersecurity director at the National Security Agency.

In addition to the safety and security committee, OpenAI announced that it has begun training a new AI model to succeed GPT-4, the current language model powering its ChatGPT chatbot.

While OpenAI has not provided a timeline for the release of the new AI model, the company sees it as a step forward on its path to developing artificial general intelligence (AGI), which refers to AI systems that match or exceed human capabilities.

However, some experts have expressed concerns about OpenAI’s recent focus on AGI development, believing that it may be overshadowing the company’s previously strong emphasis on safety and ethical considerations.

“We’ve seen indications that OpenAI is going in an accelerated direction toward artificial general intelligence,” Kreps says. “It seems to be signaling that there’s less interest in the safety and alignment principles that had been part of its focus earlier.”

OpenAI’s new Safety and Security Committee aims to address these concerns and ensure that safety remains a top priority in the development of AI technology. The committee will spend its first 90 days evaluating and further developing OpenAI’s processes and safeguards, after which it will share its recommendations with the full board; OpenAI says it will then publicly share an update on the recommendations it adopts.


Written By

Jiri Bílek
