OpenAI Shifts Core Values

OpenAI, the creator of ChatGPT, has recently made a subtle but significant change to its core values. The company, which previously listed values such as “thoughtful” and “collaborative,” has replaced those words with “intense and scrappy.” The shift was noticed on the company’s careers page, with snapshots captured by the Internet Archive showing that the change was made between September 25 and October 16.

The new top value listed is “AGI focus,” referring to artificial general intelligence: the pursuit of machine intelligence on par with, or greater than, human intelligence. The updated values emphasize OpenAI’s dedication to building safe and beneficial AGI that will positively impact humanity’s future. Anything that does not contribute to this goal is considered out of scope.

This change in core values raises questions about OpenAI’s direction as a company. The Independent has reached out to OpenAI for comment on the implications of this shift. The update, first reported by Business Insider, comes at a time when the development of AGI is gathering momentum across the industry.

Some experts and academics have been revisiting their timelines for achieving AGI since the launch of ChatGPT last year. Predictions suggest that AGI’s arrival could have a profoundly destabilizing effect on the global economy. Some even fear that it could pose an existential threat to humanity itself. Swedish philosopher Nick Bostrom has warned that AGI would precede a stage called superintelligence, where computer intelligence surpasses human intelligence and development becomes uncontrollable and irreversible.

Bostrom’s thought experiment involving a rogue AI raises concerns about the potential consequences of AGI. He wrote, “The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off.” OpenAI has consistently advocated for a cautious approach to AGI development, recognizing its significant implications for society. They believe AGI could bring about advancements such as curing diseases and boosting the economy.

Last month, OpenAI’s CEO, Sam Altman, caused a stir when he claimed in a Reddit post that his company had achieved AGI “internally.” However, he later clarified that it was a joke and assured the community that such a milestone would not be announced casually on Reddit.

OpenAI’s shift in core values reflects the changing landscape of AGI development. As the industry pushes forward, companies are adapting their approaches, recognizing the need to be agile and resourceful. By embracing intensity and scrappiness, OpenAI seems to be acknowledging the importance of being nimble in order to stay at the forefront of AGI research.

While this change might be unsettling for some, it is essential to remember that OpenAI has consistently emphasized the need for responsible development of AGI. As the pursuit of AGI intensifies, it is crucial for companies and researchers to balance ambition with caution. The future of AGI holds incredible promise, but it also requires careful consideration to ensure its benefits are maximized and potential risks are mitigated.


Written By

Jiri Bílek
