- OpenAI has introduced a five-level scale to measure AI progress towards AGI.
- The scale ranges from chatbots like ChatGPT to AI managing entire organizations.
- Ethical and safety concerns arise as AGI development progresses.
OpenAI has unveiled a new scale to chart the progress of its AI models towards artificial general intelligence (AGI), according to a recent report from Bloomberg.
AGI refers to AI systems with human-like intelligence, capable of outperforming humans at most economically valuable tasks.
This scale aims to provide a clear framework for tracking advancements in this ambitious goal.
The Five Levels of AI Progress
OpenAI’s scale breaks down AI progress into five distinct levels. Currently, AI systems like ChatGPT are classified at Level 1. OpenAI is reportedly on the cusp of reaching Level 2, where an AI system could solve basic problems at the level of a human with a PhD.
This might be a nod to the upcoming GPT-5, which OpenAI CEO Sam Altman has described as a significant leap forward.
As we move up the scale, Level 3 represents AI that can handle tasks autonomously without human supervision.
At Level 4, the AI would invent new ideas and concepts. Finally, at Level 5, the AI would have the capability to manage tasks not just for individuals but for entire organizations.
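Because the tiers form a strictly ordered progression, they can be sketched as a simple ordered enum. The member names below are shorthand summaries of each tier as described above, not OpenAI's published terminology:

```python
from enum import IntEnum

class AGILevel(IntEnum):
    """The five reported tiers, ordered by capability (labels are illustrative)."""
    CHATBOTS = 1       # conversational AI, where systems like ChatGPT sit today
    REASONERS = 2      # basic problem-solving at the level of a PhD-holding human
    AGENTS = 3         # handles tasks autonomously without human supervision
    INNOVATORS = 4     # invents new ideas and concepts
    ORGANIZATIONS = 5  # manages work for entire organizations

# An IntEnum makes the ordering explicit, so progress is directly comparable.
current = AGILevel.CHATBOTS
assert current < AGILevel.REASONERS  # Level 2 has not yet been reached
```

Using `IntEnum` rather than a plain `Enum` lets the levels be compared and counted directly, which mirrors how a graded scale like this would be tracked.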
The Importance of the Scale
A tiered scale makes sense for OpenAI and other developers: a structured framework helps track progress internally and could set a common standard for evaluating AI models across the industry. However, achieving AGI is a monumental task that won’t happen overnight.
While some, including Altman, suggest AGI could be realized within five years, expert opinions on the timeline vary widely. The hurdles in computing power, financial investment, and the underlying technology remain immense.
Ethical and Safety Concerns
With the race towards AGI, ethical and safety concerns come to the forefront. The potential impact of AGI on society is a major topic of debate.
OpenAI’s recent actions have raised eyebrows, particularly the dissolution of its safety team after the departures of co-founder Ilya Sutskever and researcher Jan Leike, the latter of whom cited concerns over the company’s safety culture.
Despite these concerns, OpenAI’s new scale aims to set concrete benchmarks for its models and those of its competitors.
By offering a structured approach, it helps prepare society for the potential impacts of AGI.
While the introduction of this scale is a step forward, the path to AGI remains fraught with challenges. The ethical implications, the need for robust safety measures, and the vast technological hurdles all mean that society must proceed with caution.
OpenAI’s scale provides a framework, but it’s the implementation and oversight that will determine how safely and effectively we reach the ultimate goal of AGI.