Debunking GPT-5 Rumors: OpenAI's Current Focus on Codex and DALL-E
If you are following the latest developments in artificial intelligence, you have probably heard of GPT-4, the powerful language model developed by OpenAI that can generate coherent and diverse text on almost any topic. You might also be wondering what’s next for OpenAI, and whether the company is working on an even more advanced successor, dubbed GPT-5.
Well, according to Sam Altman, the CEO and co-founder of OpenAI, the answer is no. In a recent event at MIT, Altman revealed that the company is not currently training GPT-5 and “won’t for some time.” He also dismissed a recent open letter that urged OpenAI and other labs to pause the development of AI systems “more powerful than GPT-4,” saying that it was “missing most technical nuance” and “sort of silly.”
Why is OpenAI not training GPT-5?
Altman did not give a specific reason why OpenAI is not training GPT-5, but he hinted that it is not a priority for the company at the moment. He said that OpenAI is “doing other things on top of GPT-4” that he believes have more potential, and that raise safety issues of their own to address.
One of these things is Codex, a system that can generate and execute computer code from natural language commands. Codex is powered by a descendant of GPT-3 that has been fine-tuned on a large corpus of publicly available code in many programming languages. Codex can perform tasks such as creating websites, designing games, and writing scripts.
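To make this concrete, here is a minimal sketch of calling a Codex model through the openai Python package (pre-1.0 API style). The model name code-davinci-002 and the prompt are illustrative assumptions, and Codex model availability has changed over time, so treat this as a sketch rather than a definitive recipe.

```python
import os

import openai

# Assumes an API key is available in the OPENAI_API_KEY environment variable.
openai.api_key = os.getenv("OPENAI_API_KEY")

# Ask a Codex model to complete a function from a natural language comment.
# code-davinci-002 is used here as an illustrative model name.
response = openai.Completion.create(
    model="code-davinci-002",
    prompt=(
        "# Python 3\n"
        "# Return the string s reversed.\n"
        "def reverse_string(s):\n"
    ),
    max_tokens=64,
    temperature=0,    # deterministic output suits code generation
    stop=["\n\n"],    # stop once the function body is complete
)

print(response.choices[0].text)
```

Setting temperature to 0 makes the completion deterministic, which is usually what you want when the output has to be syntactically valid code.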
Another system that OpenAI is working on is DALL-E, which can generate realistic images from text descriptions. DALL-E is based on a 12-billion-parameter version of GPT-3 trained on a large dataset of image-caption pairs. DALL-E can create images of novel concepts, such as “an armchair in the shape of an avocado” or “a snail made of a harp.”
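DALL-E is likewise available through an API. The sketch below uses the Images endpoint of the same pre-1.0 openai package; the prompt and image size are illustrative, and the endpoint returns a temporary URL rather than the image bytes themselves.

```python
import os

import openai

# Assumes an API key is available in the OPENAI_API_KEY environment variable.
openai.api_key = os.getenv("OPENAI_API_KEY")

# Request a single generated image from the Images endpoint.
response = openai.Image.create(
    prompt="an armchair in the shape of an avocado",
    n=1,             # number of images to generate
    size="512x512",  # supported sizes: 256x256, 512x512, 1024x1024
)

# Each result includes a short-lived URL pointing at the generated image.
print(response["data"][0]["url"])
```

You would then download the image from the returned URL before it expires.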
Both Codex and DALL-E demonstrate the versatility and creativity of OpenAI’s generative models, as well as the challenges and risks that come with such powerful AI systems. For example, Codex could be used to automate parts of software development, but it could also be used to create malicious code or exploit vulnerabilities. Similarly, DALL-E could be used to enhance visual communication, but it could also be used to create fake or misleading images.
What are the safety issues of GPT-4 and beyond?
Altman acknowledged that GPT-4 and its derivatives have “all sorts of safety issues” that are important to address and that were “totally left out” of the open letter that called for a pause in AI development. He did not elaborate on what these issues are, but some possible examples are:
Alignment: How can we ensure that GPT-4 and future systems align with our values and goals, and do not harm us intentionally or unintentionally?
Reliability: How can we ensure that GPT-4 and future systems are robust and trustworthy, and do not produce errors or failures that could have negative consequences?
Accountability: How can we ensure that GPT-4 and future systems are transparent and accountable, and do not evade or manipulate our oversight or regulation?
Fairness: How can we ensure that GPT-4 and future systems are fair and inclusive, and do not discriminate or marginalize certain groups or individuals?
Privacy: How can we ensure that GPT-4 and future systems respect our privacy and autonomy, and do not violate or exploit our personal data or preferences?
These are some of the questions that researchers, policymakers, and society at large need to grapple with as AI systems become more powerful and pervasive. Altman said that OpenAI is committed to building safe and beneficial AI for humanity, but he also admitted that there is no easy solution or consensus on how to achieve this.
What’s next for OpenAI and AI in general?
Altman did not reveal any concrete plans for what OpenAI will do next after GPT-4, but he hinted that the company will continue to push the boundaries of AI research and innovation. He said that OpenAI’s mission is to “create artificial general intelligence (AGI)”, AI that can perform any intellectual task that humans can, and that its longer-term vision is to “create artificial superintelligence (ASI)”, AI that surpasses human intelligence in all domains.
As OpenAI pursues these ambitions, it is crucial for researchers, policymakers, and society at large to engage in the discussions and collaborations needed to ensure that AI is developed and used in a responsible and ethical manner.