Starting today, all existing OpenAI API developers "with a history of successful payments" can access GPT-4. The company plans to open access to new developers by the end of this month, and after that begin raising rate limits "depending on compute availability."
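For developers who now have access, a GPT-4 call goes through the Chat Completions endpoint. The sketch below only builds the request payload (it does not send anything); the endpoint URL and payload shape follow OpenAI's public API documentation as of mid-2023, and an `OPENAI_API_KEY` environment variable is assumed for the real request.

```python
import json
import os

# Minimal sketch of a Chat Completions request for GPT-4. The payload
# shape follows OpenAI's public API docs as of mid-2023; re-check the
# current docs before relying on it.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str) -> dict:
    """Build the JSON body for a single-turn GPT-4 chat completion."""
    return {
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.7,
    }

payload = build_request("Summarize the GPT-4 launch in one sentence.")
print(json.dumps(payload, indent=2))

# To actually send it, POST `payload` to API_URL with an
# "Authorization: Bearer <key>" header, e.g.:
headers = {"Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}"}
```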
“Millions of developers have requested access to the GPT-4 API since March, and the range of innovative products using GPT-4 is growing every day,” OpenAI writes in a blog post. “We envision a future where chat models can support any use case.”
GPT-4 can generate text (including code) and accepts both images and text as input, an improvement over its predecessor, GPT-3.5, which only accepted text. The model performs at a "human level" on various professional and academic benchmarks. Like OpenAI's previous GPT models, GPT-4 was trained on publicly available data, including public web pages, as well as data licensed by OpenAI.
The image-understanding capability is not yet available to all OpenAI customers. For now, OpenAI is testing it with a single partner, Be My Eyes, and there is no word yet on when it will open it up to a wider customer base.
It is worth noting that, like even the best generative AI models to date, GPT-4 is not perfect. It "hallucinates," inventing facts and making reasoning errors, sometimes with great confidence. It also does not learn from its mistakes, and it can fail at hard problems, such as by introducing security vulnerabilities into the code it generates.
Looking ahead, OpenAI says it will allow developers to fine-tune GPT-4 and GPT-3.5 Turbo, one of its more recent but less capable text-generating models (and one of the original models powering ChatGPT), on their own data, as has long been possible with several of its other text-generating models. That capability should arrive later this year, according to OpenAI.
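OpenAI had not yet published the fine-tuning format for its chat models when this was announced, but its fine-tuning endpoints take training data as JSONL. The sketch below prepares data in a chat-message JSONL style; treat the exact schema (a `messages` list per line) as an assumption to verify against the current documentation.

```python
import json

# Hypothetical sketch: preparing a JSONL training file for fine-tuning
# a chat model on your own data. Each line is one training example; the
# chat-message schema here is an assumption, not a confirmed format.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You answer support questions tersely."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Use Settings > Security > Reset password."},
        ]
    },
]

def to_jsonl(rows: list) -> str:
    """Serialize training examples as JSONL, one JSON object per line."""
    return "\n".join(json.dumps(row) for row in rows)

print(to_jsonl(examples))
```

The resulting file would then be uploaded to the fine-tuning API before starting a training job.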
Since the introduction of GPT-4 in March, competition in the field of generative AI has intensified. Anthropic recently expanded the context window for Claude, its flagship text-generating AI model that is currently in preview, from 9,000 to 100,000 tokens.
GPT-4 previously held the context-window crown at 32,000 tokens. Generally speaking, models with small context windows tend to "forget" the content of even very recent conversations, leading them to veer off topic.
Additionally, OpenAI announced today that it is making the DALL-E 2 and Whisper APIs generally available. DALL-E 2 is OpenAI's image-generating model, while Whisper is its speech-to-text model. The company also said it plans to retire older API models in order to "optimize its compute capacity" (in recent months, thanks in large part to the explosive popularity of ChatGPT, OpenAI has struggled to keep up with demand for its generative models).
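As a quick orientation to those two APIs, here is a sketch of a DALL-E 2 image-generation request body for the `/v1/images/generations` endpoint. The parameter names follow OpenAI's public documentation as of 2023 and should be re-checked against the current docs; nothing is sent over the network here.

```python
import json

# Sketch of a DALL-E 2 request body for /v1/images/generations.
# Parameter names (prompt, n, size) follow OpenAI's 2023-era docs.
def build_image_request(prompt: str, n: int = 1, size: str = "1024x1024") -> dict:
    """Build the JSON body for an image-generation request."""
    return {"prompt": prompt, "n": n, "size": size}

print(json.dumps(build_image_request("a watercolor fox"), indent=2))

# Whisper works differently: audio is posted as multipart form data to
# /v1/audio/transcriptions with model="whisper-1".
```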
Starting January 4, 2024, some of OpenAI's older models, notably GPT-3 and its derivatives, will no longer be available, replaced by new "base GPT-3" models that are presumably more compute-efficient. Developers using the older models will need to manually upgrade their integrations by January 4, and those who wish to keep using fine-tuned old models beyond that date will need to fine-tune replacements on top of the new base GPT-3 models.