On March 9, 2023, during the AI in Focus – Digital Kickoff event in Germany, Andreas Braun, CTO of Microsoft Germany, casually mentioned that GPT-4 might come out the following week. The rumor turned out to be true: OpenAI released GPT-4 on March 14. With its multimodal ability and more human-like responses, the successor to GPT-3 is currently available to ChatGPT Plus users. It does not, however, have the video modality the rumors had previously suggested.
How powerful is GPT-4?
As the successor to GPT-3, GPT-4 is naturally more powerful. One of the key differences between the two is the number of parameters they were trained with. Parameters are the numerical values that determine how a neural network processes data. GPT-3 was trained with 175 billion parameters, and many tech enthusiasts speculated that GPT-4 might be trained with 100 trillion.
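To make the term concrete, here is a toy sketch of what "parameter count" means: every fully connected layer in a network contributes one weight per input-output pair plus one bias per output, and the model's size is just the sum over all layers. GPT-3's 175 billion parameters are the same bookkeeping at a vastly larger scale (and with a transformer architecture rather than this simple stack).

```python
def linear_layer_params(n_in: int, n_out: int) -> int:
    """Weights (n_in x n_out) plus one bias per output unit."""
    return n_in * n_out + n_out

def total_params(layer_sizes: list[int]) -> int:
    """Total parameter count for a stack of fully connected layers."""
    return sum(
        linear_layer_params(n_in, n_out)
        for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
    )

# A toy 3-layer network: 784 -> 128 -> 64 -> 10
print(total_params([784, 128, 64, 10]))  # 109386
```

Scaling the same arithmetic to billions of parameters is what makes training runs for models like GPT-3 and GPT-4 so expensive.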
In the technical report, however, OpenAI declined to disclose the parameter count, architecture, or training details, citing the competitive landscape and safety concerns, so the speculation remains unconfirmed. What is clear is that GPT-4 can handle far more complex and diverse language tasks than its predecessors. OpenAI says it spent six months making GPT-4 safer and more aligned, and that the model is 82% less likely than GPT-3.5 to respond to requests for disallowed content.
Another difference is that GPT-4 can generate text that mimics human behavior and speech patterns more accurately. It also handles language translation, text summarization, and other tasks in a more versatile and adaptable manner. The new model can process up to about 25,000 words of input, while ChatGPT could handle only around 3,000, and it shows stronger reasoning than previous GPT models.
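The larger input limit matters in practice: a model that accepts roughly 25,000 words can take a long report in a single request, while a ~3,000-word limit forces the caller to split the text into chunks first. A minimal sketch of that chunking step (the word counts are illustrative approximations of the limits, which are actually measured in tokens, not words):

```python
def split_into_chunks(text: str, max_words: int) -> list[str]:
    """Split text into pieces of at most max_words words each."""
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

document = "word " * 10_000  # a 10,000-word stand-in document

print(len(split_into_chunks(document, 3_000)))   # 4 chunks under the old limit
print(len(split_into_chunks(document, 25_000)))  # 1 chunk fits GPT-4's limit
```

Fewer chunks means fewer round trips and, more importantly, that the model sees the whole document in one context rather than fragments.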
Using the new model
Modality refers to the type of input a language model works with. The biggest advancement is that GPT-4 adds image modality: it can accept images as input and reason about their content, though it still responds only in text and cannot generate images. This makes it more capable than GPT-3 and GPT-3.5, both of which operate in a single modality, which is why ChatGPT can only understand and reply with text. With the new model, you can show it an image and ask questions about it. In the introductory livestream, for example, GPT-4 was shown a picture of balloons tied with strings and asked what would happen if the strings were cut. It answered, “The balloons would fly away.”
GPT-4 is available to ChatGPT Plus users with a $20-per-month subscription, with usage capped for the time being. OpenAI has revealed that Microsoft Bing has been running on GPT-4, which had been widely speculated given that Microsoft is OpenAI's largest investor. OpenAI also announced that Duolingo uses the new model for conversation practice, while Khan Academy is building an online tutor with it.
Competition in the industry
AI technology is advancing from image generation to video generation. Meta and Google showed off text-to-video models last autumn; though the clips were low resolution and looked distinctly AI-generated, they marked a big jump in the technology. Microsoft, for its part, is pushing generative AI directly into everyday productivity work through its newly introduced Microsoft 365 Copilot.
Discover a new way of working with Microsoft 365 Copilot—next-generation AI capabilities embedded in the Microsoft 365 apps you use every day. Learn more: https://t.co/fqTtN1tRVQ
— Microsoft 365 (@Microsoft365) March 16, 2023
Google is also trying to get back on track by implementing AI in its search engine and promoting its Bard language model. Beyond that, it published a research paper last month introducing Muse, an image generation model much faster than DALL-E 2. Meanwhile, Meta recently released its own language model, LLaMA, and Chinese companies such as Baidu have joined the fray with their own chatbots, like ERNIE Bot.
Currently, Microsoft is dominating the field. Google has long used AI quietly, embedding it in Google Lens, Maps, and nearly every other service to enhance their functionality. Microsoft's use of it is flashier, and it has certainly caught people's eye. With the release of GPT-4, the company continues to strengthen its position in the ongoing AI competition.
2023 is turning out to be the year of AI, with advancements from image generation to language models, and now video generation. Some are already calling GPT-4 an early step toward the singularity, a hypothetical future point at which technological growth becomes uncontrollable and poses a threat to human life and civilization.
All that being said, GPT-4 still suffers from the same problems as previous models: it can produce misinformation and hallucinations, though at a much lower rate than its predecessors. OpenAI has also invited developers to join a waitlist for access to the GPT-4 API.
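For developers who get off the waitlist, a GPT-4 request follows OpenAI's chat completions format: a model name plus a list of role/content messages. The sketch below only constructs the request body (sending it requires an API key and granted access, so no network call is made); `build_chat_request` is an illustrative helper, not part of any official SDK.

```python
def build_chat_request(prompt: str, model: str = "gpt-4") -> dict:
    """Assemble a chat-completion request body for the given prompt."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

request = build_chat_request("Summarize this article in one sentence.")
print(request["model"])          # gpt-4
print(len(request["messages"]))  # 2
```

The same message structure works for GPT-3.5 by swapping the model name, which is what makes upgrading an existing chatbot to GPT-4 largely a one-line change.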
YouTube: GPT-4 Developer Livestream