OpenAI effectively opened Pandora’s box when it released ChatGPT into the wild, allowing anyone with an internet connection to access its vast knowledge and capabilities. But it also sparked a thirst for a more powerful artificial intelligence, with some individuals clamoring for the next version of the chatbot.
And finally, GPT-4, which stands for Generative Pre-trained Transformer 4, was launched on March 14, 2023, marking a significant milestone as the next stage of GPT’s evolution.
GPT-4 is a powerful large language model created by OpenAI that accepts both text and image inputs and responds with text outputs. It is the successor to GPT-3.5, which powered ChatGPT at its debut in November 2022.
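To make that input/output shape concrete, here is a minimal sketch of what a request to a chat-style API accepting mixed text and image input might look like. It is modeled on OpenAI's chat completions request format; the prompt and image URL are placeholders, and the payload is only built here, not sent.

```python
import json

def build_gpt4_request(prompt, image_url=None, model="gpt-4"):
    """Build a chat-completions-style request payload.

    A text-only prompt is passed as a plain string; adding an image
    switches the message content to a list of typed parts
    (one "text" part plus one "image_url" part).
    """
    if image_url is None:
        content = prompt
    else:
        content = [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ]
    return {"model": model, "messages": [{"role": "user", "content": content}]}

payload = build_gpt4_request(
    "What is shown in this picture?",
    image_url="https://example.com/photo.png",
)
print(json.dumps(payload, indent=2))
```

The key point the sketch illustrates is that image input changes only the message content, while the response side stays text.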
The AI language model will focus on enhanced data management rather than an increase in parameters.
As an LLM, GPT-4 is designed to outperform its predecessors and to be more aligned with human values. Moreover, rumors that it would have 100 trillion parameters have finally been debunked.
What Can GPT-4 Do?
How Much Does GPT-4 Cost?
How to Access GPT-4
Why GPT-4 Doesn’t Have 100 Trillion Parameters
Sam Altman himself debunked this rumor and clarified that GPT-4 wouldn’t be as gigantic (in parameters, at least) as people like to expect.
A parameter is one of the internal values a model learns during training, and parameter count is one of the factors that determines an LLM’s performance.
He even called this rumor “complete bullshit” and said people are “begging to be disappointed.”
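To make “parameter” concrete: in a neural network, the parameters are the learned weights and biases. The sketch below counts them for a small stack of fully connected layers; the layer sizes are made up purely for illustration.

```python
def count_dense_params(layer_sizes):
    """Count weights + biases across fully connected layers.

    A layer mapping n_in inputs to n_out outputs contributes
    n_in * n_out weights plus n_out biases.
    """
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out
    return total

# A toy 3-layer network: 784 -> 256 -> 64 -> 10
print(count_dense_params([784, 256, 64, 10]))
```

GPT-3’s 175 billion parameters are this same kind of count, just scaled up by many orders of magnitude and spread across transformer layers rather than a simple dense stack.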

Unfortunately, I’m seeing a lot of content creators (who brand themselves as experts) still peddling this BS, which only adds fuel to the fire.
But the CEO hinted that the upcoming neural network may not even focus on expanding its parameter count. Instead, it may concentrate on managing a GPT-3-scale model (175 billion parameters) more efficiently to bring better results to users.
Bigger isn’t always better in the realm of large language models.
One of the best examples of this is Megatron-Turing NLG, an LLM developed by NVIDIA and Microsoft.
Despite having roughly three times as many parameters as GPT-3, it still lags behind OpenAI’s creation, which shows that a larger parameter count doesn’t always make a superior LLM.
In fact, moving away from ever-larger models might be the next trend in LLMs, one that GPT-4 could adopt.
Meta’s new LLM, called LLaMA, claims to perform better than GPT-3 in key areas while being 10x smaller than OpenAI’s model. According to Meta’s announcement, it not only performs better but can even run on a single graphics processing unit (GPU).
Small could indeed be the next BIG thing in LLMs.
Today, LLMs like ChatGPT require advanced servers and massive computing power to run. But in the near future, they may be much smaller while maintaining (and even increasing) their capacities.
This could allow research groups, institutions, and even individuals to run advanced AIs on their laptops and even smartphones.
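A back-of-the-envelope calculation shows why size matters for hardware. Assuming 16-bit weights (2 bytes per parameter, a standard fp16 assumption; activations and overhead are extra), just holding a model’s weights in memory requires:

```python
def model_memory_gb(n_params, bytes_per_param=2):
    """Approximate memory (GB) needed just to store the weights,
    assuming 16-bit (2-byte) parameters."""
    return n_params * bytes_per_param / 1e9

# GPT-3-scale model: 175 billion parameters -> server-class hardware
print(round(model_memory_gb(175e9)))
# A 13-billion-parameter model (LLaMA's GPT-3-beating size) -> one high-end GPU
print(round(model_memory_gb(13e9)))
```

Roughly 350 GB versus 26 GB: the first needs a multi-GPU server, while the second fits on a single accelerator, which is the practical meaning of “small could be the next big thing.”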
Will it Have Image, Audio, and Video Capabilities?
Altman said that GPT-4 would remain a text-based platform. But in the same interview, he shared that LLMs would soon aim to be ‘multimodal,’ meaning able to deliver text, images, audio, and video on one platform.
It could bring a lot of possibilities, like turning text-based data into rich visuals such as graphics, charts, and even video presentations.
Moreover, you could even turn an entire script into a film, a comic book, or a cartoon series at the push of a button.
And we may not need to wait for GPT-5 or 6 before we can experience them, as other LLM developers could get ahead and bring these capabilities much earlier.
Is GPT-4 Capable of Self-Improvement?
GPT-4 won’t be an artificial general intelligence (AGI), and Altman broke a million hearts by clarifying that the company does not have this type of AI yet.
AGI refers to an AI that can exercise human-level capabilities in numerous areas, including self-learning and improvement.
Today, software products and services are limited to a specific version: no matter how frequently or how long users run them, they do not improve over time unless programmers intervene.
On the other hand, AGI would break free from these limitations by having the ability to continuously learn and improve through use, much like the human mind evolves throughout our lives.
Will it be Customizable?
Sam Altman did not directly say whether GPT-4 would be customizable, but he encouraged the tech industry to develop such AI.
According to him, a “personalized” AI will be critical to cater to the different needs of businesses and individuals regarding automation.
The CEO said consumers should have a variety of AIs to choose from, ranging from completely obedient, professional AIs to the boldest ones, which can spew out unrestricted suggestions.
These varying degrees will certainly fit into different applications, which can make AI, as a whole, a better tool for everyone.
Conclusion
Whether GPT-4 will trigger another tidal wave of innovation in the AI industry remains to be seen.
But one possibility that could play out is gradual (rather than leapfrog) improvement across OpenAI’s future models.
Sam Altman clearly stated that they would rather err on the side of safety, even at the cost of releasing products at a slower rate.
Given the immense potential of LLMs, the next logical step should be to prioritize safety and ensure that these platforms are secure for all users.