OpenAI effectively opened Pandora’s box when it released ChatGPT into the wild, allowing anyone with an internet connection to access its vast knowledge and capabilities. But it also sparked a thirst for a more powerful artificial intelligence (AI), with some individuals clamoring for the next version of the chatbot.
And finally, GPT-4, which stands for Generative Pre-trained Transformer 4, was launched on March 14, 2023, marking a significant milestone as the next stage of GPT’s evolution.
GPT-4 is a powerful large language model created by OpenAI that accepts both text and image inputs and responds with text outputs. It is the successor to GPT-3.5, which powered ChatGPT at its debut in November 2022.
As a large language model (LLM), GPT-4 is designed to outperform its predecessors and is engineered to be better aligned with human values. Moreover, the rumor that it runs on 100 trillion parameters has finally been debunked.
What Can GPT-4 Do?
GPT-4 supports image inputs, code generation, and debugging, and demonstrates a higher level of reasoning, creativity, and conciseness than GPT-3.5. It can also handle roughly 25,000 words of text in a single session, allowing you to work with larger and more complex topics.
Note that its visual/image functionality is not broadly available yet, but the launch demo clearly showed that GPT-4 is powerful enough to deliver it. This feature could seamlessly digitize written content and even aid visually impaired people. While you wait for its release, you can explore GPT-4’s other mind-blowing features, which we covered extensively in our previous article.
OpenAI has also emphasized that GPT-4 is 82% less likely than its predecessor to respond to malicious prompts requesting disallowed content, making it significantly safer.
On top of its previously described capabilities, this powerful language model can accomplish a multitude of other astonishing tasks. These include drafting lawsuits in an instant, inventing a new language complete with a rich vocabulary, functioning as a sophisticated ASCII art generator, ordering pizza on its own, and much more.
And this one deserves its own standalone sentence: GPT-4 can also serve as a supporting tech to help introverts (like me) survive awkward conversations by seamlessly generating responses, in real time, based on what the other person is saying. Love it!
But even with its plethora of fascinating and remarkable features, we have merely begun to uncover GPT-4’s full potential. In fact, we might not even have enough time to do so, as rumors suggest that GPT-5 could make its debut as soon as early 2024.
Can GPT-4 Connect to the Internet?
Yes, GPT-4 can now access the internet via Bing Chat, a ChatGPT plug-in called ‘Browsing’, and other OpenAI-approved plug-ins for the chatbot. Plug-ins are arguably the biggest innovation in ChatGPT yet, as they allow the chatbot (together with GPT-4) to interact with the internet and other third-party information sources beyond its training data.
Although GPT-4 still cannot explore the web entirely on its own, the introduction of plug-ins has greatly expanded its capabilities and will likely revolutionize the way we interact with the internet in the near future.
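For readers curious about what a plug-in actually is under the hood: each plug-in describes itself to ChatGPT through a small manifest file plus an OpenAPI specification of its API, which the model reads to learn what the plug-in can do. Below is a minimal sketch of such a manifest, built as a Python dictionary purely for illustration; the field names roughly follow OpenAI’s published plug-in manifest format at the time of writing, while the plug-in name, URLs, and email address are all hypothetical.

```python
# Illustrative sketch of a ChatGPT plug-in manifest. Real plug-ins serve this as JSON
# at /.well-known/ai-plugin.json on their domain; every name and URL here is made up.
import json

manifest = {
    "schema_version": "v1",
    "name_for_human": "Example Weather",           # name shown to users
    "name_for_model": "example_weather",           # name the model uses to refer to the plug-in
    "description_for_human": "Get current weather for any city.",
    "description_for_model": "Fetch current weather conditions so the assistant can answer weather questions.",
    "auth": {"type": "none"},                      # this hypothetical plug-in needs no authentication
    "api": {
        "type": "openapi",
        "url": "https://example.com/openapi.yaml"  # OpenAPI spec ChatGPT reads to discover the endpoints
    },
    "logo_url": "https://example.com/logo.png",
    "contact_email": "support@example.com",
    "legal_info_url": "https://example.com/legal",
}

print(json.dumps(manifest, indent=2))
```

Once ChatGPT has read a manifest like this, it decides on its own when to call the plug-in’s API during a conversation, which is what lets GPT-4 reach information beyond its training data.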
How Much Does GPT-4 Cost?
The price of using the GPT-4 API depends on which specific model you use and on how many tokens you send as input and GPT-4 generates as output.
Here’s a breakdown of the pricing for each model. Note that 1,000 tokens are roughly equivalent to 750 words.
| GPT-4 Model | Context Length | Input Price (Prompt Tokens) | Output Price (Sampled Tokens) |
| --- | --- | --- | --- |
| Standard GPT-4 and GPT-4-0314 | 8K tokens | $0.03 per 1,000 tokens | $0.06 per 1,000 tokens |
| GPT-4-32K and GPT-4-32K-0314 | 32K tokens | $0.06 per 1,000 tokens | $0.12 per 1,000 tokens |
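To make these numbers concrete, here’s a quick sketch of how you might estimate what a single GPT-4 request would cost. The per-1,000-token rates come from the table above, the 750-words-per-1,000-tokens conversion is the rough rule of thumb OpenAI cites, and the example word counts are hypothetical.

```python
# Rough GPT-4 API cost estimator using the per-1,000-token rates from the table above.
# Rates are USD per 1,000 tokens: (prompt rate, completion rate).
RATES = {
    "gpt-4":     (0.03, 0.06),  # standard 8K-context models
    "gpt-4-32k": (0.06, 0.12),  # 32K-context models
}

def words_to_tokens(words: int) -> int:
    """Very rough conversion: about 750 words per 1,000 tokens."""
    return round(words * 1000 / 750)

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated cost in USD for a single request."""
    prompt_rate, completion_rate = RATES[model]
    return (prompt_tokens / 1000) * prompt_rate + (completion_tokens / 1000) * completion_rate

# Hypothetical example: a 1,500-word prompt that produces a 600-word answer on gpt-4.
cost = estimate_cost("gpt-4", words_to_tokens(1500), words_to_tokens(600))
print(f"Estimated cost: ${cost:.2f}")  # roughly $0.11
```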
If you’re not planning to use the API but still want to experience GPT-4’s superpowers, you can upgrade your free ChatGPT account to ChatGPT Plus for $20 per month (plus tax). Once upgraded, you can select from two variations of GPT-3.5 or the all-powerful GPT-4.
How to Access GPT-4
There are three ways to access the GPT-4 model: upgrading to ChatGPT Plus, joining the OpenAI API waitlist, or applying to its Research Access Program.
Subscribing to ChatGPT Plus is a breeze, but getting approved for API access requires meeting specific criteria just to be considered. In addition to your email address, the API waitlist asks for the name of the company you work for and your organization ID, and you must clearly state how you plan to use the API. OpenAI has also emphasized that developers who contributed to OpenAI Evals will be given priority access.
Moreover, the company is willing to grant GPT-4 access to researchers studying the impacts of artificial intelligence on society through its Research Access Program. Note that merely providing valid information on these waitlists does not guarantee access to the GPT-4 model, as all applications are subject to approval.
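Once you’re off the waitlist, calling GPT-4 looks much like calling GPT-3.5 through the chat completions endpoint. Here’s a minimal sketch, assuming the `openai` Python package (the 2023-era 0.x interface) and an API key stored in the `OPENAI_API_KEY` environment variable; the prompt itself is just a placeholder.

```python
# Minimal GPT-4 chat completion call, assuming approved API access and the
# `openai` Python package (0.x-style interface). The prompt is only an example.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",  # or "gpt-4-32k" if you've been granted the 32K-context model
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize GPT-4's new capabilities in two sentences."},
    ],
    max_tokens=200,
)

print(response["choices"][0]["message"]["content"])
```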
GPT-4’s Parameters Remain a Mystery
OpenAI didn’t disclose GPT-4’s exact parameter count, along with other critical aspects of the LLM, when it launched the model in March 2023.
But one thing is for sure: it is not running on the rumored 100 trillion parameters, as OpenAI CEO Sam Altman himself has reiterated. And just to reassure everyone, he even called the rumor “complete bullshit”.
With the exact parameter count unknown, there are two plausible explanations for how OpenAI may have improved the new LLM’s performance. The first is that the company used GPT-3.5’s existing 175 billion parameters more efficiently rather than expanding them further; Altman has hinted in the past that this was a viable path to take.
The second is that OpenAI did expand the parameter count, just not to the extreme of a hundred trillion. There were hints that the company could have pursued this approach, including its emphasis on GPT-4’s “broader general knowledge.”
But bigger isn’t always better in the realm of LLMs.
One of the best examples of this is Megatron-Turing NLG, an LLM developed by NVIDIA and Microsoft. Despite having roughly three times as many parameters as GPT-3 (530 billion versus 175 billion), it still lags behind OpenAI’s creation, which shows that a larger parameter count doesn’t always determine an LLM’s superiority.
In fact, moving away from ever-bigger models might be the next trend in LLMs, one that GPT-4 may have already embraced.
Meta’s new LLM, called LLaMA, claims to perform better than GPT-3 in key areas while being more than 10x smaller than OpenAI’s model. According to Meta’s announcement, it not only performs better but can even run on a single graphics processing unit (GPU).
Small could indeed be the next BIG thing in LLMs.
Today, LLMs like ChatGPT require advanced servers and massive computing power to run. But in the near future, they may become smaller while maintaining (or even increasing) their capabilities. That could allow research groups, institutions, and even individuals to run advanced AIs on their laptops and even smartphones.
Will it Have Image, Audio, and Video Capabilities?
For now, GPT-4 remains a text-based platform, but its image-input capabilities are expected to roll out gradually. There is no official word yet on whether it will gain audio and video capabilities.
But since it was built as a ‘multimodal model’, meaning it can process various types of media, there is a high chance it might offer audio and video features soon. These capabilities could open up a lot of possibilities, like turning text-based data into rich visuals such as graphics, charts, and even video presentations.
Moreover, you could turn an entire script into a film, a comic book, or even an animated series with literally a few clicks. And we may not need to wait for GPT-5 or 6 to experience this, as other LLM developers could get there first and bring these capabilities much earlier.
Is GPT-4 Capable of Self-Improvement?
GPT-4 can’t self-improve yet; in theory, that capability is reserved for artificial general intelligence (AGI), an AI that can exercise human-level capabilities across numerous areas, including self-learning and self-improvement. And Altman broke a million hearts by clarifying that the company does not have this type of AI yet.
Today, users of software products and services are limited to a specific version: no matter how often or how long the software is used, it does not improve over time unless programmers intervene. AGI, on the other hand, could break free from this limitation by continuously learning and improving through use, much like the human mind evolves throughout our lives.
Will it be Customizable?
Sam Altman did not directly say whether GPT-4 will be customizable to suit individual users’ and companies’ needs. But he has said that consumers should have a variety of AIs to choose from and enough room to personalize their artificially intelligent assistants.
This could range from completely obedient, professional AIs to far less restricted models that can entertain controversial topics. These varying degrees would suit different applications and could make AI, as a whole, a better tool for everyone.
For now, the closest we can get to customizing GPT-4 is equipping ChatGPT Plus with OpenAI-approved plug-ins, which can significantly expand the capabilities of the already powerful LLM.
Related: 16 Ways to Use ChatGPT to Make Life Easier
Limitations and Risks of GPT-4
OpenAI has been upfront in disclosing that GPT-4 remains an imperfect language model and is still susceptible to issues such as hallucinations, social biases, and adversarial prompts or jailbreaking.
Despite strengthened safeguards against these risks, GPT-4 is not completely immune, as users continue to find ways to bypass its guardrails. There is therefore still a risk that the language model could succumb to these problems.
And arguably the biggest problem GPT-4 faces right now is its vulnerability to jailbreaking. Just days after its launch, Alex Albert, a computer science student at the University of Washington, discovered multiple flaws in GPT-4’s safeguards.
It’s worth noting that Albert isn’t simply a “rogue jailbreaker,” but rather someone who has made it his mission to create jailbreaking methods in order to educate AI users and push AI companies to address vulnerabilities in their LLMs.
Some of his most concerning findings include GPT-4’s susceptibility to prompt injection and the possibility of it leaking its own system prompt with minimal effort from attackers. He also creates and shares advanced jailbreaking techniques that highlight existing vulnerabilities in GPT-4 which could potentially lead to serious problems.
Check out: How to Jailbreak ChatGPT
What Jobs Could be Replaced by GPT-4 in the Future?
Customer service representatives, transcriptionists, copywriters, email marketers, proofreaders, and translators are some of the professions that could be replaced by GPT-4 in the near future.
Rowan Cheung, an author who focuses on the latest AI trends, tweeted an interesting result generated by GPT-4 just days after the powerful LLM launched. The viral tweet listed the top 20 jobs most likely to be automated by OpenAI’s language model.
But the most interesting aspect of the post wasn’t the list of jobs, but the human traits GPT-4 could automate along with them: speed and accuracy, communication and empathy, attention to detail, research and organization, creativity and writing, critical thinking and judgment, and more.
Indeed, although GPT-4 has the potential to replace certain jobs and skills, it is essential to consider its existing limitations and risks before fully handing human-centric tasks over to it. And with AI advancing this rapidly, the ‘streamlining phase’ required to overcome those limitations might be shorter than anticipated, which means LLMs could take on these jobs sooner than expected.
Is OpenAI Already Working on GPT-5?
Sam Altman has denied that the company is already developing and training a more advanced large language model called GPT-5, which some even believe could be the world’s first AGI (not us, though).
“We are not and won’t for some time. So in that sense, it was sort of silly.”
OpenAI CEO Sam Altman responding to the question of whether they are already developing GPT-5
He clarified that OpenAI is currently focused on tapping the many potentials of GPT-4 and making it safer for users. The rumor began swirling just a month after GPT-4 was officially released to the public, at a time when everybody was still trying to figure out how to get the most out of the advanced language model.
Conclusion
Whether GPT-4 will trigger another tidal wave of innovation in the AI industry remains to be seen. But one possibility that could play out is gradual (rather than leapfrog) improvement across OpenAI’s future models.
Altman has clearly stated that the company would prefer to lean toward safety, even at the cost of releasing products more slowly. Given the immense potential of LLMs, the next logical step should be to prioritize safety and ensure that these platforms are secure for all users.