For years, Google’s search engine reigned supreme, and the way we seek answers to our most interesting questions on the web has stayed mostly the same.
However, with the advent of ChatGPT, the game has been redefined. This AI chatbot changed how we search by generating quick, human-like answers to our questions. In fact, we often don’t even have to click any links anymore!
But, as it turns out, ChatGPT has serious flaws. Users soon discovered that it tends to produce what some call “fluent bullshit”: information that appears credible but isn’t factual. And it may take time before its creator, OpenAI, can solve this problem.
Now, the question is: Can we have the best of both worlds – Google’s superior search capabilities and ChatGPT’s instantaneous responses?
Enter retrieval-augmented generation, a promising technology that may be the answer we’ve been waiting for. In this article, we will delve deeper into this cutting-edge tech to discover if it truly has the potential to be the ultimate Google (and ChatGPT) killer.
What is Retrieval Augmented Generation?
Retrieval-augmented generation is a technology that utilizes a large language model (LLM) to process both queries and relevant documents to generate responses.
As a refresher, an LLM is a neural network trained on massive amounts of text, which it draws on to respond to users’ queries and other requests.
What retrieval-augmented generation does is search for documents relevant to a specific query and feed them to the LLM alongside the question itself. This means it can generate factual, accurate answers while maintaining a ChatGPT-like conversational output.
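The retrieve-then-generate loop described above can be sketched in a few lines. This is a minimal illustration, not a production pipeline: the keyword-overlap retriever stands in for a real vector search, and the `retrieve` and `build_prompt` helpers are hypothetical names, not part of any real RAG library.

```python
def retrieve(query, corpus, top_k=2):
    """Naive retriever: rank documents by word overlap with the query.
    Real systems use dense vector search instead."""
    query_words = set(query.lower().split())

    def score(doc):
        return len(query_words & set(doc.lower().split()))

    return sorted(corpus, key=score, reverse=True)[:top_k]

def build_prompt(query, documents):
    """Prepend the retrieved documents so the LLM can ground its answer
    in them rather than relying only on its fixed internal memory."""
    context = "\n".join(f"- {d}" for d in documents)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "RETRO is a retrieval-enhanced language model from DeepMind.",
    "Atlas is a retrieval-augmented model developed by Meta.",
    "Bananas are rich in potassium.",
]

query = "Who developed the Atlas retrieval model"
docs = retrieve(query, corpus)
prompt = build_prompt(query, docs)
# `prompt` would then be sent to an LLM for generation.
```

The key design point is the last step: instead of asking the model to answer from memory alone, the prompt carries fresh, query-specific documents, which is what lets the answer stay factual.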
While the execution of this technology at scale would be complex, it holds the promise of revolutionizing the way we access information. Meta’s Atlas and DeepMind’s RETRO are some of the biggest ventures in this emerging field.
Moreover, this tech can unlock the full potential of language models and bridge the gap between the vast expanse of knowledge on the internet and our ability to access it.
Why Won’t Google Swiftly Adopt It?
The potential of retrieval-augmented generation is undeniable, but while search engines have the resources to deploy this tech, it may take some time before we can actually see it.
Their business models are built upon users clicking on ads placed alongside search results. But since LLMs deliver text that directly answers a query, it is unclear where ads would fit into the product.
This is a problem that Google needs to solve before fully embracing this technology and replacing the traditional way of finding answers on the web. However, for startups or even second-tier players like Microsoft’s Bing, the stakes may not be as high, and they may be more willing to take bolder steps.
But Google has already built massive moats to protect its business from potential threats. Among them are the Chrome web browser and the Android mobile operating system, which funnel millions of users to its search engine.
Furthermore, its extensive network of advertisers and high-tech ad system enables it to tap into users’ attention in a more profitable way than its rivals.
This gives it the financial leverage to outbid competitors for traffic, convincing browser makers to set Google as their default search engine. These multiple layers of protection make it a daunting task for challengers to usurp Google.
Fixed Memories: LLM’s Achilles Heel
ChatGPT is powered by a large language model, making it adept at responding to and solving queries in a more human-like fashion.
While the tool excels at basic questions, more complex or specialized requests may result in a misguided response. In fact, most of today’s LLMs (not just ChatGPT) can confidently generate fictional and inaccurate answers. Yes, algorithms can be confidently wrong, too.
Now, let’s take a look at LLMs on a slightly technical level.
GPT-3, ChatGPT’s older sibling, has 175 billion parameters; stored as 16-bit floating-point numbers (two bytes each), they would take up 350GB. That is more than enough to hold the entire Wikipedia, which takes up only around 150GB.
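The 350GB figure follows directly from the parameter count, as this quick back-of-the-envelope calculation shows:

```python
params = 175_000_000_000   # GPT-3's parameter count
bytes_per_param = 2        # 16-bit (half-precision) floats are 2 bytes each

storage_gb = params * bytes_per_param / 1_000_000_000  # decimal gigabytes
print(storage_gb)  # 350.0
```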
This serves as a reminder of the vast potential for language models to store and disseminate knowledge seamlessly.
However, while LLMs may possess a formidable amount of stored knowledge, it still pales compared to the massive information readily available on the good-old internet.
Some estimates suggest that the total amount of data on the web has already reached a staggering 5 billion GB. Moreover, today’s search engines can access the web’s ocean of data and explore its deepest parts to provide outputs to users’ questions.
LLMs, meanwhile, despite their impressive capabilities, are stuck with a fixed, limited memory, which constrains their ability to generate fact-based, in-depth answers.