However, as the viral chatbot continues its rapid rise, some question the quality and originality of its generated content. Should you use ChatGPT-generated content? These emerging arguments opened the door for promising new tech, pitting AI against AI.
But this goes beyond just ChatGPT, since AI detection tools target the whole spectrum of LLM-based chatbots.
Will this hinder LLM-based artificial intelligence? Let’s dive deeper.
The First GPT-3 AI Detection Tool
Originality.AI claims to be the first AI detection tool trained to identify content generated by GPT-3, an autoregressive language model that uses deep learning to produce human-like responses. But wait, that's not all it can do.
This detection tool can also check for plagiarism and estimate the percentage of AI content from other GPT variants, including GPT-2, GPT-Neo, and GPT-J. Its founder and CEO, Jonathan Gillham, even tested the tool against GPT-3.5, the new model behind ChatGPT, to determine whether it can keep up with the fast-progressing chatbot.
His experiments covered three models: GPT-3, GPT-3.5, and ChatGPT. The results revealed that the tool detected AI-generated text across the GPT variants with an average accuracy of 99.41%.
Of the three models, GPT-3 had the highest average score at 99.95%, followed by GPT-3.5 at 99.65% and ChatGPT at 98.65%. That GPT-3.5, an advanced version of GPT-3, scored slightly lower is to be expected, and Gillham attributed ChatGPT's lowest result to its personalization feature.
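For reference, the reported overall figure is consistent with a simple arithmetic mean of the three per-model scores (truncated, not rounded, to two decimals):

```python
# Per-model detection accuracies reported in Gillham's experiment (%).
scores = {"GPT-3": 99.95, "GPT-3.5": 99.65, "ChatGPT": 98.65}

# Simple arithmetic mean across the three models.
average = sum(scores.values()) / len(scores)

print(f"{average:.4f}")  # prints 99.4167, i.e. 99.41% when truncated
```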
These impressive results demonstrate Originality.AI’s ability to effectively detect AI-generated content, even from models with different combinations of prompts and personalization. But is this enough to make ChatGPT powerless?
Can Originality.AI Fully Outsmart ChatGPT?
According to Gael Breton, co-founder at Authority Hacker, web publishers may have to deal with AI-generated content soon. To test the capabilities of the AI detection tool Originality.AI, Breton conducted a series of experiments involving different types of AI content.
In the first test, Originality.AI accurately identified machine-generated content that had been copied and pasted straight from a prompt asking for a short article about affiliate marketing mistakes. The tool also performed well in the second test, correctly identifying AI-made content that a human writer had edited to fix errors.
The third test, involving an article written by a freelance writer, resulted in Originality.AI flagging a small percentage of the text as AI content. Breton speculated that this may have been due to the use of the well-known writing assistant Grammarly.
However, in the fourth test, in which Breton used an AI language model to write an article in the style of SEO expert Brian Dean, the tool didn't perform as well as in the previous experiments. While the output was convincing, the content itself was only average.
These tests suggest that Originality.AI is generally effective at identifying AI content, even in cases where humans have revised the output. However, it still has some weaknesses, as demonstrated by the fourth test. Web publishers should be aware of these limitations, especially since the tool still has room for improvement.
Another Twitter user, Royce, also shared his results from testing Originality.AI in a reply. He first prompted ChatGPT to create a blog post, written from an expert researcher's perspective, about why space is fun to explore.
According to him, the tool will mark your content as plagiarized if the settings are left at their defaults. However, if you know how to tweak them, the result will come back as original.
Gillham is well aware of the challenges Originality.AI faces, having experienced them first-hand during his experiments. He also has solutions in mind to improve the tool, such as retraining the model. Who knows, it might catch up soon.
OpenAI is Proactively Working on AI Detection in ChatGPT
Believe it or not, OpenAI has a trick up its sleeve with a new project that detects ChatGPT-generated content. Yes, you read that right!
OpenAI is working to align the use of ChatGPT with human values, specifically to avoid the production of plagiarized content (for both academic and non-academic purposes), the mass generation of propaganda, and more, through its watermark project.
As we all know, a watermark is a nearly transparent logo commonly overlaid on images. The same idea applies to ChatGPT, except its watermark is invisible to the reader: it relies on cryptography, working like a secret code hidden in the text itself.
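OpenAI has not published its watermarking scheme, but publicly discussed proposals for LLM watermarks work by using a secret key to pseudorandomly bias the model's word choices, so that anyone holding the key can later test whether a text shows that bias. Here is a minimal sketch of that general idea with a toy vocabulary; the key, the green/red vocabulary split, and the stand-in "model" are all hypothetical illustrations, not OpenAI's implementation:

```python
import hashlib
import hmac
import random

SECRET_KEY = b"demo-key"  # hypothetical secret held only by the provider
VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary

def green_list(prev_token: str) -> set:
    """Pseudorandomly mark half the vocabulary 'green', keyed on the secret
    and the previous token; without the key the split looks random."""
    seed = hmac.new(SECRET_KEY, prev_token.encode(), hashlib.sha256).digest()
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def generate(n_tokens: int, bias: float = 0.9) -> list:
    """Toy stand-in for a language model: at each step, pick a green-list
    token with probability `bias`, otherwise a red-list token."""
    out, prev = [], "<s>"
    rng = random.Random(42)
    for _ in range(n_tokens):
        greens = green_list(prev)
        pool = greens if rng.random() < bias else set(VOCAB) - greens
        prev = rng.choice(sorted(pool))
        out.append(prev)
    return out

def green_fraction(tokens: list) -> float:
    """Detector: with the key, count tokens that fall in their green list.
    Ordinary text scores near 0.5; watermarked text scores much higher."""
    hits, prev = 0, "<s>"
    for tok in tokens:
        if tok in green_list(prev):
            hits += 1
        prev = tok
    return hits / len(tokens)
```

The crucial property is that the bias is statistically detectable with the key but effectively invisible without it, much like the "secret code" described above.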
Will this be another game-changer? There's no way to know but to wait and see. However, it's also vital to remember OpenAI CEO Sam Altman's reminder about using ChatGPT: he emphasized that the chatbot is still at an early stage and that depending on it entirely for everything you do is a mistake.
But he did promise that the team will continue to make progress and work on the aspects that need attention, which is something most of us would want to see.