How to Detect ChatGPT Plagiarism: Is It Even Possible?

ChatGPT plagiarism has become a hot topic in educational institutions across the globe. Here's everything you need to know.

ChatGPT has turned the academic and business worlds upside down with its ability to generate coherent, well-written copy about pretty much any subject on earth in a matter of seconds.

Its remarkable abilities have seen students of all educational levels turn to the chatbot – as well as its rivals, such as Bard – to write complex essays that would otherwise take hours to finish.

This has kickstarted a global conversation about a new phenomenon, often referred to as “ChatGPT plagiarism”. This guide covers the tools businesses and educational institutions are using to detect ChatGPT plagiarism, the dangers of cheating with ChatGPT – and whether using ChatGPT even counts as plagiarism at all.

How to Detect ChatGPT Plagiarism

To detect ChatGPT plagiarism, you need an AI content checker. AI content checkers scan bodies of text to determine whether they've been produced by a chatbot such as ChatGPT or Bard, or by a human. However, as we’ll cover later on, these tools are far from reliable.
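Most of these detectors rest on the same underlying idea: AI-generated text tends to be statistically more predictable than human writing. GPTZero, for instance, has reportedly scored text by its “perplexity” – how surprised a language model is by each word. As a purely illustrative sketch (this is our own toy code, not any real detector's algorithm), perplexity over a simple bigram model can be computed like this:

```python
import math
from collections import Counter

# Toy illustration of perplexity-based detection (not any vendor's
# actual algorithm): predictable text scores low, unusual text high.

def train_bigrams(corpus):
    """Count unigram and bigram frequencies from whitespace tokens."""
    tokens = corpus.split()
    return Counter(tokens), Counter(zip(tokens, tokens[1:]))

def perplexity(text, unigrams, bigrams, vocab_size, alpha=1.0):
    """Add-alpha smoothed bigram perplexity of `text`."""
    tokens = text.split()
    log_prob = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        p = (bigrams[(prev, cur)] + alpha) / (unigrams[prev] + alpha * vocab_size)
        log_prob += math.log(p)
    n = max(len(tokens) - 1, 1)
    return math.exp(-log_prob / n)

corpus = "the cat sat on the mat the dog sat on the rug " * 50
uni, bi = train_bigrams(corpus)
vocab = len(uni)

predictable = "the cat sat on the mat"   # matches the training patterns
surprising = "mat the on sat cat the"    # same words, unusual order

assert perplexity(predictable, uni, bi, vocab) < perplexity(surprising, uni, bi, vocab)
```

Real detectors use large neural language models rather than word counts, but the principle is the same – and it's also why they misfire: plenty of human writing (legal boilerplate, formulaic essays) is highly predictable too.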

It’s slightly harder to detect plagiarism when it comes to code, something ChatGPT can also generate capably. There’s not quite the same ecosystem of AI detection tools for code as there is for content.

However, if you’re in a university environment, for example, and you’re submitting code well beyond your technical level, your professor or lecturer may have some very reasonable suspicions that you’ve asked ChatGPT to help you out.

The Most Popular AI and ChatGPT Plagiarism Checker Tools Reviewed

Since ChatGPT’s launch in November 2022, lots of companies and educational institutions have produced AI content checkers, which claim to be able to distinguish between artificially generated content and content created by humans. Many companies are now also using Google's chatbot Bard, which is built on a different language model.

However, the purported accuracy of even the most reputable AI content detection tools is fiercely disputed, and court cases between students falsely accused of submitting AI-generated work and their educational institutions have already materialized.

The bottom line is this: No tool in this space is 100% accurate, but some are much better than others.

GPTZero

GPTZero is a popular, free AI content detection tool that claims that it's “the most accurate AI detector across use-cases, verified by multiple independent sources”.

However, back in April, a history student at UC Davis proved that GPTZero – an AI content detection tool being used by his professor – was wrong when it labeled his essay as AI-generated.

We tested GPTZero by asking ChatGPT to write a short story. GPTZero, unfortunately, was not able to tell that the content was written by an AI tool:

GPTZero plagiarism test

Originality.ai

Originality.ai is certainly one of the more accurate AI content detection tools currently available.

The company conducted its own study into AI content detection tools in April 2023, in which it fed 600 artificially generated and 600 human-generated blocks of text to its own detection system, as well as to other popular tools that claim to do the same.

As you can see from the results below, Originality.ai outperformed all of the tools included in the test:

Originality.ai plagiarism test

The only downside to Originality.ai is that there isn’t a free plan, and you can’t even test it out for free as you can with the other apps included in this article. It costs $20 for 2,000 credits, which will let you check 200,000 words.

Copyleaks AI Content Detector

Copyleaks is a free-to-use AI content detector that claims to be able to distinguish between human-generated and AI-generated copy with 99.12% accuracy.

Copyleaks will also tell you if specific aspects of a document or passage are written by AI, even if other parts of it seem to be written by a human.

Copyleaks says it's capable of detecting AI-generated content created by “ChatGPT, GPT-4, GPT-3, Jasper, and others”, and even claims that “once newer models come out we’ll be able to automatically detect it.”

Copyleaks costs $8.33 per month for 1,200 credits (250 words of copy per credit).
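For a rough sense of scale, here's a back-of-the-envelope comparison of the per-word cost of Copyleaks and Originality.ai, based on the prices quoted in this article (actual billing and plan details may differ):

```python
# Back-of-the-envelope cost-per-word comparison, using the prices
# listed in this article; plans may have changed since publication.

def cost_per_word(price_usd, credits, words_per_credit):
    """Price divided by the total number of words the credits cover."""
    return price_usd / (credits * words_per_credit)

# Originality.ai: $20 for 2,000 credits, 100 words per credit (200,000 words)
originality = cost_per_word(20.00, 2_000, 100)

# Copyleaks: $8.33/month for 1,200 credits, 250 words per credit (300,000 words)
copyleaks = cost_per_word(8.33, 1_200, 250)

print(f"Originality.ai: ${originality:.5f}/word")  # Originality.ai: $0.00010/word
print(f"Copyleaks:      ${copyleaks:.5f}/word")    # Copyleaks:      $0.00003/word
```

On listed prices alone, Copyleaks works out several times cheaper per word – though, as the accuracy results above show, price and reliability are very different questions.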

In a test carried out by TechCrunch in February 2023, however, Copyleaks incorrectly classified several types of AI-generated copy, including a news article, an encyclopedia entry, and a cover letter, as human-generated.

Furthermore, Originality.ai’s study referenced above found Copyleaks to be accurate in just 14.50% of cases – a far cry from the 99.12% accuracy it claims.

However, when we tested it, it did seem to be able to pick up that the text we entered was generated by ChatGPT:

copyleaks ai detector

Turnitin AI Detector

Turnitin is a US-based plagiarism detection company that is deployed by a variety of universities to scan their students’ work. Turnitin is designed to detect all kinds of plagiarism but revealed in April that it’s been investing in an AI-focused team for some time now.

Turnitin says that it can “detect the presence of AI writing with 98% confidence and a less than one percent false-positive rate in our controlled lab environment.”

However, the company also says that if it flags a piece of content as AI-generated, this should be treated as an “indication, not an accusation”. The true accuracy of Turnitin’s AI detector has been disputed by the Washington Post, as well as other sources.

Turnitin’s AI content detection software is currently free, but the company says in an FAQ on its website that it’s moving to a paid licensing program in January 2024 – the price of which is not specified.

OpenAI Text Classifier

OpenAI, the maker of ChatGPT, used to have its own AI text classifier. We know this because we used it ourselves when originally writing this article. However, in July 2023, the company withdrew the tool, stating that it wasn't accurate enough.

That definitely aligns with our own experience when we tested it. When we showed it a short story, written by its own ChatGPT tool, the checker didn't pick up on the fact that it was AI generated.

“As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy. We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated.” – OpenAI blog post

You can see our original example of the checker missing the fact that text was AI written, below:

OpenAI text classifier test

Does AI Content Detection Actually Work?

No AI content detection tool is 100% reliable – our tests prove that pretty resoundingly.

However, none of the tools we’ve discussed today actually claim to be 100% accurate, and very few claim to be completely free of false positives. Others, like GPTZero, post disclaimers warning users against treating their results as gospel.

A number of university students accused of using artificial intelligence to produce essays have already been forced to prove that their work was original.

In Texas, in March, a professor failed an entire class of students after wrongfully accusing them of using ChatGPT to write their essays. There is also a growing collection of reports – and studies like the one conducted by Originality.ai – suggesting that even the most capable plagiarism checkers aren’t nearly as accurate as they claim.

Even Turnitin’s AI content detector isn’t foolproof. In the relatively small test conducted by the Washington Post discussed earlier, its accuracy fell far short of the 98% the company claims.

Originality.ai, on the other hand, is certainly one of the more robust ones available – and even its detection technology isn’t right every single time.

Besides, if false positives exist in any capacity, then there will always be room for students to claim their work is original and has simply been misidentified.

Is Using ChatGPT or Bard Plagiarism?

It’s debatable whether using ChatGPT constitutes plagiarism at all. Oxford Languages defines plagiarism as “the practice of taking someone else's work or ideas and passing them off as one's own.”

ChatGPT is not a person, and it’s not simply reproducing the work and ideas of other people when it generates an answer. So, by the dictionary definition, it’s not outright plagiarism.

Even if it were doing that, provided you were honest about where the text came from (i.e. ChatGPT), arguably, that wouldn’t be plagiarism anyway.

However, some schools and universities have far-reaching plagiarism rules and consider using chatbots to write essays as such. One student at Furman University failed his philosophy degree in December after using ChatGPT to write his essay. In another case, a professor at Northern Michigan University reported catching two students using the chatbot to write essays for their class.

Using ChatGPT to generate essays and then passing this off as your own work is perhaps better described as “cheating” and is definitely “dishonest”.

The whole point of writing an essay is to show you’re capable of producing original thoughts, understanding relevant concepts, carefully considering conflicting arguments, presenting information clearly, and citing your sources.

There’s very little difference between using ChatGPT in this way and paying another student to write your essay for you – which is, of course, cheating.

With regard to Google's Bard, the answer is a little more complicated. The same logic applies to Bard as to ChatGPT, but Bard has been marred by accusations of plagiarism and of incorrectly citing material it pulls from the internet in a way ChatGPT hasn't. So, using Bard might lead you to inadvertently plagiarize other sources (more on this below).

The Dangers of Cheating With ChatGPT

Christopher Howell, an Adjunct Assistant Professor at Elon University, recently asked a group of students to use ChatGPT for a critical assignment and then grade the essays it produced for them.

He reported in a lengthy Twitter thread (the first part of which is pictured below) that all 63 students who participated found some form of “hallucination” – including fake quotes, and fake and misinterpreted sources – in their assignments.

Professor talking about chatgpt mistakes

Does ChatGPT Plagiarize in Its Responses?

No – ChatGPT isn’t pulling information from other sources and simply jamming it together, sentence by sentence. This is a misunderstanding of how Generative Pre-trained Transformers work.

ChatGPT – or more accurately the GPT language model – is trained on a huge dataset of documents, website material, and other text.

It uses algorithms to find linguistic sequences and patterns within its datasets. Paragraphs, sentences, and words can then be generated based on what the language model has learned about language from sequences in these datasets.

This is why if you ask ChatGPT the same question at the same time from two different devices, its answers are usually extremely similar – but there will still be variation, and sometimes, it offers up completely different answers.
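To make that concrete, here's a heavily simplified toy model (ours, not OpenAI's – real GPT models use neural networks trained on vast datasets, not word counts) showing the core idea: learn which tokens tend to follow which, then sample the next token from that distribution. Sampling is why repeated runs of the same prompt produce similar but not identical output:

```python
import random
from collections import Counter, defaultdict

# A toy sketch of next-token generation: count which word follows
# which, then sample continuations from those counts.

def train(corpus):
    """Map each word to a Counter of the words observed after it."""
    follows = defaultdict(Counter)
    tokens = corpus.split()
    for prev, cur in zip(tokens, tokens[1:]):
        follows[prev][cur] += 1
    return follows

def generate(follows, start, length, rng):
    """Sample a continuation one token at a time, weighted by frequency."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # no known follower: stop generating
        words = list(options)
        weights = [options[w] for w in words]
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

corpus = ("the model predicts the next word and the next word depends "
          "on the words before it and the model samples from patterns")
follows = train(corpus)

print(generate(follows, "the", 8, random.Random(1)))
print(generate(follows, "the", 8, random.Random(2)))  # a different seed may differ
```

Because the next token is sampled rather than looked up, two runs can diverge at any step – a miniature version of the variation you see when you ask ChatGPT the same question twice.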

Does Bard Plagiarize in Its Responses?

ChatGPT's biggest rival, Google's Bard, has had significantly more issues with plagiarizing content since its launch than its more popular counterpart. Technology website Tom's Hardware found that Bard had plagiarized one of its articles, and the chatbot then apologized when one of the site's staff called it out.

More recently, in May 2023, PlagiarismCheck told Yahoo News that it generated 35 pieces of text with Bard and found that 25 of them were more than 5% plagiarized, with Bard simply paraphrasing existing content already published on the internet.

One big difference between Bard and ChatGPT that may explain this is that Bard can search the internet for its responses, which is why it tends to deal better with questions about events after 2021, which ChatGPT struggles with. However, this also seems to mean it pulls data from sources in a less original way, even as it cites those sources more often.

These examples may have been blips, but it's good to know the risks if you're using Bard for important work.

Do Other AI Tools Plagiarize?

Unfortunately, yes – and some companies have already embarrassed themselves by using AI tools that plagiarized content. For example, CNET, one of the world's biggest technology sites, was found to be using an AI tool to generate articles without being transparent about it. Around half of the articles CNET published using AI were found to contain incorrect information.

To make matters worse, Futurism, which launched an investigation into CNET's AI plagiarism, said that “The bot's misbehavior ranges from verbatim copying to moderate edits to significant rephrasings, all without properly crediting the original”.

AI tools that don't generate unique, original content – be it art or text – have the potential to plagiarize content that's already been published on the internet. It's important to understand exactly how the language model your AI tool is using works and also have tight oversight over the content it's producing, or you could end up in the same position as CNET.

Should You Use ChatGPT for Essays or Work?

Using ChatGPT for Essays

The fact that ChatGPT doesn't simply pull answers from other sources and mash sentences together means businesses have been able to use ChatGPT for a variety of different tasks without worrying about copyright issues.

But its internal mechanics also mean it often hallucinates and makes mistakes. It's far, far from perfect – and although it's tempting to get ChatGPT to write your essay for university or college, we'd advise against it.

Every educational institution's specific submission guidelines will differ slightly, of course, but it's very likely that this is already considered “cheating” or “plagiarism” at your university or school. Plus, regardless of how accurate they are now, educational institutions are using AI content detectors, and these will improve over time.

Using ChatGPT at Work

Of course, lots of people are using ChatGPT at work already – it's proving useful in a wide range of industries, and helping workers in all sorts of roles save valuable time on day-to-day tasks.

However, if you are using ChatGPT at work, we'd advise being open with your manager or supervisor about it – especially if you're using it for important activities like writing reports for external stakeholders. It's one of the more immediate ethical considerations relating to AI that businesses need to answer.

We'd also strongly advise both heavily editing and closely reviewing all of the work you're using ChatGPT, Bard, or any other AI tool to generate. It's unwise to put sensitive personal or company information into any chatbot – we know ChatGPT saves and uses user data, but there isn't much public information about where these chats are stored or OpenAI's security infrastructure.

Using Other AI Tools for Essays or Work

Of course, Bard and ChatGPT aren't the only AI chatbots out there. However, we'd be hesitant to throw our support behind any smaller AI tools that aren't backed by powerful language models. They won't be as well-resourced, and you're unlikely to find them as useful if you do experiment with using them for work.

The same rules still apply, however – be open with your manager and get sign-off on using them, don't input any sensitive company data, and always review the answers you're given.


Written by:
Aaron Drapkin is a Lead Writer at Tech.co. He has been researching and writing about technology, politics, and society in print and online publications since graduating with a Philosophy degree from the University of Bristol five years ago. As a writer, Aaron takes a special interest in VPNs, cybersecurity, and project management software. He has been quoted in the Daily Mirror, Daily Express, The Daily Mail, Computer Weekly, Cybernews, and the Silicon Republic speaking on various privacy and cybersecurity issues, and has articles published in Wired, Vice, Metro, ProPrivacy, The Week, and Politics.co.uk covering a wide range of topics.