Even OpenAI has given up trying to detect ChatGPT plagiarism

OpenAI, the company behind the hugely popular AI chatbot ChatGPT, has discontinued its AI Classifier, a tool designed to distinguish AI-generated text from human writing. The tool, which had been available for only six months, was shut down because of its “low rate of accuracy,” according to OpenAI.

As ChatGPT and similar services have grown in popularity, a number of organisations have raised concerns about the effects of unrestrained AI use. Educators in particular have worried that students could use ChatGPT to write their essays and other assignments, then present them as their own original work.

OpenAI’s AI Classifier aimed to address these concerns, including those of educators and people worried about disinformation. From the beginning, however, OpenAI’s confidence in its tool appeared limited. In the blog post introducing the AI Classifier, the company openly admitted that “Our classifier is not fully reliable,” disclosing that it correctly identified AI-written texts from a specific “challenge set” only 26% of the time.

The decision was taken quietly and without much publicity: OpenAI did not publish a separate post announcing the tool’s discontinuation. Instead, the company added a note to the original launch post explaining that the AI Classifier is no longer available owing to its low rate of accuracy.

The post also said that OpenAI is researching more effective text provenance techniques and incorporating user feedback. The company has also committed to developing and deploying tools that would let users recognize AI-generated audio and visual content.

Better tools are needed

Other tools, such as GPTZero, have also been built to detect AI-generated content, and they will persist despite OpenAI’s decision.

Unfortunately, previous attempts to identify AI writing have failed badly. In May 2023, for instance, a professor mistakenly relied on ChatGPT itself to detect plagiarism in students’ papers and wrongly flunked the entire class. Both ChatGPT and the professor made serious errors in judgment.

It’s concerning that even OpenAI admits it can’t reliably identify text produced by its own chatbot. This comes at a time when there are calls for a temporary pause on development in this area, driven by growing concerns about the potential harm caused by AI chatbots. If AI does have the wide-ranging effects that some forecast, the world will need far better detection tools than OpenAI’s failed AI Classifier.