OpenAI, the creators of ChatGPT, are releasing a new tool that will, hopefully, contribute to the ethical and responsible use of AI writing tools: an AI classifier trained to differentiate between human-written and AI-generated text.
This new “imperfect tool” comes with a set of limitations that OpenAI is very transparent about. The company warns that the tool is “not fully reliable” and, given its likely inaccuracies, should be used only as a guide.
Despite its flaws, however, the AI classifier is a step forward in the fight against the malicious use of AI.
Limitations of the new classifier by OpenAI
In their announcement, OpenAI transparently shares some of the tool’s main limitations.
According to the company, the classifier is unreliable on short texts under 1,000 characters, and it can mislabel even longer texts. Its performance is also much worse in languages other than English, so OpenAI recommends using it only on English text at this point.
When a text is highly predictable, the tool cannot tell whether it was written by a human or an AI. In some cases, AI-generated text can also be edited to evade the classifier.
Finally, it’s important to note that the classifier can incorrectly label human-written text as AI-generated.
So if you are planning to use this new tool, treat it as a complementary check, and keep in mind its limitations – most importantly, its limited reliability.
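If you do fold the classifier into a review workflow, the documented constraints are easy to enforce up front. Here is a minimal sketch in Python of what that gating could look like; the function names, the ASCII-based language heuristic, and the verdict labels are our own illustrative assumptions, not part of OpenAI’s tool or API. Only the 1,000-character minimum and the English-only restriction come from the announcement itself.

```python
# Hypothetical pre-check for an AI-text-classifier workflow.
# The 1,000-character minimum and English-only restriction reflect
# OpenAI's stated limitations; everything else here is an assumption
# made for illustration.

MIN_CHARS = 1_000  # the classifier is unreliable below this length


def is_checkable(text: str) -> tuple[bool, str]:
    """Return (ok, reason): does the text meet the documented preconditions?"""
    if len(text) < MIN_CHARS:
        return False, f"text is {len(text)} chars; classifier needs at least {MIN_CHARS:,}"
    if not text.isascii():
        # Crude stand-in for a language check; a real workflow would use
        # a proper language detector before trusting an English-only tool.
        return False, "text may not be English; classifier is English-only for now"
    return True, "ok"


def interpret(verdict: str) -> str:
    """Treat any classifier label as a hint, never as proof (false positives happen)."""
    if verdict == "likely AI-generated":
        return "flag for human review; do not treat as conclusive"
    return "no action; absence of a flag is not proof of human authorship"


if __name__ == "__main__":
    sample = "A short snippet."
    ok, reason = is_checkable(sample)
    print(ok, "-", reason)  # False - text is 16 chars; classifier needs at least 1,000
```

The point of the second function is the part people tend to skip: even when a text passes the pre-checks, the classifier’s output should route a document to a human reviewer rather than decide its fate.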
Check it out and share your feedback
Since the new tool is still a work in progress, it’s free and available to the public, so anyone can test it out and share feedback on the usefulness of the new AI classifier.
To read the full announcement and test the classifier, check out this article by OpenAI.