
ChatGPT creator OpenAI makes new tool for detecting automated text amid fear over future

Google’s DeepMind has pioneered advances in artificial intelligence since its founding in 2010, with the ultimate goal of creating a human-level AI (Alan Warburton / Better Images of AI / CC)

The creator of ChatGPT, the viral new artificial intelligence system that can generate seemingly any text, has built a new tool aimed at spotting that same automatically generated writing.

OpenAI said it had built the system in an attempt to counter the dangers of AI-written text by allowing people to spot it more easily.

Such threats include automated misinformation campaigns and chatbots posing as humans. The tool should also help protect against “academic dishonesty”, the company suggested, amid increasing fears that such systems could allow students to cheat on homework and other assignments.

But OpenAI said the system is still “not fully reliable”. It correctly identifies only 26 per cent of AI-written text as having been created by such a system, and incorrectly labels human-written text as AI-written 9 per cent of the time.

It gets more reliable as the length of the text increases, and is better when used on text from more recent AI systems, OpenAI said. It is recommended only for English text, and the company warned that AI-written text can be edited to stop it being identified as such.
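OpenAI did not publish a worked example alongside those figures, but the rough sketch below puts them in concrete terms. It is illustrative arithmetic only, applied to an entirely hypothetical set of classroom essays, and is not OpenAI’s tool or code.

```python
# Illustrative sketch only: this simply applies the detection rates OpenAI
# reported to a hypothetical mix of essays; it is not the classifier itself.

TRUE_POSITIVE_RATE = 0.26   # share of AI-written text the tool correctly flags
FALSE_POSITIVE_RATE = 0.09  # share of human-written text it wrongly flags

def expected_outcomes(ai_texts: int, human_texts: int) -> dict:
    """Expected results if the classifier were run over a mixed set of documents."""
    return {
        "ai_texts_caught": ai_texts * TRUE_POSITIVE_RATE,
        "ai_texts_missed": ai_texts * (1 - TRUE_POSITIVE_RATE),
        "human_texts_wrongly_flagged": human_texts * FALSE_POSITIVE_RATE,
    }

# Hypothetical classroom of 50 AI-written and 150 human-written essays:
print(expected_outcomes(50, 150))
# -> roughly 13 AI-written essays caught, 37 missed, and about 14
#    human-written essays wrongly flagged
```

In that hypothetical, most AI-written essays would slip through while a comparable number of human-written ones would be wrongly flagged, which helps explain the company’s warnings about how the tool should be used.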

The company said that it was releasing an early version of the system, despite those limitations, in an attempt to improve its reliability.

But it stressed that the classifier should not be used as a “primary decision-making tool”, and should serve “instead as a complement to other methods of determining the source of a piece of text”.

The tool may also never be able to spot all text originally created by an AI system. While OpenAI will be able to update it in response to new workarounds, “it is unclear whether detection has an advantage in the long-term”, the company warned.

As it announced the new classifier, OpenAI said it was aware that identifying AI-written text had become a particular concern among educators. The tool is part of an effort to help people deal with artificial intelligence in the classroom, it said, but may also prove useful to journalists and researchers.

It acknowledged that more work is required, however, and said that it was “engaging with educators in the US to learn what they are seeing in their classrooms and to discuss ChatGPT’s capabilities and limitations, and we will continue to broaden our outreach as we learn”. It asked teachers, parents and others concerned about AI in academic settings to get in touch with feedback, and to consult the information already available on its website.