AI is still in its infancy, but we're already getting glimpses of how it might affect culture. In early 2019, OpenAI built a text generator so good it was considered too dangerous to release in full. For writers, this poses an existential threat, but it could have even worse consequences for society.
Like deepfakes, mass-generated AI text that can't be identified as such could find its way into everyone's lives and create an environment where it's almost impossible to know what is true and what isn't. It could be used to trick search engines, send automated email campaigns, populate social feeds, and intentionally sow chaos and confusion.
To fight what may be inevitable, a team of researchers from Harvard University and the MIT–IBM Watson AI Lab is working on a tool that uses AI to spot text written by AI. Will Knight, a reporter for MIT Technology Review, reported on how the new tool works.
Called the Giant Language Model Test Room (GLTR), it exploits the fact that AI text generators rely on statistical patterns in text, as opposed to the actual meaning of words and sentences. In other words, the tool can tell if the words you’re reading seem too predictable to have been written by a human hand.
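The core idea can be sketched in a few lines of code. GLTR itself scores words using a large language model's (GPT-2's) probabilities, but the toy bigram model below illustrates the same principle: rank each word by how expected it is given what came before. Text where nearly every word lands in the model's top ranks looks "too predictable" to be human-written. All names here are illustrative, not from GLTR's actual implementation.

```python
from collections import Counter, defaultdict

def build_bigram_model(corpus_tokens):
    """Count how often each word follows each other word."""
    model = defaultdict(Counter)
    for prev, nxt in zip(corpus_tokens, corpus_tokens[1:]):
        model[prev][nxt] += 1
    return model

def word_ranks(model, tokens):
    """For each word, its rank among the model's predicted continuations
    of the previous word (0 = most expected; None = never seen)."""
    ranks = []
    for prev, nxt in zip(tokens, tokens[1:]):
        ordered = [w for w, _ in model[prev].most_common()]
        ranks.append(ordered.index(nxt) if nxt in ordered else None)
    return ranks

# Tiny toy corpus; a real detector would train on billions of words.
corpus = "the cat sat on the mat and the cat ran".split()
model = build_bigram_model(corpus)
print(word_ranks(model, "the cat sat".split()))  # low ranks = predictable
```

GLTR visualizes these ranks as colors over the text, so a human reviewer can see at a glance whether a passage is suspiciously dominated by top-ranked, high-probability words.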
The tool isn't perfect, though. Knight reported that students in the researchers' experiment were able to spot only half of the fakes on their own, but 72% when given the tool. He also said the researchers' goal was to create a tool that could work in conjunction with humans to spot AI-generated copy.