
The rapid development of AI and natural language generation models poses a serious challenge today. These models can automatically generate fake news, fake reviews, misleading articles, fake social media accounts, and tweets that spread misinformation, which can lead to severe consequences.
Fortunately, some researchers are working on simple methods for detecting auto-generated text. The result? GLTR, or Giant Language Model Test Room, a tool that helps non-expert readers differentiate machine-generated text from human-written text.
This AI-powered tool was created by researchers from Harvard University and the MIT-IBM Watson AI Lab to detect whether a language model generated a specific piece of text. According to the researchers, GLTR improved the human detection rate of fake text from 54 percent to 72 percent without prior training.
GLTR uses a set of statistical baseline methods to detect artifacts of generation across typical sampling schemes. Phrases produced by AI text generators may be grammatically correct, yet meaningless. GLTR works by identifying such statistical patterns across a sixty-word window (thirty words on each side of any given word in the text) and spotting the most predictable word sequences.
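To make the idea concrete, here is a minimal sketch of the rank-based scoring at the heart of this approach: for each word, ask a language model how highly it ranked that word given the preceding context. It uses the openly available GPT-2 model through the Hugging Face transformers library; the model choice and the token_ranks helper are illustrative assumptions, not GLTR's exact implementation.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_ranks(text: str) -> list[tuple[str, int]]:
    """Return each token paired with the rank the model gave it (1 = most likely)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # shape: (1, seq_len, vocab_size)
    ranks = []
    # Logits at position i predict the token at position i + 1,
    # so the very first token gets no rank (it has no context).
    for i in range(ids.shape[1] - 1):
        actual = int(ids[0, i + 1])
        # Rank = number of vocabulary entries scoring at least as high
        # as the token that actually appeared next.
        rank = int((logits[0, i] >= logits[0, i, actual]).sum().item())
        ranks.append((tokenizer.decode(actual), rank))
    return ranks
```

Human-written text tends to produce a wide spread of ranks, while text sampled from a model concentrates heavily in the low ranks, and that contrast is what the tool surfaces.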
Genuine text tends to have a healthy mix of yellow, red, and purple words. If the highlighted text is mostly green and yellow, that is a strong indication it could have been machine-generated. The researchers say GLTR is aimed at educating readers and raising awareness of generated text.
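The color coding can be reproduced by bucketing each token's rank; the thresholds below (top 10, top 100, top 1,000) follow the published GLTR demo and compose with the token_ranks sketch above.

```python
def rank_to_color(rank: int) -> str:
    """Map a token's rank to GLTR's four highlight colors."""
    if rank <= 10:
        return "green"    # among the model's 10 most likely next words
    if rank <= 100:
        return "yellow"
    if rank <= 1000:
        return "red"
    return "purple"       # a word the model found very unlikely

for token, rank in token_ranks("GLTR highlights each word by its rank."):
    print(f"{token!r}: rank {rank} -> {rank_to_color(rank)}")
```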

In another experiment, the researchers behind GLTR asked Harvard students to identify AI-generated text, first without the tool and then with the help of its highlighting. The students detected only half of the fakes on their own, but 72 percent when given the tool.
Initiatives such as GLTR are useful not only for detecting fake text but also in the fight against fake news, deepfakes, and Twitter bots. Botometer, for example, uses machine learning techniques to determine whether an account is operated by a human or by software; the tool correctly identifies bot accounts about 95 percent of the time. Although none of these methods is foolproof, they underscore the need for collaborative human-AI systems to address socio-technological issues collectively.
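As a rough illustration of how such a check might be run programmatically, the sketch below uses the official botometer-python client (https://github.com/IUNetSci/botometer-python). The credentials are placeholders, the handle is hypothetical, and the exact response fields vary across API versions.

```python
import botometer

# Placeholder credentials; real values come from RapidAPI and a Twitter app.
rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    **twitter_app_auth,
)

# Check a single account; "@example_handle" is a hypothetical username.
result = bom.check_account("@example_handle")
print(result)  # includes bot-likelihood scores for the account
```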