The GLTR demo enables forensic inspection of the visual footprint of a language model on input text to detect whether a text could be real or fake. It is a collaborative effort between Hendrik Strobelt, Sebastian Gehrmann, and Alexander Rush from the MIT-IBM Watson AI lab and Harvard NLP.

Please read the detailed intro about GLTR.

Each word in the text is analyzed by how likely the model would have predicted it given the context to its left. If the actual word is among the top 10 predictions, its background is colored green; among the top 100, yellow; among the top 1000, red; otherwise, violet. Try some of the sample texts below and see for yourself whether you can spot the difference between machine-generated and human-written text, or try your own. (Tip: hover over a word for more detail.)
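The ranking-and-coloring step described above can be sketched as follows. This is a minimal illustration, not the demo's actual implementation: `predict_ranked` is a hypothetical stand-in for a language model that returns candidate next words ranked by probability given the left context.

```python
def color_for_rank(rank):
    """Map the rank of the actual word among model predictions to a color."""
    if rank < 10:
        return "green"
    elif rank < 100:
        return "yellow"
    elif rank < 1000:
        return "red"
    return "violet"

def analyze(tokens, predict_ranked):
    """For each token, look up its rank in the model's ranked predictions
    given the tokens to its left, and assign the corresponding color."""
    colors = []
    for i, tok in enumerate(tokens):
        ranked = predict_ranked(tokens[:i])  # candidates, most likely first
        # Words outside the candidate list fall into the "violet" bucket.
        rank = ranked.index(tok) if tok in ranked else len(ranked)
        colors.append(color_for_rank(rank))
    return colors
```

A text in which nearly every word comes back green or yellow is a hint that a model would have predicted those words itself, which is the signature GLTR highlights.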

The histograms show statistics about the text: Frac(p) is the probability of the actual word divided by the maximum probability the model assigns to any word at that position. The Top 10 entropy is the entropy over the top 10 predictions for each word.
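These two statistics can be computed as sketched below. This is one plausible reading of the definitions above, not the demo's exact code; in particular, renormalizing over the top 10 probabilities before taking the entropy is an assumption.

```python
import math

def frac_p(p_actual, probs):
    """Frac(p): probability of the actual word divided by the maximum
    probability assigned to any candidate word at this position."""
    return p_actual / max(probs)

def top10_entropy(probs):
    """Entropy (in bits) over the top 10 predicted probabilities,
    renormalized so they sum to one (assumption, see lead-in)."""
    top10 = sorted(probs, reverse=True)[:10]
    total = sum(top10)
    return -sum((p / total) * math.log2(p / total) for p in top10)
```

A low Frac(p) means the writer chose a word the model considered far less likely than its favorite; a high Top 10 entropy means the model was uncertain, spreading probability across many plausible continuations.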

Disclaimer: This version of GLTR was built in 2019 to test against GPT-2 text. It may not be helpful for detecting text from more recent models (e.g., ChatGPT). Follow us on Mastodon (hen@vis.social) or Threads (hendrik.strobelt). You can also try the RADAR demo, which uses a newer approach: https://radar-app.vizhub.ai/

