https://huggingface.co/spaces/tomg-group-umd/Binoculars
Over a wide range of document types, Binoculars detects over 90% of generated samples from ChatGPT (and other LLMs) at a false positive rate of 0.01%, despite not being trained on any ChatGPT data.
Is there a correlation between Binoculars score and sequence length? Such correlations may create a bias towards incorrect results for certain lengths. In Figure 12, we show the joint distribution of token sequence length and Binoculars score. Sequence length offers little information about class membership.
However, just eyeballing the graph in Figure 12, the false positive rate looks like it will probably be somewhat higher for 100-200 word texts. That matters to me because that's the rough length of the texts I'd be using the tool on.
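One way to check this concern is to bin human-written samples by word count and estimate the false positive rate within each bin. Here's a minimal sketch in Python; `binoculars_score` is a hypothetical stand-in for whatever scoring call the released package actually exposes, and I'm assuming the paper's convention that lower scores indicate machine-generated text:

```python
from collections import defaultdict

def binoculars_score(text: str) -> float:
    # Hypothetical stand-in for the real detector's scoring call;
    # plug in the actual Binoculars API here.
    raise NotImplementedError

def fpr_by_length(human_texts, threshold, bin_width=100):
    """Estimate the false positive rate within word-count bins.

    A sample is flagged as machine-generated when its score falls
    below the threshold (lower score = more LLM-like, per the paper).
    """
    flagged = defaultdict(int)
    total = defaultdict(int)
    for text in human_texts:
        n_words = len(text.split())
        bin_id = n_words // bin_width  # e.g. bin 1 covers 100-199 words
        total[bin_id] += 1
        if binoculars_score(text) < threshold:
            flagged[bin_id] += 1
    return {b: flagged[b] / total[b] for b in sorted(total)}
```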
I'll run it on at least 100 pieces of GPT-4-generated text that I generate myself, and at least 100 pieces of non-LLM text from around the web that I browse. Both the generations and the human text will be the kinds of things I'd usually write or read, which may have different properties than the data the paper used.
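Concretely, the evaluation I have in mind looks something like the sketch below. The file paths are placeholders, the threshold would be whatever the released tool recommends, and `binoculars_score` is the same hypothetical stand-in as in the previous block:

```python
def binoculars_score(text: str) -> float:
    # Hypothetical stand-in for the real detector's scoring call.
    raise NotImplementedError

def load_texts(path):
    # One document per line in a plain-text file.
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

def run_eval(llm_path, human_path, threshold):
    llm_texts = load_texts(llm_path)      # >= 100 GPT-4 generations
    human_texts = load_texts(human_path)  # >= 100 human-written samples

    # Lower score = flagged as machine-generated (paper convention).
    tpr = sum(binoculars_score(t) < threshold for t in llm_texts) / len(llm_texts)
    fpr = sum(binoculars_score(t) < threshold for t in human_texts) / len(human_texts)
    print(f"detection rate: {tpr:.1%}, false positive rate: {fpr:.1%}")
```

With ~100 samples per class, the false positive rate estimate will be coarse (a single misflagged human text moves it by a full percentage point), so this tells me more about whether the error rate is roughly 0% vs. several percent than about hitting the paper's 0.01% figure.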