What will the false negative rate of the LLM detector 'Binoculars' be in my personal testing, for 100-200 word texts?
resolved Jan 24
0-1%: 0.0%
1-5%: 0.0%
5-10%: 0.0%
10-25%: 0.0%
25-50%: 0.0%
50-100%: 100.0% (resolved)

https://huggingface.co/spaces/tomg-group-umd/Binoculars

From the Binoculars paper: "Over a wide range of document types, Binoculars detects over 90% of generated samples from ChatGPT (and other LLMs) at a false positive rate of 0.01%, despite not being trained on any ChatGPT data."

"Is there a correlation between Binoculars score and sequence length? Such correlations may create a bias towards incorrect results for certain lengths. In Figure 12, we show the joint distribution of token sequence length and Binoculars score. Sequence length offers little information about class membership."

However, just from eyeballing the graph in Figure 12, the false positive rate will probably be somewhat higher for 100-200 word texts. I'm using that length anyway, because that's the rough length of text I'd actually run the tool on.
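
To sanity-check that length effect on my own samples rather than only eyeballing Figure 12, a minimal sketch like the one below would do. Here `binoculars_score` is a placeholder for however a score is obtained from the demo or the released code, not an actual API from the paper's repo.

```python
import numpy as np

def length_score_correlation(texts, binoculars_score):
    """Pearson correlation between word count and Binoculars score.

    A correlation near zero would match the paper's claim that sequence
    length offers little information about class membership.
    """
    lengths = np.array([len(t.split()) for t in texts], dtype=float)
    scores = np.array([binoculars_score(t) for t in texts], dtype=float)
    return float(np.corrcoef(lengths, scores)[0, 1])
```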

I'll run it on at least 100 pieces of GPT-4-generated text that I generate, and at least 100 pieces of non-LLM text from around the web that I browse. Both the generations and the human text will be the kinds of things I'd usually generate or read myself, which may have different properties than what the paper used.
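
For the resolution tally itself, a minimal sketch of the computation, under the assumption of a hypothetical `is_flagged_as_ai` predicate standing in for whatever label the Binoculars demo returns; `gpt4_samples` and `web_samples` are likewise placeholders for my two collections.

```python
from typing import Callable, List

def in_word_range(text: str, lo: int = 100, hi: int = 200) -> bool:
    """Keep only samples in the 100-200 word range this market targets."""
    return lo <= len(text.split()) <= hi

def false_negative_rate(ai_texts: List[str],
                        is_flagged_as_ai: Callable[[str], bool]) -> float:
    """FN / (FN + TP): the share of AI-generated texts the detector misses."""
    misses = sum(1 for t in ai_texts if not is_flagged_as_ai(t))
    return misses / len(ai_texts)

def false_positive_rate(human_texts: List[str],
                        is_flagged_as_ai: Callable[[str], bool]) -> float:
    """FP / (FP + TN): the share of human-written texts wrongly flagged as AI."""
    hits = sum(1 for t in human_texts if is_flagged_as_ai(t))
    return hits / len(human_texts)

# Hypothetical usage: gpt4_samples, web_samples, and is_flagged_as_ai would
# come from my own collection and from querying the demo linked above.
# ai_texts = [t for t in gpt4_samples if in_word_range(t)]
# human_texts = [t for t in web_samples if in_word_range(t)]
# print(false_negative_rate(ai_texts, is_flagged_as_ai))
# print(false_positive_rate(human_texts, is_flagged_as_ai))
```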
