
Will we have reliable detector software to differentiate between LLM-generated and human-generated text by the end of 2023?
1.7k · Ṁ9,820 · resolved Jan 1
Resolved NO
Resolves N/A if it has already happened, but I don't think it has. Everything I've tried so far gets no better than 20-30%.
I'd prefer something I can try myself and check.
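For anyone who wants to run the kind of self-check the description asks for, here is a minimal sketch that scores a detector against a small hand-labelled sample. The model id (a RoBERTa-based GPT-2 output detector on Hugging Face) and its "Real"/"Fake" label names are assumptions, not part of this market; swap in whichever detector you actually want to evaluate.

```python
# Minimal sketch: score a text detector on a tiny hand-labelled sample.
# The model id and its "Real"/"Fake" labels are assumptions; substitute
# whatever detector you want to test.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",  # assumed model id
)

# (text, True if human-written else False)
samples = [
    ("I walked to the shop this morning and the queue was ridiculous.", True),
    ("As an AI language model, I can provide a comprehensive overview.", False),
]

correct = 0
for text, is_human in samples:
    label = detector(text)[0]["label"]            # e.g. "Real" (human) or "Fake" (LLM)
    predicted_human = label.lower() in ("real", "human")
    correct += predicted_human == is_human

print(f"accuracy: {correct / len(samples):.0%}")
```

Two sentences is obviously far too small a sample to mean anything; the point is only that the check is easy to reproduce with your own human and LLM text.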
This question is managed and resolved by Manifold.
🏅 Top traders
# | Name | Total profit
---|---|---
1 | | Ṁ272
2 | | Ṁ79
3 | | Ṁ22
4 | | Ṁ18
5 | | Ṁ15
Related questions
- By 2025 end, will it be generally agreed upon that LLM produced text/code > human text/code for training LLMs? (11% chance)
- Will an open-source text-to-music LLM model be able to create an entire album as convincing as a human by end of 2025? (26% chance)
- Will we have a popular LLM fine-tuned on people's personal texts by June 1, 2026? (50% chance)
- Will LLM based systems have debugging ability comparable to a human by 2030? (68% chance)
- By the end of 2035, will real working lie detection exist? (48% chance)
- Will there be any simple text-based task that most humans can solve, but top LLMs can't? By the end of 2026 (64% chance)
- Will any widely used LLM be pre-trained with abstract synthetic data before 2030? (72% chance)
- By 2028 will we be able to identify distinct submodules/algorithms within LLMs? (76% chance)
- Will there be a very reliable way of reading human thoughts by the end of 2025? 🧠🕵️ (9% chance)