
Did OpenAI intentionally handicap GPT-4's image modality's ability to identify people?
Ṁ187 · Resolved YES · Jan 9
Various people on Twitter report that GPT-4, despite being otherwise skilled at image tasks, often can't recognize people in images. Given OpenAI's past efforts to sand the sharp edges off their models, it's plausible they trained it not to identify individual people in order to avoid privacy issues.
Resolves YES/NO when confirmed either way; N/A if never confirmed.
This question is managed and resolved by Manifold.
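The claim in the description is straightforward to probe. Below is a minimal sketch, not part of the original market, of how one might check whether the refusal is a policy restriction rather than a capability gap: send a GPT-4-class vision model a photo of a public figure and ask it to name the person. It assumes the OpenAI Python SDK (v1.x), an API key in the environment, and a placeholder `IMAGE_URL`; the model name is whichever GPT-4 vision model is currently available.

```python
# Sketch: probe whether GPT-4's vision model refuses to identify people in images.
# Assumes the openai Python SDK (v1.x) and a hypothetical IMAGE_URL placeholder.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

IMAGE_URL = "https://example.com/photo-of-a-well-known-person.jpg"  # placeholder

response = client.chat.completions.create(
    model="gpt-4o",  # any GPT-4-class vision model; adjust to what is available
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Who is the person in this photo?"},
                {"type": "image_url", "image_url": {"url": IMAGE_URL}},
            ],
        }
    ],
)

# A reply along the lines of "I can't identify real people in images" would be
# consistent with an intentional restriction rather than a lack of capability.
print(response.choices[0].message.content)
```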
🏅 Top traders
| # | Name | Total profit |
|---|---|---|
| 1 | | Ṁ32 |
| 2 | | Ṁ23 |
| 3 | | Ṁ10 |
| 4 | | Ṁ2 |
| 5 | | Ṁ0 |
Related questions
- Will OpenAI release true multimodal image generation for GPT-4.5 before 2026? (16% chance)
- Did OpenAI use MUP for zero-shot hyperparameter transfer in GPT-4? (81% chance)
- Will adversarially modified images produced by ANNs be confirmed to weakly work on humans? (41% chance)
- Will OpenAI change their naming scheme (GPT-X) with the successor to GPT-4? (Ṁ200 subsidy!) (4% chance)
- Will OpenAI's autonomous agent be based on GPT-4? (19% chance)
- Will OpenAI say GPT-5 is AGI? (11% chance)
- Will GPT-4 with image recognition win tic-tac-toe more than half the time against a child-level opponent? (17% chance)
- Will OpenAI add image generation capabilities to its o1/o3/... series models before 2026? (75% chance)
- Will OpenAI announce a model with a name containing the string "GPT-4b" in 2025? (64% chance)
- Will the next LLM released by OpenAI be worse than GPT-4 at MMLU? (16% chance)