Did OpenAI intentionally handicap GPT4's image modality's ability to identify people?
Basic
7
Ṁ187Dec 31
83%
chance
1D
1W
1M
ALL
Various people on twitter report that GPT-4, despite being otherwise skilled at image tasks, often can't recognize people in images. Given OpenAI's past attempts to sand the sharp edges off of their models, it's plausible they trained it to not identify individual people to avoid privacy issues.
Resolves to YES/NO when confirmed either way. N/A if never confirmed
This question is managed and resolved by Manifold.
Get
1,000
and3.00
Related questions
Related questions
Will OpenAI release a tool to identify images generated by DALL.E? (in 2024)
39% chance
Will OpenAI release a watermark-based text detection (anti-cheating) tool for ChatGPT before 2025?
34% chance
Will adversarially modified images produced by ANN be confirmed to weakly work on humans?
47% chance
Has openAI intentionally made chatGPT lazy to save inference costs?
21% chance
Will OpenAI release an image model better than DALL-E 3 in 2024?
67% chance
Will OpenAI's GPT-4 API support image inputs in 2024?
97% chance
Will any major social platform integrate OpenAI’s metadata to visibly indicate AI-generated images in 2024?
81% chance
Will OpenAI's autonomous agent be based on GPT-4?
19% chance
Will OpenAI change their naming scheme (GPT-X) with the successor to GPT-4? (Ṁ200 subsidy!)
14% chance
Will OpenAI suggest GPT-4 is AGI?
4% chance