An AI company will knowingly release a text-to-image or text-to-video model that exhibits bias [in 2023]
Resolved YES (Dec 30)

This is one of Vox's Future Perfect predictions for 2023; they give it a 90% chance:

AI that lets you turn a few words into an image or a video made stunning advances in 2022, from OpenAI’s DALL-E 2 and Stability AI’s Stable Diffusion to Meta’s Make-A-Video and Google’s Imagen Video. They were hailed for the delightful art they can make and criticized for exhibiting racial and gender bias.

They won’t be the last. I feel confident that this pattern will repeat itself in 2023, simply because there’s so much to incentivize more of the same and so little to disincentivize it. As the team at Anthropic, an AI safety and research company, put it in a paper, “The economic incentives to build such models, and the prestige incentives to announce them, are quite strong.” And there’s a lack of regulation compelling AI companies to adopt better practices.

In assessing whether this prediction comes true, I will judge an AI company to have “knowingly” released a biased model if the company acknowledges in a model card or similar that the product exhibits bias, or if the company builds the model using a dataset known to be rife with bias. And I’ll judge whether the product “exhibits bias” based on the assessments of experts or journalists who gain access to it.
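The resolution rule above is effectively a conjunction of two tests. As a rough sketch (the function and argument names here are hypothetical, not from the source), it could be encoded as:

```python
def resolves_yes(acknowledged_in_model_card: bool,
                 built_on_known_biased_dataset: bool,
                 experts_or_journalists_found_bias: bool) -> bool:
    """Mirror the market's stated criteria: a release is 'knowing' if the
    company acknowledges bias in a model card (or similar) or builds on a
    dataset known to be rife with bias; the model 'exhibits bias' per the
    assessments of experts or journalists with access to it."""
    knowingly = acknowledged_in_model_card or built_on_known_biased_dataset
    exhibits_bias = experts_or_journalists_found_bias
    return knowingly and exhibits_bias
```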

(Vox)

Resolves according to Vox Future Perfect's judgment at the end of the year.

What should a prompt like "a criminal" return? There is no possible output in which they wouldn't be able to find bias.

@StrayClimb It resolves based on Vox Future Perfect's judgment at year's end; a lot of weight seems to be placed on the company's own acknowledgments as well as on "experts".
