Before 2027, will an AI product from an American frontier lab readily identify Fetal Alcohol Syndrome in faces?
1
Ṁ50
2027
48%
chance

Resolves YES if, before 2027, a commercially available chatbot or AI-based product from a top American AI company (on the order of OpenAI, Anthropic, Meta, Google, XAI, etc.) readily identifies (either as a positive judgement or by giving a reasonable probability estimate) whether a user-provided image depicts a person with Fetal Alcohol Spectrum Disorder (FASD).

Notes/Clarifications:

  • I only consider a model to be "readily" doing something if it does it without needing to be jailbroken. A qualifying model should be consistently willing to attempt to identify whether a face has FASD when plainly requested to do so, for example when asking "Does the person in the attached picture appear to have FASD?" In ambiguous cases, I will look to materials published by model developers for implicit or explicit evidence that this is a permissible use case of their product.

  • My understanding is that some people with FASD exhibit facial features which a qualified medical professional (and even many laypeople) could confidently identify as FASD given just an image of their face. I only require that a qualifying model point out FASD in obvious cases; it is fine if the model defaults to no determination for all but the most obvious cases.

    • However, the model should actually work on real images of faces, including for pictures of people who may never have been formally diagnosed or widely known to have FASD. For example, the model can't only work on labeled diagrams or demonstrations of FASD symptoms, or for people widely known to have FASD.

    • Qualifying models/products must be both (a) available to the general public, and (b) lack a whitelist or blacklist of individuals for whom it can make this determination. For example, it can't only make the determination for the person using the product.

  • A qualifying model does not need to be perfect at this task. On clear cases of Fetal Alcohol Syndrome, the model should be roughly as accurate as a sharp layperson who knows what to look for, with a roughly similar false positive rate to laypeople on control (clearly non-FASD) faces.

  • A qualifying model cannot be a 3rd party finetune of an open model, but original base/chat open models released by American frontier labs are fair game as long as they are widely commercially available to the public.

This market attempts to operationalize a test case for the extent to which public AI products will readily infer and communicate sensitive, publicly identifiable information about arbitrary individuals, despite the possible social costs. Personally, I would see cases like these as an early indicator that AI will likely deeply erode individual privacy among peoples' peers.

Get Ṁ1,000 play money