Further detail on the definition of “AI X-Safety” will be added.
The key question here is whether I am primarily on an “AI X-Safety quest.”
If I’m prioritizing a career or projects in areas like AI alignment research, threat modeling, control, governance, etc., the market probably resolves YES.
However, if I’m prioritizing a career or projects in areas that might be motivated by AI X-Safety, but for which the link is indirect, it probably wouldn’t count. Examples of indirect links: working on a prediction market platform in the hope that improving human rationality would decrease x-risk; becoming a therapist for AI researchers; etc.
There is some ambiguity about whether certain “indirect paths” count as “AI X-Safety.” My guess is that this won’t be an issue for this market in practice.