
Rohit Krishnan wrote a post proposing the "Strange Equation", modelled after the "Drake Equation", which he uses to structure probability discussions about the advent of "Scary AI":
https://www.strangeloopcanon.com/p/agi-strange-equation
(People don't usually name things like this after themselves, but traditionally others tend to do it for them anyway, so I'm personally calling it the "Krishnan Equation".)
Apparently Jon Evans also created a version of this equation, with different parameters, a few months earlier:
https://aiascendant.substack.com/p/extropias-children-chapter-7
So this market is about whether the Krishnan Equation or the Evans Equation will become the dominant "Drake Equation" for AI X-risk.
This might resolve on the basis of subjective evidence of whichever seems most popular, but I'll reach for whatever objective (or semi-objective) measures I can find: Google Trends if either gets popular enough, citations, keyword search hits on Google and social media, etc.
These posts are terrible because they split the problem into a bunch of conjuncts when the problem clearly doesn't require every conjunct. They also seem to me to underestimate the probabilities of those conjuncts. See the discussion of the multi-stage fallacy, and Lifland on framing things in terms of goal states vs. catastrophe. I think Carlsmith's report on power-seeking AI gives a better split than these posts, and it has received significant attention in EA, so maybe it will count as "another equation", but see the linked articles.
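To make the multi-stage worry concrete, here's a minimal sketch (my own illustration, with placeholder numbers, not either author's actual parameters or estimates): a Drake-style decomposition multiplies stage probabilities, which treats every stage as a hard requirement, so even mildly conservative per-stage estimates compound into a much smaller bottom line.

```python
# Illustrative only: how conjunctive "Drake-style" decompositions behave.
# The stage probabilities below are hypothetical placeholders.
from math import prod

stages = [0.8, 0.7, 0.6, 0.5, 0.4]

# Treating every stage as strictly required and independent:
conjunctive = prod(stages)

# Shade each estimate down by 0.1 to model mild per-stage conservatism;
# the shifts compound multiplicatively across stages.
conservative = prod(p - 0.1 for p in stages)

print(f"as stated:    {conjunctive:.4f}")
print(f"shaded down:  {conservative:.4f}")
```

The point of the sketch: small, individually defensible downward nudges in each conjunct multiply together, which is one reason conjunctive framings tend to produce low headline numbers compared to framings that don't require every stage.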