Will I significantly deconvert Eliezer Yudkowsky from Bayesianism by the end of 2025?
47 · Ṁ10k · 2025 · 3% chance

I recently had a Twitter thread where I briefly pointed out the problems with Bayesianism to Eliezer Yudkowsky. Obviously, as a major advocate of Bayesianism, he disagreed in the thread itself. But I think my position has a lot of merit, so I suspect he will come to be convinced by it as he thinks it through.

As a general rule, this market resolves YES if any of the following have occurred by the end of 2025:

  • Eliezer writes a blog post advocating positions similar to LDSL/backbone conjecture

  • Eliezer withdraws his position in the Twitter thread

  • Eliezer endorses vitalism, mesmerism, paganism, or similar (I am ambivalent about whether Platonism or the great chain of being counts here; I'm inclined to say 'yes'), or withdraws on Bayesianism (unsure whether withdrawing on reductionism should count; I'm inclined to say 'no')

That said, if e.g. Eliezer objects to resolving this YES, or the general consensus disagrees with the resolution at resolution time, I will take that into account.

sold Ṁ1,254 NO

Still not gonna happen but I'm selling to unlock my mana

"But I think my position has a lot of merit, so I suspect he will get convinced by it as he thinks it through."

I feel like this is a pretty big compliment to pay someone, but that it would usually get overshadowed by perceived overconfidence/arrogance.

How does it resolve if Yudkowsky converts to infra-Bayesianism?

@VanessaKosoy I would say it then resolves NO, since infra-Bayesianism is even more uncomputable and Gnostic than Bayesianism, which is the opposite direction from my proposal. Though if he simultaneously advocates for LDSL+backbone and infra-Bayesianism, then the former counts to resolve it as YES.

@tailcalled "infra-Bayesianism is even more uncomputable and Gnostic than Bayesianism" reads as a non sequitur to me: whether Bayesianism/infra-Bayesianism is computable or not depends on the prior. In some sense infra-Bayesianism is "more computable" since it allows for priors that are simultaneously tractable and compatible with reality. No idea what you mean by "gnostic". But it's your market, so I can just take this as given...

@VanessaKosoy Gnosticism is the belief that the world was created by a malevolent deity (good fit for Murphy) and that inner knowledge provides the way to escape them (good fit for maximin on the object level and agent foundations on the meta level). I'd say The Goddess of Everything Else is a good illustration of rationalist tendencies towards Gnosticism in general: https://slatestarcodex.com/2015/08/17/the-goddess-of-everything-else-2/ Though Infrabayesianism treats Gnosticism less like a learned fact and more like an ontological necessity, thereby taking it a step further.

@VanessaKosoy I guess it's true in general that there's no ranking of computability, but generally AI uses very generic priors, and a generic prior that works like Infrabayesianism seems less computable than a generic prior that works like Bayesianism. I guess you could argue that this is a silly perspective to take because the point of Infrabayesianism is to allow priors that are more tractable at the cost of being less realistic.

@tailcalled Dunno where you're getting "generic prior that works like infrabayesianism seems less computable". One of my goals is proving the existence of rich tractable priors, and although overall it's still very much an open problem, it definitely hasn't been proved impossible. (And "infra" doesn't seem to make the problem much more difficult, as far as we know.)

Also, I don't endorse any connection to Gnosticism; IB has nothing to do with malevolent deities.

Bayesian probability distributions can be seen as a free algebra which extends a structure to have weighted choice. As I understand it, infrabayesianism is similar but also extends it to have worst-case choice. Weighted choice is computationally very amenable to Monte-Carlo methods, whereas worst-case choice is not. That's basically where I see the computational difficulty. Though admittedly this skips over the Bayesian update, which is pretty central and pretty intractable. I don't understand the Infrabayesian update well enough to be sure whether the tractability reverses once you bring updates into the game, but I think the answer is no?
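A minimal Python sketch of that contrast (the functions and numbers are illustrative, not from the thread or from any infra-Bayesianism formalism): estimating an expectation under a single distribution is a straightforward Monte-Carlo average, while a worst-case value over a set of candidate distributions adds an inner minimization over that set, which sampling alone does not solve.

```python
import random

def monte_carlo_expectation(sample, f, n=10_000):
    """Weighted choice: estimate E[f(X)] under one distribution by averaging samples."""
    return sum(f(sample()) for _ in range(n)) / n

def worst_case_expectation(samplers, f, n=10_000):
    """Worst-case choice: naive lower bound over a finite set of candidate
    distributions. With an infinite or structured set, this inner minimization
    becomes an optimization problem, which is where the extra difficulty shows up."""
    return min(monte_carlo_expectation(s, f, n) for s in samplers)

# Toy example: a +/-1 payoff under two candidate coin biases (hypothetical numbers).
f = lambda x: 1.0 if x else -1.0
coin = lambda p: (lambda: random.random() < p)
print(monte_carlo_expectation(coin(0.6), f))               # single prior
print(worst_case_expectation([coin(0.6), coin(0.4)], f))   # min over a small credal set
```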

@tailcalled To give an example of a setting where learning (and hence asymptotic Bayes-optimality) is tractable, consider small infra-MDPs: https://arxiv.org/abs/2010.15020. This is fully analogous to learning algorithms for small unambiguous (i.e. ordinary) MDP. The update is not very relevant since feasible learning algorithms usually don't explicitly represent the posterior.
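For concreteness, a hedged sketch in the spirit of planning in a small infra-MDP (this is a generic robust-MDP-style backup over a finite set of candidate transition kernels with made-up numbers, not the algorithm from the linked paper): the only change relative to ordinary value iteration is a min over the kernel set in the backup, and no posterior is represented explicitly.

```python
import numpy as np

def robust_value_iteration(P_set, R, gamma=0.9, iters=200):
    """Value iteration for a tiny infra-MDP-style model: the transition kernel
    is only known to lie in a finite set P_set, and we back up against the
    worst case (min over the set). Ordinary value iteration is the special
    case where P_set has a single element.

    P_set: list of arrays of shape (A, S, S); R: array of shape (S, A)."""
    S, A = R.shape
    V = np.zeros(S)
    for _ in range(iters):
        # Q[s, a] = R[s, a] + gamma * (worst-case expected next value over kernels)
        Q = R + gamma * np.min(
            np.stack([P @ V for P in P_set]), axis=0
        ).T  # each P @ V has shape (A, S); transpose to (S, A)
        V = Q.max(axis=1)
    return V

# Two states, two actions, two candidate kernels (hypothetical numbers).
R = np.array([[0.0, 1.0], [1.0, 0.0]])
P1 = np.array([[[0.9, 0.1], [0.2, 0.8]], [[0.5, 0.5], [0.6, 0.4]]])
P2 = np.array([[[0.7, 0.3], [0.4, 0.6]], [[0.3, 0.7], [0.8, 0.2]]])
print(robust_value_iteration([P1, P2], R))
```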

© Manifold Markets, Inc.Terms + Mana-only TermsPrivacyRules