Will the 'Logical Induction' paper be built on by 3+ theory or ML papers before 2025?
Jan 1 · 10% chance

In 'Logical Induction', Garrabrant et al. "present a computable algorithm that assigns probabilities to every logical statement in a given formal language, and refines those probabilities over time." This algorithm has a number of nice properties, such as handling Gödelian uncertainty and learning statistical patterns in logical formulas. However, the applicability of this work remains unclear: to my knowledge, only two papers have built on it significantly, 'Rational inductive agents' and 'Forecasting using incomplete models'.

Will 3 or more technical papers that build on Garrabrant et al.'s work appear on arXiv after 2022 and before 2025? This count will include empirical work that uses a more tractable approximation of Garrabrant et al.'s method.
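For readers unfamiliar with the paper, here is a deliberately crude sketch of the general flavor: assign probabilities to logical sentences and refine them as proofs and refutations arrive. This is not the paper's construction (which builds a market over all polynomial-time traders and computes arbitrage-free prices); the `ToyInductor` class, the string encoding of sentences, and the heuristic "traders" below are all my own illustrative assumptions.

```python
# Toy sketch (not Garrabrant et al.'s construction): maintain probabilities for
# logical sentences and refine them as proofs/refutations arrive, by reweighting
# a small pool of heuristic "traders" according to their log score.
import math

class ToyInductor:
    def __init__(self, traders):
        # traders: list of functions sentence -> probability in (0, 1)
        self.traders = traders
        self.log_weights = [0.0] * len(traders)

    def price(self, sentence):
        """Weighted average of trader estimates: the current 'probability'."""
        ws = [math.exp(lw) for lw in self.log_weights]
        total = sum(ws)
        return sum(w * t(sentence) for w, t in zip(ws, self.traders)) / total

    def observe(self, sentence, truth):
        """A proof (truth=True) or refutation (truth=False) settles a sentence;
        reward each trader by its log score on that sentence."""
        for i, t in enumerate(self.traders):
            p = min(max(t(sentence), 1e-9), 1 - 1e-9)
            self.log_weights[i] += math.log(p if truth else 1 - p)

# Example: sentences are strings like "prime(13)"; traders are crude heuristics.
is_prime_claim = lambda s: s.startswith("prime(")
traders = [
    lambda s: 0.5,                                # maximally uncertain
    lambda s: 0.9 if is_prime_claim(s) else 0.5,  # biased toward primality claims
    lambda s: 0.1 if is_prime_claim(s) else 0.5,  # biased against them
]
ind = ToyInductor(traders)
print(ind.price("prime(13)"))   # prior blend of the three heuristics (0.5)
ind.observe("prime(13)", True)  # a proof settles one sentence
ind.observe("prime(17)", True)  # and another
print(ind.price("prime(19)"))   # weights now favor the pro-primality heuristic
```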


I haven't looked in detail, but it seems plausible that this work qualifies: https://link.springer.com/article/10.1007/s10701-024-00755-9#Abs1

It would be fun to create a prediction market with real money incentivising people to write these papers.

what would it look like to build a neural network system that used insights from logical inductors to make ai safer? would it integrate with any of the key points of my "ai safety subcomponents as I see it today" list? https://manifold.markets/L/will-this-overview-of-safety-resear

predicted NO

@L Would probably relate to your bullet "it seems like most of safety boils down to a trustable uncertainty representation. the thing I want to formally verify is that the net knows when to stop and ask the detected agent whether an outcome it expects is appropriate."
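A minimal sketch of what that bullet might look like in code, assuming a classifier whose logits we can inspect: defer to a human (or the "detected agent") whenever predictive entropy crosses a threshold. The `act_or_ask` function, the threshold value, and the `ask` hook are hypothetical placeholders, not anything from the paper or the linked list.

```python
# Toy illustration: act only when the predictive distribution is confident,
# otherwise stop and ask. All names and the threshold are made up for this sketch.
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def act_or_ask(logits, threshold=0.5, ask=lambda: "deferred to overseer"):
    probs = softmax(logits)
    if entropy(probs) > threshold:
        return ask()                    # too uncertain: stop and ask
    return probs.index(max(probs))      # confident: act on the argmax

print(act_or_ask([3.0, 0.1, 0.1]))  # low entropy -> returns class index 0
print(act_or_ask([1.0, 0.9, 1.1]))  # near-uniform -> defers
```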

betting no because I think the people who are about to build it haven't read it.

predicted NO

@L (and thus won't quite get the same internal dynamics, probably.)
