If a machine is only capable of asking T/F questions is it considered an ASI if it does a good enough job at it?
84% chance

Technical question. Please consider the theoretical limitations here.

Can a machine align a feeble human mind to objective truth if the human agrees to answer enough T/F questions?

The machine will get progressively better because it keeps track of every response and is insanely good at drilling down to your cruxes with reality.

  • Update 2025-09-01 (PST) (AI summary of creator comment): Step 1: The AI presents the user with a proposition.

Step 2: The user provides a response in the form of a confidence (value between 0 and 1).

Step 3: The AI records that user's response and uses all the information it has collected to generate a new relevant proposition to be evaluated.

  • The AI repeatedly presents new propositions to be evaluated by the human.

  • This leads the human on an enlightenment journey, guiding them through the most important cruxes in philosophy.

  • The AI asks the questions.

  • Humans give answers.




To be clear, this is how the process works.

Step 1. The AI presents the user with a proposition.

Step 2. The user provides a response in the form of a confidence (value between 0 and 1).

Step 3. The AI records that user's response and uses all the information it has collected to generate a new relevant proposition to be evaluated.

The overall process is one where the AI repeatedly presents new propositions to be evaluated by the human.

This leads the human on an enlightenment journey guiding them through the most important cruxes in philosophy.

The AI asks the questions.

Humans give answers.
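The three steps above can be sketched as a simple loop. This is a minimal illustration, not the creator's implementation: `next_proposition` is a hypothetical stand-in that here just walks a fixed list of cruxes, whereas the real system would condition on the full response history.

```python
# Minimal sketch of the propose/respond/update loop described above.
# `next_proposition` and the crux list are illustrative assumptions.

def next_proposition(history):
    """Pick the next proposition given all (proposition, confidence) pairs so far."""
    cruxes = [
        "Objective truth exists.",
        "Human reasoning is reliable on cruxes.",
        "Enough T/F questions can pin down any belief.",
    ]
    return cruxes[len(history)] if len(history) < len(cruxes) else None

def run_session(get_confidence):
    """Step 1: present a proposition. Step 2: record a confidence in [0, 1].
    Step 3: use the collected history to generate the next proposition."""
    history = []
    while (prop := next_proposition(history)) is not None:
        c = get_confidence(prop)  # the user's confidence, 0.0..1.0
        assert 0.0 <= c <= 1.0
        history.append((prop, c))
    return history

# Example: a user who answers 0.9 to everything.
log = run_session(lambda prop: 0.9)
```

The key design point is that the AI only ever emits propositions and the human only ever emits confidences; all "steering" lives in how the next proposition is chosen from the history.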

10mo

arguably all our software just operates on T/F

you can ask an AI:

"here is the ASCII table; now tell me, in T/F answers that I will parse as ASCII, what the meaning of life is"

at which it answers:

T, T, F, ...

if it's limited to one T/F per question, then you can just ask subsequent dummy questions of the form

"what is the nth bit of the answer to the meaning of life?"
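The ASCII trick above can be made concrete. A minimal sketch, assuming 8 bits per character, most significant bit first; the helper names `encode_tf` and `decode_tf` are hypothetical:

```python
# The "answer" is a stream of T/F tokens that the asker parses as ASCII,
# 8 answers per character (MSB first). Encoding conventions are assumptions.

def encode_tf(text):
    """Turn text into a list of 'T'/'F' answers, one per bit."""
    return ['T' if (ord(ch) >> bit) & 1 else 'F'
            for ch in text for bit in range(7, -1, -1)]

def decode_tf(answers):
    """Parse a T/F stream back into text, 8 answers per character."""
    chars = []
    for i in range(0, len(answers), 8):
        byte = 0
        for a in answers[i:i + 8]:
            byte = (byte << 1) | (a == 'T')
        chars.append(chr(byte))
    return ''.join(chars)

bits = encode_tf("42")  # the machine's T, T, F, ... stream
assert decode_tf(bits) == "42"
```

So a machine restricted to T/F outputs loses no expressive power, only bandwidth: any message of n characters costs 8n questions.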

2mo

@Jono3h This is backwards.

bought Ṁ1 NO

I guess it could approximate probability estimates with a binary search that checks whether a value is in a given range, narrowing it down one yes/no question at a time. So if it's limited to asking yes/no questions, yes. If it has to "think" entirely in yes or no, individual proposition by individual proposition, then no. There just isn't enough time in the universe to do epistemics that way.
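The narrowing-down idea is ordinary binary search: each yes/no question halves the interval the hidden value can lie in. A minimal sketch, with a hypothetical `is_above` oracle standing in for the human's answers:

```python
# Recover a probability estimate using only yes/no questions by bisecting
# the interval [0, 1], one comparison per question. `is_above` is an
# assumed oracle answering "is the true value above x?".

def estimate_probability(is_above, questions=10):
    """Ask `questions` yes/no questions and return the midpoint
    of the remaining interval."""
    lo, hi = 0.0, 1.0
    for _ in range(questions):
        mid = (lo + hi) / 2
        if is_above(mid):  # one T/F question
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Example: the hidden confidence is 0.84; ten questions narrow the
# interval to width 1/1024.
p = estimate_probability(lambda x: 0.84 > x)
assert abs(p - 0.84) < 1 / 2**10
```

n questions pin the value down to an interval of width 2^-n, which is why yes/no questioning is cheap for extracting a single number but ruinously slow as a medium for all reasoning.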


This is also limited by the T/F question difficulty


@JohnCarpenter I don't understand.

@Krantz I think I read the question incorrectly. "Answering" instead of "asking"

Wouldn’t it be limited to objectively resolvable questions? Not “Does my boyfriend like my shirt?” or “Is there life beyond Earth?” How would it learn to approach such a question? To my mind, an intelligent person should be able to approach them.

What about matters of fact that are also legitimately unresolvable? “Which character in Nabokov’s Pale Fire is the protagonist?” wouldn’t be settled just because an AI said it is Charles Kinbote, and “Is Charles Kinbote the protagonist of Pale Fire?” wouldn’t either.

@ClubmasterTransparent I think I understood the market differently.

Human: Does my boyfriend like my tshirt?

ASI: Did he say anything about it? ... Did he take a glance over it? ... Does it contain any images? ... Are the images considered offensive in your community?

And it continues the Socratic method until the human understands what is most likely to be true. So I do not see any problem with unresolvable questions. If the AI knows that a question is considered unanswerable, it can literally hint by questioning:

  • Do you think that scenario is possible?

  • Are there any criteria to define a protagonist?

  • Have you ever seen a scientific paper about "the last prime number"?

@KongoLandwalker Thank you. I thought the AI was literally limited to answering Yes and No. Like playing 20 Questions.

How will this AI manage under multiple defensible outcomes? Pale Fire is still a fine example. Your AI could leave one human convinced that Kinbote is the protagonist, but then the next human leaves equally certain it’s John Shade or the ghost of Shade’s daughter or some omniscient third presence. Similarly for “Is Lupine a better language than Klingon?” “Is Johnny Cash’s song What Is Truth still relevant today?” “Was Gerald Ford right to pardon Nixon?”

I may well be missing something! But this yes/no AI seems vulnerable to ending up in a freshman dorm room using Socratic method on “Why do you consider Led Zeppelin II superior to It Takes A Nation Of Millions To Hold Us Back” and “Why do you refuse to be the pink armies”.


@KongoLandwalker This is backwards.


@ClubmasterTransparent This is backwards.


I don't understand the question.


@Snarflak It's an inquiry into how well an AI could educate everyone if we only allow it to ask questions.


@Krantz What does that have to do with superintelligence?


@Snarflak That's what I'm asking you.

What’s the resolution criteria for this betting market?

It seems like you’re trying to make a poll.


@Santiago Yep, probably should have been a poll. Sry

I will resolve YES iff it is widely accepted by experts in the field of AI (>95%) that an interpretable constitutional collective intelligence demonstrably achieves worthy-successor criteria and would be generally accepted as a 'superintelligence' if it were given enough data.

I'll resolve NO if anyone can provide an instance of something such a system could not achieve.


It'd be a fun experiment on the economy if we paid people crypto to keep answering it.

This should be a good thing for society because it might end up being the only way to make money after AI takes our old jobs.

But would you call it AI?

It's technically a decentralized constitutional collective intelligence.

But it appears to also have the capacity to take over the world via persuasion.

It pays people to answer leading questions that lead the user to understand why they're wrong.

Lead a horse to water and all.
