If a machine is only capable of asking T/F questions, is it considered an ASI if it does a good enough job of it?
84% chance

Technical question. Please consider the theoretical limitations here.

Can a machine align a feeble human mind to objective truth if the human agrees to answer enough T/F questions?

The machine will get progressively better because it keeps track of every response and is insanely good at drilling down to your cruxes with reality.
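A minimal sketch of what that drilling-down could look like, assuming beliefs and questions can be represented as predicates (every name here is hypothetical, not part of any real system):

```python
# Hypothetical sketch: track every T/F response and always ask the
# question that splits the remaining candidate beliefs most evenly,
# so each answer roughly halves the space of what you might believe.

def best_question(candidates, questions):
    """Pick the question whose True/False split over the candidates is closest to 50/50."""
    def imbalance(q):
        trues = sum(1 for c in candidates if q(c))
        return abs(2 * trues - len(candidates))
    return min(questions, key=imbalance)

def drill_down(candidates, questions, answer):
    """answer(q) is the human's T/F reply; returns surviving candidates plus the full history."""
    history = []                                   # the machine keeps every response
    while len(candidates) > 1 and questions:
        q = best_question(candidates, questions)
        questions = [x for x in questions if x is not q]
        reply = answer(q)
        history.append((q, reply))
        candidates = [c for c in candidates if q(c) == reply]
    return candidates, history
```

In the best case, n well-chosen questions distinguish about 2^n candidate beliefs, which is the information-theoretic limit for binary answers.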


arguably all our software just operates on T/F

you can ask an AI:

here is the ASCII table; now tell me, in T/F answers that I will parse as ASCII, what the meaning of life is

To which it answers:

T, T, F, ...

if it's only one T/F answer per question, then you can just ask subsequent dummy questions of the form

what is the nth bit of the answer to the meaning of life?
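To make the trick concrete, here's a toy decoder for that scheme, assuming one T/F answer per bit and 8 bits per ASCII character (the answer format is my assumption, not anything specified above):

```python
# Toy decoder: treat each T/F answer as one bit and read the stream
# 8 bits at a time as ASCII character codes.

def decode_tf(answers: str) -> str:
    bits = ''.join('1' if a.strip() == 'T' else '0' for a in answers.split(','))
    whole = len(bits) - len(bits) % 8              # ignore any trailing partial byte
    return ''.join(chr(int(bits[i:i + 8], 2)) for i in range(0, whole, 8))

# "Hi" spelled out as T/F answers: 01001000 01101001
print(decode_tf("F,T,F,F,T,F,F,F, F,T,T,F,T,F,F,T"))  # -> Hi
```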

bought Ṁ1 NO

I guess it could approximate probability estimates with a binary search that checks whether a value lies in a given range, narrowing it down yes or no. So if it's limited to asking yes/no questions, yes. If it has to "think" entirely in yes or no, individual proposition by individual proposition, then no. There just isn't enough time in the universe to do epistemics that way.
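Roughly what that narrowing-down looks like as code (a sketch; `is_above` is a hypothetical stand-in for the machine asking "is the true value above x?" as a yes/no question):

```python
# Bisection sketch: approximate a probability using only yes/no
# questions of the form "is the true value above x?", halving the
# remaining interval with each answer.

def estimate_probability(is_above, iterations=10):
    lo, hi = 0.0, 1.0
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if is_above(mid):          # one yes/no question per iteration
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2           # interval width is now 2**-iterations

# Ten questions pin the estimate down to about 0.001:
print(estimate_probability(lambda x: 0.84 > x))
```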

This is also limited by the T/F question difficulty

Wouldn’t it be limited to objectively resolvable questions? Not “Does my boyfriend like my shirt?” or “Is there life beyond Earth?” How would it learn to approach such a question? To my mind an intelligent person should be able to approach them.

What about matters of fact that are also legitimately unresolvable? “Which character in Nabokov’s Pale Fire is the protagonist?” wouldn’t be settled just because an AI said it is Charles Kinbote, and “Is Charles Kinbote the protagonist of Pale Fire” wouldn’t either.

@ClubmasterTransparent I think I understood the market differently.

Human: Does my boyfriend like my t-shirt?

ASI: Did he say anything about it? ... Did he take a glance over it? ... Does it contain any images? ... Are the images considered offensive in your community?

And it continues the Socratic method until the human understands what is most likely to be true. So I do not see any problem with unresolvable questions. If the AI knows that a question is considered unanswerable, it can hint at that through its questions:

- Do you think that scenario is possible?
- Are there any criteria to define a protagonist?
- Have you ever seen a scientific paper about "the last prime number"?

@KongoLandwalker Thank you. I thought the AI was literally limited to answering Yes and No. Like playing 20 Questions.

How will this AI manage questions with multiple defensible outcomes? Pale Fire is still a fine example. Your AI could leave one human convinced that Kinbote is the protagonist, but then the next human leaves equally certain it’s John Shade or the ghost of Shade’s daughter or some omniscient third presence. Similarly for “Is Lupine a better language than Klingon?” “Is Johnny Cash’s song What Is Truth still relevant today?” “Was Gerald Ford right to pardon Nixon?”

I may well be missing something! But this yes/no AI seems vulnerable to ending up in a freshman dorm room using the Socratic method on “Why do you consider Led Zeppelin II superior to It Takes A Nation Of Millions To Hold Us Back” and “Why do you refuse to be the pink armies”.

I don't understand the question.

@Snarflak It's an inquiry into how well an AI could educate everyone if we only allow it to ask questions.

@Krantz What does that have to do with superintelligence?

What’s the resolution criteria for this betting market?

It seems like you’re trying to make a poll.

@Santiago Yep, probably should have been a poll. Sry

I will resolve as YES iff it is widely accepted by experts in the field of AI (>95%) that an interpretable constitutional collective intelligence demonstrably achieves worthy-successor criteria and would be generally accepted as a 'superintelligence' if it were given enough data.

I'll resolve it as NO if anyone can provide an instance of something such a system could not achieve.

It'd be a fun experiment on the economy if we paid people crypto to keep answering it.

This should be a good thing for society because it might end up being the only way to make money after AI takes our old jobs.

But would you call it AI?

It's technically a decentralized constitutional collective intelligence.

But it appears to also have the capacity to take over the world via persuasion.

It pays people to answer leading questions that guide the user to understand why they're wrong.

Lead a horse to water and all.