Will Conjecture produce work that they believe constitutes meaningful progress towards alignment by the end of 2023?
Resolved NO (Jan 18)

Conjecture recently released their 8-month retrospective, in which they shared their belief that they had yet to make meaningful progress on the alignment problem.

I will resolve this market to "Yes" if any of Conjecture's three founders (Connor Leahy, Sid Black, or Gabriel Alfour), or any other person whom I deem as plausibly able to speak authoritatively on Conjecture's behalf, publicly state that they believe work carried out by Conjecture constitutes meaningful progress towards solving the alignment problem. If no such statement is made by Jan 1st 2024, I will resolve the market as "No".


🏅 Top traders

Rank | Total profit
1 | Ṁ335
2 | Ṁ254
3 | Ṁ216
4 | Ṁ198
5 | Ṁ162

⚠ Unreceptive to pings; AFK Creator

📢 Resolved to NO: Could not find such statements.
If anyone has proof of such statements, please post it; this can be reversed to give the creator time to judge it.

@jonny Please resolve, thanks!

@jonny Can you please resolve this market? Thank you.

sold Ṁ424 of NO

Exiting my position based on the comment from OP that self-assessed policy progress could potentially count, rather than based on any update about their alignment work.

They seem to be making progress on the governance front; do you expect that to count?

predicted NO

@RobertCousineau Hmm yeah this is a good question. I think it’ll come down to what exactly they end up saying about their own work - if they say that they believe the work they’ve done on governance has improved chances of alignment then I’ll count that.

I think the spirit of the question is something like “is Conjecture a useful org to have in the alignment space, by its own lights”. If they seem to have pivoted from “the alignment space” per se, then I’ll likely resolve no.

Sorry this is a bit vague - when it comes to resolution time I’ll probably be open to arguments on either side if it seems particularly ambiguous.

predicted NO

@jonny As top NO holder, I'll give my (obviously biased, but imo straightforward) thoughts on how I interpreted the question. It seems pretty clear that the question wording and description are referring to the alignment problem in a technical sense.

Conjecture describes itself on its website as "A team of researchers dedicated to applied, scalable AI alignment research." It describes alignment as an unsolved technical problem, and its "Alignment Plan" is an object-level research proposal. With this context in mind, when someone imagines what it would mean for Conjecture--an alignment research lab--to make "meaningful progress towards alignment," the only reasonable interpretation of this expression is technical progress.

The description furthers this interpretation: "publicly state that they believe work carried out by Conjecture constitutes meaningful progress towards solving the alignment problem." I have never heard anyone use the phrase "alignment problem" to refer to the problem of governance strategy or improving public outreach--people almost always use the phrase to refer to the technical problem.

To say that governance work is progress toward solving the alignment problem is a bit silly, like saying that getting a cup of coffee for your math professor is progress toward solving the Riemann Hypothesis. Governance work might facilitate more global effort toward solving the alignment problem, but it is not progress toward a solution in itself.

I ultimately think it would be a huge cop-out to resolve this question YES based on Conjecture's governance work as opposed to their technical work. The question author describes the potential spirit of this question as "is Conjecture a useful org to have in the alignment space, by its own lights." I think that's an excellent question in its own right, but clearly not the first impression one would have from the current question title and description. The question as currently written seems more interested in gauging something like "is Conjecture making any meaningful progress toward its stated mission of solving the alignment problem, by its own lights."

predicted NO

Buying NO because, mostly, I think Conjecture is more pessimistic than Manifold seems to think they are. Even if they do everything right by Manifold's lights, I anticipate that in most worlds they evaluate themselves as having not contributed meaningful progress.

predicted YES

@GarrettBaker how do you know this?

predicted NO

@mkualquiera Talking with some people who used to work there, and from their LessWrong posts.

bought Ṁ300 of YES

Buying yes based on things Connor said on discord.

bought Ṁ10 of YES

@VictorLevoso What did he say?

predicted YES

@EliasSchmied That he has a new alignment proposal that he feels optimistic about, which will be published soon pending infohazard review.

predicted YES

@VictorLevoso I personally don't necessarily trust that until I can actually read and evaluate the proposal, but it seems likely that they will think it's progress unless someone points out an obvious flaw.

Also, apart from that, I expect them to make interesting progress on interpretability that might qualify for this market.

bought Ṁ10 of YES

@VictorLevoso Update on this: they have now announced what their plan is, and it sounds like a not-terrible plan.

The question is whether they can actually pull it off, and whether they'll do things that they consider meaningful work towards it before the end of 2023.

Unfortunately they can't talk about details because of infohazards, which makes it hard for me to update much in one direction or another.

This does update me a bit towards "if Conjecture says they made progress, they will actually have made meaningful progress."

Should the title be the end of 2022? Description and end date imply that.

predicted NO

@vluzko Thanks for pointing that out - I meant by the end of 2023 (i.e. 13 months' time); I've updated the description accordingly.
