
Conjecture recently released their 8-month retrospective, in which they shared their belief that they had yet to make meaningful progress on the alignment problem.
I will resolve this market to "Yes" if any of Conjecture's three founders (Connor Leahy, Sid Black, or Gabriel Alfour), or any other person whom I deem plausibly able to speak authoritatively on Conjecture's behalf, publicly states that they believe work carried out by Conjecture constitutes meaningful progress towards solving the alignment problem. If no such statement is made by Jan 1st 2024, I will resolve the market as "No".
🏅 Top traders
# | Name | Total profit
---|---|---
1 | | Ṁ335
2 | | Ṁ254
3 | | Ṁ216
4 | | Ṁ198
5 | | Ṁ162
@RobertCousineau Hmm yeah, this is a good question. I think it'll come down to what exactly they end up saying about their own work - if they say that they believe the work they've done on governance has improved the chances of alignment, then I'll count that.
I think the spirit of the question is something like "is Conjecture a useful org to have in the alignment space, by its own lights". If they seem to have pivoted away from "the alignment space" per se, then I'll likely resolve No.
Sorry this is a bit vague - when it comes to resolution time I’ll probably be open to arguments on either side if it seems particularly ambiguous.
@jonny As top NO holder, I'll give my (obviously biased, but imo straightforward) thoughts on how I interpreted the question. It seems pretty clear that the question wording and description are referring to the alignment problem in a technical sense.
Conjecture describes itself on its website as "A team of researchers dedicated to applied, scalable AI alignment research." It describes alignment as an unsolved technical problem, and its "Alignment Plan" is an object-level research proposal. With this context in mind, when someone imagines what it would mean for Conjecture--an alignment research lab--to make "meaningful progress towards alignment," the only reasonable interpretation of this expression is technical progress.
The description reinforces this interpretation: "publicly state that they believe work carried out by Conjecture constitutes meaningful progress towards solving the alignment problem." I have never heard anyone use the phrase "alignment problem" to refer to the problem of governance strategy or improving public outreach--people almost always use the phrase to refer to the technical problem.
To say that governance work is progress toward solving the alignment problem is a bit silly, like saying that getting a cup of coffee for your math professor is progress toward solving the Riemann Hypothesis. Governance work might facilitate more global effort toward solving the alignment problem, but it is not progress toward a solution in itself.
I ultimately think it would be a huge cop-out to resolve this question YES based on Conjecture's governance work as opposed to their technical work. The question author describes the potential spirit of this question as "is Conjecture a useful org to have in the alignment space, by its own lights." I think that's an excellent question in its own right, but clearly not the first impression one would have from the current question title and description. The question as currently written seems more interested in gauging something like "is Conjecture making any meaningful progress toward its stated mission of solving the alignment problem, by its own lights."
@EliasSchmied that he has a new alignment proposal that he feels optimistic about and that will be published soon, pending infohazard review.
@VictorLevoso I personally don't necessarily trust that until I can actually read and evaluate the proposal, but it seems likely that they will think it's progress unless someone points out an obvious flaw.
Also, apart from that, I expect them to make interesting progress on interpretability that might qualify for this market.
@VictorLevoso Update on this: they have now announced what their plan is, and it sounds like a not-terrible plan.
The question is whether they can actually pull it off, and whether they'll do things that they consider meaningful work towards it before the end of 2023.
Unfortunately they can't talk about details because of infohazards, which makes it hard for me to update much in one direction or another.
This does update me a bit towards "if Conjecture says they made progress, they will actually have made meaningful progress."
@vluzko Thanks for pointing that out - I meant by the end of 2023 (i.e. 13 months' time); I've updated the description accordingly.