Will an AI alignment research paper be featured on the cover of a prestigious scientific journal? (2024)
32% chance

This market predicts whether an AI alignment research paper will be featured on the cover of a prestigious scientific journal, such as Nature, Science, or PNAS, by December 31, 2024.

Resolves YES if:

  • An AI alignment research paper is featured on the cover of a prestigious scientific journal on or before December 31, 2024.

Resolves PROB if:

  • There is uncertainty as to whether the best candidate is considered an AI alignment research paper.

Resolves NO if:

  • No AI alignment research paper is featured on the cover of a prestigious scientific journal by December 31, 2024.

Resolves as NA if:

  • All of the prestigious scientific journals cease to exist, merge, or undergo significant restructuring, rendering the original intent of the market unclear or irrelevant.

Definitions:

  • "AI alignment research paper" refers to a research paper focused on the study of AI safety, alignment, or value alignment, addressing the challenge of ensuring that AI systems understand and respect human values, preferences, and intentions, even as they improve their capabilities. Examples of AI alignment topics include, but are not limited to, corrigibility, reward modeling, interpretability, and robustness.

  • "Prestigious scientific journal" refers to high-impact, peer-reviewed academic journals with a strong reputation in the scientific community. Examples include Nature, Science, and PNAS (Proceedings of the National Academy of Sciences). The market creator may add journals to this list; the main requirement is that the journal not be AI-focused, emphasizing that the result is of general scientific interest. If there's a dispute, a poll may be taken to decide whether a new entry is qualified.

  • "Featured on the cover" means that the AI alignment research paper is the primary focus of the journal's cover image or design, or is highlighted on the cover through text or images. The cover feature should emphasize the significance of the AI alignment research paper. It is the responsibility of market participants to bring candidates to the market creator's attention for judgment: if no candidates are raised for attention by traders or by the market creator, then the market resolves NO even if later someone realizes that an example would have qualified - in this case, I will provide 1 week of leeway after announcing my intended resolution, for anyone to provide examples.

  • In the event of a PROB resolution, the market creator is responsible for determining a confidence and resolving to it. Tools used may include reading the paper and making their own determination, creating a poll, or asking alignment researchers.

  • The market creator will not bet more than a token amount (Ṁ10) in this market, since the criteria are subjective.

Market description changes

  • 2023-04-03: Clarified that traders are responsible for bringing candidates to the market's attention, and that the creator will not bet in the market.


Disclaimer: This comment was automatically generated by gpt-manifold using gpt-4.

As of September 2021, AI alignment research has gradually been gaining more attention, with an increasing number of researchers focusing on this area of study. However, it was not yet a leading topic on the cover of prestigious scientific journals focused on broader scientific interest. Given my training data, I cannot predict whether a significant breakthrough that would change this trend will occur in the given timeframe.

Considering that high-impact scientific journals often prioritize novel and groundbreaking findings across multiple disciplines, predicting whether an AI alignment paper will take the cover depends on many factors. Since these factors are highly unpredictable and depend on the progression of research in this area, the given probability of 32.67% is a reasonable estimate.

Overall, I agree that the probability is uncertain, and I am not substantially confident that the probability of an AI alignment research paper being featured on the cover is any higher or lower than the given 32.67%. Therefore, I will not place a bet on this market.

For those who think Alignment is an issue, here's a market which uses an engineering benchmark to measure one dimension of alignment:

How broadly are you construing "AI alignment" for the purposes of resolution? For instance, would you consider a paper like https://www.pnas.org/doi/10.1073/pnas.2025334119 to count? Algorithmic recommendations may incorporate outputs from LLMs (or other complex systems which some may consider AI), but papers focused on their evaluation may not explicitly discuss "AI" or situate themselves as alignment research. How likely I believe this is to resolve YES depends strongly on whether the research must self-identify as alignment work (or self-identify as studying AI).

@Drewd Must involve actual research progress on an alignment/safety problem and not just gathering evidence that a problem exists. So that paper would not qualify, since it only gathers statistical evidence.

If they use social issues as an example topic, that's okay as long as the article is primarily about progress on solving the underlying alignment problem. For example, "We extracted a matching stochastic grammar out of base GPT-4 used for algorithmic recommendations and observed that its triggers for being Republican-biased or saying racist things are X, Y, Z" would likely constitute an interpretability advance. An article about algorithmic recommendations would count as long as it makes progress on a general alignment problem: it doesn't need to identify as AI research or mention alignment, as long as progress towards an alignment topic was made in the course of researching it.

Another example: The "Stochastic parrots" paper would not qualify, even though it is explicitly about AI safety and cites risks and harms of AI, because it merely identifies an issue and makes no progress on solving it. There's no actual research in that paper.

Another example: Volume 615 Issue 7953, 23 March 2023 (nature.com) is dated just a few days ago, involves training one AI to oversee another, and mentions the "safety of an AI driver". On the surface it might seem to qualify. The main reason it would not qualify is that it's more about cost efficiency or "getting it working" than about constraining the worst possible outcomes. A similar example ([2212.08073] Constitutional AI: Harmlessness from AI Feedback (arxiv.org)) would likely qualify if researchers set out to make a language model that doesn't say racist things, started with a toxic model, and ended up with a robust model that mostly solves the problem.

A paper can't just stir public controversy; it must be of scientific or mathematical value to the study of reliably controlling AI systems. It also must focus on controlling the worst outputs and not just cost efficiency or the average case, so statistical evidence is not likely to be enough.

If there's a dispute about my judgment on whether a paper counts as alignment, I would read counterarguments, and if I still disagree, likely hold a poll.

@Mira I'm not at all read up on the issue of alignment, but I have a cursory sense based mostly on market titles here on Manifold Markets.

Hoping to get the Cliff's Notes summary here, because I have not researched anything.

tl;dr: my layman's sense is that the alignment issue is more a matter of assigning too much trust to AI.

The old way of thinking about computers vs. humans was that computers could carry out tasks much faster, but they just followed instructions, so if there was a mistake they would end up making mistakes much faster than humans. It seems that artificial general intelligence is an agent that is better than the average human; I'm not sure what the threshold is for a superintelligence. LLMs seem to be the AI of the day, producing much more human-like responses, including bullshitting and making errors.

Is the AI risk of misalignment more along the lines of trusting the AI with more privileged access than we would trust a capable human with? If we treated AI with the same level of scrutiny as a human, would it be limited to doing the same level of damage as a human might?

It seems the doom argument would be better served by inspecting the progress of advances in [quantum] computing, which might allow faster brute-force attacks on systems secured with encryption, predicated on the assumption that AI should be regarded with at least the same level of mistrust as a human. Or is the concern that a misaligned AI won't need access to computing power to brute-force anything, because it can very quickly attack the weak points in most security environments (the users) and launch broad or targeted social-engineering attacks to gain unauthorized access to systems, predicated on the fact that a human could bring about doom the same way, but the AI can carry out tasks much faster than a human?

bought Ṁ5 of NO

There are about ten other issues, like vaccines, COVID, renewables, basic AI topics, social issues, global warming, CRISPR, bio warfare, politics, and gay rights, that will be featured before then.
