Will I find that the PIBBSS Fellowship was a success?
Resolved NO (Dec 12)

Success here is ballparked as two fellows (out of however many) producing outputs or insights that I find to be of high quality. I may also resolve the question positively if other win conditions are met, e.g., a single insight being valuable enough to justify organizing the whole fellowship. Note that I have a negativity bias.

This question resolves whenever the PIBBSS Fellowship posts a summary document somewhere, or negatively by 06/2023. Note that I don't particularly plan to do much of my own research: I'll read that document and defer to it with regard to object-level facts, and will attempt to read any linked documents. The fellowship's website is: https://www.pibbss.ai/

Feb 1, 3:09pm: See also: https://forum.effectivealtruism.org/posts/Ckont9EtqkenegLYv/introducing-the-principles-of-intelligent-behaviour-in for example projects.


The organizers just posted an overview: <https://ea.greaterwrong.com/posts/zvALRCKshYGYetsbC/reflections-on-the-pibbss-fellowship-2022>, forcing me to resolve the market ("This question resolves whenever the PIBBSS Fellowship posts a summary document somewhere, or negatively by 06/2023").

The post does NOT link to any participant outputs (!!), but instead contains this snippet:

  • Research progress: Out of 20 fellows, we find that at least 6–10 made interesting progress on promising research programs.

    • A non-comprehensive sample of research progress we were particularly excited about includes work on intrinsic reward-shaping in brains, a dynamical systems perspective on goal-oriented behavior and relativistic agency, or an investigation into how robust humans are to being corrupted or mind-hacked by future AI systems. 

Per the resolution criteria, "I don't particularly plan to do much of my own research: I'll read that document and defer to it with regard to object-level facts, and will attempt to read any linked documents".

The object-level facts are:

  • Organizers find that "Out of 20 fellows, we find that at least 6–10 made interesting progress on promising research programs."

  • Organizers report that participants made progress on at least the following:

    • intrinsic reward-shaping in brains

    • a dynamical systems perspective on goal-oriented behavior and relativistic agency

    • an investigation into how robust humans are to being corrupted or mind-hacked by future AI systems

  • Organizers also write: "While we are fairly happy with the research progress, we think this insufficiently translated into communicable research output within the timeframe of the fellowship. Accordingly, we believe that a, if not “the”, main dimension of improvement for the fellowship lies in providing more structural support encouraging more and faster communicable output to capture the research/epistemic progress that has been generated."

In addition:

  • The text goes on about other meta-level benefits that are not, strictly speaking, research outputs

  • There aren't actually links to research outputs

  • I think that from the example outputs, I'd find:

    • intrinsic reward-shaping in brains: probably not a high-quality insight.

    • a dynamical systems perspective on goal-oriented behavior and relativistic agency: maybe a high-quality insight.

    • an investigation into how robust humans are to being corrupted or mind-hacked by future AI systems: probably not a high-quality insight.

  • People in these kinds of programmes have a *massive* positivity bias.

Overall, I am resolving this as NO. My model of some of the organizers says that they would consider this a success because of the meta-level gains, which proved necessary for making object-level gains, and that I should give more weight to unpublished outputs.

This is the first coordinated effort to boost multi-disciplinary AI alignment research that I know of. I think there will be plenty of low-hanging fruit to pick.

Even if I and others consider it a success, Nuño might be more cynical.

Reference class used: AI safety camps, which have far less supervision, far less selection, and far less pay.

Buying YES on the idea that 1) cohorts are the new effective way to run education (https://future.a16z.com/cohort-based-courses/) and 2) I'm impressed by the mentors I recognize.

I started this market at 15%. But I would be lower (maybe 5%?) if I didn't know the organizers and they didn't seem so optimistic. Hopefully this site will help me keep track of whether I'm calibrated about the success of projects.