Will at least 25 top-100 universities have a faculty-led technical AGI safety effort by the end of 2023?
Resolved NO (Jan 1)

Top-100 universities are determined by the QS 2023 rankings: https://www.topuniversities.com/university-rankings/world-university-rankings/2023

“Technical AGI safety effort” can be demonstrated by either:

  • At least three research papers or blog posts in one year that discuss catastrophic risks (risks of harm much worse than any caused by AI systems to date, harming more than just the developer of the AI system and its immediate users) that are specific to human-level or superhuman AI.

    or

  • One blog post, paper, tweet, or similar that clearly announces a new lab, institute, or center focused on the issues described above, presented in a way that implies that this team will involve multiple people working over multiple years under the leadership of a tenure-track faculty member (or equivalent as below).

Further details:

  • Papers or posts must credit a tenure-track faculty member (or someone of comparable status at institutions without a tenure system) as an author or as the leader of the effort.

  • The paper or post must discuss specific technical interventions to measure or address these risks or, if not, it must both be by a researcher who primarily does technical work related to AI and be clearly oriented toward an audience of technical researchers. Works that are primarily oriented toward questions of philosophy or policy don't count.

  • Citing typical work by authors like Nick Bostrom, Ajeya Cotra, Paul Christiano, Rohin Shah, Richard Ngo, or Eliezer Yudkowsky as part of describing the primary motivation for a project will typically suffice to show an engagement with catastrophic risks in the sense above.

  • A "lab, institute, or center" need not have any official/legal status, as long as it is led by a faculty member who fits the definition above.

I will certify individual labs/efforts as meeting these criteria if asked (within at most a few weeks), and will resolve YES early if we accumulate 25 of these.

I was only able to get up to about 15 in an initial survey, and even then that was using looser criteria than I'm applying here. Resolving NO.

Tracking some relevant information here:

https://manifold.markets/Hedgehog/will-at-least-15-top100-universitie

Will at least 15 top-100 universities have a faculty-led technical AGI safety effort by the end of 2023?
9% chance.

How many exist right now?

Berkeley, Oxford, Cambridge, NYU… Does Stanford count?

Those are also the ones that most immediately came to mind for me, and I'd bet that at least four of those five (and probably all five) would count if we looked into them more closely. There are probably a few more that fit this definition through publications.

I'll be keeping an eye out for this all year. If there's no clear positive resolution by the end of the year, I'll see if I can use the Semantic Scholar API to look around for evidence that we might otherwise miss. If anyone beats me to that, great!
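
For reference, here's a minimal Python sketch of what that Semantic Scholar search might look like. It assumes the public Graph API paper-search endpoint; the query keywords, field list, and function name are illustrative guesses, not the resolver's actual method, and any hits would still need manual screening against the criteria above.

    # Minimal sketch: query the Semantic Scholar Graph API for candidate
    # AGI-safety papers. The query string and fields below are illustrative
    # assumptions, not the resolver's actual method.
    import requests

    SEARCH_URL = "https://api.semanticscholar.org/graph/v1/paper/search"

    def find_candidate_papers(query="AGI safety catastrophic risk", limit=100):
        """Return (title, year, author names) for papers matching the query."""
        params = {"query": query, "fields": "title,year,authors", "limit": limit}
        resp = requests.get(SEARCH_URL, params=params, timeout=30)
        resp.raise_for_status()
        return [
            (p.get("title"), p.get("year"),
             [a.get("name") for a in p.get("authors", [])])
            for p in resp.json().get("data", [])
        ]

    if __name__ == "__main__":
        for title, year, authors in find_candidate_papers():
            print(f"{year}  {title} -- {', '.join(authors)}")

Mapping each author to a top-100 affiliation would likely remain a manual step, so a search like this could only surface candidates, not resolve the question on its own.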

bought Ṁ100 of NO

Absolutely not possible. Nobody who bets YES on this market could possibly contribute to it happening. Even running on spite from me attempting to throw down the gauntlet with an exaggerated, sarcastic, obviously miscalibrated NO bet, there's no way YES bettors could make this happen. You can't prove me wrong; it can't be done.
