Will there be office space in Berkeley for independent AI alignment researchers at the start of April 2023?
Resolved NO on April 3, 2023
Resolves positive if, as of April 3, 2023, there is an office space in Berkeley for at least 5 independent AI alignment researchers. To qualify, the researchers must not be full-time employees at an AI safety organization (e.g., Anthropic, FAR AI, Redwood Research, SERI MATS, or grad students at the Center for Human-Compatible AI), and the office space must be free of charge to the researchers. A sufficient (though not necessary) condition for being independent is receiving funding from LTFF or Open Phil to do independent research.
🏅 Top traders
| # | Name | Total profit |
|---|------|--------------|
| 1 | | Ṁ38 |
| 2 | | Ṁ28 |
| 3 | | Ṁ27 |
| 4 | | Ṁ20 |
| 5 | | Ṁ19 |
Related questions
- Will OpenAI + an AI alignment organization announce a major breakthrough in AI alignment? (2024) (33% chance)
- Will an AI alignment research paper be featured on the cover of a prestigious scientific journal? (2024) (32% chance)
- Will there be significant protests calling for AI rights before 2030? (50% chance)
- Will the Center for AI Safety run another large-scale philosophy research fellowship in 2024? (43% chance)
- In 2025, will I believe that aligning automated AI research AI should be the focus of the alignment community? (48% chance)
- Will Tetraspace have published a research paper on AI alignment by March 1, 2025? (48% chance)
- Will OpenAI announce a major breakthrough in AI alignment in 2024? (42% chance)
- Will OpenAI + an AI alignment organization announce a major breakthrough in AI alignment? (2024) (49% chance)
- Will the US government require AI labs to run safety/alignment evals by 2025? (39% chance)
- Will any of the Top 3 labs release an initial commitment regarding AI consciousness prior to January 1, 2025? (23% chance)