
Will the project "Federal AI Regulation" receive any funding from the Clearer Thinking Regranting program run by ClearerThinking.org?
Remember, betting in this market is not the only way you can have a shot at winning part of the $13,000 in cash prizes! As explained here, you can also win money by sharing information or arguments that change our mind about which projects to fund or how much to fund them. If you have an argument or public information for or against this project, share it as a comment below. If you have private information or information that has the potential to harm anyone, please send it to clearerthinkingregrants@gmail.com instead.
Below, you can find some selected quotes from the public copy of the application. The text beneath each heading was written by the applicant. Alternatively, you can click here to see the entire public portion of their application.
Why the applicant thinks we should fund this project
My project involves drafting proposed regulations, sharing them with regulators, meeting with regulators to see what might be stopping them from adopting such regulations, and helping regulators to overcome those obstacles. I plan to meet with mid-level civil servants in a wide variety of US federal departments, including Commerce, Transportation, Homeland Security, and Health and Human Services. As part of this process, I will also consult with AI safety experts (to learn more about what they believe would reduce the risk of accidental superintelligence) and with AI policy researchers (to learn more about what kinds of regulations they believe will do no harm to the reputation of effective altruism).
I expect that at least some regulators will be interested in cooperating to issue new regulations. However, if all regulators are unwilling to discuss binding regulations, I will instead work with them to expedite the publication of official reports, surveys, and plans that describe the government's approach to AI safety. Several such reports are required by law and are overdue. Getting these reports published will make AI safety issues appear more relevant and respectable within the government, making it easier for future advocates to issue binding regulations.
Here's the mechanism by which the applicant expects their project will achieve positive outcomes.
Today, the default approach to AI in the US federal government is to uncritically encourage faster and stronger AI. By prompting the government to develop internal rules for when AI is or is not compliant, I hope to build a habit among AI researchers of asking whether a new AI tool is safe. That habit could eventually prompt a key researcher to slow down or double-check their work, preventing or delaying the release of an unfriendly superintelligence. Radically interfering with AI research simply hands control over the future to someone with fewer scruples, but using the government to gently encourage AI researchers to be more responsible could help US researchers get safer results without losing the AI arms race.
How much funding are they requesting?
$150,000 USD.
What would they do with the amount just specified?
My usual hourly rate for private sector clients is $200/hour. Because I believe in the cause, I am willing to cut that in half, to $100/hour, to make good use of your fund's resources. Assuming a 30% allowance for taxes, a buffer of 10% for unanticipated expenses, and an allowance of about $5,000 for printing, travel, and renting spaces for occasional events, I could use $150,000 to cover 1,000 hours of my time.
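The application does not spell out the order of operations, but under one reading (tax and buffer allowances applied multiplicatively to the discounted labor cost, then fixed expenses added) the figures roughly reconcile with the $150,000 request. A minimal sketch, with the multiplicative ordering as an assumption:

```python
# Sketch of the budget arithmetic. The multiplicative application of the
# tax and buffer allowances is an assumption; the application does not
# specify how they combine.
hourly_rate = 200 / 2   # usual $200/hour, discounted by half
hours = 1_000

labor = hourly_rate * hours       # $100,000 of labor
with_taxes = labor * 1.30         # + 30% allowance for taxes
with_buffer = with_taxes * 1.10   # + 10% buffer for unanticipated expenses
total = with_buffer + 5_000       # + printing, travel, and event spaces

print(round(total))  # about $148,000, close to the $150,000 requested
```

Under this reading the total comes to roughly $148,000, so the stated 1,000 hours is consistent with the $150,000 figure to within the rounding the applicant appears to be using.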
With 1,000 hours of work, I believe I would be able to draft several high-quality regulations, find and reach most of the relevant officials, make a strong case to those officials, repeatedly follow up with those officials to make sure the regulations stay on their radar, and host a few events to create positive PR in support of the regulations.
Here you can review the entire public portion of the application (which contains a lot more information about the applicant and their project):
https://docs.google.com/document/d/1SY9ZerT3fxKTm3viafEBs_qmT17FvYS3kSwmlvzZTHE/
Sep 20, 3:44pm: Close date updated to 2022-10-01 2:59 am
🏅 Top traders
# | Name | Total profit
---|---|---
1 | | Ṁ576
2 | | Ṁ276
3 | | Ṁ144
4 | | Ṁ143
5 | | Ṁ64