Will the project "Federal AI Regulation" receive any funding from the Clearer Thinking Regranting program run by ClearerThinking.org?
Remember, betting in this market is not the only way you can have a shot at winning part of the $13,000 in cash prizes! As explained here, you can also win money by sharing information or arguments that change our mind about which projects to fund or how much to fund them. If you have an argument or public information for or against this project, share it as a comment below. If you have private information or information that has the potential to harm anyone, please send it to clearerthinkingregrants@gmail.com instead.
Below, you can find some selected quotes from the public copy of the application. The text beneath each heading was written by the applicant. Alternatively, you can click here to see the entire public portion of their application.
Why the applicant thinks we should fund this project
My project involves drafting proposed regulations, sharing them with regulators, meeting with regulators to see what might be stopping them from adopting such regulations, and helping regulators to overcome those obstacles. I plan to meet with mid-level civil servants in a wide variety of US federal departments, including Commerce, Transportation, Homeland Security, and Health and Human Services. As part of this process, I will also consult with AI safety experts (to learn more about what they believe would reduce the risk of accidental superintelligence) and with AI policy researchers (to learn more about what kinds of regulations they believe will do no harm to the reputation of effective altruism).
I expect that at least some regulators will be interested in cooperating to issue new regulations. However, if all regulators are unwilling to discuss binding regulations, I will instead work with them to expedite the publication of official reports, surveys, and plans that describe the government's approach to AI safety. Several such reports are required by law and are overdue. Getting these reports published will make AI safety issues appear more relevant and respectable within the government, making it easier for future advocates to issue binding regulations.
Here's the mechanism by which the applicant expects their project will achieve positive outcomes.
Today, the US federal government's default approach to AI is to uncritically encourage faster and more powerful AI. By prompting the government to develop internal rules for when an AI system is or is not compliant, I hope to build a habit among AI researchers of asking whether a new AI tool is safe. That habit could eventually prompt a key researcher to slow down or double-check their work, preventing or delaying the release of an unfriendly superintelligence. Radically interfering with AI research simply hands control over the future to someone with fewer scruples, but using the government to gently encourage AI researchers to be more responsible could help US researchers get safer results without losing the AI arms race.
How much funding are they requesting?
$150,000 USD.
What would they do with the amount just specified?
My usual hourly rate for private sector clients is $200/hour. I am willing to cut that in half in order to make good use of your fund's resources, because I believe in the cause. Assuming a 30% allowance for taxes, a buffer of 10% for unanticipated expenses, and an allowance of about $5,000 for printing, travel, and renting spaces for occasional events, I could use $150,000 to cover 1,000 hours of my time.
With 1,000 hours of work, I believe I would be able to draft several high-quality regulations, find and reach most of the relevant officials, make a strong case to those officials, repeatedly follow up with those officials to make sure the regulations stay on their radar, and host a few events to create positive PR in support of the regulations.
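For readers checking the numbers, here is a minimal editorial sketch (not from the application) of how the budget arithmetic above roughly reconciles. All figures come from the applicant's answer; the assumption that the tax and buffer allowances compound multiplicatively is ours:

```python
# Editorial sketch: reconciling the applicant's stated budget figures.
# Assumption (not stated in the application): the 30% tax allowance and
# 10% buffer are applied multiplicatively to the fee total.

hourly_rate = 200 / 2            # $200/hr private rate, cut in half -> $100/hr
hours = 1_000                    # proposed hours of work
fees = hourly_rate * hours       # $100,000 in fees

with_taxes = fees * 1.30         # 30% allowance for taxes -> $130,000
with_buffer = with_taxes * 1.10  # 10% buffer for unanticipated expenses -> $143,000
expenses = 5_000                 # printing, travel, renting event spaces

total = with_buffer + expenses
print(f"Estimated total: ${total:,.0f}")  # ~$148,000
```

Under those assumptions the estimate lands near $148,000, within rounding distance of the $150,000 requested.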
Here you can review the entire public portion of the application (which contains a lot more information about the applicant and their project):
https://docs.google.com/document/d/1SY9ZerT3fxKTm3viafEBs_qmT17FvYS3kSwmlvzZTHE/
I am the grant proposer, and I wanted to briefly respond to two of your concerns. First, it's true that most HLS grads make more than $200/hr. However, most HLS grads also sell out to the highest corporate bidder. I've been trying to do good all through my career, so even my 'private' clients are typically individual homeowners, workers, tenants, patients, and so on, rather than Amazon or Wal-Mart.
Second, of course I'm aware that a typical lobbying campaign involves Congressional allies, major interest groups, and other heavy hitters -- especially if you're hearing about the lobbying campaign in the news. However, many smaller regulations are passed all the time, very quietly, as part of the government's routine business of slowly making marginal improvements to its own policies. I believe the regulations I'm working on are small enough that they could be passed with only modest support from the EA community. I will organize people to write positive comments during the notice and comment period, look for people who can introduce me to key policymakers, and get as much done as I can.
@JasonGreenLowe I very much appreciate your direct reply to my comment about the suspiciously low rate. Your clients are lucky to be getting $1000/hr talent at such a significant discount.
On your second point, I agree with you that such regulations do get passed all the time, but I do not necessarily agree that simply getting any particular "regulation" through the system is in and of itself a good KPI. I think it would be helpful to know more about your general philosophy on the optimal ways to approach regulation of AI systems, perhaps even focusing on a specific use case or user group (e.g. medical devices & FDA). This will make it easier to set goals that are both attainable and laudable.
A scattershot approach of simply 'organizing people to work toward a thing that's good' is hard for someone like me, who works in this field, to get excited about.
You know this is total BS because it says "JD from Harvard" but then says "my usual rate for private sector clients is $200/hour"! LMAO! A real JD from Harvard gets you $200/hr for pro bono cases, $600/hr if you are a terrible lawyer, $1000/hr if you are average among your fellow alum.
As Gigacasting says, cutting the rate in half because it's non-profit work, then adding back 30% for taxes, is sleight of hand and a bad indicator.
Much more importantly, I am deeply unconvinced that the proposed method will work: going to regulators as a private citizen concerned about an issue is not a strategy that successful people pursue. The applicant seems well-meaning and harmless, but appears to be a lawyer who is missing some important components of how regulations are crafted in the real world. To pick one example, at no point does the proposer talk about meeting with existing tech lobbying groups to figure out what sort of regulations would be acceptable to them while still accomplishing their goals.
The positive case for funding this, in my view, is that generating draft legislation is actually pretty valuable, and identifying what legal mechanisms are in place and who exercises them under what conditions will be necessary. If we want CFIUS to put limitations on AI labs, is that viable? What could that accomplish? Those sorts of detail questions will be really useful for AI policymakers.
But "I will draft a good idea and will go to regulators about it" is not the sort of plan made by people who manage to make regulatory/legal changes. Finding congressional champions, getting support / non-opposition from tech industry lobbying groups (if you are fine waiting for a scandal, anti-tech groups can also work, but if you want to do it today you can't have tech lobbyists fighting you), those are necessary and don't appear at all in this proposal.
If funded, I expect that the proposed regulations either will not be implemented, or will be so watered-down as to be meaningless. To make that concrete: two years after the regulations are implemented, a survey of AI practitioners at top labs will not find that they care about the regulations (60% probability), or that they think the regulations contribute to AI safety, even after selecting for people who care about AI safety (85% probability).
That said, generating the draft regulations and identifying the relevant mechanisms could be really useful, so I would recommend rejecting at this stage and asking for a resubmission. I've reached out to the author to meet up at EAG DC to discuss potential plans for improvements.
I'm generally pessimistic about regulatory approaches to AGI notkillingeveryone, but this is relatively cheap to try, and I think it would give good information about how receptive regulators are and what is achievable, given a qualified applicant making as good an attempt as is feasible.
I think $150k to maybe get a foot in the door for future regulations which we think might at least slow things down a little later, or to show that this is not a good avenue and allow future safety efforts to be focused on more promising ones, probably pays off about as well as anything else in alignment work right now.
A thing that would swing me to NO more substantially would be if anyone were already working on this; if so, the value of information might be significantly reduced.
@jbeshir I withdraw this comment in light of Celer's above, which is considerably better informed!
@NuñoSempere First, this is not a measure of success toward achieving useful regulation of AI. Also, this application amounts to 'Harvard Law School grad convinces friend(s) who work(s) on the Hill to slip in specific language to help them get paid for being well connected'. I am working on an extended response against this application.