Will we fund "Extending cause prioritization research to the behavioral sciences"?
Ṁ19k
Resolved YES on Oct 7

Will the project "Extending cause prioritization research to the behavioral sciences" receive any funding from the Clearer Thinking Regranting program run by ClearerThinking.org?

Remember, betting in this market is not the only way you can have a shot at winning part of the $13,000 in cash prizes! As explained here, you can also win money by sharing information or arguments that change our mind about which projects to fund or how much to fund them. If you have an argument or public information for or against this project, share it as a comment below. If you have private information or information that has the potential to harm anyone, please send it to clearerthinkingregrants@gmail.com instead.

Below, you can find some selected quotes from the public copy of the application. The text beneath each heading was written by the applicant. Alternatively, you can click here to see the entire public portion of their application.

Why the applicant thinks we should fund this project

Most approaches to reducing existential risks or improving the future require positively influencing certain crucial decisions, attitudes, or behaviors at some point. Our knowledge of which strategies are most effective at promoting such improvements and how they work is dangerously unreliable and incomplete. Unless we identify and close these gaps in our knowledge, many efforts to improve the future might be doomed. On the bright side, a recent analysis found that some past efforts to improve global health and development interventions through behavioral science R&D projects have been very cost-effective (Kremer, Gallant, Rostapshova, & Thomas, 2021). Therefore, investing in (the right) behavioral science research is likely key to closing those dangerous gaps.  

Despite its potential importance, applied behavioral science research on improving the decisions, attitudes, beliefs, ways of thinking, values, and behaviors that are crucial from the perspective of Effective Altruism is neglected by academic funding agencies. Moreover, EA organizations rarely fund any behavioral science projects at all. One major reason is that behavioral scientists and grantmakers lack the tools to forecast the social impact of potential behavioral science projects and compare it against relevant benchmarks (e.g., cash transfers). This leaves us extremely uncertain about which of the millions of potential projects behavioral scientists could undertake would be highly impactful. This uncertainty makes it very difficult for grantmakers to identify or instigate behavioral science projects that are more cost-effective at improving the future than applying the tools and knowledge we already have in more established areas like global health. The lack of proper incentives, support, and clarity on what is essential leaves behavioral scientists stumbling in the dark. Consequently, behavioral science is missing out on the best opportunities to improve the future.

To overcome these problems, our project will shed light on which behavioral science research topics are most important and how cost-effective it is to fund research on those topics. To achieve this, we will develop a cost-effectiveness analysis method that can be applied to potential topics of behavioral science research. The method developed in this project will enable funders, such as the Future Fund and Open Philanthropy, academic and governmental funding agencies, prioritization researchers, and behavioral scientists, to derive quantitative estimates of how much alternative research projects might improve the future. We will apply our method to produce a list of high-impact behavioral science research topics with estimates of their expected cost-effectiveness in improving the future. We will primarily evaluate use-inspired basic behavioral science research and the development, evaluation, and improvement of behavioral science interventions. In the following, we will refer to this spectrum of activities as potentially useful behavioral science research.


Given our preliminary results, it seems likely that we will discover overlooked ways to improve the future that could be more than 100 or even more than 1000 times as cost-effective as unconditional cash transfers. Under reasonable assumptions about how the value of research varies across different topics, estimating the value of research on 20 topics will increase the total amount of good that can be done in a cause area by 15%-42% if the error variance of our estimates is between 50% and 200% of the average benefit of research (see Table 1).

Here's the mechanism by which the applicant expects their project will achieve positive outcomes.

  1. The application of our method might lead to the discovery that behavioral science research on certain neglected questions is much more cost-effective at improving the future than the best existing interventions.

  2. The discovery of these crucial open questions and low-hanging fruit might convince grant makers inside and outside the EA community to fund research on the most impactful questions we identified.

  3. Our list of high-impact research topics and the availability of funding for such research would motivate and enable more behavioral scientists to conduct more impactful research. This research will generate insights and interventions that enable us to positively influence decisions crucial for humanity's long-term survival and the well-being of future generations of humans and animals.

  4. The availability of our method could bring the criteria that (academic) funding agencies use to decide which scientific research projects to fund into closer alignment with the principles of effective altruism.

  5. Over time, these developments could shift the values and priorities of academia towards the principles of effective altruism.

How much funding are they requesting?

$499,177


What would they do with the amount just specified?

The requested amount will allow us to develop, assess, refine, and apply a method for estimating the impact of behavioral science research on different topics. It would allow us to generate estimates of the expected cost-effectiveness of research on about 30 promising behavioral science topics and compare them to the effectiveness of existing interventions in global health and development. 


We plan to invest the funds to advance the proposed project roughly as outlined in this budget spreadsheet. The largest proportion of the funds will be used to hire a quantitative researcher (possibly Dr. Matthias Stelter) who will work on developing, assessing, and applying the cost-effectiveness analysis method full-time for two years. The second-largest portion of the funds will pay for four research assistants, who will help the team build and test numerous quantitative models. The third-largest portion of the budget will pay for a behavioral scientist (possibly Dr. Izzy Gainsburg) who will support the modeling efforts with subject-matter expertise and coordinate the contributions of the consultants and research assistants (project management). The fourth-largest share of the funds will be spent on consultations with behavioral science experts. Each consultant will be paid to share their knowledge of a research topic in their area of expertise with the team analyzing the cost-effectiveness of such research. We will also use some of the funds to pay external experts to critique our models, find flaws in our methodology, and provide impartial advice and feedback.

Here you can review the entire public portion of the application (which contains a lot more information about the applicant and their project):

https://docs.google.com/document/d/19sj8ke5XZesniup12S4NuRXe_LSA9bO3wZbxQtYnG24/edit#

Sep 20, 3:44pm: Close date updated to 2022-10-01 2:59 am


I wrote a really long comment, but then when I clicked a link, my browser navigated off the page without warning me and deleted my comment. Will try to reproduce what I wrote.

On priors, I find it implausible that behavioral science interventions could do better than GiveWell top charities.

In regard to the preliminary cost-effectiveness estimate given in the EA Forum post, it looks insanely optimistic to the point of being impossible. Given the quoted numbers of 32 acts of kindness per day with each act producing an average of 0.7 happy hours, that's 22 happy hours produced per person-day of acts of kindness. If you said people's acts of kindness increased overall happiness by 10%, I'd say that sounds too high. If you say it produces 22 happy hours, when the average person is only awake for 17 hours...well that's not even possible.
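Here is a quick back-of-envelope version of that sanity check (a rough sketch in Python; the 32 acts/day and 0.7 happy hours per act come from the quoted estimate, and the 17 waking hours is the commenter's own assumption, not a figure from the application):

```python
# Sanity check of the quoted Guesstimate numbers (a rough sketch, not the
# applicant's actual model).
acts_per_day = 32          # quoted acts of kindness per person-day
happy_hours_per_act = 0.7  # quoted average happy hours produced per act
waking_hours_per_day = 17  # assumed hours awake per day

implied_happy_hours = acts_per_day * happy_hours_per_act
print(f"Implied happy hours per person-day: {implied_happy_hours:.1f}")       # ~22.4
print(f"Exceeds waking hours? {implied_happy_hours > waking_hours_per_day}")  # True
```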

I am also very skeptical of the reported claim that a one-time intervention of "watching an elevating video, enacting prosocial behaviors, and reflecting on how those behaviors relate to one’s value" (Baumsteiger 2019) can produce an average of 1600 additional acts of kindness per person. That number sounds about 1000x too high to me.

In general, psych studies are infamous for reporting impossibly massive effects and then failing to replicate. The cost-effectiveness given by the EA Forum post involves a conjunction of several impossibly massive effects, producing a resulting cost-effectiveness that I would guess is about 100,000x too high.

In conclusion:

  1. I think the applicants are way off about how promising this area is.

  2. I wouldn't want to fund this area unless the applicants produced a moderately compelling preliminary cost-effectiveness estimate, which they did not.

  3. The quality of the provided cost-effectiveness analysis does not give me confidence that the applicants will do a good job.

predicted NO

note: the quoted cost-effectiveness is 1.1 cents per happy hour produced. Multiplying by 100,000 gives $1,100 per happy hour, which is far worse than GiveDirectly. (That might still be generous; there's a good chance that the reported effects don't exist at all.)

This still doesn't tell us the cost-effectiveness of the proposed research project, which is what the application was for. The upside to the research project basically entirely comes from the small probability that the intervention turns out to be way more cost-effective than I think it is. That's harder for me to estimate, but suffice it to say that I think further research on the intervention from Baumsteiger (2019) (or similar) has very low EV.

If I were to fund prioritization research on behavioral science (or any other neglected area), I might do something like this:

1. grant someone $5–10K to come up with preliminary cost-effectiveness estimates of a few areas they think are most promising

2. pay a skeptical quant-y type $5–10K to red-team the CEEs / do some sort of adversarial collaboration

Then, if anything comes out looking particularly promising, you could allocate more funding to it.

predicted NO

I wouldn't put my probability too much lower, because the thing I'm criticizing is not the same as the thing they're proposing. But my criticism still substantially lowers the EV of the proposal, IMO.

Thank you for engaging with and critiquing the cost-effectiveness analysis, @MichaelDickens! There seem to be a few misunderstandings I would like to correct.

The CEE in the linked Guesstimate looks optimistic to the point of being impossible. Given the quoted numbers of 32 acts of kindness per day with each act producing an average of 0.7 happy hours, that's 22 happy hours produced per person-day of acts of kindness. If you said people's acts of kindness increased overall happiness by 10%, I'd say that sounds too high. If you say it produces 22 happy hours, when the average person is only awake for 17 hours...well that's not even possible.

The value you calculated is the sum of the additional happiness of all the people to whom the person was kind. This includes everyone they interacted with that day in any way: the strangers they smiled at, the friends they messaged, the colleagues they helped at work, the customers they served, their children, their partner, and their parents and other family members. If you consider that the benefit of that kindness might be spread over more than a dozen people, then 22 hours of happiness might amount to no more than 1-2 hours per person.
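To illustrate that point numerically (a small sketch; the group size of 15 recipients is an illustrative assumption standing in for "more than a dozen people", not a figure from the application):

```python
# Illustration: the ~22 happy hours are summed across all recipients of a
# day's acts of kindness, not experienced by a single person.
total_happy_hours = 32 * 0.7   # ~22.4, from the quoted estimate
recipients = 15                # illustrative assumption: "more than a dozen" people

happy_hours_per_recipient = total_happy_hours / recipients
print(f"Happy hours per recipient: {happy_hours_per_recipient:.1f}")  # ~1.5, i.e. 1-2 hours
```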

 

I am also very skeptical of the reported claim that a one-time intervention of "watching an elevating video, enacting prosocial behaviors, and reflecting on how those behaviors relate to one’s value" (Baumsteiger 2019) can produce an average of 1600 additional acts of kindness per person. That number sounds about 1000x too high to me.

The intervention by Baumsteiger (2019) was a multi-session program that lasted 12 days and involved planning, performing, and documenting one's prosocial behavior for 10 days in a row. The effect size distribution in the Guesstimate model is based on many different studies, some of which were even more intensive.

 

In general, psych studies are infamous for reporting impossibly massive effects and then failing to replicate. 

Most of the estimates are based on meta-analyses of many studies. The results of meta-analyses are substantially more robust and more reliable than the result of a single study.  

@MichaelDickens I can't tell you how many times I have done this, so frustrating. You get a tip for your troubles!

predicted YES

Generally speaking, I hold a somewhat strong position in favor of casting a wider net in terms of rigorous cause exploration, mitigating things like value lock-in, and doing our best to ensure that as many effective ways to improve the world as possible are considered. So even just coming into reading this, I have a slight bias to support the project (which is well written, with evidence of previous work, lots of attempts at getting quantitative estimates, and references to other literature), but I'm otherwise skeptical of some of the details (with more elaboration below, e.g. uncertainties about the method, lower confidence in the simulation data, and a huge budget).

In a nutshell, I'd probably fund a really, really small fraction of this - if specified and perhaps revised further - to the extent a minimum viable product can be obtained, probably with funding of no more than $30,000. Asking for just shy of half a million is... a lot, especially given the uncertainties in the method. Asking for two years, given the methodological uncertainties, also seems overly optimistic.

What would I fund under this broad topic? Really, the preliminary work. An open competition (like Open Phil's recent Cause Exploration Prizes) to source viable behavioral science interventions, and/or a systematic review on the topic with the explicit expectation of the work going through peer review. More optimistically, even their six-month plan with small revisions of expectations. A lot can be done to really see how viable the ideas here are with even $10,000 (roughly the lower boundary for an open competition).

If you received $30,000 USD from this regranting program six weeks from now, what would your plan be for the six months following that? Please be really concrete about what you’re trying to get done.


If I received $30,000, I would use it to hire four students as research assistants for six months. We would first formalize the approach I have used in my preliminary work into a general method that other people can apply to other research topics. We would then conduct a preliminary assessment of the method with the four student research assistants. Furthermore, we would test to which extent different people applying the method reach similar estimates (objectivity) and whether the method can reliably discern research topics with a very high expected impact from research topics with a low expected impact (validity). With $30,000 we would be able to collect a data set with a total of 16 estimates of the expected value of research on eight topics (two independent estimates per topic). This would provide a first indication of the viability of the approach, but it would not be enough to reach a confident assessment. Additional funding would be required to refine the method and to conduct the actual prioritization research.

I'm thinking immediately of the Cause Exploration Prizes by Open Phil, which ran roughly from the end of May till the start of August - call it ~3 months. They had 150 good-faith submissions at $200 each, or $30,000. They also had 20 honorable mentions at $500 each, for a total of $10,000. I think all of the honorable mentions clear a decently high bar, and I'm a priori skeptical there are better behavioral science interventions. To me personally, that doesn't compare favorably with 4 people working for 6 months to get work done on 8 topics. It already sounds like this work could be done, or at least sped up and diversified, in an open competition format as one possible alternative method. I think a competition format can probably bring a diversity of ideas that 4 people with a supervisor cannot.
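For reference, here is the prize math being compared above (a sketch; the figures come from this comment, not from Open Phil directly):

```python
# Rough tally of the Cause Exploration Prizes payouts mentioned above.
good_faith_submissions = 150
payout_per_submission = 200   # USD
honorable_mentions = 20
payout_per_mention = 500      # USD

print(f"Good-faith submission payouts: ${good_faith_submissions * payout_per_submission:,}")  # $30,000
print(f"Honorable mention payouts:     ${honorable_mentions * payout_per_mention:,}")         # $10,000
```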

The other theme that comes through is that there is preliminary work to be done to even ensure this approach is viable, e.g. in terms of objectivity and validity. It seems unreasonable to offer full funding given this level of uncertainty.

Now about the approach in the project.

To overcome these problems, our project will shed light on which behavioral science research topics are most important and how cost-effective it is to fund research on those topics. To achieve this, we will develop a cost-effectiveness analysis method that can be applied to potential topics of behavioral science research. The method developed in this project will enable funders, such as the Future Fund and Open Philanthropy, academic and governmental funding agencies, prioritization researchers, and behavioral scientists, to derive quantitative estimates of how much alternative research projects might improve the future. We will apply our method to produce a list of high-impact behavioral science research topics with estimates of their expected cost-effectiveness in improving the future. We will primarily evaluate use-inspired basic behavioral science research and the development, evaluation, and improvement of behavioral science interventions. In the following, we will refer to this spectrum of activities as potentially useful behavioral science research.

So the big question here for me is how? What is the method, really specifically?

First, "which behavioral science research topics are most important" - for instance, I would be doing a systematic search of RCTs as a first guess but how would you evaluate importance. What's the framework?

Second, "develop a cost-effectiveness analysis method" - unclear why new methods need to be developed. Sounds like this should be more explicitly stated. I can think of good and bad arguments here but it's not the readers job to be doing this heavy lifting, and I'd imagine for a generalist funder this can be an uncertainty.

Third, "use-inspired basic behavioral science research and the development, evaluation, and improvement of behavioral science interventions"... I'm not sure at all what this means. I imagine this is setting out an inclusion criterion but it's really unclear what this means under the hood, i.e. what are examples of potential interventions.

Looking at the previous research, this seems to be heavy on the probabilistic/simulation side (and some of the Guesstimate numbers, like 32 prosocial behaviors per day, seem odd to me at face value). That is different from most CEAs I've seen, which rely more heavily on actual intervention results (even if most CEAs are then adjusted probabilistically to address e.g. uncertainties, moral weights, etc.). I'm generally less confident about more simulation-heavy exercises. It also seems that some of the arguments for why this project could work are themselves simulations ("According to our simulations, the recommended level of research funding would likely produce interventions that are 78% to 252% more cost-effective at doing good than the best interventions that happen to be already available today.")

Finally, if some people think of cash transfers as paternalistic, imagine what would be said about behavioral interventions. I think the form of an intervention matters. And what good is a theoretically high-impact, simulation-based intervention if it's not acceptable to the targeted population? I think a lot more could be said about the harm aspect.

I'm still going YES on this because I think exploring a different area rigorously is worthwhile, but I'm also going YES with the hope that this is seriously revised in terms of scale and work done.

Thank you for engaging with the proposal, @RinaRazh!

I see you would like to see the project's method laid out in detail. If you like, you can read a very detailed description of the methodology in this document.

I'm not convinced, but I think the ambition is there. I wouldn't bet $499,177 on it, but maybe 10% of that. I'd also like to see more theoretical work cited than there is, though. Only a limit order for me; at 46%, this seems properly rated.

Please note that this account bought some shares in this market in error. Once this error was noticed, we then sold them all. This account has a policy of not betting in its own markets.

predicted NO

There is no proposal in this proposal. It is entirely speculative and non-specific, except the budget, which is extremely specific and oddly less speculative than the idea for which the money will be used. Obviously smart people can do good work, but to paraphrase Taleb, these grants shouldn't reward people for their credentials but rather their battle scars.

@BTE I am uncertain whether you noticed my response to Rina. If not, I would like you to know that you can read a very detailed description of the methodology in this document.

@FalkLieder I had not seen it, thank you for sharing it directly! I will definitely read through it today.

Behavioral science is the Andrew Yang of scientific fields.

Not clear why there isn’t a hard policy not to give any money to the grant requestor and people they know.

Why would anyone ever fund this “give us money so we can think and discover 1000x returns for you”?

From “$5k to save a life” to “send us $500k to think up ideas we haven’t thought of yet” 🤔

