Will we fund the "Happier Lives Institute"?
Resolved YES (Oct 20)

Will the project "Happier Lives Institute" receive any funding from the Clearer Thinking Regranting program run by ClearerThinking.org?

Remember, betting in this market is not the only way you can have a shot at winning part of the $13,000 in cash prizes! As explained here, you can also win money by sharing information or arguments that change our mind about which projects to fund or how much to fund them. If you have an argument or public information for or against this project, share it as a comment below. If you have private information or information that has the potential to harm anyone, please send it to clearerthinkingregrants@gmail.com instead.

Below, you can find some selected quotes from the public copy of the application. The text beneath each heading was written by the applicant.

Why the applicant thinks we should fund this project

HLI conducts and promotes research into the most cost-effective, evidence-based ways to improve global wellbeing, and then takes that information to key decision-makers. 

So far, we’ve looked quite narrowly at GiveWell-style ‘micro-interventions’ in low-income countries to see how taking a happiness approach changes the priorities. This sort of analysis is quite straightforward - it’s standard quantitative economic cost-effectiveness -  but we’re not convinced that these sorts of interventions are going to be the best way to improve global wellbeing. 

We’ve hired Lily to expand our analysis more broadly: Are there systemic changes that would move the world in the right direction, not just benefit one group? What should be done to improve wellbeing in high-income countries? A world without poverty isn’t a world of maximum wellbeing, so how could moving towards a more flourishing society today impact the long-term? These are harder, more qualitative analyses, but no one has tried to tackle them before and we think this could be extremely valuable.

Here's the mechanism by which the applicant expects their project will achieve positive outcomes.

Unlike other charity evaluators, we look beyond standard measures of wealth and health to identify global problems that have been unduly neglected and underfunded such as mental health and chronic pain.

We then identify outstanding funding opportunities that can address these problems by evaluating the cost-effectiveness of interventions, policies, and charities using subjective wellbeing measures.

We communicate our findings to researchers, philanthropists, and policymakers and convince them to redirect resources from less cost-effective programs to our recommended interventions.

Ultimately, this will result in a greater amount of wellbeing per dollar spent.

What would they do with the grant?

Two years' salary for our new Grants Specialist, Dr Lily Yu.

$100,000 (or more) as seed funding to help us establish our new grantmaking fund, make some early-stage grants, and attract more funders.

Here you can review the entire public portion of the application (which contains a lot more information about the applicant and their project):

[This link was removed at the request of HLI, after the market had closed.]

Sep 20, 3:43pm:

Close date updated to 2022-10-01 2:59 am

Oct 15: The link to HLI's application form was removed.


🏅 Top traders

1. Ṁ4,823
2. Ṁ1,099
3. Ṁ409
4. Ṁ200
5. Ṁ198
predicted YES

I think first place will swing on the outcome of this market. If YES I will move to first and if NO @jbeshir has it in the bag while I will probably drop to 3rd.


predicted YES

The suspense is killing me!

predicted YES

I reposted my comment about the HLI grant to the EA forum. Helmetedhornbill commented and highlighted common concerns from this comment section. I responded, and I'm posting my thoughts here as well. They may still be relevant, perhaps a little late...

1.

I focused on giving an overview of HLI and the problem area because, compared to other teams, it seemed like one of the most established and highest-quality orgs in the Clearer Thinking regranting round. I thought this might be missed by some, and it is a good predictor of the outcome.

2.

I focused on the big-picture lens because the project they are looking for funding for is pretty open-ended.

So far, we’ve looked quite narrowly at GiveWell-style ‘micro-interventions’ in low-income countries to see how taking a happiness approach changes the priorities. This sort of analysis is quite straightforward - it’s standard quantitative economic cost-effectiveness -  but we’re not convinced that these sorts of interventions are going to be the best way to improve global wellbeing. We’ve hired Lily to expand our analysis more broadly: Are there systemic changes that would move the world in the right direction, not just benefit one group? What should be done to improve wellbeing in high-income countries? A world without poverty isn’t a world of maximum wellbeing, so how could moving towards a more flourishing society today impact the long-term? These are harder, more qualitative analyses, but no one has tried to tackle them before and we think this could be extremely valuable.

I think the prior performance and the quality of the methodology they are using are good predictors of the expected value of this grant. 

3.

I didn’t get the impression that the application lacks specific examples, though perhaps it could be improved. They listed three specific projects whose impact they want to investigate:

For example, the World Happiness Report has only been running for ten years but its annual league table of national wellbeing is now well known and sparks discussion amongst policymakers. Further funding to promote the report could substantially raise the profile of wellbeing. Other examples include the World Wellbeing Movement which aims to incorporate employee wellbeing into ESG investing scores and Action for Happiness which promotes societal change in attitudes towards happiness.

That said, I wish they had listed a couple more organizations/projects/policies they would like to investigate. Otherwise, they could communicate something along the lines of: We don’t have more specifics this time, as the nature of this project is to task Dr Lily Yu with identifying potential interventions worth funding. We therefore focus more on describing our methodology, direction, and relevant experience.

4.

I am not sure how much support HLI gets from the whole EA ecosystem. It may be low. In their EA Forum profile, it appears low: “As of July 2022, HLI has received $55,000 in funding from Effective Altruism Funds”. Because of that, I thought discussing this topic at a higher level might be helpful.

5.

I also think the SWB framework aspect wasn’t highlighted enough in the application. I focused on this because I see very high expected value in supporting this grant application: it will help HLI stress-test the SWB methodology further.

6.

As for Nuño's comment: I don't see a problem with money being passed on through a number of orgs. I sympathize with this fragment of Austin's comment (please read the whole comment, as the fragment alone is a little misleading about what Austin meant there):

I'm wondering, why doesn't this logic apply for regular capitalism? It seems like money when you buy eg a pencil goes through many more layers than here, but that seems to be generally good in getting firms to specialize and create competitive products. The world is very complex, each individual/firm can only hold so much know-how, so each abstraction layer allows for much more complex and better production. 

Initially, FTX decided on the regrant dynamic – perhaps to distribute the intelligence and responsibility to more actors. What if adding more steps actually adds quality to the grants? I think the main question here is whether this particular step adds value. 

predicted NO

@pav "the project they are looking for funding for is pretty open-ended" and "I didn’t get the impression that the application lacks specific examples" sound slightly contradictory to me. I think there's a difference between something as concrete as projects like water box 2.0 or the student forecasting tournament (which you can summarize in a sentence) vs this application (where you can only summarize some of the things they might do). In any case, I broadly agree with you that HLI should be funded more, particularly from EA-aligned funds, and that their work should be promoted more; but this proposal, compared to the quality of the other ones, is not favorable to me. I strongly worry about granting programs that fund based on reputation or previous projects ("we know them") rather than the quality of current proposals.

bought Ṁ90 of YES

// Utilitarianism and wellbeing

In many definitions of utilitarianism, well-being is the central, defining term. Utilitarianism is a family of normative ethical theories that prescribe actions that maximize happiness and well-being for all individuals.

Well-being, however, is notoriously hard to define and measure. Perhaps that’s why this area is relatively neglected within the EA community. Also, in the past, established frameworks like QALY+ didn’t render opportunities in the space particularly impactful. At least in the intuitive sense, it seems bizarre that EA couldn’t identify interventions that attempt to increase wellbeing directly. Intuitively, it seems there should be projects out there with high expected value.

Speculatively, there may be one more reason for the lack of interest in the community. People within EA seem highly analytical: the majority are engineers, economists, and mathematicians. Could demographics like this mean that people, on average, score lower on emotional intelligence skills, and therefore that the community sees less potential in projects optimizing this space?

// Happier Lives Institute as an organization

In the simplest terms, the Happier Lives Institute (HLI) is like a GiveWell that specializes in well-being. They identify the most cost-effective opportunities to increase global well-being.

Michael Plant, its founder, has been an active member of the EA Forum since 2015. He has written 26 posts, gathering more than 5.6k karma. He seems to have been interested in the subject matter at least since 2016, when he wrote his first post on the Forum asking Is effective altruism overlooking human happiness and mental health? I argue it is. His lectures on the subject seem clear, methodical, and follow the community's best epistemological practices. He was Peter Singer’s research assistant for two years, and Singer is an advisor to the institute.

The Clearer Thinking regrant would sponsor the salary of Dr. Lily Yu. She seems to have relevant experience at the intersection of science, health, entrepreneurship, and grant-making.

// Neglected

The cause area seems to be neglected within EA. Besides HLI, I am aware of the EA Psychology Lab and Effective Self-Help, but none of these organizations do work as comprehensive as HLI's.

// Subjective well-being framework

Even if the only value proposed by HLI was to research and donate to the most cost-effective opportunities to increase global well-being, I think it would be an outstanding organization to support.

However, HLI also works on and stress-tests the subjective well-being framework (SWB), work that the whole EA community can benefit from. Michael Plant describes the SWB methodology in this article and this lecture. Most leading EA orgs, like Open Philanthropy and GiveWell, use a different approach: the QALY+ framework.

I think the value of HLI as an organization lies in running an alternative to the QALY+ framework and challenging its assumptions. Michael Plant does this in the essay A philosophical review of Open Philanthropy’s Cause Prioritisation Framework. I won't attempt to summarize this topic here (please see the links above for details), but I will highlight a couple of the most interesting threads.

“It’s worth pointing out that QALYs and DALYs, the standard health metrics that OP, GiveWell, and others have relied on in their cause prioritisation framework, are likely to be misleading because they rely on individuals' assessments of how bad they expect various health conditions would be, not on observations of how much those health conditions alter the subjective wellbeing of those who have them (Dolan and Metcalfe, 2012) … our affective forecasts (predictions of how others, or our later selves, feel) are subject to focusing illusions, where we overweight the importance of easy-to-visualise details, and immune neglect, where we forget that we will adapt to some things and not others, amongst other biases (Gilbert and Wilson 2007).” Link

Also worth noting is that the SWB framework demonstrates a lot of potential in areas previously ignored by EA organizations:

“[We] at the Happier Lives Institute conducted two meta-analyses to compare the cost-effectiveness, in low-income countries, of providing psychotherapy to those diagnosed with depression compared to giving cash transfers to very poor families. We did this in terms of subjective measures of wellbeing and found that therapy is 9x more cost-effective” Link
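The comparison quoted above boils down to simple per-dollar arithmetic: rank interventions by units of subjective wellbeing gained per dollar. A minimal, hypothetical sketch in Python; every number below is invented for illustration and is not an actual HLI estimate:

```python
# Hypothetical sketch of an SWB-style cost-effectiveness comparison:
# rank interventions by wellbeing gained per dollar (WELLBYs/$).
# All figures here are made up for illustration, not HLI's estimates.

interventions = {
    # name: (WELLBYs produced, cost in dollars) -- invented numbers
    "psychotherapy": (0.9, 100),
    "cash transfer": (0.1, 100),
}

def wellbys_per_dollar(wellbys: float, cost: float) -> float:
    """Cost-effectiveness: units of subjective wellbeing bought per dollar."""
    return wellbys / cost

# Rank interventions from most to least cost-effective.
ranked = sorted(
    interventions,
    key=lambda name: wellbys_per_dollar(*interventions[name]),
    reverse=True,
)

baseline = wellbys_per_dollar(*interventions["cash transfer"])
multiple = wellbys_per_dollar(*interventions[ranked[0]]) / baseline
print(f"{ranked[0]} is {multiple:.0f}x as cost-effective as cash transfers")
```

With these invented inputs the sketch reproduces a 9x multiple; the substantive question, of course, is where the WELLBY and cost estimates come from.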

HLI also looked at interventions backed by Open Philanthropy and GiveWell to compare QALY+ with SWB results.

“I show that, if we understand good in terms of maximising self-reported LS [Life satisfaction], alleviating poverty is surprisingly unpromising whereas mental health interventions, which have so far been overlooked, seem more effective” Link

But how can reasoning like this influence organizations like Open Philanthropy or GiveWell? Here, Michael Plant describes how grant-making decisions can vary based on the weight given to different frameworks. The example describes assessing the value lost based on age at death.

“Perhaps the standard view of the badness of death is deprivationism, which states that the badness of death consists in the wellbeing the person would have had, had they lived. On this view, it’s more important to save children than adults, all else equal, because children have more wellbeing to lose.

Some people have an alternative view that saving adults is more valuable than saving children. Children are not fully developed, they do not have a strong psychological connection to their future selves, nor do they have as many interests that will be frustrated if they die. The view in the philosophical literature that captures this intuition is called the time-relative interest account (TRIA).

A third view is Epicureanism, named after the ancient Greek philosopher Epicurus, on which death is not bad for us and so there is no value in living longer rather than shorter.”

Prioritizing each of these approaches means different grant-making decisions (are we valuing kids' or adults' lives more?). Plant also thinks that GiveWell does an insufficient job in its modeling.

“On what grounds are the donor preferences [60% of their weight on this marker] is the most plausible weights … The philosophical literature is rich with arguments for and against each of the views on the badness of death (again, Gamlund and Solberg, 2019 is a good overview). We should engage with those arguments, rather than simply polling people… [Open Philantropy] do not need to go ‘all-in’ on a single philosophical view. Instead, they could divide up their resources across deprivationism, TRIA, and Epicureanism in accordance with their credence in each view.” Link
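Plant's suggestion in the quote above, dividing resources across the three views in proportion to one's credence in each rather than going all-in on one, is easy to make concrete. A minimal sketch, with made-up credences and budget (these are not figures from the application or from Plant):

```python
# Hypothetical sketch: divide a grant budget across philosophical views
# of the badness of death in proportion to one's credence in each view,
# as Plant suggests, rather than going all-in on a single view.
# The credences and budget below are invented for illustration.

budget = 100_000  # total budget in dollars (illustrative)

# Credence in each view (must sum to 1).
credences = {
    "deprivationism": 0.5,  # favors saving children
    "TRIA": 0.3,            # favors saving adults
    "Epicureanism": 0.2,    # no value in longer lives per se
}

# Allocate the budget proportionally to credence.
allocation = {view: budget * p for view, p in credences.items()}

for view, dollars in allocation.items():
    print(f"{view}: ${dollars:,.0f}")
```

The design point is diversification under moral uncertainty: no single view gets everything, and the split shifts smoothly as credences are updated.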

// Personal reasons

I also see the value in promoting evidence-based approaches to therapy because of my personal background. I grew up in Poland, a country that had a rough 19th and 20th century: Partitions, uprisings, wars, holocaust, change of borders, communism, transformation. The generational trauma is still present in my country.

I went through four types of therapy and only later stumbled on evidence-based approaches. From my experience, it seems critical to pick the right therapies early on. Approaches like cognitive behavioral therapy (CBT) or third-wave therapies are much more effective. (Third-wave therapies are evidence-based approaches built on the CBT foundation, but recreated using human instead of animal models.)

In my country, and I assume in other countries as well, ineffective and unscientific approaches are still largely present. It seems valuable to have an organization with a high epistemic culture that assesses and promotes evidence-based interventions.

//Counter-arguments

I think the work of HLI would be compromised if the SWB framework had major flaws. Reading Michael Plant’s article on the subject makes me think that SWB is a well-researched and heavily discussed approach; however, I don’t know much about its internal mechanics.

// Summary

I see the value of HLI supporting interventions that increase global well-being directly. But I also see value in their work on the SWB framework. I think having an alternative, thoroughly researched framework like SWB has high expected value for the whole community. I think the regrant will help HLI stress-test their assumptions and run the framework on more organizations. HLI's work could have an impact on leading EA organizations like Open Philanthropy or GiveWell, potentially helping recalibrate their recommendations and assessments.

// Epistemic status

I must be biased because I am betting yes on this market. When I saw this at 40% two weeks ago (having had earlier exposure to HLI), I thought the prediction was way off.

Since then, I have spent a couple of days researching the topic. I watched and read a couple of HLI YouTube lectures and EA Forum posts. I am not very knowledgeable about the internal mechanics of frameworks like SWB and QALY+. I have decent knowledge of, and long exposure to, topics like psychotherapy, well-being, and evidence-based therapies.

bought Ṁ20 of NO

I'm thinking back to this helpful comment by John Beshir on "Better Party Politics". To me something similar can be said here.

HLI's project can be considered somewhat in the same vein as Extending cause prio to the behavioral sciences. Both (broadly speaking) aim to extend the consideration of useful interventions, both coming from a psychology- and wellbeing-informed angle. Both share some similar downsides for me (e.g. poor consideration of harm scenarios). Money from both grants is partially to be directed to salaries.

At the same time, there is hardly any detail or concreteness in this (HLI) application. It's extremely broad and descriptive, lacks an impact statement, and lacks any defense of the $100,000 seed fund that would be a regrant of a regrant. Looking at the Max Planck group, there are concrete steps there and a specific methodology. I don't think it's the most appropriate approach (and I would fund a much smaller amount than asked), but it's specific enough that a criticism can be made on that level and compared to their projected impact.

HLI was at about 68-66% this morning and Cause Prio Behavioral Sciences at around or just under the 40% mark. I understand people in the comments have expressed skepticism about both approaches and behavioral research in general, yet I think, based on the quality of the applications, these percentages should run in the other direction. So I'm buying some more No here and some more Yes there.

predicted NO

This project is right on the borderline for me; the crowd's ~50% is probably roughly correct. An alternate metric set to GiveWell's is a workable proposal, and HLI seems well-poised to execute on it. I'm just trying to suss out what Clearer Thinking would think of this as a specific project. They let it advance to this round, but HLI's follow-up responses to "what interventions would you like to investigate" and "how would those interventions improve the future" were basically non-answers; they didn't list any interventions. The concept can work, but the application is lacking, and I'm not sure how to reconcile those.

Hi! I'm Barry, the Communications Manager at the Happier Lives Institute.

We’re very grateful to everyone who has provided constructive feedback on our funding application. However, we'd like to clarify a few points that commenters may be unaware of: 

  1. For many questions, we were advised to “[keep] each response to 1 sentence whenever possible, and not exceeding 3 sentences for any question.” This is why some of our responses do not provide as much detail as commenters might have expected.

  2. The full application includes five references which will allow the decision-makers to make an informed judgement about the grant manager’s previous experience and their ability to execute the project to a high standard.  We had a thorough recruitment process lasting over two months, with multiple interviews and relevant work-based tasks.  HLI has confirmed that the grant manager has experience managing grant-based programs for several organisations.

  3. For examples of a rigorous, action-guiding research framework, see The Handbook for Well-being Policymaking (open access) and The Origins of Happiness.

predicted YES

@BarryGrimes You should bet all of your mana on yourself!! Skin in the game!!!

predicted NO

@BarryGrimes I'm surprised they gave you this recommendation on the number of sentences, especially looking at the other submitted proposals. While other proposals have managed to be more concrete and detailed in similar word counts as HLI's, there do appear to be lengthier ones as well. For what it's worth, if the funders feel they need further information, I hope you have the chance to provide it.

The Clearer Thinking team have added some further information about this on the about page:

What information have applicants been asked to provide? 

The 37 finalist applicants were given this form to complete, and all of them submitted it to us. For the 28 applicants who opted-in to sharing versions of their application documents on Manifold Markets, we asked them to remove any information that they did not want to be shared publicly.

bought Ṁ20 of NO

The comment section is wholly negative in its assessment, and it seems like a lot of people buying YES are new accounts. I therefore think the probability is inflated, so I'm going to buy it down, despite hoping that they win funding!

Mental health is so very neglected within EA; I really hope HLI is successful!

bought Ṁ50 of NO

Actually buying even more No. As John Beshir and Rina Razh say below, there doesn't even seem to be a concrete plan for how the grantmaking will work and why we'd expect it to be unusually good.

The base rate expectation is not that a new grantmaker would be a GiveWell-quality grant recommender for mental health and wellbeing. This doesn't seem like high enough EV.

The project should be funded by people who are passionate and convinced that HLI is doing high-quality work and grantmaking, like early GiveWell supporters. Relatively low-information forecasting (relative to those possible passionate people's insights) IMHO says this is lower EV. I could be wrong, but I don't expect myself and the other forecasters here to know that. My bet is that Clearer Thinking should not fund this.

bought Ṁ100 of NO

Too much meta. SBF wants to give away money to do good. His team comes up with a regranting strategy. Clearer Thinking gets regranting money, solicits applicants for grants, and devises a forecasting tournament to decide which grants will do the most good. Forecasters say give the money to start a new fund for increasing wellbeing. A new hire will then evaluate giving opportunities to determine which grants increase wellbeing the most.

bought Ṁ80 of NO

Why should we believe social science can answer this question impartially and usefully? What if such a study discovered, as many do, that religiosity is quite predictive of normal people's happiness and sanity amidst chronic pain? Would we actually try to promote that? What makes for a better life in a richer country seems to be intangibles, like belonging and meaningfulness, work, altruism, and family. Lacking these is a civic problem, not a research problem at least at this time. The grant doesn't provide good evidence that there's a framework for research on these wicked problems that can translate into action.

It seems there is more focus on the part of the application asking for coverage of the two-year salary than on the $100,000 for seed funding. This is strange to me for two reasons.

1) Dr Yu is already listed on the HLI site (https://www.happierlivesinstitute.org/about/meet-the-team/) and I imagine she already has a salary offer, i.e. HLI has funds to cover a salary at least for a year. If they get salary coverage through this grant, how would they redirect their funds?

2) There is hardly any detail about the seed funding request. How is there no detail on how this will be managed or allocated, or how the decisions will be made? From the previous work highlighted, there does not appear to be relevant experience managing a seed fund. At the same time, the answers to the questions about risk or things not going well seem overly optimistic. If managing the seed fund falls mainly on the shoulders of a recent hire, does that not sound a little risky? It sounds plausible to me that there would be volatility in outcomes. My personal confidence increases with more transparency and consideration of risk.

I'll add a few more minor points only because I've now read the application in full.

In the future, what will be the most important indicators that your project is succeeding?   

  • The number of grantmakers and policymakers that measure the impact of their decisions in terms of Wellbeing-Adjusted Life Years per dollar (WELLBYs/$)

  • An increase in WELLBYs/$ as new research reveals more cost-effective interventions

  • The amount of money reallocated to those interventions.

>This could have been more specific. For the first point, surely we would care more about the number of grantmakers or policymakers changing their decisions as a result of this grant. How would this measure be obtained for either the salary or the seed funding? Analogous problems apply to the other two bullet points. There are possible ways to estimate this, but from this application, HLI's answer is unclear at present.

Who is going to be working on this project, and why are they especially well-suited to implement it?

>I wish there were more detail on the hiring process: how many candidates were considered, how many rounds there were, and how the decision was made. If I were covering two years of salary, I would want more assurance than a CV summary.

If this project gets funded by us but doesn’t achieve its desired outcomes, what would the most likely reason be?

We fail to find new policy priorities using subjective wellbeing measures and our advocacy efforts get very little traction with philanthropists and policymakers.

>There is little substance here; in fact, this is almost circular. First, HLI says they aim to identify policy priorities and liaise with philanthropists and donors; then they say the project fails if they can't do that. Potential failure modes should have been unpacked and evaluated, and there are various possibilities, e.g. finding some suitable policy priorities based on research but few on-the-ground orgs that can be evaluated transparently, communication or logistics difficulties, or methodological and measurement constraints. I would imagine a more specific answer could have been provided.

Please list any ways in which this grant/investment could be actively harmful (e.g., by creating risks or reputation damage to the funders or the EA community, bad effects on the funding ecosystem, or direct harm caused by the project).

It’s hard to think what these could be. We are doing research to find overlooked and high-impact ways to make people happier (in the near- and long-term), so we’re looking for things that are most beneficial in expectation. Making people happier seems robustly good compared to funding existential risk work which has greater sign uncertainty.

>This also sounds very one-sided and not well considered. I can think of a variety of harm scenarios. I will describe the tensions, rather than the outcomes, though I hope it is clear how reputational harm, misaligned priorities, or inefficient use of funding could come about.

Recently, on the EA Forum, there have been at least a couple of discussions suggesting concrete ways in which a focus on happiness may obscure a focus on meaning, or that a universal application of a happiness-and-wellbeing focus may not be culturally appropriate (happiness may not be morally desirable to everyone, everywhere; see the forum post and the peer-reviewed paper).

Further, I point to this blog post by economist Lant Pritchett, critiquing charity work (such as overly promoting psychosocial treatments like therapy) as inefficient relative to the broader goal of development work.

As a non-expert, just someone who reads the EA Forum regularly, I am able to point to three possibly difficult moments to navigate. It seems that communication and research analysis will be critical, and they do carry harm scenarios.

I could keep going, but I think it's clear that I find this application non-specific and not very convincing, whereas there are clear and easy ways the information could be improved. I admire HLI's work even though this particular piece isn't persuasive.

"Project specific-question 2: 


What are some interventions you would like to evaluate if you could, and approximately how costly would each of these interventions be to investigate?

Your answer to project-specific question 2:


The Grants Strategist will take a global and qualitative hits-based approach to think about the best levers to pull to improve global wellbeing. This will include new policies, interventions, and research that could have a catalytic effect on global wellbeing, both now and in the long run (e.g. lead regulation, immigration reform, improving access to pain relief, and research on new treatments for mental illness)."

Feels like it's just restating the criteria by which it would like to do the evaluation, rather than any specific interventions they have lined up to evaluate.

@jbeshir I think this is a valuable observation, and I consider it to apply more broadly throughout the application, to the extent that it seems a major concern. A lot of the answers are very non-specific and really do not address the questions posed. For instance:

In brief, why should we fund your project?

HLI conducts and promotes research into the most cost-effective, evidence-based ways to improve global wellbeing, and then takes that information to key decision-makers.

This is a description of who HLI are, not why their project should be funded.

So far, we’ve looked quite narrowly at GiveWell-style ‘micro-interventions’ in low-income countries to see how taking a happiness approach changes the priorities. This sort of analysis is quite straightforward - it’s standard quantitative economic cost-effectiveness -  but we’re not convinced that these sorts of interventions are going to be the best way to improve global wellbeing.

This is a description of previous work with a comment on their view on CEA, not a response why their project should be funded.

We’ve hired Lily to expand our analysis more broadly: Are there systemic changes that would move the world in the right direction, not just benefit one group? What should be done to improve wellbeing in high-income countries? A world without poverty isn’t a world of maximum wellbeing, so how could moving towards a more flourishing society today impact the long-term? These are harder, more qualitative analyses, but no one has tried to tackle them before and we think this could be extremely valuable.

This is a statement about hiring a new team member, plus a series of questions followed by a broad and not well-defended comment (it claims value without elaborating); it is not a substantial argument for why the project merits funding.

Further down:

If you received $30,000 USD from this regranting program six weeks from now, what would your plan be for the six months following that? Please be really concrete about what you’re trying to get done.

Seek additional funding from other grantmakers and private donors.

This is missing detail. How would the $30,000 be used within six weeks? What would be the goals, milestones, and dates? Which of the two "projects" would the money go towards (salary or seed funding)?

FTX chooses regrantors, who give money to Clearer Thinking, which gives money to HLI, which gives money to their projects. It's possible I'm adding or forgetting a level, but it seems like too many levels of recursion, regardless of whether the grant is good or bad.

predicted YES

@NuñoSempere Hm, the number of steps of indirection in grantmaking had been on my mind too, and makes me hesitate about this particular application.

I'm wondering, why doesn't this logic apply for regular capitalism? It seems like money when you buy eg a pencil goes through many more layers than here, but that seems to be generally good in getting firms to specialize and create competitive products. The world is very complex, each individual/firm can only hold so much know-how, so each abstraction layer allows for much more complex and better production.

My default guess is something like "for-profit purchases are evaluated based on personal value; nonprofit purchases are a mix of 'maybe this is good for the world' and 'my funder would like this'"? And I wonder if there are ways to improve the nonprofit logic such that we can support more useful layers of indirection, eg:

  • Providing a tighter feedback loop between 'good for the world' expectations vs results, eg your criminal justice reform review

  • Having nonprofit people better internalize what 'good for the world' looks like

  • Reducing dependency on 'my funder would like this'

bought Ṁ50 of NO

Right, how could this possibly be the most efficient way to create benefit?