Will a compelling argument defending Effective Altruism be posted in response to my criticism of the movement?
Resolved NO (Nov 28)

On November 21, 2023, I compiled and published a criticism of the Effective Altruism movement at https://www.reddit.com/r/singularity/comments/180gnca/why_has_nobody_asked_what_is_wrong_with_all_these/. The criticism makes the case that EA has resulted in some of the largest value destruction in history and that the millions of lives that have been ruined by its most powerful practitioners far exceed any good the movement has achieved.

I am interested in understanding whether my assessment is generally correct. However, at market open, nobody had yet posted any arguments defending the movement or its principles. I want to understand whether that is because the movement's philosophical conclusions are logically indefensible.

This market will resolve to YES if someone posts a compelling argument in that thread that refutes my original position. For the market to resolve YES, the argument must:

  • Defend the Effective Altruism movement

  • Not rest solely on the idea that every single target of my criticism was acting contrary to the movement's principles

  • Not include personal attacks against anyone, including against me or the targets of my criticism

  • Have a score of 50 or greater on November 28, 2023 at noon EST

Otherwise, the market will resolve NO.

If you post the response that resolves the market to YES, then I'll send 100M.


RESOLUTION: See comment below.


RESOLUTION: This question resolves to NO. There were four comments that were close, but none qualified.

Two comments had more than 50 upvotes, but they criticized the movement rather than defending it.

One comment, by nextnode, had 56 upvotes and defended Effective Altruism, but it did so by implying that I am mentally ill (which is true). However, the resolution criteria do not permit a qualifying post to include personal attacks, and my bipolar disorder was not referenced in a manner relevant to Effective Altruism.

One comment, by take_it_easy_m8, defended the movement's ideals and would have qualified, but it had only 49 upvotes at market close. A screenshot of that comment is below.

Therefore, by a single upvote, there were no qualifying posts. Nobody from Manifold accepted the 100M offer, so it was not paid.

predicted NO

@SteveSokolowski ohh wow, I did not realize you would count that screenshotted comment as a compelling argument or a defense (I was maybe excessively cynical about why you were running this market).

Kudos; I'll keep that in mind for your future markets!

@RobertCousineau It met all the criteria as defined in the post, didn't it? I'm not sure I was convinced by it, but that's what the criteria said. There just weren't enough votes.

predicted NO

@SteveSokolowski The language you used in this market made me expect you wanted it to resolve NO.

Therefore, I would have expected you to want a comment that actually argued against what you were saying at an object level and that also got 50+ upvotes. (I'd also expect that to be a very hard ask, as Reddit doesn't really do well with longform comment engagement in general, much less more than several hours after a post goes up and starts to move off the front page.)

I generally agree with your assessment in that post, but I don't think the things you are attacking represent the entire EA movement. The AI-fearing EA people are part of a specific subgroup called longtermists. Many EAs are not interested in that, and some even oppose those people.
As an effective altruist, I'm tired of being associated with longtermists. First, it was SBF and his crypto bullshit, now it's those crazies on the OpenAI board. I have nothing in common with these people. I just want to donate so that impoverished children will have healthy lives, and to improve the lives of the billions of farm animals that are abused every day.

@Shump I'll just add that, at least in my experience, longtermists are not the majority. However, they appear to get the vast majority of the press on EA, especially the negative press.

@Shump +1

I'd also mention that the online EA presence is very skewed. I help run in-person EA meetups in Seattle, and longtermism is very much not the dominant ideology. When MacAskill's book came out (basically a longtermism booster book), it was heavily criticized, and only a handful of people agreed with the tenets that differentiated it from what I think of as "core" EA.

In any case, I'm sure whatever arguments are made here have already been engaged with extensively on the EA Forum, if the OP is interested.

EA folks are nothing if not intellectually masochistic so you will be very popular if you post criticism on the forum and will likely have more than your fill of engagement.

The last criterion is an impossibly high bar, since old posts on Reddit quickly get less and less attention.

I would point out the following specific issues with your post.

  1. It's factually inaccurate to say McCauley and Toner "contributed to crafting" OpenAI's structure, since they joined the board after the governance mechanisms of the OpenAI nonprofit were established. It's also not established that AI safety was the main reason, or even a contributing reason, why Altman was fired; there are multiple public statements from people on both sides of the standoff that it was not related to safety, including from Shear, OpenAI employee roon (on Twitter), and, most credibly, Brad Lightcap https://www.axios.com/2023/11/18/openai-memo-altman-firing-malfeasance-communications-breakdown

  2. You implicitly lump Ilya Sutskever in with EA (even if you don't state it explicitly), even though there's no indication he's part of the EA movement. As far as I know, he has never said he's an EA; he explicitly wants to create AGI ("make sure that OpenAI builds AGI that benefits all of humanity" https://arstechnica.com/information-technology/2023/11/report-sutskever-led-board-coup-at-openai-that-ousted-altman-over-ai-safety-concerns/), which is hardly a doomer position; and this New York Times article says he is not an Effective Altruist https://www.nytimes.com/2023/11/20/technology/openai-sam-altman-microsoft.html. Nor is it correct to imply that he is essentially part of EA because he cares about AI safety. Concerns about AI as an existential threat have come as much from within the AI research community as from the EA community, perhaps most notably from Stuart Russell, but also from other signatories of this letter https://www.safe.ai/statement-on-ai-risk. Ilya would likely hold similar views even if EA had never turned its attention to AI existential risk.

  3. The controversial Shear comment was in response to an explicit thought experiment, not some secret dog whistle about what real-world policy should be. https://twitter.com/BarbettiJames/status/1664375581180571648

  4. There are multiple things wrong with your $120B value-destroyed number, which as far as I can tell you obtained by adding the highest valuation of FTX to the highest valuation of OpenAI. First, SBF's trial revealed FTX was a fraud pretty much from the start, so the real amount of value destroyed was the total amount of the fraud ($8 billion). A large portion of that will apparently be recovered, though of course that has to be discounted by the extreme pain the collapse caused in the meantime. Second, it's probably incorrect to say the full value of OpenAI was destroyed by the board's actions. All those AI researchers still exist and will go work for other companies if they're not satisfied with the resolution of the crisis. If you believe Manifold, OpenAI is still worth $47B https://manifold.markets/LarsDoucet/what-will-openai-be-valued-at-this, and the result of this debacle may be that OpenAI loses $40B in value and Microsoft gains $30B. Plus, private valuations are not necessarily reliable (think WeWork).

  5. It's incorrect to compare the value of companies to GDP. The portion of GDP that a company contributes is its yearly output, which is considerably lower than its valuation, especially for a growing company like OpenAI. (A rough numeric sketch of points 4 and 5 follows this list.)
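To make points 4 and 5 concrete, here is a minimal sketch in Python of the arithmetic being disputed. The $120B, $8B, and $47B figures are the ones cited above; the peak-valuation and annual-revenue figures are rough illustrative assumptions, not verified numbers.

```python
# Minimal sketch of the "value destroyed" arithmetic in points 4 and 5.
# $120B, $8B, and $47B come from the comment above; the peak valuation
# and annual revenue are rough illustrative assumptions.

CLAIMED_DESTROYED = 120e9   # OP's number: peak FTX + peak OpenAI valuations
FTX_FRAUD = 8e9             # total fraud revealed at SBF's trial (point 4)
OPENAI_PEAK = 86e9          # assumed peak OpenAI valuation (illustrative)
OPENAI_RESIDUAL = 47e9      # Manifold's estimate of OpenAI's remaining value

# Point 4: count value actually destroyed rather than summing peak valuations.
revised = FTX_FRAUD + (OPENAI_PEAK - OPENAI_RESIDUAL)
print(f"claimed: ${CLAIMED_DESTROYED / 1e9:.0f}B, revised: ${revised / 1e9:.0f}B")

# Point 5: GDP counts yearly output, while a valuation capitalizes many years
# of expected future profits, so the two are not directly comparable.
ANNUAL_REVENUE = 1.3e9      # illustrative annual revenue assumption
print(f"yearly output is ~{ANNUAL_REVENUE / OPENAI_PEAK:.1%} of the valuation")
```

Under these assumptions, the naive $120B figure shrinks to roughly $47B, and the company's yearly contribution to GDP would be on the order of 1-2% of its headline valuation.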


I don't know how much I can speak to your ideological objections to EA, because your post focused more on the harm that members of EA have done. I will say that you treat it as obvious that EA's work in global health is insignificant compared to the value it has destroyed, and I'd like to push back on that. I'll accept the (clearly flawed) premises that EA is responsible for the FTX crash and the complete destruction of OpenAI, and that advanced AI development is solely good. GiveWell alone estimated it averted over 159,000 deaths between 2009 and 2021 https://www.givewell.org/default/citations#:~:text=standout%20charity%22%20designation.-,Lives%20saved,will%20avert%20over%20159%2C000%20deaths. Can you honestly say you'd kill 159,000 people to prevent the FTX collapse and to save OpenAI? I'd also like to note that not all EAs think developing AGI is a bad idea, and, from my understanding, most EAs and most EA money are devoted to non-x-risk causes.

All these points accept your premise that everything a self-proclaimed EA does is fully attributable to EA.

I want to say that, if I understand your allusions correctly, I'm very sorry that you lost money to FTX, and I agree that EA norms were partially responsible for SBF's fraud, more than some EA leaders would like to admit.

@MaxMorehead Thanks for these points. I will review them in detail when I fully wake up later and make corrections.

@SteveSokolowski Add to all this that Shear, an EA, was responsible for mending the rift in the company.

I'm not seeing much of a point in trying, because you posted it on a subreddit that seems to be extremely pro-AI. This seems like an absurd assertion: "Meanwhile, in the real world, people are suffering from cancer and aging, and dying at a rate of 150,000 per day. Around the world, real people alive today slave away in subsistence farming or expose themselves to brain damage after burning trash to recover pieces of gold from discarded computer parts. These are problems that AI can definitely solve."

Uh, I see no reason that's definitely true, but it hasn't gotten any pushback. It seems to me that this subreddit is very convinced the singularity will come and will ameliorate all human suffering (including aging???!!!), so I don't see much likelihood that my pushback will get 50 upvotes.

@TiredCliche There is actually a comment already posted there that would almost qualify; it has 38 upvotes but includes personal attacks. While there are obviously trolls, I have found the discussion to be more reasoned than I expected.

Just noting that the offered 100M is 1 dollar, plus maybe another from this market's liquidity. That would be below market rate even on Fiverr; maybe try asking ChatGPT?

@CamillePerrin This; I'd be tempted to write the rebuttal myself, but not for 100M.

That said, given that it sounds like you genuinely do want to understand whether your arguments are correct, you might have more luck posting somewhere other than r/singularity?
