Do you identify as EA or E/acc?
293 votes · resolved Jun 28
EA
E/acc
Both
Neither

EA:

Effective altruism (EA) is a 21st-century philosophical and social movement that advocates "using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis".

People who pursue the goals of effective altruism, sometimes called effective altruists, often donate to charities or choose careers with the aim of maximizing positive impact.

Effective altruists emphasize impartiality and the global equal consideration of interests when choosing beneficiaries. Popular cause priorities within effective altruism include global health and development, social and economic inequality, animal welfare, and risks to the survival of humanity over the long-term future.

E/acc:

Effective accelerationism, often abbreviated as "e/acc", is a 21st-century philosophical movement that advocates for an explicitly pro-technology stance. Its proponents believe that unrestricted technological progress (especially driven by artificial intelligence) is a solution to universal human problems like poverty, war and climate change. They see themselves as a counterweight to more cautious views on technological innovation, often giving their opponents the derogatory labels of "doomers" or "decels" (short for deceleration).

The movement carries utopian undertones and argues that humans need to develop and build faster to ensure their survival and propagate consciousness throughout the universe. Its founders Guillaume Verdon and the pseudonymous Bayeslord see it as a way to "usher in the next evolution of consciousness, creating unthinkable next-generation lifeforms."


e/acc is foolish and cringe, but ultimately harmless.

EA isn't as foolish or cringe, but it's much more harmful.

This desu:

By attempting to reduce acceleration to a technical or temporal process that can be understood, intervened on, and even mastered, E/ACC smuggles in its own surreptitious humanisms—'growth is good,' 'big tech knows what's best for you,' 'science is good and we should trust it.' These are just updated versions of the same corny boomer 'we know what's best for you' neoliberalism fueling OpenAI's plans to scan the eyeballs of African villagers and Elon's universal poster verification schemes. E/ACC posers would have you believe that they are rebelling against the norms of their industry, when really they are just rehashing the same tired 'disruption' narrative of every Silicon Valley exec who gets interviewed on Bloomberg. They are the Hitler Youth of globalist control society, and nothing more.

hey I found this article and it's just all my thoughts except written out in English:

What this pathetic clinging to humanism shows us is not just that the current ‘accelerationists’ have strayed far from the godfather’s plan, but that they have no idea what accelerationism meant in the first place.

Effective altruism has always confused me, because what your actions ought to be changes drastically based on how long a timeline you take into account, how many people you expect to cooperate with you, etc.

And so claiming to be an EA requires you to take a stance on one such set of conditions. Personally, I just want anything that gets us to relative post-scarcity (completely feasible, if maybe a century or more away at current development speeds and without assuming any major AI breakthroughs; for details see Isaac Arthur's SFIA). And as far as I'm aware, very few people seem to have those conditions in mind.

Now, this might seem like an e/acc stance then, but I think the term got so tainted by the AI hype that it's not a valid term to represent my position.

To be really lame, I'm thinking of something you could call "F/acc", for "Factorio": you invest everything into the most extreme industrialization possible, all the way to space industry, obliterating the entire biosphere and turning Earth into something akin to Mars in habitability, while exporting all the biosphere bits we care about to space habitats or whatever.

And once you're properly set up in space and have extracted whatever you cared about here on Earth, you could terraform Earth back into its current form (or better!?) if, for some reason, you're afraid of keeping things in habitats for preservation (remember, Earth has about a billion years until it's uninhabitable anyway, unless you starlift the sun, so I don't exactly see the problem personally), and everybody should supposedly be happy in the end.

Mira/acc

I believe it is possible that strong AI will kill us all, but also possible that it will bring about a relative utopia and uplift a lot of humans out of meaningless suffering.

My expected value for building AI is therefore roughly zero, but with very high variance over outcomes. So I suppose that makes me "both", because the common thread of EA and e/acc is that AI will be a big deal, which I agree with.
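To make that concrete, here's a toy sketch; every number below is a made-up placeholder, not an estimate:

```python
# Toy illustration: a near-zero expected value can hide enormous variance.
# All probabilities and payoffs are placeholders, not estimates.
p_doom, p_utopia = 0.5, 0.5      # assumed equal odds
v_doom, v_utopia = -1e12, 1e12   # arbitrary "value" units

ev = p_doom * v_doom + p_utopia * v_utopia
var = p_doom * (v_doom - ev) ** 2 + p_utopia * (v_utopia - ev) ** 2

print(ev)   # 0.0   -> expected value roughly zero
print(var)  # 1e+24 -> variance is enormous
```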

@ConnorDolan This could be a false dichotomy. AI could be created later, when the alignment problem has been solved indisputably, and then the likelihood of utopia would be much higher.

@fd19f3 The opportunity cost of delaying means a lot of humans will suffer in the meantime, for an uncertain chance of actually preventing doom.

@ConnorDolan There is logic in that, but does it seem to you that the chance (even 5-10%, let's say, although it could be much higher) of destroying all of humanity and its potential future is worth the undefined probability of improving currently existing lives?

It seems to me that, with enough time, the probability of doom on the way to utopia can at least be seriously reduced (e.g. by improving human intelligence before attempting to create AI).

In terms of the mathematical calculation of expected value: if humanity survives and develops, billions times billions of happy lives could happen over its history. So shouldn't a real chance of losing all of that outweigh improving only a small, currently existing fraction of humanity? (0.05 × 10^18 > 0.95 × 10^10, roughly speaking, even ignoring everything else.) To draw an analogy: many people probably wouldn't play Russian roulette for a chance to win a million dollars.
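A quick sanity check of that arithmetic, using the rough figures from above (a 5% doom chance, ~10^18 potential future lives, ~10^10 people alive today; all of these are the assumptions stated in the comment, nothing more):

```python
# Rough sanity check of the expected-value comparison above.
p_doom, future_lives = 0.05, 1e18      # assumed doom chance, potential future lives
p_survive, current_lives = 0.95, 1e10  # survival chance, ~today's population

expected_loss = p_doom * future_lives      # 5e16 expected future lives lost
expected_gain = p_survive * current_lives  # 9.5e9 expected current lives improved

print(expected_loss > expected_gain)  # True, by more than six orders of magnitude
```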

@fd19f3 But the flip side is that more time isn’t a guarantee of a good outcome. It would only help in the marginal case where alignment turns out to be hard but not impossible. I think it’s much more likely that it’s either impossible or really easy.

The analogy would be more like: would you play Russian roulette for a million dollars now, or wait a year to play Russian roulette for a million dollars adjusted for the time value of money?
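For concreteness, a small sketch of the discounting I have in mind (the 5% annual rate is just an assumed figure):

```python
# Hypothetical discounting: the same prize paid a year later is worth
# less in today's terms. The 5% rate is an assumption, not a claim.
rate = 0.05
prize = 1_000_000
present_value = prize / (1 + rate)  # value today of $1M paid in a year

print(round(present_value))  # 952381
```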

@ConnorDolan In this scenario, as technology advances, we could become better able to understand whether alignment is incredibly hard or easy, and in the former case not create AI, or pause for a long time. Utopia can probably be achieved even without AI.

