Transcript of a Twitter Discussion on EA from June 2022

Recently on Twitter, in response to seeing a contest announcement asking for criticism of EA, I offered some criticism of that contest’s announcement.

That sparked a bunch of discussion about central concepts in Effective Altruism. Those discussions ended up including Dustin Moskovitz, who showed an excellent willingness to engage and make clear how his models worked. The whole thing seems valuable enough to preserve in a form that one can navigate, hence this post.

This compiles what I consider the most important and interesting parts of that discussion into post form, so they can be more easily seen and referenced, including in the medium-to-long term.

There are a lot of offshoots and threads involved, so I’m using some editorial discretion to organize and filter.

To create as even-handed and useful a resource as possible, I am intentionally not going to interject commentary into the conversation here beyond the bare minimum.

As usual, I use screenshots for most tweets to guard against potential future deletions or suspensions, with links to key points in the threads.

(As Kevin says, I did indeed mean ‘should’ there.)

At this point there are two important threads that follow, and one additional reply of note.

Thread one, which got a bit tangled at the beginning but makes sense as one thread:

Thread two, which took place the next day and went in a different direction.

Link here to Ben’s post, GiveWell and the problem of partial funding.

Link to GiveWell blog post on giving now versus later.

Dustin’s “NO WE ARE FAILING” point seemed important, so I highlighted it.

There was also a reply from Eliezer.

And this on pandemics in particular.

Sarah asked about the general failure to convince Dustin’s friends.

These two notes branch off Ben’s comment that ‘covers-all-of-EA’ didn’t make sense.

Ben also disagreed with the math that there was lots of opportunity, linking to his post A Drowning Child is Hard to Find.

This thread responds to Dustin’s claim, further up the main thread, that you need to know details about the upgrade to the laptop. I found it worthwhile but did not include it directly for reasons of length.

This came in response to Dustin’s challenge on whether info was 10x better.

After the main part of thread two, there was a different discussion about pressures perhaps being placed on students to be performative, which I found interesting but am not including, again for reasons of length.

This response to the original Tweet is worth noting as well.

Again, thanks to everyone involved and sorry if I missed your contribution.

12 Responses to Transcript of a Twitter Discussion on EA from June 2022

  1. Anonymous-backtick says:

    Speaking of Twitter, could it be any more obvious that Elon Musk was secretly threatened and intimidated out of finishing the deal?

    • TheZvi says:

      Yes. If tech stocks were up 10% instead of down 25% and he was doing this, it would be very clear. Given tech stocks are down, it is entirely possible he no longer wants to pay $54.20 per share – if he were doing the deal now, I’m confident he would be paying $42 or $42.69.

      And yes, for some value of ‘we’ we should totally swoop in and buy Twitter if he manages to kill the current deal. One last chance!

  2. myst_05 says:

    One question it would be interesting to answer: how much worse off would humanity be if we had just outright stopped all charity work altogether 50 years ago? How much good did it really do, compared to a world where people don’t donate any money at all?

  3. Ikigai says:

    I have been heavily involved in the EA movement for many years, almost exclusively on a volunteering basis. It put me in a financially vulnerable situation and slowed down my career, and some negative experiences took a toll on my well-being, resulting in a sense of prolonged exploitation. I couldn’t share any of these points publicly – it wouldn’t improve my situation, and could just negatively affect my public image in a highly competitive and, unfortunately, not-so-altruistic environment.

    While I still consider myself EA-adjacent due to my alignment with foundational principles and appreciation for numerous people and initiatives, I’ve come to the conclusion that the movement I was extremely passionate about has steered in a disappointing direction, and might be unlikely to meaningfully improve. Five reasons:

    1. **Misquantification of utility**: various organizations use various metrics, and in the vast majority of cases they don’t account for the critically important, exponential aspect of affective experiences. When poor metrics get combined with varying assumptions about population ethics (and the vague but often authoritative best judgment of people working in some prominent EA orgs), the resulting lack of rigor can be used to justify many ineffective or even harmful interventions with just-so stories.

    2. **Sociopolitical bias and the dark triad**: at this point, the movement – instead of approximating a unifying, cross-cultural, timeless intention to do the most good – is largely controlled by one Narrative of contemporary US politics, which – while probably better than some alternatives – is epistemologically inconsistent, resulting in numerous inefficiencies, spiraling tensions, and unnecessary suffering. The baseline levels of Narrative, and of dark tactics grounded in its (mis)use, now seem higher than in many corporations and NGOs. In addition, while I generally endorse utilitarian ethics, I also acknowledge that a large portion of people with utilitarian proclivities tend to score high on the Dark Triad traits. If you combine this with the political hijacking and uncanny NGO leadership dynamics, you notice that being a bad person is often an effective strategy to climb the ladders.

    3. **Game-theoretic deficiencies**: there are few (if any) visible efforts to align EA with local high-impact niches and the principle of reciprocity, fundamental to just and sustainable civilizations. Many causes warrant unconditional help (x-risk reduction, relieving the pain of innocent people in need), but other interventions should be properly embedded in broader contexts involving reciprocal dynamics. Please see this 2015 comment: https://slatestarcodex.com/2015/03/06/effective-altruists-not-as-mentally-ill-as-you-think/#comment-188178

    4. **Detrimental self-recommending loops and systemic problems**: the concern about the self-recommending aspect has been widely known since 2017 (http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/). There are some advantages to having tight connections between the organizations, but also many downsides that – at least in my estimate – got worse over the last few years. People keep hearing about the absence of money constraints and big orgs willing to fund any project meeting basic criteria – but then promising applicants get regularly rejected, and many activities are still run by self-sacrificial volunteers. Then the discussion switches to talent gaps being the core issue – but there are many outstandingly talented people who regularly opt out of the movement after spending months and years on applications. Nobody cares much about the buildup of negative sentiment among them. Finally, we arrive at the recognition of some sort of managerial problems. Overall, it seems that the movement is quite unresponsive to reasonable feedback, and complex incentive structures keep blocking progress.

    5. **Mental health**: EAs are unhappy – ~40-70% of them struggle with moderate to severe depression and/or anxiety. The movement selects for perfectionism and FOMOMU (the fear of missing out on maximizing utility), and regularly reminds you about the bottomless pits of misery in the universe. While this could be considered an inherent occupational risk, it’s still addressed at a very superficial level – discussions concern largely ineffective solutions (CBT, SSRIs, vague musings about work-life balance and imposter syndrome, without any clear takeaways that could scale under the current circumstances), and the mixture of rising sociopolitical bias and dark triad dynamics continues to take its toll.

    Dear Reader, if you happen to be a prominent EA with more agency than me, please give these points some consideration – and make the world a better place.

    • TheZvi says:

      Thank you. Do you have a source for the 40-70% number in #5 or is that an estimate?

      • I says:

        I vaguely recall two EA mental health surveys, both indicating a ~50% prevalence, which would be in line with the reports I got from some honest conversations with EAs, and in line with studies of similar groups, such as Ivy League PhD students (25-40% prevalence). It’s quite likely that the pandemic, war, and polarization added an extra 10-20%.

        • TheZvi says:

          Huh. So that’s obviously stupidly scary high and indicates something very, very wrong with Ivy League students – they ‘won’ their childhoods and now half of them are depressed; what the hell is up with that? The recent world events weren’t great but really aren’t an excuse, and it was very bad earlier. I need to understand this better. (I have guesses but don’t want to anchor.)

          The other question then is, are the EAs much more depressed than they would be as counterfactual non-EAs, going around being neither E nor A? Or, hopefully, going around being E at whatever they are doing. Or is it better to say that EAs don’t have a *solution* to a mental health problem that isn’t coming from EA itself?

          (To be clear, actually asking)

        • Ninety-Three says:

          EA has enough demographic skews that are obviously selection (I’m pretty sure EA doesn’t cause autism, for instance) that I think we should have a strong prior of “EA is heavily selected, trends in any trait are probably selection effects”.

        • Basil Marte says:

          Wrt. Ivy League students, see the section “The Big Lie” in https://srconstantin.wordpress.com/2016/12/12/sane-thinking-about-mental-problems/ and some of the comments.

  4. Sam Sarah says:

    Re: depression/anxiety rates, I assume that the key underlying factors include accelerating cultural evolution, environmental pollution, all things (social) media, many sedentary on-screen hours far from nature, poor nutrition, high competition and status uncertainty in the globalized world, polarization, isolation, gender war, misery-driven consumerism and superstimuli, disrupted sleep and circadian rhythms, and the loss of meaning/transcendence.

    I guess that many EAs would be much better off if they didn’t engage in the movement; being burned can take a massive toll on one’s well-being, productivity, and creative spirit. We don’t hear from people who got mistreated or canceled in subtle ways, and nobody quantifies this form of net negative impact. Caution is especially advisable for those with relatively high expectations and exploitable tendencies toward prolonged volunteering. If you have more original ideas challenging the status quo, heterodox sociopolitical views, or identities labeled as problematic, or come from a disadvantaged background (mostly reflected in difficulty entering the authoritative circles), you might be better off striving to do the most good independently.

    When it comes to solving the EA mental health problem, it would have to involve a major reform of the movement + addressing the factors mentioned above + improved general treatment options.

