Previously (SlateStarCodex) on Effective Altruism: Fear and Loathing at Effective Altruism Global 2017
A few weeks ago, I listened to a podcast from Freakonomics Radio that purported to ask the question, Are The Rich Less Generous Than The Poor?
A clever researcher decided he wanted to measure altruism. He’d tried testing it in the laboratory, but lab settings are often importantly different from the outside world, so he wanted to do an experiment in ‘the field’.
His idea was that he would pose as a postal worker in The Netherlands, and drop off envelopes addressed to someone else, with the envelope window showing visible cash money, up to 20 euros, along with a card like “thanks to Grandpa on his birthday.” He then waited to see how many rich and poor people returned the envelopes.
He found, to his surprise, that rich people returned far more of the envelopes than the poor. One thing he noted was that rich people returned the envelope at the same rate regardless of how much money was there, whereas the poor were more likely to return 10 euros than 20 euros, but he still essentially disbelieved the result.
To his credit, he still presented his findings at a conference, even though they were in conflict with his priors and the agenda pretty much everyone is trying to push these days.
Then Steven Levitt of Freakonomics fame got up, and said you aren’t measuring altruism, you’re measuring who has the life skills and available time to return misaddressed envelopes.
Story checked out, at least to some extent. Rich people were more likely to know how to return an envelope or to have the time to do so. The findings were adjusted for this variable, and a new story emerged that rich and poor were exactly equally generous. What a coincidence!
My instincts tell me that if you issue a correction for an unexpected and hard-to-measure hidden variable, ignoring many other potential hidden variables and differences, and your result is now to find no result at all, there are two possibilities. Either your study is underpowered and you’re saying ‘no effect’ when you mean ‘the effect isn’t that large,’ or you “corrected” for the variable in a way that effectively assumed it was the entire difference.
Rich and poor differ in lots of different ways. There are reasons to think poor people would be more generous, and reasons to think rich people would be more generous. Neither result would surprise me. The idea that you’ve found one of those reasons, decided that reason doesn’t really count, and then miraculously all the other reasons exactly cancel out? TINACBNIEAC. Omega doesn’t happen to equal one by happy accident. You did that.
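The ‘Omega doesn’t happen to equal one by happy accident’ point can be made concrete with a toy simulation. All numbers and the mechanism here are hypothetical, not taken from the actual study: suppose returning the envelope depends entirely on a ‘life skills’ variable that (by assumption) differs between rich and poor. Then the raw return rates differ, and “correcting” by conditioning on that same variable produces equality by construction, not by discovery.

```python
import random

random.seed(0)

def simulate(n=100_000):
    """Toy model: return behavior depends only on a 'skills' variable
    that is (by assumption) more common among the rich."""
    raw = {"rich": [0, 0], "poor": [0, 0]}  # [returned, total] overall
    adj = {"rich": [0, 0], "poor": [0, 0]}  # [returned, total] among the skilled
    for _ in range(n):
        group = random.choice(["rich", "poor"])
        # Hypothetical mechanism: 80% of rich vs 40% of poor have the
        # skills/time to return a misaddressed envelope.
        skilled = random.random() < (0.8 if group == "rich" else 0.4)
        # Everyone with the skills returns it 90% of the time; no one else does.
        returned = skilled and random.random() < 0.9
        raw[group][1] += 1
        raw[group][0] += returned
        if skilled:
            adj[group][1] += 1
            adj[group][0] += returned
    return raw, adj

raw, adj = simulate()
rate = lambda d, g: d[g][0] / d[g][1]
# Raw rates differ (~0.72 vs ~0.36); "adjusted" rates are both ~0.90 and
# equal by construction, because the adjustment variable *is* the difference.
print(rate(raw, "rich"), rate(raw, "poor"))
print(rate(adj, "rich"), rate(adj, "poor"))
```

The post-adjustment equality here is guaranteed, not found: the model put the entire group difference into the conditioning variable. A real study that “corrects” this way and lands on a perfect null has done the same thing.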
Those mistakes are illustrative and important. It’s good to see them in a low-stakes place where all participants are truth seeking and admit their errors once the errors are found. Ten points for all!
The most important mistake was that returning a lost envelope isn’t only not effective altruism. It mostly isn’t altruism at all.
Returning a lost envelope is honesty. Returning a lost envelope is honor. You return the envelope because it isn’t yours. It belongs to someone else and it is your honor-bound duty to make sure that it reaches its intended destination, to the extent that you are willing to go out of your way to do so even if the cost to you in lost time exceeds the expected benefits to the recipient of this particular envelope. You are defending the system, upholding the public trust, and reinforcing the habits that make you the person you want to be.
Returning the envelope isn’t entirely not about altruism. You certainly can feel compassion for the person who is about to lose money, to worry about the relatives who will argue over what happened to the envelope and who forgot whose birthday. You could have the argument over whether it would be better to do that or to donate the money to mosquito nets or existential risk prevention. If you do have that argument, even in your mind, to me you are already lost, because it’s beside the post. The envelope isn’t yours. Give it back. That’s all there is to it.
I am not claiming it is never right to take what is not yours. I am not arguing for death before dishonor. I am not arguing against it, either. That’s beyond scope. I am simply saying that in such a context, it should completely dominate the calculus.
Effective altruists are trying to do good. We salute you for that.
We salute you even when you are worried about things I do not care about at all. Even when you pursue plans that would actually be a negative for me, and make my life noticeably worse. Even when you pursue plans that would be both pointless and catastrophic, proposing killing off wildlife or collapsing physics.
We even salute you when we think it’s more than a little worrisome that your statements logically imply you really should be trying to wipe out humanity. Even when I think you’re completely wrong and have made rather silly and fundamental errors. You’re trying to do good, and at least for now all such people claim to have the ‘but killing people or using force to do that would still be bad’ hack going on, although that has a shaky historical track record under pressure.
Charity and non-profit work is one path to doing good, and optimizing its impact is a great idea on the margin. Some of you should do that.
Some of you should do the other. They have the more important job.
What concerns me about the culture of effective altruism is the implication that the altruistic actions are the ones that count.
I worry many in EA are looking at life like a game where giving money to charity is how the world scores victory points.
I worry that others in EA are looking at life like a game where giving money to charity is how the world saves lives, and saving lives is how you score victory points. You can also substitute prevent suffering, or other such worthy (and unworthy) causes.
I am not saying to stop giving money to charity*, stop saving lives or stop preventing suffering. I am saying that this is not The Good, how good is primarily done, how most victory points are scored, or what the game of life is or should be about.
Life is mostly about getting things done. Most good is done on the object level, and most of it is done for other reasons. A lot of it is done for profit or survival. A lot is done for love and friendship. A lot is done for status, for your tribe, for fame, for sex or for power. A lot is done because of curiosity, or because it is interesting, or because it is the virtuous thing to do. A lot is done because of how it would look if you didn’t do it, and people found out. A lot is done because it’s bothering the heck out of someone that it isn’t being done, to prove that it can be done, or simply because it’s there. A lot is done incidentally while doing something else; simply going out and doing things tends to create lots of positive side effects.
There are also some reasons that are less fine, but I think even they are far less not fine than people act like they are.
Stop feeling bad about doing things for those reasons. As long as you’re doing good, those reasons are absolutely fine. All of them. Yay motivation. Object level stuff has to get done. Life must go on, so must the show and so must business. People need to stay motivated to do things.
Stop feeling bad about not sacrificing everything including exploration, and doing the theoretical maximum amount of good on the margin. Jesus told us, take all you have and give it to the poor, as a suggestion to improve behavior on the margin, and anchor us. He correctly assumed almost no one would do that. It would not even be good if everyone did that, even if they did the sane William MacAskill version of that and kept the bare minimum. He knew that some people had to keep on doing the productive work, and getting rewarded for that, at least until some big changes happened. He didn’t actually want everyone to do it, and he also didn’t want everyone to feel bad about not doing it.
The most impactful and successful charity in the world is Amazon.com.
Most importantly, for the love of utility, stop making other people feel bad about this, and stop using manipulative techniques to steer them towards what you think is more effective at scoring the world some victory points. I understand the temptation. I really do. To not do this, from a certain utilitarian point of view, is to implicitly say that you value your own scrupulousness and honor, or the feelings and abstract accuracy of those you are interacting with, more than you value saving human lives. Tough sell. Again, I really do get it.
We still have to do it. We still have to protect The Mission. EA is good to the extent that it shares with us The Mission. The other way lies madness.
You say people gonna die. I agree. Sad. Balance is tricky. Death rate stable at 100%. We should fund further research. We should fix it. This is not The Way.
It is also not the Official Party Line of EA. I’ll get to that at the end.
It is tricky to balance giving the right encouragements and rewards to those who do hugely ambitious projects, altruistic and otherwise, while also giving ordinary efforts the credit and honor they deserve, as well. Certainly, though, we need to stop feeling bad about doing better than almost everyone. More than that, we should make people feel good when they are doing better than almost everyone. Hell, we should probably do that even if they’re doing way worse than that!
I apologize to anyone who got the impression from last week’s post that they are bad and they should feel bad, simply because they weren’t out there saving the world, or they were not ambitious enough. Please do not do that. All that I ask is that you honor those who do set out to do so, and continue to seek what is true. Based on who has commented on this blog so far, I can say with high confidence and no known exceptions: You are not bad, nor should you feel bad.
I suspect that in some sense, rather than feeling bad one should instead feel what some traders call sad. You can be sad that you’re not doing more, because doing more would be great, without it being an ethical issue that more wasn’t done. No blame is assigned. You’re simply expressing the fact that more being done would have been better, and it would have been (or would be) nice to do more. If you bet on your favorite team, either you lose (and are sad you bet at all) or win (and are sad you didn’t bet more). It can’t be helped. Sad!
Cue this week’s Quote That Should Freak You Out:
(I had been avoiding the 80,000 Hours people out of embarrassment after their career analyses discovered that being a doctor was low-impact, but by bad luck I ended up sharing a ride home with one of them. I sheepishly introduced myself as a doctor, and he said “Oh, so am I!” I felt relieved until he added that he had stopped practicing medicine after he learned how low-impact it was, and gone to work for 80,000 Hours instead.)
— Scott Alexander
This is insane on a minimum of three levels.
The first level is that Scott Alexander is feeling bad about being a doctor. On the list of ‘people who have done the most for EA’ he is probably not number one right now, but if he ended up at the top by doing what he’s already doing I would not be terribly surprised.
Scott Alexander is doing a hell of a lot of good, both for his patients and for other people, and the work he does as a doctor is vital to that.
Scott’s work as a doctor, seeing patients, experiencing how the other almost everyone lives, is vital to the jobs he does advocating and writing. Scott provides lots of value with his in-depth discussions of research, of drugs, of treatments, of how to deal with various problems and safely interact with various questionable ideas for self-modification. Having someone in our space that fills that role is incredibly valuable for its own sake, and having it be Scott allows those people to find his blog, find each other and be exposed to our ideas including EA.
The last thing Scott should be thinking about is not practicing because it’s eating too much time that could be used elsewhere.
The second level is that someone else quit being a doctor to do career counseling, because being a doctor wasn’t impactful enough.
It is one thing for a person to decide not to study medicine because what they want to do is save lives, and it isn’t the best way to save lives. I totally, totally get that. Learning to be a doctor is often a decade of misery, during which you are earning little, and by going to medical school you are taking a slot that could have been used by someone else. The link makes some good points, although it seems to be slanting its presentation to justify its conclusion.
It is entirely another thing to quit practicing medicine after getting your license to go work for 80,000 Hours instead, because you don’t think you are saving enough lives.
The calculation is completely different once you are already a practicing doctor. [EDIT: Note that 80,000 Hours’ full analysis does realize that these cases are very different, and recommends that already trained doctors not quit, as is pointed out in the comments.]
That decade of work you were looking at before you could help much or earn much money? Already completed.
That slot in medical school? It’s gone. That slot in residency? Also gone. No one will replace you. There will be one less doctor. Yes, that’s stupid, we should train lots more doctors, and that would be a Worthy Cause to lobby for, but for now we don’t and that’s not changing soon.
The system spent a quite large amount of money on training you, and all of that is gone now. Demand will exceed supply by that much more, prices will rise to clear the market. More doctors will opt out of insurance, especially Medicaid and Medicare. More people will be unable to afford care, or find themselves bankrupted. The government will be under that much more burden to pay the higher fees.
It seems hard to me to look at a profession where the skeptical view is that you create four years of healthy life for every year you work, and you get one of the highest pay scales, and where no one can legally replace you if you quit, and where high costs from lack of supply are threatening to strangle the entire economy, and which gives you a high level of trust and a potential strong public platform, as something you have an ethical motivation to quit so you can spend more time telling other people what their ethical motivations are.
I am not saying that this former doctor is bad and should feel bad. I am certainly not saying that going to medical school is an implicit social contract that then obligates you to use that degree to help others. Even if I believed them, I would not have any social right to make such claims. Certainly there are good reasons to decide not to practice medicine, including simply no longer being interested in the practice of medicine. If the true justification was that staying wasn’t sufficiently ethical due to opportunity cost, that makes me quite sad.
The third is the level discussed earlier – even if being a doctor is not locally victory-point-maximizing-on-the-margin it’s pretty terrible to go around putting so much implicit pressure on well-meaning people that doctors start avoiding you out of the shame of continuing to practice. Seriously, everyone. Cut it out.
Scott then ends where I will end, on the Official Party Line:
And one more story.
I got in a chat with one of the volunteers running the conference, and told him pretty much what I’ve said here: the effective altruists seemed like great people, and I felt kind of guilty for not doing more.
He responded with the official party line, the one I’ve so egregiously failed to push in this blog post. That effective altruism is a movement of ordinary people. That its yoke is mild and it accepts everyone. That not everyone has to be a vegan or a career researcher. That a commitment could be something more like just giving a couple of dollars to an effective-seeming charity, or taking the Giving What We Can pledge, or signing up for the online newsletter, or just going to a local effective altruism meetup group and contributing to discussions.
And I said yeah, but still, everyone here seems so committed to being a good person – and then here’s me, constantly looking over my shoulder to stay one step ahead of the 80,000 Hours coaching team, so I can stay in my low-impact career that I happen to like.
And he said – no, absolutely, stay in your career right now. In fact, his philosophy was that you should do exactly what you feel like all the time, and not worry about altruism at all, because eventually you’ll work through your own problems, and figure yourself out, and then you’ll just naturally become an effective altruist.
And I tried to convince him that no, people weren’t actually like that, practically nobody was like that, maybe he was like that but if so he might be the only person like that in the entire world. That there were billions of humans who just started selfish, and stayed selfish, and never declared total war against suffering itself at all.
And he didn’t believe me, and we argued about it for ten minutes, and then we had to stop because we were missing the “Developing Intuition For The Importance Of Causes” workshop.
Rationality means believing what is true, not what makes you feel good. But the world has been really shitty this week, so I am going to give myself a one-time exemption. I am going to believe that convention volunteer’s theory of humanity. Credo quia absurdum; certum est, quia impossibile. Everyone everywhere is just working through their problems. Once we figure ourselves out, we’ll all become bodhisattvas and/or senior research analysts.
The Official Party Line is saying that the heroic values are optional. This is a movement of ordinary people, its yoke is mild and it accepts everyone.
Sorry. The Official Party Line is bullshit. It is philosophical bullshit. The people saying the Official Party Line are saying it because it helps spread the movement, rather than because it is true. This is what they have to say in the pitch meeting. This is what you have to tell the people who aren’t ready to give everything they have. You don’t say, “We are Out To Get You for every waking moment of your entire life, because doing less is letting children die” if you want to succeed. So you make sure not to say that.
You don’t have to be explicit. The logic of the movement implies it. Its culture implies it. Every interaction carries this undertone. Every action has the whispered motivation, ‘how can we make people give more of themselves’? Occasionally someone explicitly says that a night out drinking with your friends is tantamount to negligent homicide. The Life You Can Save isn’t exactly subtle.
You don’t have to be a vegan? Technically you don’t. In practice, if you touch on the community and aren’t vegan, you will get into fights and conversations about it. People will think less of you, and give you the impression you are bad and you should feel bad. Lots of people I know spend lots of time worrying exactly how vegan they need to be. Either your EA gathering is vegan, or it has a bunch of fighting about it not being vegan.
People pick up on all that. They follow these presumptions to their logical conclusions. One of those conclusions is to push others as hard as you can. Choices Are Bad, and many realize that under such conditions they are doomed to feel bad, so they look for solutions. Stopgaps are attempted, to contain the damage, like the Giving What We Can pledge. They help, sometimes. It’s a hard problem, and I’m sympathetic; I don’t think anyone is being malicious about this.
Life is mostly about life. That’s how and why it works and why we have nice things. Having nice things and selfish people trying to get them needs to take up a huge portion of GDP to make the system work and incidentally give us the power to help out with other stuff on the margin. Most people will start selfish and mostly stay selfish. It’s fine. Better than fine. If that hadn’t resulted in indoor plumbing, industrialization and electronics, I wouldn’t be typing this, or even exist.
My model of EA is that it was originally founded by a few people who had money they wanted to give away, and wanted to make sure the money did as much good as possible, so they set out to analyze that question. That was insanely great. One of the things that kept it great was the framing that there was a certain budget to do the most good with, so it was time to focus on figuring out what was true. Once you put ‘how big is your budget’ into the mix, that starts to be the knob most attractive to try and turn, and the dangers of culture drift towards what is persuasive are upon you.
[Author’s note/edit: Several quite smart people have said this conclusion is hard to parse and they aren’t sure what I’m getting at. I agree that it is not as clear as I would like, so if you’re confused, assume that the point you previously thought I was making is in fact the point I was making, because chances are that you are mostly correct. Sorry. Tsuyoku Naritai.]
Last week I was talking about how a group dedicated to pursuit of truth and saving the world ended up with a different culture and organizing principle, and was mostly accomplishing living life, and noting that this seemed badly in need of fixing. To now say ‘life is mostly about life’ and worry about people feeling overly pressured to help others seems to directly contradict that.
To me, it’s not a contradiction at all. It’s the same concern. EA started out being about figuring out what charities had what effects, which meant its culture was all about pursuit of truth with a goal of world saving.
In both cases, there was a Worthy Cause, and to accomplish that Worthy Cause required creating a culture that put what is true above what sells. If anyone ever wants my help on a project like this, so long as I’m not actively against the cause in question, I’ll at least strategize with you.
Then people involved realized that selling to and recruiting others would actually be kind of neat, allowing more people to focus on what was true and/or the Worthy Cause, making the world a better place. It required some compromises, but that’s life. That brought in more people, who were more concerned with selling and Worthy Cause and less with truth, and suggested how great it would be if we compromised a little more, and perhaps added this additional Worthy Cause.
In EA’s case, the good news is that at least this worked, and they’re now busy moving nine figure amounts to Worthy Causes, and at least some of that is going to the ones I consider most worthy, even if they’re also distorting them in ways that make me nervous. In fact, it seems likely that it was the very over-the-top actions themselves that largely led to those nine figure sums being moved.
In the other case, the good news is that it at least sort of worked in the sense that the people involved are happier, and people are now giving serious thought and money to Worthy Cause even if they’re mostly doing it in the wrong ways for the wrong reasons.
Truth is on the defensive these days. So are reasoning and discourse. Speaking the truth causes one’s voice to tremble more than it should, or than it used to. Many even speak of the post-truth era. My model of why this is happening would be beyond scope. I do feel the need to point out that the compromises seem to be happening in the places and people near and dear to me. That which makes truth-focused groups unique and valuable risks being lost as they are transformed into something else, and we risk passing future points of no return soon.
None of this is easy. Make no compromises and you have no movement. Make too many, or make the wrong ones, and you have no movement worthy of the name.
When you are looking to expand your group, it is hard to say, we’re going to only do exactly the welcoming things that the people who properly value our mission truly like. It is hard to intentionally not do the things that would attract other smart, nerdy, nice, sexy people that would make our lives happier in the short and medium term, but who do not share the mission.
It is really freaking hard to see nine figure sums being moved in ways that save lives, say the reason we get to move around these nine figure sums is by seeking truth and doing good on our own, so we need to keep doing only that, and trust in that enough. It is super hard to follow the decision theoretic implications that lead to that conclusion. Keeping your end of that deal means refusing to compromise your integrity even in the ways everyone compromises all the time. Even when you believe this means a lot of people will die.
The good news is we don’t need to compromise on truth or say “Credo quia absurdum; certum est, quia impossibile” to avoid feeling bad about not being perfectly good. We can care about things that don’t show up as the maximum direct good done in a utilitarian calculus, do mostly object level things, and still sleep at night.
Believing that everyone will eventually become a bodhisattva and/or senior research analyst is absurd. Believing this will happen on its own is doubly absurd. But we don’t want everyone to be that. It wouldn’t even be a good idea. It is safe not to feel bad for spending your time and money doing object level things and enjoying ordinary life. You do not need to make everyone an effective altruist. Mostly you should confirm you do a lot of good, then make yourself effective.
*: The exception is if you’re not in a position to do this. You need to pay off your debts and get yourself an emergency fund. Seriously. If you are in debt, focus on paying it off, and stop giving large amounts of money away. Please.
I’m actually quite alright with saying that someone who encourages someone to stop being a doctor should be ashamed. Also, that most doctors should be ashamed. 80,000 Hours should be asking about what a doctor can choose to do, not about what a typical member of the reference class in fact does.
I didn’t take their estimate seriously enough to even notice that they were equating ‘what you can do as a doctor’ with ‘what the average doctor does.’ This is in fact the *exact opposite* of the question you need to ask when thinking about going into medicine, which is how much more good you can do than the doctor whose slot you are taking (the marginal medical school applicant).
I do not, however, think that it is a useful frame to say that so many people should be ashamed of themselves.
Do you really believe that guy actually changed his career because he was happily being a doctor and then was guilted into it? I’m skeptical. From my own experience and other people I’ve known, people who are happy in their career and enjoying it tend not to actually make that kind of change (they may feel guilty about not doing so, which is bad, but that’s a different issue). I personally think it’s more likely the guy discovered he didn’t really like being a doctor, but having that kind of ten-year investment hanging over one’s head, and the expectations of family and friends, makes it very difficult to say ‘crap, I made a mistake, I’m not happy being a doctor,’ and the 80,000 Hours stuff serves as a face-saving way out.
I mean, how long did it take you to think up this post? Surely an order of magnitude less time than this guy spent considering the question before deciding to give up his career, and I doubt that you are so much smarter than he is that he couldn’t have come up with all the same reasons if he had wanted to remain a doctor. I think it’s much more likely he is using this as a means to cover the fact that he made an embarrassingly bad choice by spending all that time learning to be a doctor only to find out he would rather do something else.
I mean one can never be sure but if I had to guess that is what I would guess.
I have spent a considerable amount of time on my model of such things. I do not think it is impossible that I spent more time modeling them than he did, although I agree he probably spent more time than that making his choice. I certainly hope so.
I agree that he also had other reasons, and probably didn’t like being a doctor as much as he had hoped. Part of that is likely that he came in with the image of himself as someone who was helping and saving lives, and 80,000 Hours comes in and takes that away from him, making the job a lot less fun. Now instead of feeling good about his accomplishments he feels bad; we both agree he shouldn’t, but he does.
I also think that even if the real reason was that he hated the insurance paperwork and this was a way to save face, the fact that he is presenting it that way to another doctor, and making that other doctor feel bad, is still a problem, and not only because it’s not truth-seeking; it is actively harming Scott, and others like him.
Yes, I mean I tend to agree, but on the other hand it’s really hard to ever get this kind of information out there without causing this kind of guilt, and in the long run I do think we benefit from having it out there.
And I certainly agree it would be better not to make people feel guilty, but almost any conversation about motives and what does good has that feature. I mean, I could make the same allegation about this very piece. Surely there are some people doing 80,000 Hours type stuff who will read this and be inclined to feel guilty about making other people feel guilty. If you promulgate a rule not to make people feel guilty, that very rule will itself sometimes make people feel guilty.
I guess what I’m saying here is that it might be that the best one can do is personally avoid making statements that tend to be guilt inducing and simply be cheery and positive and not touch the subject of guilt at all. This isn’t to disagree with your claims per se, but to the extent that they are true, it seems similar worries could be raised.
Also I’d add that every single person who answers “Why did you decide to become a doctor” with “Because I wanted to help people” probably has this guilt inducing effect on those that choose to do something else.
I don’t know if it’s fair to criticize someone for something which might simply be an almost unavoidable aspect of conversation given human psychology.
Also, I think you are conflating the question of feeling guilty/blameworthy with the issue of doing less than one possibly can. I, for instance, just don’t believe that guilt and blame are valid moral notions…I believe in a partial order of possible states of affairs and that is it. So no, I don’t think it follows from the various arguments that are floating around in EA that one needs to feel guilty or bad for not doing more. It may be that as a matter of human psychology that is how it will be processed.
But I kinda feel I lost the exact claim you were arguing for at the end so maybe I’m misreading you.
Yeah, you’re not the first one to tell me that last part. I’ll keep working on the end although it’s probably too late at this point, hopefully I can make it more clear. I do think we agree on it.
Don’t worry about it. Nothing drives engagement like the chance to point out someone is wrong :-). I presume you’ve seen the xkcd about someone being wrong on the internet.
[First comment. So feel free to bash my misunderstandings.]
1. There is a humorous tone in the SSC post. Taking it at face value seems to be a main driver of your post.
2. About the envelope
>That’s beyond scope. I am simply saying that in such a context, it should completely dominate the calculus.
This isn’t really clear. Why should it? You mention `honesty` and `honor`, which sound like “people should uphold virtue”. Then `because it isn’t yours`, which sounds like `taking what isn’t yours` is inherently bad. And finally `defending the system` and `upholding public trust`, which is more (rule-)consequentialist.
So, you use arguments from virtue, deontology, and then rule-consequentialism. But then, right after:
– `I am not claiming it is never right to take what is not yours.` -> Disowning the deontological argument
– `I am not arguing for death before dishonor.` -> The virtue argument
– `It should completely dominate the calculus.` -> Right after you said thinking about the calculus is `beyond scope`
What are you arguing for? What are you claiming? For what obvious reason should it completely dominate the calculus?
3. `Stop feeling bad about doing things for those reasons. As long as you’re doing good, those reasons are absolutely fine.`
You are stating that the reasons are valid *as long as people do good*. But people feel bad because these reasons make them do not-good things. You are stating that as long as good things are done *in absolute terms*, then it’s fine. But people genuinely factor in counterfactuals and feel responsibility.
This is the same fundamental moral intuition: if a kid is drowning in the lake, you help him. You don’t say “Well, in absolute terms, I do good things, and this walk isn’t negative, ~”. That the kid is in Africa, dying from [X] rather than drowning, and that a lot of people do nothing, doesn’t change that.
To make people feel less bad, simply telling them to feel less bad isn’t enough. You have to reframe the metaphor in a way that makes it wrong, without the frame being obviously inconsistent, while still modelling most moral intuitions.
People have tried, and failed. Not to say that you won’t achieve it, but to say that the issue requires more than a `Stop feeling bad`.
4. `Most importantly, for the love of utility, stop making other people feel bad about this, and stop using manipulative techniques to steer them towards what you think is more effective at scoring the world some victory points.`
I think at that point, giving an account of your experiences, rather than immediately trying to defend general claims about the community, might be better.
I’m mostly in contact with EA communities online (FB group and Discord server), and I simply don’t see that. People welcome me, and welcome various initiatives, and various levels of involvement (including simple curiosity/lurking).
Epistemology becomes relevant here:
– General claims need stronger arguments.
– Damaging claims ([X is bad]) need stronger arguments.
– Both external sources **and** personal anecdotes are lacking. After having read >4500 words, I still don’t understand why you say what you say beyond “It clashes with my ethical and world views.”
5. `Believing that everyone will eventually become Bodhisattva and/or senior research analyst is absurd.`
This is not representative of most EAs’ argumentation. The usual argumentation is two-fold:
– Some people would actually spend (more) effort in altruism (NGO work, donating, politics) if they had more time and money.
– Some people don’t know how to do good well, and would do so if they knew: people who already give, giving to better charities; people planning to study or research to help people, going into a more effective field.
The problem is wider. There are some vegans who think that animals suffering / getting killed is morally bad and not a matter of personal preference. As such, there is no possible mutual agreement.
More specifically, past a given threshold of moral wrongness associated with animals suffering / getting killed, it becomes a simple culture war between meat-eaters and moralist vegans.
These individuals are more similar to universalist fundamentalists than to isolated communities seeking to live without being persecuted.
Welcome, and +1 to numbering your points! Makes things much easier to keep track of. I think a lot of your points are valid.
1. Scott is an amazing writer, and part of that is infusing most things he does with humor. I think that the actual points still stand and can/should mostly also be taken seriously. His brand of humor speaks truth. If I stepped over the line somewhere and took something too seriously, I’m sad about that, but proper calibration means I should be making both errors.
2. Returning the envelope, provided it is logistically reasonable for you to do so, is over-determined. I presume you’d agree with that. I’m saying that in order of most common true reasons for doing so, and in terms of the best reasons for doing so, altruism is something like a distant fourth. I’m also saying that ‘the envelope is not yours, therefore give it back’ is, to me, the best way to think about the situation and the right justification in context, because it is a sufficient reason and saves you the trouble of working out the rest of the problem. It’s a shortcut, in addition to being its own case. Doing it because you are virtuous, or because it is your civic duty, are paths to the same place, and if we were going into tons of detail in a dialogue I would actually start with the virtue argument, but life is short. Life is short is what I mean by dominate the calculus, here.
The reason I say I’m not endorsing never taking things that aren’t yours is to make it clear I am not overreaching and making the ‘you never, ever break this rule’ argument, which I actively disagree with – but I’m probably 99th percentile in terms of how rarely I think one should break it.
3-4. Taking these together, I think this embodies the very problem. There is an intuition that appeals to some people, that is being presented and reinforced, as the core and de facto founding principle of the movement, that says that the kid dying should make you feel bad if you don’t do everything in your power to stop it. As you say, it’s difficult to convince some people that this is wrong, once they’ve been exposed. This is a very dangerous idea to take seriously, and EA encourages people to take it seriously.
It screws a lot of people up. Naming names would be bad, but when I say ‘pay off your debts first’ it’s because I see people who really, really shouldn’t be donating, donating enough to hurt.
As I say, I believe, both based on what I’ve read and what I’ve seen and heard from EAs at meetups and otherwise, that they do indeed react well and are welcoming to people who commit far less, but that is what you do when you want to get new people and grow a movement.
I feel like one cannot have it both ways. If you think that money can save lives and you have the same obligation to save a life in Africa as you do someone who is right in front of you and who no one else can help, then you have two choices and both of them are really bad and make you feel bad. I do think there are a bunch of knock-down metaphors/arguments against that principle, because there are so many levels on which it plain old does not work. Whether you or others would agree with me, even if I got the argument into its tightest possible shape? I don’t know. It’s a big job and I’m pretty outgunned.
Basically, here I’m arguing that these arguments are bad strategy. They lead to bad results for individuals, and also for groups. They also imply that approximately everyone on Earth is a horrible person. Reject them even without a deeper rejection.
For now, that will have to do. Hopefully in the future I can do better.
5. I was echoing Scott’s post there and was NOT trying to imply that this was the typical EA belief. If I gave the impression I thought that, I clarify here that I do not and did not think that. The point there was that this being false is not a bad thing, or a problem to be fixed. I fully anticipate that the guy who did say this was inevitable was REALLY WEIRD even for weird EA, and that Scott was mostly fully kidding (sic) at the end.
I explicitly endorse the “given person wants to devote X resources to doing good, and has utility function for good done Y, help them do Y effectively” mission for all not-outright-evil values of Y – I mostly endorse that view even if you replace doing good with simply doing most anything, really. I’m a big fan of the E in EA. I also explicitly endorse helping people get their **** together.
6. There are certain fights that I would rather avoid picking, so I’m being very careful what claims I am making about this particular issue.
1. I think the error was mostly one-way here. Also, his report of that EA’s change of career was mostly a short humorous description rather than a precise recollection.
6. True enough, you have higher stakes than I do. If you do want to talk about it in private, with pleasure.
5. You don’t modalize any of your claims, and your post is literally entitled `Altruism is incomplete`. It doesn’t leave much room for nuance :D
2.3.4. I think learning about common meta-ethics concepts would make discussion easier.
Also, the problem of factoring egoism into morality is a hard one. Typically, you get stuff like “Only negative rights.” (you don’t have to save the drowning child) or “No morality at all.” (you don’t care about saving the drowning child).
It’s not a problem of EA though, but of morality with positive obligations in general.
Having talked about it on Discord, people think that you can have some spare effort (time+money) for altruism which you try to make the best of, and enjoy your egoism the rest of the time. If you want to put more effort in / live your life around it, feel free; other EAs will give you social value.
But none thought that not dedicating more effort was a bad thing. (At least, not a morally bad one.)
Again, I’m not in any offline communities. So this might only reflect the consensus of a restricted number of EAs.
Happy to talk in private at some point. I do not know who you are, but I doubt that would be too big a barrier. I wonder if I would be surprised.
And of course, I hope that I am being pessimistic, and things are better than they appear, on many fronts.
I am happy to hear about the discord server. It sounds like they are doing a reasonable thing, at least explicitly. Good times.
I do agree that if I want to take this on I need more formal grounding in technical/academic meta-ethics; part of my reluctance to do so is that people with far more free time than me have spent a LOT of it on building up reasons why any given idea or argument is technically wrong or fatally flawed. That’s a good way to get dismissed and ignored. But certainly, if you want to invoke standard concepts, don’t worry about me not understanding them; worst case I’ll Google or look in the Stanford Encyclopedia (if you think that would lead me astray, you can provide links).
>Happy to talk in private at some point. I do not know who you are, but I doubt that would be too big a barrier. I wonder if I would be surprised.
Glad to read that. I will start by engaging more regularly with the content on this blog.
That you write your thoughts in a centralized place makes discussions easier.
>I do agree that if I want to take this on I need more formal grounding in technical/academic meta-ethics […]
I don’t have any formal education in academic meta-ethics. I felt similarly about `why any given idea or argument is technically wrong or fatally flawed`, but talking with people with education in philosophy, I realized there is actually quite some space for new thought. (And the classical counter-arguments are usually sound.)
The lack of an argument map of meta-ethics, from the fundamentals to the modern theses, makes things harder.
Basic concepts simply make communication simpler and classical counter-arguments are actually more sanity checks than final answers.
“The most impactful and successful charity in the world is Amazon.com.”
Care to elaborate?
That’s item 6A in the list of claims he’d like to make. So maybe eventually.
I hope to do so at some point. But basically, Amazon runs at epsilon operating profits, which maximizes surplus, providing many (including me!) with thousands of dollars a year in value versus a world without it. Our lives are just way better and that’s happening in more and more places every day (see their purchase of Whole Foods, and their first act is to lower prices so people can afford to shop there).
Measuring value added to the world in dollars is not a good idea because the marginal utility of wealth is not constant.
Quick example to motivate the idea that this matters: if we measured value in dollars, we would not care about risk in finance (it averages out in the expected $ calculation), so there would be no point to diversification, etc.
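A minimal sketch of that point in Python (the payoffs, the log utility function, and the baseline wealth of 50 are all illustrative assumptions, not anything from the comment): two gambles with identical expected dollar value come apart as soon as marginal utility diminishes, which is the whole case for caring about risk and diversification.

```python
import math

# Two gambles with identical expected dollar value:
# "safe" pays 100 for sure; "risky" pays 200 or 0 with equal odds.
safe = [(1.0, 100.0)]
risky = [(0.5, 200.0), (0.5, 0.0)]

def expected(outcomes, utility=lambda x: x):
    """Probability-weighted average of utility(payoff)."""
    return sum(p * utility(x) for p, x in outcomes)

# Measured in raw dollars, the two are indistinguishable...
assert expected(safe) == expected(risky) == 100.0

# ...but with diminishing marginal utility (log of wealth,
# starting from an assumed baseline of 50), the risky gamble
# is strictly worse, so diversification matters again.
log_u = lambda x: math.log(50.0 + x)
print(expected(safe, log_u))   # ~5.01
print(expected(risky, log_u))  # ~4.72
```

The same mechanism is why "value added in dollars" flatters whoever serves the already-wealthy: a dollar of surplus is worth less, in utility, the richer its recipient.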
For a longer, rambling development of this point in relation to market efficiency, see my post here.
Thanks for this, Zvi. The kind of things you’re speaking out against were already getting to me as depressive thoughts in my head back as early as 2011 or so. I think that was before EA was fully a thing. Since I’m vulnerable to shame, it drove social isolation for me, because how can I face people if I’m this useless in being able to help save the world? It wasn’t the only thing, but it was a big thing. It took many years and devotion to other things and the people I still kept in my life to let that go.
Seeing the clusterfuck of various internet social movements thrash around between then and now helped solidify a healthier perspective. The implication, “That dollar you spent on something you personally enjoy could have gone towards saving a life, just sayin’,” lives in the same functional neighborhood as, “If you’re not outraged most of the time, you are a bad person.”
We need a Douchebag Peter Singer meme.
I remember the last time I saw Adam in person, it sounded like he was venting about people not doing enough. As we sat down on a bench in a small park off of Nostrand Ave, I reflected on myself and I went back to feeling bad again. And I asked, am I doing enough?
And Adam looked at me and said, “No, I think you’re actually doing your best to live by your values.” He smiled in embarrassment and clarified. He was mostly griping about people saying that they wanted to save the world, and then spending most of their work on signaling and feeling good via affiliation.
He even met one person who completely admitted that they really didn’t care about helping people that much in the end. They did it for self image, and to have that self image reinforced from feedback by others.
Meanwhile, Adam actually fretted about not doing enough, and, worse, not being physically able to do enough. He wrote about it frequently in locked Livejournal posts. And once or twice a year, I’d get an email from him out of the blue where he was scared he was failing, that everything he was trying to do would collapse, that his health would go back to deteriorating and he wouldn’t be able to do enough anymore.
And I’d think, dude, everyone thinks you’re awesome and they’d completely understand if you had to slow it down. You’re probably in the top fucking five in the whole community of both coasts who aren’t also prolific public writers. This is so impostor syndrome it almost feels contrived. But I would respond with something empathetic and maybe quote Seneca. And I think part of it wasn’t impostor syndrome, because it wasn’t a fear of social repercussions.
Not Enough. There exists a healthy place for these words. You nod and say, yes, I have work yet to do. Oettingen’s mental contrasting provides a good example of appropriate use.
But I’m more personally familiar with pathological, ulcerous Not Enough. I would illustrate it as a spiral, because the thoughts begin with mild observations, then progress into twisted and steep pronunciations. This is not good enough. My efforts are not good enough. My skills are not good enough. My cognitive output is not good enough. My health is not good enough.
Then, add in a shame vector. The terminology for shame is confusing, particularly guilt versus shame.
So when I say that I feel shame, people say, “Well, what’s really the worst that people could do to you?”
But what I’m talking about is different from fearing getting ridiculed, ostracized, or losing love. It’s feeling unworthy of love.
So, not: “I am afraid that if they find out that I am Not Enough, everyone will hate me.”
Instead: “I am afraid that when people find out, it will finally affirm that I am unworthy to face them.”
Sometimes it feels like we have a Charybdis and Scylla here.
Fail to construct sufficient standards, and the movement gets sucked into a whirlpool of status affiliation, signaling circle jerks, and just utter wrongness. This is a great feeding ground for sociopaths, who eat and eat until they get pulled under themselves.
On the other side, we get people burned out and wracked with guilt about not maximizing the flow of their dollars and time and Krebs cycles for saving that one extra life. We get high level rationalists avoiding people because they feel ashamed for only being doctors. We get people exiling themselves because they feel they can’t do enough, aren’t successful enough, aren’t good enough, aren’t healthy enough to be useful for saving the world. And, I’d argue, we have people literally dying on this side.
My epistemic whatever for this is pretty shitty (maybe for this whole rant), but I feel like those of us who actually care about saving the world—we’re getting a shit deal right now with these two hazards. At present it doesn’t feel like we have a dichotomy, where, you know, if we sail through the middle we are okay. It’s more like a random walk where so far most of the effort has gone into dumping players into the mix.
Douchebag Peter Singer crosses his arms and says, “Hey bro, no need to get all upset about this!” He shrugs and adds with a disdainful half smirk, “I’m just sayin’.”
Thanks again, Zvi, and I’m sorry this rant was a mess. I think you’re tackling this in a way that’s worth repeating. Growing up is hard and learning from it is hard and we all have to do it. And I think what you are doing will help, especially those who are just starting out.
> In both cases, there was a Worthy Cause, and to accomplish that Worthy Cause required creating a culture that put what is true above what sells. If anyone ever wants my help on a project like this, so long as I’m not actively against the cause in question, I’ll at least strategize with you.
I’d like to take you up on this offer. I think we are in a world where hard take-off is unlikely and I see the world moving in a direction that worsens human autonomy and freedom, I would like to do something about this. My current thoughts are [here](https://improvingautonomy.wordpress.com/blog).
Having read your top post and thought about it, I think the most productive way to help you would be via Skype. I am there under Zvi Mowshowitz, as you would expect. I could likely speak with you this evening, or nights later this week (all eastern time), add me there then let me know your availability.
Skype is tricky (especially voice/video) because I’m on an obscure variant of linux. Also I’m BST with a day job so probably can’t do eastern nights. Is google hangouts an easy thing for you?
Sure, Google works. I’m my handle here at Gmail.com. If eastern nights don’t work, I have the opposite of my usual problem, and we can aim to connect next Sunday in my morning?
Hi, are you good for chatting today? I sent you an invite on hangouts.
Yes. Will accept in about an hour.
I wrote the 80k profile on Medicine (https://80000hours.org/career-reviews/medical-careers/). Although I enjoyed reading this post (despite inevitable disagreements), I confess some frustration at the criticism leveled at the medicine stuff given these criticisms have already been anticipated – albeit in a variety of places on the website one may not necessarily have looked.
1) The actual number is almost surely lower: You are right not to take the estimate too seriously, as it is an extremely rough ecological analysis of pretty noisy data. However, it is not the maximally sceptical estimate. In fact, it is likely to be an overestimate: the modelling ignores any credit sharing for QALYs that should go to nurses, physiotherapists, labs, and any other medical staff group.
2) Attempts to measure elasticity and replaceability have already been made: (https://80000hours.org/2012/09/how-many-lives-does-a-doctor-save-part-3-replacement-84/). These attempts are 5 years old, but it isn’t something the folks at 80k were oblivious of. (I also note that these corrections will reduce the expected impact of someone going into medicine, as whoever you are replacing is better in expectation than no one at all).
3) I don’t equate ‘what you can do as a doctor’ with ‘what an average doctor does’: I have sections in the career profile which attempt to guesstimate whether some fields of medicine have above-par impact, offer recommendations of which areas in medicine might have outsized impact (e.g. biomedical research, public health), and links to a post (thanks to Ryan Carey) which discusses maximizing earnings in medicine for E2G. That said, I think work on the average impact is helpful to give a benchmark.
I understand your frustration. I have been in your shoes often, at other places and times, with people giving criticisms that I not only fully anticipated but feel strongly I have fully refuted/answered (and often not in a follow-up but in the post itself). I totally sympathize there. I did think about whether or not I had a responsibility to go digging through a number of other pages, but felt that the need to get things out there while Scott’s piece was still hot took precedence. One of the cool but frustrating things about writing this blog is that I usually end up with more questions/research/topics than I started with, which I take as a sign I’m doing something right, but which can also be paralyzing if I’m not careful (which is a topic I’m focusing on a lot, too).
With that out of the way, given your response and my now having the time, I’ve gone and read the full analysis, so here are my thoughts:
1. I agree that you do address many of the points I quickly made. The link you provided is MUCH better than the one I was sent to. In my defense, I got the original link from Scott, who is paying pretty close attention to EA and 80K hours and doctor-related issues, and as laid out, it definitely looks like you lose a million points for asking the wrong questions. I do see that later on, you recommend that existing doctors NOT quit, and for basically correct reasons, which is good.
2. I am modeling the USA system, as opposed to the UK system. This may account for some differences. In particular, in the USA the fixed number of medical school / residency slots essentially binds, so by quitting you decrease the count by 1.0, and by going to medical school, you increase the count by 0.0. So quitting and entering are both worse. You model the economic impact of marginal entry/exit, but from a fixed-pool-of-health-care-resources, elastic-labor-supply model that doesn’t work in the USA. Similarly, the UK has a rationing system that the USA does not; decreasing supply and increasing cost leads to a much bigger mess here than it would in the UK, where they can actually do a reasonable job of choosing what services to lose if they lose resources. Certainly I would say quitting is ‘less bad’ in the UK on many levels.
3. I think that you underestimate how good a doctor someone can choose to be relative to average, on several levels, if they’re on the 80K level of trying to make a difference. Lot of ways to do better.
4. I don’t accept the exchange rate (of buying QALYs with donations) you take as a given, even on the first direct level. I don’t think you can save that many QALYs (nor do I think you can translate the two forms like that, either).
5. As you note in your conclusion, 600 QALYs, or 20 lives saved, by working for 30 years seems pretty great if you don’t look at it through the exchange rate, plus you get a strong salary, and plus you don’t actually get replaced if you quit. If anything, this number is HIGHER than I would have guessed, not lower.
6. Doctors do a lot of good that is not about health. I understand we can’t count it here, but still.
7. Still, the 80K hours guy quit his job as a doctor due to it not being high impact enough, and Scott was made to feel bad about being a doctor because it’s not high enough impact. Not great, Bob.
Overall, I’d say, the full, later analysis seems much better, and the disagreements I have with it are less ‘this is wrong, stupid and irresponsible’ and more ‘this is a reasonable good faith effort if you buy fully into a few background assumptions about both philosophy/morality and the feasibility/effectiveness of marginal charitable intervention that I don’t agree with.’ Or at least, it is that, give or take me actually checking the studies and math, which I’m not going to do.
No worries, and on the matters you remark upon:
1) I fear you perhaps should find this work more objectionable than you in fact do. The profile mentions ‘leaning against’ recommending people enter medicine, and a footnote talks about a rationale contra medicine (i.e. it is already oversupplied with human capital). FWIW (and on which more later) I generally don’t think medicine is a great ‘EA option’, and although I wouldn’t offer blanket advice for everyone to quit, I don’t think someone quitting is any cause for regret.
2) I’d guess you get a more adverse response leaving medicine in the US than the UK. I agree both are fairly inelastic (although not perfectly, given IMGs etc.). I don’t think the dysfunction of the US medical system is hugely responsive to supply of doctors, and I don’t think the ‘QALY yield’ of going into medicine will jump by an order of magnitude when you account for these.
3) I agree the wideness of the distribution of impact in medicine is very important. I do make remarks on it in talks (but not in the post). There’s probably a subdivision between clinical practice and other aspects. Clinical practice seems to not scale very well, and so probably has normal-distribution like properties: broadly, you can only help the person in front of you, and it is hard to see 10x more patients or make their outcomes 10x better etc. Other things doctors do – like research – obviously have much wider distributions, and so the medically trained nobelists are probably much more productive than their peers. Yet I take the opportunity to note that medical training is only necessary for clinical practice, and other things doctors do well in (e.g. research, management) can also be done ably by non-medics.
4) The conversion rate quoted in 2012 is in hindsight optimistic. In talks I give I tend to use a ballpark of $100/QALY. I believe although the estimates are rough, they are not incommensurate, so I’m happy to defend claims along the lines of “The expected value of a typical medical career is several thousand dollars given to a good global health charity”. For whatever it is worth, if you stuck a gun to my head to offer a central estimate for the impact of a typical doctor, I’d be giving figures more in the 2 QALY/year range.
5) I agree medicine is worthwhile. Yet the sort of options for ‘EAs’ tend to be even more ambitious, so it seems good try and make comparisons to these (global health donations makes one natural yardstick).
7) [The much promised ‘which more later’]. I disagree with this bit (and what seems to be running under the hood of your comments) entirely.
I am fortunate that the 80k career coach who left medicine to do it is a friend of mine (I don’t speak on his behalf). I think his decision is not cause for regret, but rather of celebration. What’s the problem with someone leaving medicine to do something which is much higher impact?
One problem would be if it is mistaken: if working as a career coach just involves sucking people out of the world to whir the wheels of the EA behemoth to no real benefit, it would be much better to stick with the day job. Yet EA seems to be doing pretty well at actual benefit (9 figures moved to charities etc. etc. etc.) The value proposition for an 80k careers coach would be helping generate more people who can do valuable things like AI safety research – and, again, there seems tangible evidence 80k is actually driving output (https://80000hours.org/2016/12/metrics-report-2016/#what-did-the-changes-consist-of). If our career coaches ‘life’s work’ is at least a couple more people like this, this looks to me much better than a typical medical career (even modulo caveats about elasticity), and with a much thicker right tail than the ‘exceptional doctor’ one.
(A lot seems to turn on the actual merits. If Scott thought medicine was generally a better bet than going to work for 80k, it would be odd to feel guilty on finding out a career coach left medicine – it would seem, by these lights, to reflect an admirable yet mistaken moral commitment.)
Another problem could be second order costs even if the object level case goes through: if our career coach went around sneering at other doctors, and generally did his bit to contribute to a generally toxic culture where we diss people for any luxury purchase or a ‘non-optimal’ career and so on. Yet in my experience he does none of these things, and the psychologising indulged in up-thread (secretly disillusioned with medicine, but pretending he ‘did it for the QALYs’ to save face in a way that coincidentally makes others feel guilty) is false as well.
If we grant some minimal charity and take him at his word that he left medicine because he thought working for 80k would have a bigger impact, what would we have him do otherwise? Lie about his motivations for fear of making others feel guilty?
I interpreted Scott’s guilt as semi-sincere, but if it arises because of bad behaviour by others, it reflects poorly on EA in toto (I am also aware of issues re. scrupulosity, but I generally endorse the ‘Official party line’: it seems this is likely counter-productive by the lights of utilitarianism, and bad by the lights of other reasonable moral views). Yet if people are feeling guilty in a manner which arises through the good-faith behavior of others (e.g. offering an estimate of impact of medicine, sincerely leaving medicine with the intent to ‘do more good elsewhere’), then I don’t think this can (or should) be helped. I give significantly less to charity than I could, and this is shown in stark relief by others in the EA community who give a lot more. That I am not like them is cause for moral regret, but I am glad, not sad, to be able to see their moral example.
Thanks for the detailed reply, again. This all seems very reasonable, and obviously I wish the man (and you) well. I agree that we disagree on the beliefs I have under the hood, and I think you’ve cashed enough of them out well enough that we don’t need to continue. I think you’re doing a good job of reaching the right conclusions given you have different views on the under-the-hood stuff.
(I think there’s a typo/math mistake in #4, since if it’s $100/QALY then several thousand dollars is still well under the estimate for the impact of a typical doctor – did you mean per year? Odd that an order of magnitude difference here doesn’t really change either of our positions that much, but I think it doesn’t and shouldn’t, given our other views, more of a can’t-let-the-thread-dangle kind of thing.)
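Spelling out the back-of-envelope arithmetic in question (all figures are the ones quoted in this exchange; the per-year reading is the guess being floated, not a confirmed correction):

```python
# Figures quoted in this exchange, all under discussion rather than
# settled: a ~$100/QALY conversion rate, the career profile's ~600
# QALYs over a ~30 year career, and the revised central guess of
# ~2 QALYs per year.
DOLLARS_PER_QALY = 100
CAREER_YEARS = 30

profile_career_qalys = 600               # original whole-career estimate
revised_career_qalys = 2 * CAREER_YEARS  # the ~2 QALY/year reading

# Priced at $100/QALY, the original estimate values a career at
# $60,000 -- an order of magnitude above "several thousand dollars"...
print(profile_career_qalys * DOLLARS_PER_QALY)   # 60000

# ...while the ~2 QALY/year figure lands right in that range.
print(revised_career_qalys * DOLLARS_PER_QALY)   # 6000
```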
Added a line above that clarifies that 80K Hours is in fact not recommending that doctors quit, in case that was implied. I can’t really edit comments (or at least haven’t figured out how yet), so I’ll just note here that my harsher criticisms of the analysis, as opposed to the actual expressed feelings and attitudes observed in the field at EA Global, are restricted only to the particular page in question, and the full analysis is much more reasonable and its issues lie in its assumptions outside medicine.
Read this twice now. Let me see if I can distill my counterargument here:
– Spending money on beer or video games impacts others in a pretty mild way
– Spending money on malaria prevention in impoverished nations is, by contrast, a great deal better for other people
– Lots of people spend money on beer or video games when, for a small switching cost, they could spend money on malaria prevention
– It is ergo a “bad thing” (or at least a large sum worse) for me or those people to not switch. If I feel bad about not switching because it’s not optimal, whether or not I “should” feel guilty, it seems at least like I’m honestly assessing the facts. That guilt is not based on a delusion. It’s not because EA has brainwashed or lied to me. Sometimes the things I or you find most motivating, exciting, or pleasurable really are less effective at helping people than malaria nets.
– If you can manipulate me into redirecting my money from beer to malaria prevention, and my guilt impacts me less than the malaria nets impact people in Sudan, then that’s a positive act.
That’s really it. That there are organizations dedicated to bad things that use the same tactic is entirely irrelevant. That the money represents my slack and could be used to start Google or finance prediction markets or otherwise “Do Things” is also irrelevant. The money is currently being allocated towards my porn subscription. Mostly people spend money on the things that benefit them, and those things don’t have some secret underlying society-preserving side effect besides keeping the other guy in business.
I understand you’re brilliant, and your thoughts on society and altruism and organizational politics are grand and nuanced. I understand that you can probably think of some 160-IQ 5D chess moves or billionaire CEOs that move the needle more than EA donations. And I understand that the case for donations is absolutely an argument that’s out-to-get-me, and scary organizations rely on a variation of this argument all the time to convert people to unworthy causes, and accepting this argument means accepting that perfection is really rough and feeling a little guilty. Unfortunately my caveman hind-brain doesn’t see you actually contradict the argument. I would like to see a plainly stated, different point of view that neither relies on nor is cryptically inserted into a Marxist-scale prologue of ideological world-building.
Written after I wrote the rest of this: Apologies if this is harsh/rude/unfair, but I wanted to actually answer your question in a small amount of space as best I could quickly. Your core request is something like “write something on the level of a philosophy book that would go alongside Aristotle, Kant, Hume and friends” and yeah that’s actually a great idea if I could pull it off but as you can imagine, HARD.
You’re saying this.
1. Currently you do spend money on X.
2. I prefer you gave the money to me so I could spend it on Y.
3. I can get you to give me your money.
4. I should do that.
And you think ‘well, yes, that should be the default way of thinking, you do math and Y>X after extraction costs TO ME so why shouldn’t I do that?’ and also ‘well my obviously exactly correct with nothing missing altruist utilitarian calculus fully describes value so my calculation is valid’ and ‘no manipulating people isn’t a cost to them and won’t do anything bad when done at scale what are you talking about’ and so on and so on, and I could provide detailed models of first order effects, second order effects, third order effects, virtues and habits cultivated or destroyed, incentives altered, psychologies damaged, production reduced, ability to process information destroyed, living in a fully extractive dark forest, slack and innovation destroyed, and also have you noticed all of the giant piles of skulls and seriously what the hell I can barely even and seriously what the hell.
It would be useful to think about exactly why in your model this doesn’t result in people spending all their time trying to extract resources from others, why increasingly hostile methods aren’t used when you’re not making unprincipled exceptions, why a world in which a lot of such activity starts wouldn’t rapidly end up with fewer resources rather than more, why the whole thing isn’t easily hijacked by hostile forces, etc.
If your demand is ‘well, I wrote a simple equation on the board and named a first order effect, so that’s a full world model, and so it’s on you to create your own full world model and make it fully legible to me if you want me to change’, well, sorry, but it does not work like that and I don’t know how to make it work like that.
Don’t apologize for being rude to someone who is already being rude to you.
> And you think ‘well, yes, that should be the default way of thinking’
Well no, I don’t. I think it should be _my_ way of _doing_. I obviously don’t want my pseudo-political opponents to do the same thing, and would prefer they thought differently, because it would mean they were less effective at competing with me. I assure you that is a distinct point, and compatible with including error bars in my utility calculations, and error bars for the error bars, and with navigating my damaged hardware.
> ‘no manipulating people isn’t a cost to them and won’t do anything bad when done at scale what are you talking about’
I do mention the first clause, and I’ll say it more explicitly now – it costs them the beer and then possibly some mental stability. In the comment I work that into the calculation and conclude malaria is worse.
>It would be useful to think about exactly why in your model this doesn’t result in people spending all their time trying to extract resources from others, why increasingly hostile methods aren’t used when you’re not making unprincipled exceptions, why a world in which a lot of such activity starts wouldn’t rapidly end up with fewer resources rather than more, why the whole thing isn’t easily hijacked by hostile forces, etc.
It doesn’t because my model doesn’t demand such things. I’m not scaling anything. I’m just convincing someone to donate to EA by pointing out they should feel guilty. I’m not establishing a code of conduct unless you think I’m indirectly creating one in a second or third order way, and I really don’t. If I were designing humans from scratch, president of a social club, or drafting a constitution I might consider introducing principled objections against guilt tripping as a feature in order to prevent exactly the behavior you describe (although honestly in general I think guilt tripping is a net good). I might even follow and enforce it. However, I’m not doing that, I’m deciding where resource X should go and how to put it there. It unambiguously should go towards the malaria nets.
I think you’re using the this-saves-lives-so-it’s-justified thing when you want to justify your actions, but then claiming it doesn’t count when it would point to actions you don’t feel like taking, and not noticing that there’s a contradiction. If it’s unacceptable not to inflict costs on people to extract their resources to spend on X, then it’s also unacceptable not to maximally scale efforts to do this, for exactly the same reasons.
And that’s what you’re doing. You are engaging in hostile action, to make people’s lives worse, in order to extract their resources from them, in order to spend it on what you think is better. And you’re saying that the ends justify the means.
It’s not like your logic thinks you should stop anywhere. If you don’t think you should rob them, why not? How could the negative impact of robbery make up for the nets? If you don’t think you should enslave them, if you can do so, why not? It’s only one person’s suffering.
And then you’re referring to anyone who doesn’t want this as an enemy, and wishing upon them what you consider errors (and presumably would spend some amount to inflict such errors upon them), so they’ll be less effective at making things happen in the world. And that’s because you notice that if other people, who have a different X they think is better, went around doing this too, it would very much not end well. And often hasn’t in the past.
Anyway, I think you now should have a very good idea where I’m coming from, and what I see when I see people making this style of argument and drawing these types of conclusions, and then going out in the world to implement it, and convince others to implement it, which by the way is scaling it, or trying to.
I need to get back to my job and my other tasks, so calling it there, and pre-committing to not responding further.
Just FYI, I have a sense of strong moral outrage at the kind of Robin Hood/slack-seizure behavior I outlined. I don’t do that or typically encourage others to. I was writing a comment that articulated a kind of indignation at your indignation at people raising money for EA, then I wondered why there are these boundaries at all, and that somehow morphed into LARPing as an unscrupulous ideologue to see what you would say. I honestly didn’t expect you to reply, and somehow that made it feel more appropriate to send. I still feel like there has to be some way in which you’ve turned your mind into an Escher painting in order to be anti-malaria-nets-fundraising, but I sincerely apologize for being mean.
Ah, didn’t anticipate that, and it seems OK to break the commitment in order to accept your apology. I respond to most comments here, and it takes a lot of people by surprise.