Response To (SlateStarCodex): Against Against Billionaire Philanthropy
I agree with all the central points in Scott Alexander’s Against Against Billionaire Philanthropy. I find his statements accurate and his arguments convincing. I have quibbles with specific details and criticisms of particular actions.
He and I disagree on much regarding the right ways to be effective, whether or not one is acting as an altruist. None of that has any bearing on his central points.
We violently agree that it is highly praiseworthy and net good for the world to use one’s resources in attempts to improve the world. And that if we criticize rather than praise such actions, we will get fewer of them.
We also violently agree that one should direct those resources towards where one believes they would do the most good, to the best of one’s ability. One should not first give those resources to an outside organization one does not control, and which mostly does not use resources wisely or aim to make the world better, in the hopes that it can be convinced to use those resources wisely and aim to make the world better.
We again violently agree that privately directed efforts of wealthy individuals often do massive amounts of obvious good, on average are much more effective, and have some of the most epic wins of history to their names. Scott cites only the altruistic wins and effectiveness here, which I’d normally object to, but which in context I’ll allow.
And so on.
Where we disagree is why anyone is opposing billionaire philanthropy.
We also disagree about whether Scott’s post was a useful thing to write. I agree with everything he says, but expect it to convince less than zero people to support his position.
Scott laid out our disagreement in his post Conflict vs. Mistake.
Scott is a mistake theorist. That’s not our disagreement here.
Our disagreement is that he’s failing to model that his opponents here are all pure conflict theorists.
Because, come on. Read their quotes. Consider their arguments.
Remember Scott’s test from Conflict vs. Mistake (the Jacobite piece in question is about how communists ignore problems of public choice):
What would the conflict theorist argument against the Jacobite piece look like? Take a second to actually think about this. Is it similar to what I’m writing right now – an explanation of conflict vs. mistake theory, and a defense of how conflict theory actually describes the world better than mistake theory does?
No. It’s the Baffler’s article saying that public choice theory is racist, and if you believe it you’re a white supremacist. If this wasn’t your guess, you still don’t understand that conflict theorists aren’t mistake theorists who just have a different theory about what the mistake is. They’re not going to respond to your criticism by politely explaining why you’re incorrect.
I read Scott’s recent post as having exactly this confusion. There is no disagreement about what the mistake is. There are people who are opposed to billionaires, or who support higher taxes. There are people opposed to nerds or to thinking. There are people opposed to all private actions not under ‘democratic control’. There are people who are opposed to action of any kind.
There are also people who enjoy mocking people, and in context don’t care about much else. All they know is that as long as they ‘punch up’ they get a free pass to mock to their heart’s content.
Then there are those who realize there is scapegoating of people that the in-group dislikes, that this is the politically wise side to be on, and so they get on the scapegoat train for self-advancement and/or self-protection.
Scott on the other hand thinks it would be a mistake to even mention or consider such concepts as motivations, for which he cites his post Caution on Bias Arguments.
Caution is one thing. Sticking one’s head in the sand and ignoring most of what is going on is another.
One can be a mistake theorist, in the sense that one thinks that the best way to improve the world is to figure out and debate what is going on, and what actions, rules or virtues would cause what results, then implement the best solutions.
One cannot be an effective mistake theorist, without acknowledging that there are a lot of conflict theorists out there. The models that don’t include this fact get reality very wrong. If you use one of those models, your model doesn’t work. You get your causes and effects wrong. Your solutions therefore won’t work.
There already were approximately zero mistake theorists against billionaire philanthropy in general, even if many of them oppose particular implementations.
Thus, I expect the main response to Scott’s post to mainly be that people read it or hear about it or see a link to it, and notice that there are billionaires out there to criticize. That this is what we are doing next. That there is a developing consensus that it is politically wise and socially cool to be against billionaire philanthropy as a way of being against billionaires. They see an opportunity, and a new trend they must keep up with.
I expect a few people to notice the arguments and update in favor of billionaire philanthropy being better than they realized, but those people will be few, and tacking an extra zero onto their positive impact estimation column will not change their behavior much.
There were some anti-government arguments in the post, in the hopes that people will update their general world models and then propagate that update onto billionaire philanthropy. They may convince a few people to shift political positions, but less than if those arguments were presented in another context, because the context here is in support of billionaires. Those who do will probably still mostly fail to propagate the changes to the post’s central points.
Thus, I expect the post to backfire.
What post would you have written in Scott’s place? Not necessarily the full post, just an outline.
Good question.
Any/all of:
Option 1: Nothing. Something Is Wrong On the Internet. Ignore.
Option 2: Post aimed at reminding the actual billionaires that this is just online mob behavior and will pass, and what they do is super important.
Option 3: Post pointing out why these complaints are happening, respecting the real complaints to the extent they deserve respect.
Option 4: Post making the case against big government and/or for his causes, without a harmful framing.
Option 5: General attempt to develop a better model of the circumstances and dynamics that lead to such applications of conflict theory, and how to fix them.
Option 6: Approach this as a conflict and use conflict theory. Devise countermeasure.
Those seem like the obvious/clear ways to approach this.
Thanks — I had a somewhat similar feeling of hard-to-describe futility when reading Scott’s post. But I think his mistake is a little more subtle than conflict vs. mistake theory. I’m not sure who exactly it is that Scott is arguing against, but in my little media bubble it’s not true that everyone who hates on billionaire philanthropy is a pure conflict theorist (unless you expand the definition to the point where everyone on Earth is a conflict theorist).
Arguments against billionaire philanthropy can have something of conflict theory and something of Moloch. But they can also be quite rational if you accept as a tenet that billionaires are evil.
Consider: are you against Scientology philanthropy? Scientology is not the most charity-prone religion, but it certainly has charities. Some are there for publicity and proselytizing, but there are enough decent people in the religion that there must be real, pro-social philanthropic organizations with a Scientological bent. Now imagine a world where for some reason Scientology-adjacent organizations donated billions and billions of dollars to very reasonable causes: malaria prevention, animal welfare, and so on. In such a world many of us would probably still oppose it, but there would be (non-religious) apologists of Scientology who would say “well, it’s evil and cultish and homophobic, but considering all it does for the world, it’s still a net good”. More staunch opponents would then argue “no — the philanthropy just gives it a veneer of respectability that gives Scientology room to be even more craven and dangerous”.
I think that this kind of discussion is going on right now on the internet in “anti-billionaire” circles. I happen to disagree with the basic tenet that all billionaires are evil and coordinated as a class to enrich themselves at the expense of other social classes. (Though I do believe something like this about scientologists.) I think billionaires, just like people, can be either good or bad and have no coordination whatsoever, evil or otherwise. But I don’t think having the opposite belief is conflict-theoretic, and I think this is fundamentally where the disagreement lies.
Sure. I think our models are fully compatible. And I agree that conflict vs. mistake isn’t exactly right, in the sense that it’s more complex than that; it’s just the closest I could get with a small number of words (and using Scott’s concepts when discussing Scott’s stuff is a factor as well).
But I do think that this is mostly, yes, conflict vs. mistake. A mistake theorist welcomes good actions from otherwise bad actors. A conflict theorist worries mostly about what that does to the conflict, so thinks they might actually make things worse. That’s exactly the disagreement you’re describing.
And if someone’s position is, to quote you, “all billionaires are evil and coordinated as a class to enrich themselves at the expense of other social classes” then how in the world is this person NOT a hardcore conflict theorist? Whether they’re right or wrong? That is kind of the central case of conflict theory: The rich and powerful against the people.
Of course, even if I did think that all billionaires were doing this, and the whole ‘give away almost all my money to obviously good charities’ was a dastardly publicity stunt to somehow make Gates and company even more money (I’m just going with it, don’t ask how that works), I’d still say, please do keep making the contributions.
I think that believing that people in general have no coordination whatsoever, evil or otherwise, is necessarily conflict theory, though generally of the more cautious and defensive variety. I also think that conflict theorists should be against billionaires in general.
I think that being a pure mistake theorist is *lying*, even though saying “there are no, or practically no, bad people, and the idea of retribution is an immoral confusion” is true.
Partial mistake theorists should quickly notice that democracy, inequality and centralization are basically conflict theory intuitions, so on the margin they should favor less of all of them. That implies that billionaires are better than the government, even if they are both evil.
Anand Giridharadas mocks non-zero-sum solutions and otherwise falls pretty far on the Conflict side of the spectrum:
https://www.theguardian.com/us-news/2018/dec/18/anand-giridharadas-author-aspen-wealthy-elite
On the other hand, Vox is an influential publication:
https://www.vox.com/recode/2019/7/25/8891899/john-arnold-billionaire-criticism-donor-advised-funds-silicon-valley-philanthropic-loophole
The Vox piece is a technocratic criticism of a policy that misaligns incentives; i.e. a Mistake (incidentally, I think “Mistake Theory” is a misnomer; the real opposite of Conflict Theory is “Liberal Pluralism.”)
So I’m not so sure that that criticism of philanthropy is overwhelmingly rooted in Conflict Theory.
I did see that Vox article, as much as I try to avoid them (Vox is a site that claims to be all mistake theory and objective and mostly isn’t, and many of their articles copy this fractal pattern). We have the headline calling this a ‘loophole’ when it’s very much an intended way of doing things. And we have it mainly saying, yes, that there’s a technical mistake in the law causing a bad result and the law should be changed. But I don’t get the sense that the post in question is against billionaire philanthropy at all, and I don’t see it as something that Scott would feel the need to try and talk someone out of, either.
I don’t see how liberal pluralism is the opposite of mistake theory; I can *sort of* see how you could get that hypothesis if I squint? But it seems not a good fit, to me.
Agree with Zvi that Liberal Pluralism is orthogonal to the Mistake/Conflict dichotomy. Mistake/Conflict is a theory of “ways different people think other people are wrong” while Liberal Pluralism is more about people not actually being wrong.
Hi Zvi, I enjoy your writing.
But here, I think you are missing that there is a third audience: the undecided, the nones, etc. People like myself who don’t have a strong opinion on something they have not thought too seriously about, despite hearing several arguments about the topic. Scott’s post for me was a solid takedown of the arguments I had previously heard or read, but had not taken the time to think about.
My habit is not to think about the meta “Is this person mistake or conflict theorist? If mistake, read on, if conflict don’t engage.” People are rarely pure mistake or conflict theorists; they vary by the topic and the season. So just dealing with the argument helps others like me understand what the argument would be if it were a good faith discussion about charitable giving.
Yeah. For sure.
I guess the one thing I would say to defend Scott, if not the post, is that if he weren’t such a conflict-blinkered ultra-mistake theorist, he wouldn’t be the Friedmans’ favorite blogger. Yeah, it’s a huge mistake here, and it’s worth hammering it into his head. But it’s exactly the reason he’s blind on this that he’s so great (IMHO).
I can respect the premise that there are advantages to being blind in some areas, to allow one to focus in others. I can certainly relate, looking back. Perhaps we’re better off if there are people who don’t know. But a lot of people are being led into this mistake and those around it, and it’s causing a lot of damage, and I have to assume it’s better to try and fix it.
I dunno. This was a quick write that generated a lot more reaction than I expected.
FWIW, I think Scott has one advantage here. He’s not speaking to the entirety of civilization. He’s speaking to the kind of people who read Slate Star Codex. And almost all of us are mistake theorists to a near-autistic degree (or a literally-autistic degree, or a somehow-make-autists-look-functional degree). So there’s no real worry about making conflict theorists angry, because conflict theorists would never consider his arguments in the first place. They might occasionally link him, but they’re not going to take what he says seriously. Why would he worry about what they think?
Maybe it’s a good, or not-that-bad, strategy to simply ignore ‘conflict theories’. Conflict theorists are, of course, going to interpret one’s actions relative to the theorized conflict – at least up until they realize they’ve ‘made a mistake’. But it doesn’t seem particularly efficacious to engage in a ‘conflict’ one thinks is a ‘mistake’.
Per the relevant conflict, I’d definitely be considered ‘pro billionaire’ by the conflict theorists but, as I consider that conflict theory a ‘mistake’, I don’t consider myself to be on that side. But, it doesn’t really seem possible to do anything ‘inside the conflict theory’ except fight on the terms of that theory. Any attempt to disavow or repudiate a conflict theory is inevitably going to be interpreted as partisan by any conflict theorists – there’s no way to escape the conflict from their perspective, except to the degree to which one can reach them via mistake theory.
And more generally, the intensity or degree in which anyone believes in any particular ‘conflict theory’ must itself be either due to ‘faith’ or some other ‘mistake theory’. If it’s faith, it’s impregnable against contrary evidence. And otherwise, it’s susceptible to argument and persuasion.
Ignoring conflict theorists is fine if you’re asking what the ideal action is given sufficient power plus sufficient freedom from conflict theorist interference. If you’re trying to understand what is going on, or change the results, or pick a policy that will interact with a lot of conflict theorists, you’re going to… make a lot of mistakes. And generally, if you model the world without them in it, your model is going to have a lot of nonsense in it. Which I think is a serious problem for the class of folks who read this blog and SSC.
A conflict theorist is not obviously wrong about what’s going on, nor impossible to convince, nor doing a net harmful thing. A lot of them can be convinced, but ignoring their perspectives and points entirely is a poor method of doing that.
Faith as a distinct category of thing that just can’t be altered isn’t really in my model – faith is created and faith is destroyed, all the time.
I’m not sure that a lot of us folks that read this blog or SSC _are_ neglecting to include conflict theorists in our models. I think having the phrase has made that easier but ‘politics is the mind-killer’ is an ‘old’ LessWrong insight and captures a lot of the same ideas.
I’m in ‘violent’ agreement with you about conflict theorists not being (obviously) wrong, nor acting in a net-harmful way, generally. I’m less sure they’re impossible to convince relative to reasonable limits on possibility.
Of course faith is destroyed, and created, all the time. But any particular instance is often impervious to any one ‘attack’ or attempt to induce a ‘crisis’.
I think the specific subject of Scott’s post is one that is particularly unlikely to result in fruitful dialog among conflict and mistake theorists. Egalitarianism is almost, if not actually, a human universal, and maintaining even a moderately unequal status hierarchy seems to require significant social, and moral, ‘infrastructure’.
I think it’s likely to be most _effective_, for areas with particularly intense ‘conflict theories’, to argue or discuss the relevant ‘mistake theories’ and simply ignore the conflicts. It seems _mostly_ inevitable that any attempt to address or dismiss the ‘conflict theories’ directly will be interpreted as supporting the side(s) of the conflicts that are (considered to be) relatively more powerful.
I’ll admit all of this seems _really_ murky to me. I’ve, personally, mostly given up on arguing with or trying to convince – directly – anyone that’s reached wildly different moral or political conclusions. For you, let alone Scott, to try to determine the optimal way to write or pick topics about which to write … seems so dizzyingly complicated and difficult as to be approximately impossible.
It’s very possible that all of this, on my part, is just a bunch of rationalization. I want to read Scott’s posts and my ‘revealed preference’ seems to be that I’m willing to give up a lot of possible political effectiveness to continue to do so.
Conflict vs Mistake is also about prioritizing.
Consider the value “Power should not scale exponentially with rank.” E.g. if the power of the median person is 1, and the power of the 75th percentile person is 100, the power of the 90th percentile person should be closer to 10,000 than to 1,000,000, and the power of the 99.9th percentile person should be around 1,000,000, not 10^100.
Each person could rate their agreement with this “equality” value on a scale of -10 to +10. Folks who rate it with a high absolute value will take action to support their belief, but most people will probably rate it 0 and not spend any effort thinking about it either way.
Now, consider you are a Mistake theorist with a value rating of +7. You see someone advocating for the opposite position – clearly they are making a mistake, and should be corrected / challenged / resisted, lest they misinform others. You also notice someone giving money to the opposing advocate, so you give that supporter information about the competing values, and why that person is wrong…
From the list of observable actions, you seem indistinguishable from a Conflict theorist?
This comment, and its whole parent thread, is a fantastic description of conflict theory and its dynamics:
https://www.lesswrong.com/posts/9fB4gvoooNYa4t56S/power-buys-you-distance-from-the-crime?commentId=S4QephDXJWqGJhuuH
My favorite bit:
> The harms from talking about conflict aren’t due to people making simple mistakes, the kind that are easily corrected by giving them more information (which could be uncovered in the course of discussions of conflict). Rather, they’re due to people *enacting* conflict in the course of discussing conflict, rather than using denotative speech.
> Now, consider you are a Mistake theorist with a value rating of +7. You see someone advocating for the opposite position – clearly they are making a mistake, and should be corrected / challenged / resisted, lest they misinform others. You also notice someone giving money to the opposing advocate, so you give that supporter information about the competing values, and why that person is wrong…
>
> From the list of observable actions, you seem indistinguishable from a Conflict theorist?
I don’t think this actually follows. Believing a ‘mistake theory’ doesn’t imply that anyone that disagrees with your object-level theories “should be corrected / challenged / resisted, lest they misinform others”. But, aside from whether one thinks one should ‘push back’ against mistakes, the nature of the way in which one _does_ push back is likely to be different, e.g. someone that believes a mistake theory might communicate something like “I think you’re wrong. X seems to be Y based on the evidence I’ve seen.” whereas someone that believes a conflict theory might communicate “Your claims hurt Z people.”.
You’re right that we’d expect ‘conflict’ among people that disagree, regardless of whether the participants hold a mistake theory or conflict theory _about the nature of the disagreement itself_.