Response To (Scott Alexander): Bounded Distrust
Would that it were that simple.
There is a true and important core idea at the center of Bounded Distrust.
You can (and if you are wise, you often do) have an individual, institution or other information source that you absolutely do not trust to reliably tell the truth or follow its own explicit rules. Yet by knowing the implicit rules, and knowing the incentives in place and what the consequences would be if the source was lying to various extents, you can still extract much useful information.
Knowing what information you can and can’t extract, and what claims you can trust from what sources in what contexts to what extent, is a vital life skill.
It is also a difficult and often an anti-inductive skill. Where there is trust, there is the temptation to abuse that trust. Each person has a unique set of personal experiences in the world and samples a different set of information from various sources, which then evolves one’s physical world models and one’s estimates of trustworthiness in path dependent ways. Making real efforts from your unique epistemic perspective will result in a unique set of heuristics for who you can and cannot trust.
Perspectives can become less unique when people decide to merge perspectives, either because they trust each other and can trade information on that basis, or because people are conforming and/or responding to social pressure. In extreme cases large groups adopt an authority’s stated heuristics wholesale, which that authority may or may not also share.
Scott’s model and my own have much in common here, but also clearly have strong disagreements on how to decide what can and cannot be trusted. A lot of what my weekly Covid posts are about is figuring out how much trust we can place where, and how to react when trust has been lost.
This all seems worth exploring more explicitly than usual.
I’ll start with the parts of the model where we agree, in list form.
- None of our institutions can be trusted to always tell the truth.
- However, there are still rules and associated lines of behavior.
- Different rules have different costs associated with breaking them.
- These costs vary depending on the details, and who is breaking what rule.
- This cost can, in some cases, be so high as to be existential.
- In some situations, this cost is low enough that statements cannot be trusted.
- In some situations, this cost is high enough that statements can be trusted.
- Often there will be an implicit conspiracy to suppress true information and beliefs, but the participants will avoid claiming the information is false.
- Often there will be an implicit conspiracy to spread false information and beliefs, but the participants will avoid explicitly claiming false information.
- This conspicuous lack of direct statements is often very strong evidence.
- The use of ‘no evidence’ and its synonyms is also strong evidence.
- There will sometimes be ‘bounded lying’ where the situation is painted as different than it is but only by a predictable fixed amount. If you know the rules, you can use this to approximate the true situation.
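That last bullet admits a trivial worked sketch. This is my own illustration, not anything from the post, and the numbers in it are invented: if a source distorts the picture by a roughly known, bounded amount, you can invert the distortion to approximate the true situation.

```python
# Toy illustration (my own, with invented numbers): a source that lies
# in a bounded, predictable way can still be informative, because you
# can invert the known distortion.
def debias(reported_value, typical_exaggeration):
    """Approximate the true value from a report known to inflate
    by roughly typical_exaggeration."""
    return reported_value / typical_exaggeration

# Hypothetical: a source that historically inflates figures about 3x
# reports 30,000; the de-biased point estimate is about 10,000.
print(debias(30_000, 3.0))  # → 10000.0
```

The exaggeration factor here is doing all the work, which is exactly the point of the bullet: the trick only works while the distortion stays predictable.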
The difference is that Scott seems to think that the government, media and other authority figures continue mostly to play by a version of these rules that I believe they mostly used to follow. He doesn’t draw any parallels to the past, but his version of bounded distrust reads like something one might plausibly believe in 2015, and which I believe was largely the case in 1995. I am confused about how old the old rules are, and which ones would still have held mostly true in (for example) 1895 or in Ancient Rome.
Whereas in 2022, after everything that has happened with the pandemic and also otherwise, I strongly believe that the trust and epistemic commons that existed previously have been burned down. The price of breaking the old rules is lower, but it is more than that. The price of being viewed as actually following the old rules is higher than the cost of not following them, in addition to the local benefits of breaking the old rules. Thus the old rules mostly are not followed.
The new rules are different. They still bear some similarities to the old rules. One of the new rules is to pretend (to pretend?) to be following the old rules, which helps. The new rules are much less about tracking physical truth and much more about tracking narrative truth.
It seems useful to go through Scott’s examples and some obvious variants of them as intuition pumps, but first it seems worth introducing the concept of the One Time, and being clear about what this post doesn’t discuss due to time and length constraints, since that stuff is very important.
Bounded Discussion (Or: Why Can Wait)
This is the long version of this post due to lack of time to write a shorter one. I do hope at some point to write a shorter version.
This post is thus already extremely long, and doesn’t have additional space to get much into my model of why much of this is the case.
Here are some of the things I am conspicuously excluding due to length and time.
I’m excluding all discussion of simulacra levels, and all discussion of moral mazes or even motive ambiguity, or the dynamics of implicit conspiracies, despite them being important to the underlying dynamics. I’m excluding all the reasons why there is pressure to break the rules as much as possible and be seen to be doing so, and why this pressure is currently historically high and increasing over time along with pressures to visibly reverse all principles, values and morality.
Thus I’m not ‘putting it all together’ in an important sense. Not yet.
I’m excluding all discussion of what the Narrative actually is, how it gets constructed and decided upon, what causes it to update and to what extent it responds to changes in the physical world.
I’m excluding all discussion of why there exists an Incorrect Anti-Narrative Contrarian Cluster (ICC), or a Correct Contrarian Cluster (CCC), or how their dynamics work in terms of what gets included or excluded, or what pushes people towards or away from them.
I’m excluding most of the necessary discussion of how one evaluates a particular source to decide how bounded one’s distrust in that particular source should be and in which particular ways, and how I identify sources I can mostly or entirely trust or ones which are less trustworthy than their basic identity would suggest.
I’m excluding discussion about how to elicit truth from by-default untrustworthy sources, if given the opportunity to interact with them, which is often possible to varying degrees.
I’m excluding a bunch of synthesis that requires more careful simmering and making into its own shorter post.
I’m excluding a bunch of other things too. This is quite the rabbit hole to go down.
Now, on to the One Time.
There’s a lot of great tricks that only work once. They work because it’s a surprise, or because you’re spending a unique resource that can’t be replenished.
The name here comes from poker, where players jokingly refer to their ‘One Time’ to get lucky and hit their miracle card. Which is a way of saying, this is the one that counts.
That’s the idea of the One Time. This is the high leverage moment. It will give away your secret strategy, show your opponent their weakness, blow your credibility, use all your money, get you fired, wreck the house or cash in that big favor. The rules will adjust and it won’t work again.
Or alternatively, if it succeeds you get your One Time back, but damn it This Had Better Work or there will be hell to pay. You come at the king, you best not miss.
Maybe that’s acceptable. Worth It. Then do what you have to do.
That’s one way of thinking about the price one has to pay for breaking some of these rules. Would this be a slight annoyance? Or would you be cashing in your One Time?
Shooting at Yankee Stadium
Scott frames this as you being a liberal and thus lacking trust in Fox as a source, but it’s important to note that this does not matter. Either Fox News is trustworthy in a given situation, or it is not. MSNBC is also either trustworthy in a given situation to a given degree, or it is not. Your views on social and economic policies, and which party you want in power to what degree, should not matter. The exception is if the reason you are on one side or the other is that you believe one side’s sources are more honest, but if that’s true then you’re a liberal because you don’t trust Fox News, rather than not trusting Fox News because you’re a liberal.
Anyway, this is his first example:
One day you’re at the airport, waiting for a plane, ambiently watching the TV at the gate. It’s FOX News, and they’re saying that a mass shooter just shot twenty people in Yankee Stadium. There’s live footage from the stadium with lots of people running and screaming.
Do you believe this?
Yes, of course I believe it. In fact it’s rather overdetermined. Why?
1. Clear physical fact claims, specific details.
2. If false, will be known to be false quickly and clearly.
3. If caught getting this wrong, price would be high relative to stakes.
4. Admission against interest (could also say poor product-market fit).
5. Live footage from the stadium.
6. They are not in the habit of this type of lie.
The combination of these factors is very strong, and in the absence of counterevidence I would treat this as true with probability of essentially (1 minus epsilon).
I agree with Scott that deep fakes of live events are beyond the reasonable capabilities of Fox News or any other similar organization at this time. And also that even if they could do them, the price of getting caught doing so would be very high, even higher than the already high price of being seen getting this wrong. So the live footage alone makes me believe whatever I see on the live footage.
I would only doubt that if the stakes involved were somehow high enough that Fox News would plausibly be cashing in their One Time (e.g. if #3 was false, because the value at stake rivaled the potential price).
Note of course that the live footage doesn’t always mean what it looks like it means. It can and will be framed and edited to make it look like they want it to look, and anyone interviewed might be part of the production. It doesn’t automatically imply a mass shooting. But you can trust the literal evidence of your senses.
If it was Yankee Stadium without live footage, that lack of footage would be highly suspicious, because there should be cameras everywhere and Fox should be able to get access. I’d wonder what was up. But let’s say we move this to a place without such cameras, so it’s not suspicious, or otherwise we don’t have footage that actually proves that the shootings happened (and for whatever reason it’s not suspicious that we lack this). Are we still good?
Yeah, we’re still good. It’s still reporting physical facts with specific details, in a way that if anything goes directly against Fox’s vested interests. There’s no reason to lie.
What if in addition to removing the live footage, it was MSNBC or CNN instead, so there was a clear reason to claim there was a mass shooting but the situation is otherwise unchanged?
Now I notice that this is a sufficient combination of missing factors that I’m moving from numbers like (p = 1 – epsilon) to something more like p ~ 0.95. They could make a mistake here, they have reason to make a mistake here, they’re in the habit of calling things mass shootings whenever possible. The price for getting this wrong isn’t zero, but the mainstream media is good at memory holing its ‘mistakes’ of this type and isn’t trying to be super reliable anymore.
They are in the habit of this kind of lie, of finding ways to claim there are lots of mass shootings all the time, and characterizing everything they can as a mass shooting, so #6 also does not apply, although there would still be something that their source was claiming had happened – they wouldn’t as of yet be willing to use this label if none of their sources were saying bullets were involved or that anyone had come to harm.
It’s probably still a mass shooting, but if my life depends on that being true, I’m going to double check.
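The move from (1 – epsilon) down toward p ~ 0.95 as supporting factors drop away can be sketched quantitatively. This is a toy model of my own, not anything from the post, and every likelihood-ratio number in it is made up: treat each trust factor (specific physical claims, quick verifiability, high cost of error, admission against interest, live footage, no habit of this lie) as an independent likelihood ratio and combine them in log-odds space.

```python
# Toy sketch (my own illustration; likelihood-ratio numbers invented):
# each trust factor multiplies the odds that the report is true.
import math

def posterior(prior_odds, likelihood_ratios):
    """Combine a prior (in odds form) with independent likelihood
    ratios, returning the posterior probability the claim is true."""
    log_odds = math.log(prior_odds) + sum(math.log(lr) for lr in likelihood_ratios)
    odds = math.exp(log_odds)
    return odds / (1 + odds)

prior = 1.0  # even odds before hearing the report
all_factors = [20, 10, 10, 5, 50]  # hypothetical ratios for the six-ish
                                   # factors all pointing the same way
one_factor = [20]                  # only specificity survives

print(posterior(prior, all_factors))  # very close to 1
print(posterior(prior, one_factor))   # roughly 0.95
```

The numbers are directional only: each independently-supporting factor you remove pulls the posterior away from near-certainty, and removing several at once pulls it a long way.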
Scott’s next hypothetical:
Fox is saying that police have apprehended a suspect, a Saudi immigrant named Abdullah Abdul. They show footage from a press conference where the police are talking about this. Do you believe them?
Once again, yes, of course. This is no longer an admission against interest, but I notice this is an actual red line that won’t be crossed. The police either apprehended a suspect named Abdullah Abdul from Saudi Arabia or they didn’t, this can be easily verified, and there will be a price very much not worth paying if this is claimed in error. There is a strong habit of not engaging in false statements of this type.
If this was more speculative, they would use particular weasel words like ‘believed to (be/have)’ at which point all bets aren’t quite off but the evidence is not very strong. If the weasel words aren’t there, there’s a reason.
However, I don’t agree with this, and even more don’t agree with the sign-reversed version of it (e.g. swap MSNBC for FOX and reverse all the facts/motivations accordingly):
It doesn’t matter at all that FOX is biased. You could argue that “FOX wants to fan fear of Islamic terrorism, so it’s in their self-interest to make up cases of Islamic terrorism that don’t exist”. Or “FOX is against gun control, so if it was a white gun owner who did this shooting they would want to change the identity so it sounded like a Saudi terrorist”. But those sound like crazy conspiracy theories. Even FOX’s worst enemies don’t accuse them of doing things like this.
This very much does not sound like a crazy conspiracy theory. It is not crazy. Also it would not be a conspiracy. It would be some people making some stuff up, only in ways locally and noticeably more brazen than those previously observed, which we thus think is unlikely. But if someone came into Scott’s office and said ‘I think FOX’s story today about that Saudi terrorist is importantly false’ then it would be a mistake to suggest therefore putting this person on medication or asking them to go to therapy.
Of course it matters that FOX is biased and would very much like to make up a case of Islamic terrorism. FOX makes up cases of Islamic terrorism, the same way that MSNBC shoves them under a rug. And my lord, FOX would totally love to change the identity so it sounded like a Saudi terrorist. Of course they would. And MSNBC would love to make it sound like it was a white gun owner.
Before the identity is known, MSNBC and friends will run stories that assume of course it is a white gun owner, while FOX and friends will run stories that assume of course it is an Islamic terrorist. And they will hold onto those assumptions until the last possible moment when it would be too embarrassing not to fold, in the hopes of leaving the right impression (for their purposes) with as many people as possible, and to signal their loyalty to their narrative model of the world. And then they will insist they didn’t say the things they previously said, and they will definitely insist they certainly haven’t repeated this pattern dozens of times.
The question is, does adding the detail of the police identifying the suspect put this sufficiently over the line that the actions in question are no longer plausible? With these details, my answer is yes, in the central sense of there being an apprehended suspect named Abdullah Abdul from Saudi Arabia.
Whereas on MSNBC, they’re probably whistling and pretending not to notice this person’s name and origin because they’re suddenly not important, and having experts on saying things like ‘we have no idea what caused this incident, who can know, but we do know that there are so many more shootings here than any other country.’
Now flip it again, and suppose the suspect was a white gun owner. Fox will keep talking about the threat of Islamic terrorism and pretend not to notice the person was white, and probably expound upon the various FOX-friendly potential motivations and histories that could be involved long after they’re no longer remotely plausible.
Now imagine the person in question was both, and was a white person who happened to be born in Saudi Arabia, and whose name (whether or not it was given at birth) was Abdullah Abdul, and watch two completely disjoint sets of facts get mentioned.
But, you say. But! They still wouldn’t outright say the fully false things here. There are rules, you say. Everyone involved is distorting everything in sight but there’s still this big signpost where they say ‘the police have apprehended a suspect named X with characteristics Y’ and you know X is the suspect’s name, and you probably know they have characteristics Y depending on how slippery that could potentially be made.
And yes, you’re probably right. Last time I checked, they do have a red line there. But there’s a bunch of red lines I thought they had (and that I think previously they did have) that they’ve crossed lately, so how confident can we be?
Scott says this:
And there are other lines you don’t cross, or else you’ll be the center of a giant scandal and maybe get shut down. I don’t want to claim those lines are objectively reasonable. But we all know where they are. And so we all trust a report on FOX about a mass shooting, even if we hate FOX in general.
Scott links to Everybody Knows to indicate this is a case of ‘the savvy know this and then treat it like everyone knows.’ But the savvy are necessarily a subset, and not all that large a subset at that. Not only does everyone very much not know this, I don’t even know this.
I have a general sense of where those lines seem to be, but they seem to be different than where the lines were five years ago, which in turn is different from thirty years ago. I am not confident I have them located correctly. I am damn sure that very far from everybody knows even that much with any confidence, and that those who think they are damn sure often strongly disagree with each other.
I don’t expect this example to rise to anything like the level where FOX might get shut down and I’d expect it to be forgotten about within a few weeks except maybe for the occasional ‘remember when FOX did X’ on Twitter. They’ll claim they made a mistake and got it wrong and who are you to say different and why should we believe your biased opinion? That seems so much more likely to me than that this suddenly becomes a huge deal.
The reason I still believe FOX (or MSNBC in reverse) in this spot is because it’s still not something they’re in the habit of doing, and it’s still a dumb move strategically to choose this spot to move expectations in this way, in ways they can understand intuitively, and mostly that it feels like something that will feel to them like something they shouldn’t do. It doesn’t pattern match well enough to the places where outright lies have already happened recently. Right now. For now.
Yet, for all our explicit disagreements, I expect Scott in practice to be using almost the same heuristics I am using here if such events were to happen, with the difference being that I think Scott should be adjusting more for recent declines in deserved trust, and him likely thinking I’m adjusting too far.
Lincoln and Marx
I’m going to first deal with the Lincoln and Marx example, then with the 2020 election after, although Scott switches back and forth between them.
Here’s a Washington Post article saying that Abraham Lincoln was friends with Karl Marx and admired his socialist theories. It suggests that because of this, modern attacks on socialism are un-American.
Here is a counterargument that there’s no evidence Abraham Lincoln had the slightest idea who Karl Marx was.
I find the counterargument much more convincing. Sometimes both the argument and counterargument describe the same event, but the counterargument gives more context in a way that makes the original argument seem calculated to mislead. I challenge you to read both pieces without thinking the same.
A conservative might end up in the same position vis-a-vis the Washington Post as our hypothetical liberal and FOX News. They know it’s a biased source that often lies to them, but how often?
So both sides are often lying, but with some conditions under which a given statement can still be trusted. The question is what conditions still qualify.
So before looking at the counterargument, we can start with the easy observation that the headline is definitely at least a claim without evidence, which I would consider in context to be lying. Scott excuses this by saying that headline writers are distinct from article writers, and make stuff up, and everybody knows this and it’s fine. Anything in a headline that isn’t a tangible specific fact is complete rubbish.
The body of the article is a real piece of work. I didn’t need to see the counterargument to know it stinks, only to know exactly how much it stinks. It is doing the association dance, the same one used when someone needs to be canceled. Other than being about someone long dead, and that the author thinks socialism is good actually, this seems a lot like what The New York Times did to Scott Alexander, drawing the desired associations and implications by any means technically available, and because there was nothing there, being made of remarkably weak sauce.
Here Lincoln is ‘surrounded by’ a certain kind of person, and someone is that kind of person if they ‘made arguments’ of the type that a person of that point of view would make. I totally noticed that the argument that Lincoln was reading Marx was that Marx was a columnist in a newspaper Lincoln was reading, which is like saying I was, as a child, a reader of William Safire because I read the New York Times. The ‘exchanged letters’ thing, where Lincoln wrote back a form letter, I can’t say for sure I would have picked up on on my own, but I like to hope so. The clues are all there.
That’s the thing. The clues are all there. This is transparent obvious bullshit.
It’s still easy to not spot the transparent obvious bullshit. When one is reading casually or quickly, it’s a lot easier to do a non-literal reading that will effectively lie to you than the literal reading that won’t. Not picking up on the intended insinuations (or noticing them in a conscious and explicit way that lets you reject them) requires effort. And despite the overall vibe of the post being transparent enough to me that it would trigger an ‘only a literal reading of this will be anything but bullshit,’ it was less transparent to others – Scott said in a comment to a draft of this post that he’s not confident he would have sufficiently noticed if he’d seen only the original but not the rebuttal.
The direct quotes of Lincoln here are interesting. They do have quite the echo to things Marx said. And to things many others of that era said who had very different beliefs. They also make perfect sense if you interpret them as ‘you should want the slaves to be freed,’ which is the obvious presumed context when I read them, and which was then confirmed by the context later provided by the counterargument. Which also seems to include such lines as:
“Capital,” Lincoln explained, “has its rights, which are as worthy of protection as any other rights.”
They also are missing the thing that makes a socialist a socialist, which is to declare that we should find the people with the stuff, point guns at them, and take their stuff. It doesn’t even quote him saying this about slaves, and he’s the one who freed the slaves, so it seems like a strange omission. In this type of agenda-pushing, one can safely assume that if there was better material available it would have been used.
The counterargument misunderstands what is going on here.
Brockell badly misreads her sources and reaches faulty conclusions about the relationship between the two historical contemporaries. Contrary to her assertion, there is no evidence that Lincoln ever read or absorbed Marx’s economic theories. In fact, it’s unlikely that Lincoln even knew who Karl Marx was, as distinct from the thousands of well-wishers who sent him congratulatory notes after his reelection.
There’s the fact that technically no one said Lincoln read Marx’s economic theories but that’s not the point here. Brockell did not misread anything. Brockell looked for words that could be written to give an impression Brockell wished to convey while not crossing the red line of saying definitively false things of the wrong type, and Brockell found the best such words that could be found. There are no ‘faulty conclusions’ here, there are only implausible insinuations.
Anyway, yes, the rebuttal is deeply convincing, and the fact that the original made it into the Washington Post should be deeply embarrassing. Yet it was not. Scott notes that it was not, that everyone forgot about it. Scott seemingly thinks not only that the Washington Post will pay zero price for doing this, but that this was entirely predictable. As a ‘human interest’ story, in his model, no one is checking for such obvious hackery or caring about it, it’s par for the course, you should expect to see ‘we can’t know for sure that wet ground causes rain, but we do know that there’s a strong correlation, and where wet ground you can usually look up and see the rain coming down’ and who cares, it’s not like it matters whether the rain caused the wet ground or the other way around, that’s a human interest story.
There’s also the question of whether this story is lying or not. Scott seems to be trying to have it both ways.
First off, there’s the common sense attitude that the Marx/Lincoln article is of course lying. But the claim is then that this is because the questions in it are not to be taken seriously, and trust only matters when questions are sufficiently serious.
Then there’s the thing where the article didn’t technically lie aside from the headline. Which is true.
It’s hard for a naïve person to read the article without falsely concluding that Marx and Lincoln were friends. But the article does mostly stick to statements which are literally true.
I don’t think it’s mostly? I think the statements are each literally true. It’s more that it’s full of insinuation and non-sequiturs. This paragraph, for example, is all completely true aside from the questionable ‘was surrounded by socialists,’ but is also completely obvious nonsense. It gives the impression that conclusions should be drawn without actually justifying those conclusions at all, which is classic.
President Trump has added a new arrow in his quiver of attacks as of late, charging that a vote for “any Democrat” in the next election “is a vote for the rise of radical socialism” and that Rep. Alexandria Ocasio-Cortez (D-N.Y.) and other congresswomen of color are “a bunch of communists.” Yet the first Republican president, for whom Trump has expressed admiration, was surrounded by socialists and looked to them for counsel.
What are the potential outright falsehoods?
There’s that line about ‘surrounded by socialists’ above. The only evidence given is that there were a few people around Lincoln who expressed some socialist ideas, and who encouraged him to free the slaves. That doesn’t seem like it clears the bar on either ‘socialist’ or ‘surrounded.’ There are two socialists referenced, one of whom ran a Republican newspaper, supported him, and then investigated generals on his behalf, none of which has much to do with socialism. It’s no surprise that Lincoln ‘eagerly awaited’ dispatches about his generals, since his generals were one of his biggest issues. The other also ran a newspaper. It’s almost as if someone who wanted to run for office decided to become friends with the people who had access to printing presses. Smart guy, that Lincoln.
And there’s a bunch of statements like this. They seem more right than wrong, but not quite wrong enough to be lies.
If you think that sounds like something Karl Marx would write, well, that might be because Lincoln was regularly reading Karl Marx.
This is highly misleading in the sense that ‘regularly reading Karl Marx’ refers to his Crimea War dispatches in a newspaper, which he in turn might or might not have been doing, but technically that still counts. The question is whether the logical implication here counts as lying, since if you know the details it’s obvious that this could not have been why Lincoln wrote what he wrote.
Scott claims ‘the Marx article got minimal scrutiny’ but it manages to very carefully follow the correct exact pattern, and predictably got a bunch of scrutiny afterwards. I don’t buy it.
So my conclusion is that the article is intentionally misleading, a piece of propaganda designed to be obviously bullshitting in order to push a political agenda and make it clear you are willing to engage in obvious bullshit to support a political agenda.
But it’s bullshit, and isn’t lying, except for the headline. It follows The Rules, the Newspaperman’s Code that says that you can’t print known-to-be-technically-false things.
That leads to me getting confused by this.
Finally, the Marx thing was intended as a cutesy human interest story (albeit one with an obvious political motive) and everybody knows cutesy human interest stories are always false.
It could be reasonably said that everybody knows cutesy human interest stories are warped narratives at best and often centrally false, designed to give the desired impression and support the desired narrative. The post about rescuing that cat stuck in a tree is either going to talk about the dark underbelly of shady cat rescuers or else it’s going to be a heartwarming story about how a cute child got their kitty back. What it isn’t going to be is fair and balanced.
You can call this a ‘cutesy human interest story’ if you come from a background where being socialist is obviously great, but even then I don’t buy it because the purpose of this is to be used as ammunition in within-ingroup arguments to try and show one’s adherence to party lines. It’s not to try and convince any outgroup members because, as Dan Quayle famously put it and is quoted later in Scott’s post, no one was fooled.
Such people gave The Washington Post clicks, as did Scott here. The author showed their loyalties and ability to produce viral content of similar nature. Missions accomplished.
But the question I have is: What makes the rules observed here different from the rules elsewhere?
My answer to that is nothing. The rules are the same.
This is exactly the level of misleading one should expect, at a minimum, on a ‘how and in which way do Very Serious People want me to be worried this week about Covid-19.’ Or on a post about how an election (was / was not) stolen. This is exactly the level of misleading I expect any time there is a narrative and an interest in pushing that narrative.
In fact, I’d call this an excellent example of where the line used to be. The line used to be exactly here. You could do this. You couldn’t do more.
The difference is that people are increasingly doing somewhat more than this. That’s why we had to go through the steps earlier with the hypothetical shootings at Yankee Stadium. If 2012-media from any side tells me there’s a mass shooting at Yankee Stadium, I believe them, full stop, we don’t need the other supports. That’s specific enough. Today, it’s not enough, and we need to stop and think about secondary features.
It is often said that if you read an article in a newspaper about the field you know best it will make statements that are about as accurate as ‘wet ground causes rain,’ and you should then consider that maybe this isn’t unique to the field you know best. That certainly matches my experience, and that’s when there isn’t an obvious narrative agenda involved. When there is, it’s a lot worse.
Scott’s attempt to draw the distinction that expert historians specifically into Marx and Lincoln are not known to be saying nice things about this article feels like ad hoc special pleading, a kind of motte/bailey on what contextually counts as an expert. It also isn’t relevant, because ‘praise’ is not vouching even for its not-outright-lying status let alone its not-lying-by-implication status. Under the model, ‘praise’ is unprincipled, cannot be falsified, and thus doesn’t imply what Scott is suggesting it does, and mostly is only evidence of what is in the Narrative.
Scott notices that he never expected any of this to check out under scrutiny, because stories like this are never true, and certainly there were overdetermined contextual clues to allow that sort of conclusion even before the takedown. With the takedown, it’s trivial.
The 2020 Election
A conservative might end up in the same position vis-à-vis the Washington Post as our hypothetical liberal and FOX News. They know it’s a biased source that often lies to them, but how often?
Here’s a Washington Post article saying that the 2020 election wasn’t rigged, and Joe Biden’s victory wasn’t fraudulent. In order to avoid becoming a conspiracy theorist, the conservative would have to go through the same set of inferences as the FOX-watching liberal above: this is a terrible news source that often lies to me, but it would be surprising for it to lie in this particular case in this particular way.
I think smart conservatives can do that in much the same way smart liberals can conclude the FOX story was real. The exact argument would be something like: the Marx article got minimal scrutiny. A few smart people who looked at it noticed it was fake, three or four people wrote small editorials saying so, and then nobody cared. The 2020 election got massive scrutiny from every major institution.
To be safe, I’ll reiterate up front that I am very confident the 2020 election was not rigged. But I didn’t get that confidence because liberal media sources told me everything was fine, I got it because I have a detailed model of the world where there’s lots of strong evidence pointing in that direction. That and the stakes involved are why I broke my usual no-unnecessary-politics rules in the post after the election was clearly decided to be very explicit that Biden had won the election – it was a form of cashing in one’s One Time in a high-leverage moment, bending one’s rules and paying the price.
As I write this, I haven’t yet looked at the WaPo article so I can first notice my expectations. My expectation is that the WaPo article will have a strong and obvious agenda, and that it will be entirely unconvincing to anyone who hadn’t already reached the conclusion that the 2020 election wasn’t rigged, and will primarily be aimed at giving people a reference with which to feel smug about the stupid people who were ‘fooled by the Big Lie’ and think the 2020 election was rigged.
Notice that Scott’s argument rests here on the difference between the election article and the Marx article. The Marx article should not be believed. But I notice that I expect both articles to be following the same standards of evidence and honesty. Whoops.
Enough preliminaries. Time to click and see what happens.
We can start with the headline. As we’ve established, the headline is always bullshit.
Guess what? There (still) wasn’t any significant fraud in the 2020 presidential election.
So that’s a really strange turn of phrase, isn’t it? That still?
I mean, what would it mean for there not to have been fraud in the 2020 presidential election at some point in the past, looking back on the election, but for that to have changed, such that there now has been fraud where previously there had not been?
Either there was fraud or there wasn’t fraud. There’s no way for that answer to change retroactively, unless the fraud took place in the interim, which isn’t anyone’s claim. So the mentality and model behind this headline is saying that whether there was fraud is somehow an importantly different claim than whether or not someone did a fraudulent thing at the time.
Instead, it’s about what the current narrative is. The current narrative is that there wasn’t fraud. The past narrative is that there wasn’t fraud. Thus, there (still) wasn’t any fraud, because ‘there was fraud’ means ‘the narrative contains there being fraud.’
One can make claims about what is or is not in the narrative, under this lexicon, but there isn’t an obvious combination of words that says whether or not someone did a physical act, only whether or not someone is generally said to have committed that act.
In other words, under this system, if I ask ‘was there significant fraud in the 1960 presidential election?’ I am asking whether the narrative says there was such fraud. And therefore the answer could be ‘no’ one day and ‘yes’ the next and then go back to ‘no’ based on what those who control the narrative prefer.
More charitably, one could interpret this as ‘there is still no evidence for’ (which is always false; there’s never no evidence of anything) or ‘there is still overwhelming evidence against’ (which is both stronger and has the benefit of being true), cut down because headlines have limited space, and conclude that I’m reading too much into this.
I don’t think so. The headline could have read “Evidence Still Overwhelmingly Says No Significant Fraud in 2020 Election” and been shorter. This was a choice. I think this headline is smug and has the asshole nature and makes it clear that this is how words are supposed to work and that none of this is an accident.
Let us begin.
It’s been more than a year since the 2020 presidential election ended according to the calendar, though, according to the guy who clearly and unquestionably lost that election, Donald Trump, things are still up in the air. For 400 days, Trump has been promising sweeping evidence of rampant voter fraud in that election. It’s eternally just around the corner, a week away. Two. It’s his white whale and his Godot. It’s never secured; it never arrives.
Yeah, that’s all going to get past a fact checker as defensible things to say, but: Who is the intended audience here? Who is this trying to inform? If you are reading this while previously believing the election was stolen, do you keep reading? Given the headline, what’s the chance you even got that far?
The entire article is written like this, with the baseline assumption that Trump is lying in bad faith and that claims of fraud are illegitimate.
Let’s push through the fact that the whole thing has the asshole nature and has no interest in providing anything but smugness while pretending to be #Analysis, and look at what the actual claims are that one might ‘have to be a conspiracy theorist’ not to believe, since that’s the core question, except man they make it hard.
Yet there he was, offering the same excuse once again when asked by the Associated Press. The occasion was AP’s exhaustive assessment of the 2020 election in which they uncovered fewer than 500 questionable ballots. Questionable! Not demonstrably fraudulent, but questionable. But Trump, never bound to reality, waved it away.
If you’re going to put lines like ‘Trump, never bound to reality’ into your statement, it’s really hard to complain that people on the other side aren’t viewing you as a credible source. You’re spouting obvious nonsense for the sole purpose of delivering snappy one-liners and then wondering why those who think the target of that putdown should be in the White House aren’t updating on your factual claims.
I mean, they end on this:
On the other hand, we have a guy who was documented as having said false things tens of thousands of times while serving as president continuing to insist that proof of wide-scale fraud is just around the corner.
But if you asked me to find tens of thousands of times the Washington Post has said that which was not, via a similar standard, do you think it would be hard?
We’re agreed that they lie all the time. And Scott is making the Bounded Distrust argument that this doesn’t much matter. That argument would need to apply equally to Donald Trump. And it seems like it does, in the sense that there are some statements he makes and I think he’s probably giving me new true information, and other times he makes statements and I don’t think that, and there’s a kind of concreteness that’s a large part of the distinction there.
Trump is indeed sometimes bound to reality, and also often plays the game by exactly these rules. ‘Many people’ are saying X, you see, a lot of people, very good people. But not Trump, not directly, because those are the rules of the game. Similarly, when Cohen testified he noted Trump being very careful with his word choices in private conversations, for similar (legal) reasons. Trump will also outright lie, of course, but he is a politician and a real estate developer, so please don’t act quite so surprised and outraged. And he too is playing a game of this type and will choose when the price is too high and when it isn’t. The only difference is that he managed to ‘make a deal’ and thus pays lower prices. So he buys more and more brazen falsehoods, and occasionally he picks a falsehood that feels some combination of worthwhile and true to him and decides to double down on it.
Which is, as everybody knows, the rule for politicians. Who, with notably rare exceptions, will lie, to your face, all the time, about actual everything.
It’s worth noting that the linked-to report from Wisconsin, also in WaPo, was better on many dimensions, not perfect but definitely coming from a world in which there is more focus on physical world modeling and less on narrative.
When I focus purely on the facts from this article that seem like sufficiently detailed non-trivial physical claims that they have content that we could potentially rely upon, and edit to take out all the dripping contempt and hatred, what’s left is this.
- The AP’s assessment of the 2020 election uncovered 473 questionable ballots.
- “He said a soon-to-come report from a source he would not disclose would support his case,” the AP reported Trump saying. Trump did respond with: “I just don’t think you should make a fool out of yourself by saying 400 votes.”
- A total of 25.6 million ballots were cast in the states analyzed by the AP.
- We have multiple state-level reviews conducted by Trump allies suggesting that the vote totals in contested states were legitimate. There has been no person who has stepped forward and admitted participation in any sort of scheme to throw the election, and no discovery of rampant, coordinated fraud save for an effort to cast ballots in Macomb County, Mich., that constitutes most of AP’s total from that state — an effort that didn’t actually result in ballots being counted. And then there’s AP’s broad analysis of the vote in all six states that found only piecemeal problems.
Here’s an important thing not on the list:
It often takes a while for states and counties to adjudicate dubious ballots. It’s a lengthy process, matching cast votes with actual voters. But counties have a sense now of how often votes might have been cast illegally. In sum: fewer than 500.
Because that, you see, is allowed to be functionally false, and also actually is functionally false, conflating different numbers at least three times.
It’s conflating the ballots cast in the states analyzed by the AP – 25.6 million – with the combined number of ballots cast nationwide, which was about five times that number. Whereas the AP analyzed only a subset of those 25.6 million ballots. And it is then implicitly stating that there is zero chance that any ballot not viewed as suspicious by the AP could have been cast illegally. While the chance of any given ballot cleared by the AP having been cast illegally is very low, there are ways to do this that would not show up on the ballot itself, and that would not have been detected.
When you’re willing to make this level of misstatement about the core question at issue, it makes it that much harder to know where you can still be credible.
Essentially what this is saying is:
- The number I’m giving you comes from somewhere.
- It doesn’t have to be the thing you naturally think it is.
- That number can represent a subset or otherwise be heavily misleading.
The window of what we are forced to treat as real keeps narrowing.
The original version of our Fact Statement #3 was, in fact, this:
Even if every one of those 473 cases was an actual example of fraud, it’s out of a total of 25.6 million cast ballots.
Which implies that those 25.6 million ballots were all analyzed as part of the AP’s work. They weren’t. I had to realize this and back-edit to fix that.
Clicking through to the AP article provided clarity on many things, but still the whole thing boils down to whether or not you trust the Associated Press to do an investigation like this. I don’t think it makes you a ‘crazy conspiracy theorist’ to think that the AP was not, via this method, going to detect all or even most potential forms of fraud that might have taken place.
If I imagine to myself that Omega (a hypothetical omniscient omnipotent always fully honest entity) told me the election was fraudulent somehow, and then I’m told that the AP report is about to come out, I notice I still expect the AP report not to find anything. If there was anything that they would have been forced to find that way, someone else would have already found it. The AP doesn’t have to lie to simply not notice things, and given who they are I expect them to be very good at not noticing.
So all of this boils down to this:
- Liberal sources continue to push narrative that there was no significant fraud.
- Liberal sources continue to push narrative that all specific physical claims of significant fraud have been debunked.
- Trump continues to promise that he’ll come up with evidence Real Soon Now.
- The evidence is not currently present, because otherwise Trump would say so.
That’s true as far as it goes.
And you know what? Points three and four are actually really super strong evidence that no one has this kind of concrete evidence of fraud. It’s the kind of very specific claim – that Trump is not saying X – that if false would be rapidly exposed as false because Trump would be very clear he was doubling down on X and this would be reported.
Thus, when Scott says this:
In order to avoid becoming a conspiracy theorist, the conservative would have to go through the same set of inferences as the FOX-watching liberal above: this is a terrible news source that often lies to me, but it would be surprising for it to lie in this particular case in this particular way.
I say no, absolutely not. The article in question is saying a mix of that which is, and that which is not, and a lot of that which is but is mostly designed to imply that which is narratively convenient without regard to whether or not it is true.
You can reasonably argue that there are particular statements within the article that one can be highly confident from context are being stated accurately. But one can accept those particular statements without it forcing one to accept that the election wasn’t stolen. There’s no logical incompatibility here.
Part of what’s going on is that this ‘conspiracy theorist’ label is a threat being used, and you have to do things to avoid being labeled that way. In particular, you need to notice what everybody knows is a conspiracy theory right now, and avoid advocating for it. If that changes and something (for example, the lab leak hypothesis or UFOs) stops being considered a conspiracy theory, you can then switch your tune.
Things like this Washington Post article tell us nothing we don’t already know. All of the work is relying on this line and similar logic:
The 2020 election got massive scrutiny from every major institution.
The core argument is that the absence of evidence is, in this context with this many people looking this hard to find something, and with the magnitude of the necessary efforts to pull this off and the resulting amount of evidence that would be available to be found, and the number of people who could potentially talk, very strong evidence of absence. The amount of ‘evidence of fraud’ that was found is about the amount you’d expect to find if there was no significant fraud and this level of effort; if anything, it’s less than that. It’s surprising that, even with nothing to find, nothing more suspicious than this could be found.
One could say that the liberal media would suppress such findings, and no doubt some parts of it would attempt to do so if such findings arose, but there are enough media sources on the other side that we need not worry much about such suppression happening without being noticed.
The liberal media could and did essentially use their One Time on Donald Trump in various ways, and paid the price in future credibility for doing so, but even with that it wouldn’t have been enough to sell us a centrally fraudulent election in a way that couldn’t be noticed.
All of the claims of fraud were even politely registered in advance as claims that would be made no matter what if the wrong side won, so they’re actually zero evidence of anything except in their failure to be better substantiated. Whereas if there was big fraud, we would almost certainly know. And the reason for that is that the ones claiming fraud realized that the distrust of institutions was no longer sufficiently bounded to convince people not to believe such fraud claims, so there was no incentive not to make the claims regardless of the degree of fraud.
Combine that with a reasonable prior, and you get extremely high confidence of no fraud.
What you don’t get is especially bounded distrust in the media sources involved.
As I was writing this, Marginal Revolution linked to this excellent post about why the USA is unlikely to face civil war. Among other things, it notices that various measurements of America’s democracy were altered to make Trump look scary in ways that don’t make any sense. Then America’s past was retroactively made worse to make it consistent with the ratings they gave for modern America, to make Trump look maximally bad and also get in digs on the outgroup while they were at it. You can fairly say once again, blah blah blah, none of that is specific actual physical world falsifiable claims, so of course all such things were pure political propaganda; physical world falsifiable claims are different. But this kind of thing is then cited as a ‘source’ to back up claims and sounds all scientific and tangible even though it’s not. It’s an example of something that wouldn’t have happened twenty years ago (as in, they went back and did it retroactively because the old ratings were too fair), so it’s an example of the war on memory in such matters and also of the decay of the bounds of distrust. And also, these are ‘experts’ giving their opinions, so now ‘experts’ who aren’t giving physical world falsifiable (in practice, not in theory) claims need to also be ignored by that standard.
Basically, I’m saying no, you can’t evaluate any of this by saying ‘look at all these experts’ and ‘look at all these institutions’ without also using your brain to think about the situation, the counterfactuals and the likelihood ratios of various observations and applying something that approximates Bayes Rule.
I also am pretty sure that Scott did exactly the thing where you at least implicitly calculate a bunch of likelihood ratios of various observations and applied Bayes Rule and came to the same conclusion as everyone else who did this in good faith in this case.
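The likelihood-ratio reasoning gestured at here can be written out explicitly. As a sketch only (the specific probabilities are illustrative placeholders, not claims from either post), the update on observing ‘massive scrutiny found essentially nothing’ takes the odds form of Bayes Rule:

$$
\frac{P(\text{fraud} \mid \text{nothing found})}{P(\text{no fraud} \mid \text{nothing found})}
\;=\;
\underbrace{\frac{P(\text{nothing found} \mid \text{fraud})}{P(\text{nothing found} \mid \text{no fraud})}}_{\text{likelihood ratio}}
\;\times\;
\frac{P(\text{fraud})}{P(\text{no fraud})}
$$

With that many motivated people looking that hard, $P(\text{nothing found} \mid \text{fraud})$ is tiny while $P(\text{nothing found} \mid \text{no fraud})$ is close to one, so the likelihood ratio is far below one and even a non-trivial prior on fraud collapses to a very small posterior. None of this requires trusting any particular media source; it only requires a model of what evidence would exist and be findable under each hypothesis.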
Scott tells this story.
According to this news site, some Swedish researchers were trying to gather crime statistics. They collated a bunch of things about different crimes and – without it being a particular focus of their study – one of the pieces of information was immigration status, and they found that immigrants were responsible for a disproportionately high amount of some crimes in Sweden.
The Swedish establishment brought scientific misconduct cases against the researchers (one of whom is himself “of immigrant background”). The first count was not asking permission to include ethnicity statistics in their research (even though the statistics were publicly accessible, apparently Swedish researchers have to get permission to use publicly accessible data). The second count was not being able to justify how their research would “reduce exclusion and improve integration.”
It counts as ‘scientific misconduct’ for you to not be able to justify how your research would ‘reduce exclusion and improve integration.’
Which is odd.
It means it is official policy that wrongfacts are being suppressed to avoid encouraging wrongthink and wrongpolicy.
It also means that we can no longer have a thing called ‘scientific misconduct’ that one can use to identify sources one cannot trust, since that now could refer to wrongfacts. If someone says ‘that person is accused of scientific misconduct’ I need to be very careful to get the details before updating, and if I don’t I’m effectively reinforcing these patterns of censorship.
But, Scott says, scientists have the decency to accuse them of misconduct for failure to reduce exclusion. This has the benefit of making it clear that this is an act of censorship and suppression rather than that the scientists did something else wrong, for anyone paying attention. If the claims were false, the scientists cracking down on wrongfacts would say the facts in question were wrong. By accusing someone of saying wrongfacts but not saying the wrong facts are wrong, you’re essentially admitting the wrongfacts are right. So this gives you, in this model, something to go on.
I believe that in some sense, the academic establishment will work to cover up facts that go against their political leanings. But the experts in the field won’t lie directly. They don’t go on TV and say “The science has spoken, and there is strong evidence that immigrants in Sweden don’t commit more violent crime than natives”. They don’t talk about the “strong scientific consensus against immigrant criminality”. They occasionally try to punish people who bring this up, but they won’t call them “science deniers”.
Let me tell you a story, in three acts.
- All masks don’t work unless you’re a health professional.
- All masks work.
- Cloth masks don’t work.
At each stage of this story, scientists got on television to tout the current line. At each stage of this story, the ‘science denier’ style labels got used and contrary views were considered ‘dangerous misinformation.’
Yes, we did learn new information to some extent, but mostly we knew the whole story from the beginning and it’s still true now. Cloth masks are substantially better than nothing, better masks are much better. Also the super-masks like P100s (or the true fashion statements that work even better) are far better than N95s and you’re basically never allowed to mention them or advocate for mass production. And yeah, we all knew this back in March of 2020, because it’s simple physics.
I could also tell you a story about vaccines. Something like this:
- Vaccines are being rushed.
- Vaccines are great and even prevent all transmission and you’re all set.
- Vaccines are great but you still have to do all the other stuff, and also you need a booster even if you’re a kid, unless you’re in one of the places where that’s illegal. But only the one booster, definitely, that’s all.
And that’s entirely ignoring the side effect issue.
Once again, yes, you could say that the information available changed. On boosters, I’m somewhat sympathetic to that, and of course Omicron happened, but don’t kid yourself. Motivations changed, so the story changed.
Then there’s the lab leak hypothesis. And the other lab leak hypothesis.
Then there’s social distancing and ‘lockdowns’ and protests where the scientists declared that social justice was a health issue and so the protests weren’t dangerous. Which are words that in other contexts have meaning.
Then there’s the closing of the schools and remote learning and telling us masks and the other stuff isn’t doing huge damage to children.
Then there’s travel restrictions.
There’s the WHO saying for quite a long time that Covid isn’t airborne.
There are the claims early on of ‘no community spread’ while testing was being actively suppressed via the CDC requiring everyone to use only its tests when it knew they didn’t work.
There’s Fauci saying we’d get to herd immunity at one number, then saying that when we’d made enough progress on vaccination he felt free to increase the number a bit more, indicating he didn’t care about what the real number was. And he wasn’t alone.
And so on.
And in each case, the relevant ‘expert’ people who are wearing official ‘trust the science’ lapel pins explicitly lied, over and over again, using different stories, right to our f***ing faces. While arranging for anyone who disagrees with them to be kicked off of social media or otherwise labeled ‘dangerous misinformation.’ Then they lied and said they didn’t change their story.
So when we say that scientists ‘don’t lie directly’ we need to narrow that down a bit.
Can we say ‘don’t lie directly about specific actual physical world falsifiable claims?’
I mean, no. We can’t. Because they did and they got caught.
There’s still some amount of increasing costs to increasingly brazen misrepresentations. That’s why, in the Swedish example, we don’t see direct false statements to deny the truth of the claims made. The claims made are too clearly true according to the official statistics, so opening up yourself like that would only backfire. But that’s a tactical decision, based on the tactical situation.
This is, as Scott says, a game with certain rules. But not very many.
If there is a published paper or even pre-print in one of many (but not all) jurisdictions, I mostly assume that it’s not ‘lying about specific actual physical world falsifiable-in-practice-if-false claims.’
Mostly. And that’s it. That’s all I will assume about the paper.
I will not assume it isn’t p-hacked to hell, that it has any hope of replication, that anything not explicitly mentioned was done correctly, that the abstract accurately describes the methodology or results, that their discussion of what it means is in good faith, or anything else, except where the context justifies it. I may choose to do things like focus on the control variables to avoid bias.
Outside of the context of an Official Scientific Statement of this type, even more caution is necessary, but mostly I still would say that if it’s something that, if false, I could prove was false if I checked then the scientist will find a way to not quite say the false thing as such.
So yeah, anthropogenic global warming is real and all that, again we know this for plenty of other good reasons, but the reasoning we see here about why we can believe that? No.
And that suggests to me that the fact that there is a petition like that signed by climatologists on anthropogenic global warming suggests that this position is actually true. And that you can know that – even without being a climatologist yourself – through something sort of like “trusting experts”.
This is not the type of statement that we can assume scientists wouldn’t systematically lie about. Or at least, it’s exactly the type of statement scientists will be rewarded rather than punished for signing, regardless of its underlying truth value.
That’s mostly what the petition tells you. The petition tells you that scientists are being rewarded for stating the narrative that there is anthropogenic global warming. And they would presumably be severely punished for saying the opposite.
Both these statements are clearly true.
The petition does not tell you that these people sincerely believe anything, although in this case I am confident that they mostly or entirely do. It definitely does not tell you that these people’s sincere beliefs are right, or even well-justified, although in this case I believe that they are. This kind of petition simply does not do that at this time. Maybe we lived in such a world a while ago. If so, we live in such a world no longer.
But why am I constantly putting in those reminders that I am not engaging in wrongthink? Partly because I think the wrongthink is indeed wrong and I want to help people have accurate world maps. Partly to illustrate how ingrained in us it is that there is wrongthink and rightthink and which one this is, and that this petition thus isn’t providing much evidence. And partly, because I really don’t want to be taken out of context and accused of denying anthropogenic global warming and have that become a thing I have to deal with and potentially prevent me from saying other things or living my life. Or even have to answer the question three times in the comments. And while I don’t think I was in any danger of all that here, I can’t be sure, so better safe than sorry.
In my case, if I believed the local wrongthink, I would avoid lying by the strategy of being very very quiet on the whole topic because I wouldn’t want to cash in this type of One Time on this particular topic and risk this being a permanent talking point whenever my name comes up. Wouldn’t be Worth It.
Others are surely thinking along similar lines, except not everyone has the integrity and/or freedom to simply say nothing in such spots. In any case, no, the petition did not tell me anything I did not already know, nor do I expect it to convince anyone else to update either.
Then Scott goes on to say this.
(before you object that some different global-warming related claim is false, please consider whether the IPCC has said with certainty that it isn’t, or whether all climatologists have denounced the thing as false in so many words. If not, that’s my whole point.)
So it sounds like the standard is specifically that the IPCC does not make statements that false things are definitely true. Whereas if ‘some climatologists’ make such claims, that’s unsurprising. So when enough scientists of various types go around saying we are literally all going to die from this and manage to convince a large portion of an entire generation to think they are so doomed they will never get to grow old, we can’t even treat that as evidence of anything, let alone call them out on that, because the IPCC hasn’t specifically said so. I mean, I checked and they don’t appear to have said anything remotely similar.
Yet I don’t see them or any other ‘experts’ standing up to boldly tell everyone that yes we have much work to do but maybe we can all calm down a bit. And maybe we should avoid the overselling because it will cause people to think such ‘experts’ can’t be trusted. Whereas I see other ‘experts’ adding fuel to this fire, presumably because they think that only by getting people into that level of panic can they get people to actually do something. A potentially noble motive to be sure, depending on details and execution, but not exactly the names you can trust.
Some people wonder how so many people could not Trust the Science™ in such matters. I don’t wonder about that.
Nor do I think this is the reason Scott believes in AGW. Does Scott look like the type of person who says ‘oh all these experts signed a statement so I’m going to believe this important fact about the world without checking?’ No. No he does not. Scott is the type of person who actually looked at the evidence and evaluated what was going on for himself, because that’s what Scott does and the only mystery is how he does so much of it so quickly. Even for me, and by not-Scott ordinary-human standards I do a lot of analysis very quickly.
Ivermectin One Last Time Oh Please God Let This Be The Last Time
Last year I explained why I didn’t believe ivermectin worked for COVID. In a subsequent discussion with Alexandros Marinos, I think we agreed on something like:
1. If you just look at the headline results of ivermectin studies, it works.
2. If you just do a purely mechanical analysis of the ivermectin studies, eg the usual meta-analytic methods, it works.
3. If you try to apply things like human scrutiny and priors and intuition to the literature, this is obviously really subjective, but according to the experts who ought to be the best at doing this kind of thing, it doesn’t work.
4. But experts are sometimes biased.
In the end, I stuck with my belief that ivermectin probably didn’t work, and Alexandros stuck with his belief that it probably did. I stuck with the opinion that it’s possible to extract non-zero useful information from the pronouncements of experts by knowing the rules of the lying-to-people game. There are times when experts and the establishment lie, but it’s not all the time. FOX will sometimes present news in a biased or misleading way, but they won’t make up news events that never happen. Experts will sometimes prevent studies they don’t like from happening, but they’re much less likely to flatly assert a clear specific fact which isn’t true.
I think some people are able to figure out these rules and feel comfortable with them, and other people can’t and end up as conspiracy theorists.
A conspiracy theorist, officially now defined as anyone believing the Official Lying Guidelines are more flexible than you think they are (see: everyone driving slower than me is an idiot, anyone driving faster than me is a maniac).
Scientists engaging in systematic suppression of Ivermectin trials via various tactics? Well, of course. Scientists making certain specific kinds of false statements that go against the ‘rules’? Conspiracy theory. Even though the rules keep loosening over time, and sometimes some things labeled ‘conspiracy theory’ turn out true, and also many things labeled ‘conspiracy theory’ don’t actually even require a conspiracy, that’s just a way of dismissing the claims.
Scott wrote a long post about Ivermectin. In that post, did Scott rely on ‘experts’ to evaluate the various papers? No, he most certainly did not. Scott actually looked at the papers and considered the evidence on each one and made decisions and then aggregated the data. And then, after all that, he took a step back, looked holistically at the situation, found it best matched a hypothesis from Avi Bitterman (worms!) and went with it, despite no ‘experts’ having endorsed it, and then a lot of people went ‘oh yeah, that makes sense’ and adopted the conclusion, which is how this works, is exactly how all of this works, that’s Actual Science rather than Science™.
As in, yeah, step three above is true, the ‘experts’ definitely reach this conclusion. But also we looked at exactly why those experts got to that conclusion, and story checks out. Also Scott looked in detail himself and got a more interesting but fundamentally similar answer.
Yes, experts are sometimes biased, if you’re being charitable, or ‘engaged in an implicitly coordinated suppression of information in conflict with the current narrative’ if you’re being more realistic. Also, sometimes they’re simply wrong, they have limited information to work with and limited cognition and lousy incentives and lives and this whole science thing is hard, yo. That’s why Scott had to spend countless hours doing all that work for himself rather than ‘Trusting the Science™.’ Which looks a lot different than ‘the experts wouldn’t lie about this particular thing so of course Ivermectin doesn’t work.’
I mean, the experts still haven’t come around to the Vitamin D train, so ‘the experts aren’t impressed by the evidence’ isn’t exactly what I’d think of as a knock-down argument against non-risky Covid treatments.
Also, remember the rules that Scott mostly agrees upon. The scientists aren’t allowed to say anything provably false, but they are allowed to suppress studies and other information they don’t like by making isolated demands for rigor.
Which is exactly what Alexandros claims they are doing. I can confirm this more generally because I spent a bunch of time talking to him as well. Then, in Alexandros’ model, having raised enough FUD (fear, uncertainty and doubt) around the studies in question, and using that to cast doubt on any that they couldn’t do hit jobs on, they go and say ‘no evidence’ which is a standard accepted way to say that which is not, and that’s that. You don’t even have to tell the scientists explicitly to do that because they notice the narrative is that Ivermectin is outgroup-branded and doesn’t work, and that’s that. In all my conversations with Alexandros, I can’t remember him ever claiming any scientist outright lied in the way Scott says they don’t lie. His story in no way requires that.
Which, again, is why Scott had to spend all that time looking himself to know for sure.
Once again, I agree with Scott on the bottom line. As far as I can tell, Ivermectin doesn’t work.
But once again, I don’t think Scott’s stated algorithm is a good one, although once again I happily don’t think Scott is using his stated algorithm in practice. I think he’s mostly using mine, with the main difference being that I think he hasn’t sufficiently adjusted for how much the goalposts have been moved.
The real disagreement between Scott and Alexandros here is exactly that. Alexandros thinks that scientists suppressed Ivermectin using arguments they would have been able to successfully make in exactly the same way whether or not Ivermectin worked. Thus, he claims that those arguments provide no evidence against Ivermectin, whereas there is other evidence that says Ivermectin works. Scott thinks that there are enough hints in the details and rigor of the arguments made that yes, they constitute real and strong evidence that Ivermectin does not work.
More likely, Scott noticed that the people pushing for Ivermectin were part of the Incorrect Anti-Narrative Contrarian Cluster who also push a bunch of other anti-narrative things that are not true, rather than part of the Correct Contrarian Cluster (CCC). There weren’t people who otherwise were playing this whole game correctly but also happened to buy the evidence for Ivermectin. Whereas those who advocated for Ivermectin were reliably also saying vaccines were dangerous or ineffective, and other anti-Narrative claims that were a lot less plausible than Ivermectin, usually along with a bunch of various assorted obvious nonsense.
Which in turn meant that, when one did look at the evidence, the cognitive algorithms that caused one to support Ivermectin were algorithms that also output a lot of obvious nonsense, and that were functioning to align with and appeal to an audience holding that uniform set of obvious nonsense beliefs. When something in that group gets investigated, it turns out to be nonsense or in violation of one of the sacred Shibboleths. So it may be completely unfair, and a potentially exploitable strategy, but as a Bayesian, when one sees something in that cluster that doesn’t violate an obvious sacred Shibboleth, it is safe to presume it is nonsense. And if it does violate a Shibboleth, then hey, it’s violating a Shibboleth, so tread carefully.
One can (and whether one realizes it or not, one does to some extent) use it in the climate change example, noticing that full denial of climate change is very much part of the Incorrect Anti-Narrative Contrarian Cluster (ICC), while also noticing that moderate positions are conspicuously not in the ICC but rather in the CCC.
Of course, that’s a level of attention paying and reasoning that’s in many ways harder than doing the core work oneself, but it’s also work that gets done in the background if you’re doing a bunch of other work, so it’s in some sense a free action once you’ve paid the associated costs.
One must of course be very very careful when using such reasoning, and make sure to verify if the questions involved are actually important. If you treat the CCC as true and/or the ICC as false then you are not following the algorithm capable of generating the CCC or rejecting the ICC. I mean, oh yes, this is all very very exploitable, as in it’s being exploited constantly. Often those trying to suppress true information will try to tar that information by saying that it is believed by the ICC. Although they are rather less polite and very much do not call it that.
But although all this did cause Scott to have a skeptical prior, Scott makes it clear that he came into his long analysis post not all that convinced. Hence the giant post looking into it all himself.
I also notice that Scott didn’t choose any examples where the narrative in question is centrally lying to us, so it’s hard to tell where he thinks the border is, until the final note about the harvest.
Scott’s next argument is that our Official Narrative Pronouncements can be thought of as similar to Soviet pronouncements, like so.
But also: some people are better at this skill than I am. Journalists and people in the upper echelons of politics have honed it so finely that they stop noticing it’s a skill at all. In the Soviet Union, the government would say “We had a good harvest this year!” and everyone would notice they had said good rather than glorious, and correctly interpret the statement to mean that everyone would starve and the living would envy the dead.
Imagine a government that for five years in a row, predicts good harvests. Or, each year, they deny tax increases, but do admit there will be “revenue enhancements”. Savvy people effortlessly understand what they mean, and prepare for bad harvests and high taxes. Clueless people prepare for good harvests and low taxes, lose everything when harvests are bad and taxes are high, and end up distrusting the government.
Then in the sixth year, the government says there will be a glorious harvest, and neither tax increases nor revenue enhancements. Savvy people breathe a sigh of relief and prepare for a good year. Clueless people assume they’re lying a sixth time. But to savvy people, the clueless people seem paranoid. The government has said everything is okay! Why are they still panicking?
The savvy people need to realize that the clueless people aren’t always paranoid, just less experienced than they are at dealing with a hostile environment that lies to them all the time.
And the clueless people need to realize that the savvy people aren’t always gullible, just more optimistic about their ability to extract signal from same.
I mean the clueless people aren’t exactly wrong. The government is still lying to them in year six, in the sense that the harvest is unlikely to be what you or I would call ‘glorious,’ and they will doubtless find some other ways to screw the little guy that aren’t taxes or revenue enhancements.
But if that’s all it is, then the point is essentially correct. There are rules here, or rather there are incentives and habits. The people are responding to those incentives and habits.
That doesn’t mean the ‘savvy’ position is reliable. Being savvy relies on being unusually savvy, and keeping track of how far things have moved. Every so often the goalposts get moved: you think you know what ‘good’ or ‘glorious’ means, but you’re using the old translation matrix, and now you’re wrong. Often that’s because people noticed the translation matrix others were using and wanted to control the output of that matrix.
Those rules are anti-inductive, in the sense that they depend on the clueless remaining clueless. If the clueless did not exist, then the statements stop serving their purpose, so they’d have to ramp up (or otherwise change) the translation system. At some point, the government cashes in a One Time to say ‘glorious’ instead of ‘good,’ the living still envy the dead, and now if the system keeps surviving ‘glorious’ means ‘the living will envy the dead’ and ‘legendary’ means we will get to put food on the table this year. Then at some point they cash that in too, and so on. In other less centralized contexts, this word creep is continuous rather than all at once.
Then at some point the translation system resets and you start again, with or without the system of power underlying it collapsing. One way for this to happen is if ‘glorious’ already means ‘the living will envy the dead’ and I say ‘lousy’ then that can’t be intended to be translated normally, so I might actually honestly mean lousy without thinking the living will envy the dead, and so the baseline can reset.
But if you play this game, you by construction have to lose a large percentage of the people who will be confused what you’re doing. It’s designed to do that. One can’t then look at the clueless and tell them to get a clue, because there’s a fixed supply of clues.
If the system is distributed rather than centrally determined, and it’s a bunch of people on social media running around labeling things as other things, then you see a gradual ramping up of everything over time as people adjust expectations and get wise to the game, or as the Narrative’s forces win battles to expand their powers and then launch new attacks on the opposition. If I want to say something is glorious I have to be two steps ahead of whatever I view as the ‘standard’ description. Other similar dynamics exist in other places where word meanings can be changed or expanded over time, because those words serve purposes.
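The word-creep dynamic can be caricatured as a translation matrix the savvy keep updated while the clueless read words literally. A toy sketch, with the scale and all mappings invented for illustration:

```python
# Toy model of an anti-inductive translation matrix (all mappings invented).
# Official praise inflates over time; the savvy reader translates it back down.
official_scale = ["catastrophic", "bad", "fair", "good", "glorious", "legendary"]

def translate(word: str, creep: int) -> str:
    """Map an official word to its real meaning after `creep` rounds of inflation."""
    i = official_scale.index(word)
    # Each round of creep devalues the stated word by one notch, floored at the bottom.
    return official_scale[max(0, i - creep)]

print(translate("glorious", creep=0))  # clueless, literal reading: "glorious"
print(translate("glorious", creep=2))  # savvy reading after two rounds of creep: "fair"
```

The anti-inductive part is that `creep` itself is the contested quantity: once enough readers apply the correction, the incentive is to inflate the language another notch, so any fixed matrix goes stale.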
Bounds, Rules, Norms, Costs and Habits
Scott views bounded distrust as a game with rules and lines. There are some lines you mostly obey but sometimes cross at a price, and some lines you don’t cross.
I’d modify that to say that there mostly aren’t lines you simply do not cross. There are only lines that are expensive to be caught crossing when similar others are not also caught crossing them.
This is a variant of having correlated debts, or losing money in the same way those around you lose money. You mostly only get punished for getting singled out as unusually bad. Thus, the more you are pushing the same lies as others and breaking the same rules, especially as part of The Narrative, the more you are effectively protected, and thus the price of breaking the rules is far lower.
When deciding what to do, various players will rely on some combination of bounds, rules, norms, costs and habits. Mostly, they’ll do whatever they are in the habit of doing, and those habits will adjust over time based on what is done, rather than thinking carefully about costs and benefits. This can also be thought of similarly as them following and over time changing the norms that are being locally and globally followed. They’ll look at the costs and benefits of following or breaking what they think of as ‘the rules’ in various ways, mostly intuitively, and decide what to do about that in context.
Centrally, most officials, news sources and ‘experts’ are looking to balance the opportunity to show their loyalty to and support for the Narrative they’re getting behind, and the rewards for doing that, against the penalties that might be extracted if they are caught getting too far out of line, and thus hammered down upon.
It is out of line to go too far and get caught, to be too far removed from the underlying physical reality in ways that can be observed or proven, and thus that weaken the Narrative and your reputation. You lose points for losing points, more than you lose points for anything else.
It is also out of line to not go far enough, and to adhere too well to what used to be ‘the rules’ rather than scoring sufficient Narrative points. One must stay on brand. This, too, is sticking one’s neck out in a dangerous way.
The combination of these factors does often mean that there is effectively a calibrated response to any given situation. The details of what is said will be an intuitively but skillfully chosen balance of exactly what claims are made with exactly what level of specificity and rigor. Thus the chosen details of what is claimed and said actually can tell you quite a lot about the underlying physical world situation, if you can remain sufficiently well-calibrated in this and maintain the right translation matrix.
If you can do that, you can observe exactly how much smackdown occurs and in exactly what way, and know whether they’re smacking down something true, something unclear or something false. The problem is that there’s lots of inputs to that matrix, so without a lot of context you’ll often get it wrong. And also the rules keep changing, so you need to keep your matrix up to date continuously.
Combining a variety of sources improves your results. Different sources, even with similar overall trustworthiness, will have different costs, both external and internal/intrinsic, and be pushing somewhat different Narratives. By observing the differences in their responses, you can learn a lot about what’s going on by asking what would make all their responses make sense at once. Exactly who falls in line and in which ways, with what levels of weaseling, is no accident.
The principle that This is Not a Coincidence Because Nothing is Ever a Coincidence will serve you well here on the margin.
What Is the Current Translation Matrix?
I’m not going to justify this here, but it seems only fair to tell you where I am at. A full explanation would be beyond the scope of this (already very long) post, hence the incompleteness warning up front.
Here’s mine for politicians:
They are on what I call simulacra level 4, and they are moving symbols around without a direct connection to the underlying reality. Mostly, presume that politicians are incapable of means-ends reasoning or thinking strategically or engaging seriously with the physical world, and what comes out of their mouths is based on a vibe of what would be the thing one would say in a given situation, and nothing more.
Assume by default that they lie, all the time, about everything, including intentionally misstating basic verifiable facts, but that to model them as even thinking on those terms is mostly an error. Also assume that when they do say that which is not, if it is within the ability and the interests of the opposition to call them out on it then they will do so, and that the politician has intuitions that consider this and its consequences somewhat when deciding how brazenly to lie. While noting that in some situations, being called out on a lie is good for you, because it draws attention to the proper things and shifts focus the way you want.
Knowing what type of vibe a politician is looking to give off is useful for anticipating what they will say, though the target vibe can change when circumstances change. Explicit promises carry non-zero weight to the extent that someone would be mad at them for breaking those promises, and that this would have felt consequences that can impact their intuitions, or other ways in which it directly constrains their behaviors.
Also assume that they will act as if they care about blame on about a two week time horizon, so the consequences of things being proven false mostly have to back-chain in time to punish them within two weeks, or no one will care.
And that’s it.
For traditional news sources like the Washington Post, CNN or FOX:
Assume until proven otherwise that they are engaging primarily in simulacra level 3 behavior, pushing the relevant Narrative and playing to and showing their loyalty to their side of the dialectic to the extent possible. Thus, subject to the constraints they are under, assume they are giving the optimal available-to-them arguments-as-soldiers (also rhetoric-as-soldiers) version of whatever thing they are offering, and calibrate based on that.
Those constraints are a very narrow form of technically correct, the best kind of correct. Or rather, a very narrow form of not technically incorrect, with something that could be plausibly held up as some sort of justification, although that justification in turn need not be verified or accurate. So you can often have a circular information cascade with no actual evidence.
Basically, if a statement or other claim is:
- A specific falsifiable claim about the physical world.
- One that could, if false, in actual practice be falsified in a way that would ‘count.’
Then it has to technically be laid out in a not false way, for example by saying that ‘source Y (or an unnamed source) said that X’ instead of X. The Marx/Lincoln story is an excellent example of exactly where this line is. Assume that like that story, everything will go exactly up to that line to the extent it is useful for them to do so, but not over it. Then, based on what content is included, you know they didn’t have any better options, and you can back-chain to understand the situation.
Like politicians, they mostly also care about blame on a two-week time horizon, so there needs to be a way for the anticipated consequences of crossing lines and breaking rules to back-chain and be visible within two weeks, or they’ll mostly get ignored.
Assume that they are constantly saying things similar to ‘wet ground causes rain’ when they want to be against wet ground, and also framing everything with maximum prejudice. Everything given or available to them will be twisted to do maximum Narrative work (and otherwise get maximum clicks) wherever possible, and analyze output on that basis. Assume that they will outright lie to their sources about what the story is about, or what information will be included, or anything else, if they find this to be useful and worth more than not burning their source. Also remember that if you are about to be a source.
Basically, yes, there is a teeny tiny sense in which they will not outright lie, in the sense that there is a Fact Checker of some kind who has to be satisfied before they can hit publish, but assume it is the smallest sense possible while still containing at least some constraint on their behavior.
Remember that any given ‘source’ can, for example, be a politician.
Remember that if the source is an ‘expert’ that means exactly nothing.
Also assume that headlines have (almost) zero constraints on them, are written by someone who really, really doesn’t care about accuracy, and are free to not only be false but to directly contradict the story that follows, and that they often will do exactly that.
If information is absent, that only means that such information would have been unhelpful and they don’t think it would be too embarrassing to simply ignore it, for which the bar is very high. They are under zero obligation to say anything they don’t feel like saying, no matter how relevant.
If there’s an editorial, there are no rules.
If it’s in any way subjective, there are no rules.
Words mean whatever the Narrative decided they mean this week.
And that’s it.
(I will note that in my experience, Bloomberg in particular does not do this, and can be trusted substantially more. There likely are also others like that, but this should be your default.)
For ‘scientists’ and ‘experts’:
If you want to find a ‘scientist’ or ‘expert’ to say any given thing, you can.
If you have some claim that fits the Narrative, then unless it is a full strict-false-and-one-could-prove-it violation, you can get lots of experts/scientists to sign off on it. So all you’re learning is that this is part of the Narrative and isn’t definitely false.
You can look at the details of the dissent and the details of what is in the petition or official Narrative statement, and exactly who conspicuously did/said or didn’t say/do what and exactly what weaseling is there, and extract useful information from that, because they’re maximizing for Narrative value without going over the strict-false line.
Mostly any given expert will have slightly more constraints than that, and will follow something similar to the news code, and will also have some amount of internal pressure that causes the vigor of endorsement to be somewhat proportional to the accuracy of the statement, but it’s also proportional to the magnitude of the Narrative pressure being applied, so one must be cautious.
The more technical the talking gets, the more you can trust it (to the extent you can understand it). There’s still some amount of dignity constraining behaviors in these ways in some places, but in other places it is mostly or entirely gone.
Also understand that the systems and rules are set up at this point to allow for very strong suppression of dissent, and creation of the illusion of consensus, through the use of social pressures and isolated demands for rigor and other such tactics, without need to resort to sharp falsifiable statements. Often the tactics and justifications involved in such moves are obvious nonsense when viewed by ordinary humans, but that is well within bounds, and failing to use such tactics is often not within bounds.
Expert consensus that is falsifiable-in-practice-in-a-punishing-way can still largely be trusted.
Expert consensus that is not that, not so much. Not as such. Not anymore. But you can sometimes notice that the consensus is unexpectedly robust versus what you’d expect if it wasn’t trustworthy. You can also use your own models to verify that what the experts are saying is reasonable, combined with other secondary sources doing the same thing, and combined with individual experts you have reason to trust.
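The falsifiable-versus-unfalsifiable distinction can be put in explicitly Bayesian terms: the update from ‘expert consensus says X’ depends on how much more likely that consensus is when X is true than when X is false. A minimal sketch, with both likelihood ratios invented for illustration:

```python
# Toy Bayesian update on 'expert consensus says X' (all numbers invented).
def posterior(prior: float, likelihood_ratio: float) -> float:
    """Update P(X) given evidence with P(evidence|X)/P(evidence|not X) = likelihood_ratio."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

prior = 0.5
# Consensus on a claim that would be punished if provably false: strong evidence.
print(posterior(prior, likelihood_ratio=9.0))   # 0.9
# Consensus on an unfalsifiable, Narrative-aligned claim: weak evidence.
print(posterior(prior, likelihood_ratio=1.5))   # 0.6
```

The numbers are arbitrary; the structure is the point. When nothing punishes a false consensus, the likelihood ratio collapses toward one, and ‘the experts agree’ stops moving your probability much at all.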
You should definitely expect the experts in any given field to greatly exaggerate the importance of the field at every turn, and to warn of the dire consequences of its neglect and our failure to Do Something, and for there to be real consensus on that for obvious reasons, except with less shame or restraint than in the past.
And, again, that’s it.
There are other sources, specific sources, where the translation matrix is less extreme, and I of course do my best to draw as much as possible from such sources. There’s still almost always a long ways to go before getting to the level of trust that would be ideal, but there are many levels.
So What Do We Do Now?
We decide how much time and effort we want to spend maintaining our calibration and translation matrix, and for which sources.
Maintaining a high-quality translation matrix of your own is a lot of work. That work isn’t obviously worth it for you to do. There are three basic approaches here.
One is to basically stop caring so much about the news. This is a good strategy for many people, and at most times. Before Covid, especially before Trump and when not doing any trading that relied on knowing what was going on, I was mostly implementing it. One can live the good life without caring about such matters. In fact, not caring often makes it easier. Thus, you don’t know what you can trust. But as long as you also don’t care, it’s fine.
You know what’s going on hyper-locally, with your friends and family and work and neighborhood, and that’s it. For most of history, that was enough.
This isn’t as easy as staying away from newspapers and other official news sources. You also have to deal with the constant stream of news-bringing on social media, and in real life from coworkers, friends and family, and so on. You might want to be done with the news, but the news isn’t voluntarily done with you.
You’ll need to train yourself that when you see a post about today’s terrible news, you ask yourself only one question. Will this directly impact the local physical world in ways that alter my life, thus forcing me to care? Or not? If not, move on. If it’s political advocacy, or someone being wrong on the internet, definitely move on. Offline, you’ll need to follow similar procedures, which will require smiling and nodding.
You’ll also need to filter your incoming sources of non-news to filter out those who bring you too much news that isn’t directly relevant to your life, and especially those who bring you political advocacy. This leads to some tough choices, as there are sources that have a combination of worthwhile things and exactly what you want to avoid. They’re mostly going to have to go.
A second option is to keep very careful track of the physical world conditions, do lots of your own work and not need to rely on secondary sources like newspapers. I assure you that mostly this is a lot of work and you only want to do this in carefully selected sub-realms. It’s taking the local approach and extending it to some non-local things, but it’s difficult and it’s time intensive, and mostly only makes sense if your conclusions are in turn going to be relied on by others. Also, it often needs to complement keeping up your translation matrix rather than substituting for it, as I can attest from experience.
The other option is division of labor and outsourcing.
If you can find a sufficiently trustworthy secondary source that analyzes the information for you, then you don’t need to worry about the trust level of their sources. That’s their problem.
Or to put it another way, you don’t have to have a fully general translation matrix. You only need to have a translation matrix for sources you want to get information from. You get to choose your portfolio of sources.
That can be as simple as your spouse or a good friend that you know you can trust. There is of course a risk of telephone problems if there are too many ‘links in the chain’ but such costs are often acceptable. Using a personal source has the extra advantage that they can filter for you because they have a good idea what is relevant to your interests.
It can also aggregate various community sources. There’s the obvious danger of information cascades here as there is elsewhere, as the upstream sources are still what they are, but it does provide some amount of protection.
You can also choose anything from one or more bloggers to a set of Twitter accounts to a newspaper, radio show or TV program you find to be unusually trustworthy. Or combine any or all of these and other sources.
I sometimes hear that someone has decided to outsource their Covid perspectives to me and my posts in this way. The posts are designed to allow you to think for yourself and reach your own conclusions, but also to save you the work of needing to maintain a detailed translation matrix while doing so, especially since I hope that the correct matrix for DWATV itself is very close to the identity matrix, except for the need to ‘translate into one’s own language’ since my way of framing and thinking about things has quirks and likely doesn’t exactly match yours. But that’s ideally about understanding rather than trust.
I have compiled a lot of sources over the years that I trust to be rather high up on a ‘pyramid of trust,’ meme version not currently ready for publication. This includes most (but not quite all) of my friends, since I value such trustworthiness and careful speaking highly, but even within that set there’s clear distinctions of how careful one needs to be with each source in various ways.
Everyone I list on my links and blogroll qualifies as someone I am mostly willing to trust. If they didn’t count as that, I wouldn’t list them.
That doesn’t mean I fully trust their judgment, or that it’s all created equal, but there’s a sense in which I can relax when engaging with such sources. There’s also, of course, a sense in which I can’t relax even when dealing with most of those sources, to varying degrees. I wish that were not so, but better to accept it than to pretend it’s not true.
The best sources, at least for my purposes, do an excellent job of being transparent about how trustworthy they are being in any given situation. Scott Alexander, as a prime example, is very good at this.
That’s the landscape on a personal and practical level.
Mostly I recommend, for keeping general tabs on the world, collecting a list of sources you’ve decided you can trust in certain ways, and then mostly trusting them in those ways while keeping an eye out in case things have changed. Then supplementing that with one’s own investigations when it matters to you.
For keeping tabs on your own local world, there are no shortcuts. You’ll have to do the work yourself.
But what about the global problem as a global problem? Sure, politicians have mostly always lied their pants on fire, but what to collectively do about this epic burning of the more general epistemic commons?
There are no easy answers there.
My blog is in part an attempt at an answer. This seems very much like a Be The Change You Want to See in the World situation. Thus, one can begin with the following:
- Be a trustworthy source of information to the extent you can manage. This includes not silently dropping information whose implications you dislike.
- That means being clear on how and why you believe what you believe, and how high your confidence is in it.
- Explicit probabilities are great when appropriate.
- As is holding yourself accountable when you’re wrong.
- Avoid rewarding untrustworthy sources, including with undue attention. When appropriate, make it clear in what ways they cannot be trusted, but mostly don’t give them the oxygen of attention that they thrive on.
- Reward trustworthy sources, including with attention; spread the word.
- Focus on the physical reality, de-emphasize all versions of the Narrative. Look to figure out the gears underlying all this, and create common knowledge.
- Make it clear you are doing this, to provide reason to follow suit.
This doesn’t have to be about having a blog, or even a social media account, or the internet, or any kind of information projection at all. It’s about how people at all levels interact, in the world, with people.
Note for Commenters: The no-politics rules are importantly not suspended here. Some amount of interaction with politics will be necessary. But beyond a clear emphasis on physical-world simulacra-level-1 considerations, advocacy of positions and partisan bickering remain right out. I stand by willing to use the delete button and potentially the ban hammer if necessary, while remaining hopeful they will not be necessary.
This article is good enough that I am likely to reread it to better internalize your points and perspectives. Thank you. Thank you also (though it’s an aside from your main point) for cataloguing in one place some of the ways in which the Powers That Be have equivocated or lied about COVID-19. I’ve been reading your articles for a while, as well as living life and paying attention to it, but it’s easy to lose track of details over moderate time horizons when my focus is not on politics.
Also, there’s a factor I think worth mentioning, which I mentioned on SA’s post: the structure in use here has a particular smell of just-world theory. In particular, I get the sense that the fact that there might be rules becomes a kind of meta-justification; okay, yeah, things are bad, but you can still identify the truth, so things really aren’t all that bad, and if you come to the wrong conclusion, it’s kind of your own fault, because you had access to the same information everybody who came to the right conclusion did.
Which is to say: I think the entire framework here represents people mashing the “Defect” button over and over again, and patronizing and utilizing such institutions to extract information is, in an important sense, choosing to defect with them against everybody else (and also leaves you open to be defected against yourself). And in a sense, SA’s article is kind of like an instruction manual for joining that defection more effectively; personally, no, I think the correct societal response is not for everybody to arrive at the correct level of savvy (which just moves the goalposts), but to treat institutions which do this as fundamentally untrustworthy.
I think the rationality community, and more generally/to a lesser degree the 2000s wider hacker/science-literate/etc. online community, and even more generally/to a lesser degree Americans, used to be fairly good at this? At being generally aware of the slippery slope that propagandistic dishonesty led to and thus calling it out immediately… at making sure the guilty party took a large reputation hit (for an extreme example, everyone’s old favorite punching bag, Stephen Jay Gould).
“This seems very much like a Be The Change You Want to See in the World situation.”
For more on this, please see my LessWrong posts on how to strengthen one’s virtues of Honesty and Sincerity: https://www.lesswrong.com/s/xqgwpmwDYsn8osoje/p/9iMMNtz3nNJ8idduF and https://www.lesswrong.com/s/xqgwpmwDYsn8osoje/p/haikNyAWze9SdBpb6 respectively.
Pingback: The Urgency of Normal: An Exercise in Bounded Distrust | Don't Worry About the Vase
You say “The difference is that Scott seems to think that the government, media and other authority figures continue mostly to play by a version of these rules that I believe they mostly used to follow.” I strongly disagree with this.
E.g., Kennedy was a hero when I was young. But it turns out the entire press corps decided to lie about his screw-up at the Bay of Pigs and about nearly getting us all into nuclear war with the Soviet Union. They gave Kennedy a pass. (Ed. note: superpower nuclear war is bad. You shouldn’t lie about it. But there you go.)
And of course George W. Bush’s post-9/11 wars were caused by massive institutional failures, where there wasn’t a big lie so much as each level of the government shading the truth up the chain of command until we went to war for very stupid reasons, with disastrous consequences.
The old way lies happened was different in the top down centralized mass market media era, with all information flowing up to top of organizations, and then being mass broadcast out. So given the media tech of the time, lying from on high was the style. Totalitarianism and all that.
But in the internet era, every lie can be checked by multitudes. Smartphones everywhere. So there’s an unending erosion of big lies, which breeds conspiracy theories.
Anyway, my point is that looking at 1995 or 2005 as a golden age of truthful standards seems to me completely wrong. It’s just that when we look back on when we were young, we see via memory a golden haze of energy and youth, and those times seem better. They may have been in some ways, but regarding lying per se, I’d say not. The *way* we lied has changed, is all.
This doesn’t really modify any of your points above of course. Since you are saying how to get at the truth. But it does put it into some perspective, that it’s more helpful to view this as a chronic problem to which human nature is prone. Not some special odd historical thing which will be cured with a good checklist.
I’m definitely confused about the state of things in the past, even if we mostly can agree about the state in the present. The JFK example is interesting because it definitely seems like the press was willing to ‘give a pass’ and overlook things back then, but it’s not clear that this translates to making things up the same way.
And either way, no one’s saying it’s something that can be cured (or at least, I didn’t mean to) with or without a checklist. Only that such methods help.
thanks for response. and yeah, I agree on your points as to how to attack this. good post!
I guess my attempt at a contribution here is the odd and peculiar way media outlets exaggerate/lie today is what’s new. Not the fact of information bending the knee to power, which is ancient. I’ve lived through enough cycles of it. The past was not more truthful. The key is understanding why it’s happening in this peculiar way now. In the mass media age, lies came declared top down. The newer dynamics of twitter/news/media ecosystem has its own problems and different ways of hiding truth. And so combating it will also come from those very same technologies. But I think viewing these kinds of battles as innate to how a liberal society works is in a way oddly encouraging.
So many thoughts, but three that really stood out when reading both posts.
1. Both models, but especially Scott’s, acknowledge that “things in the physical world that can be easily proven/disproven” (i.e. the Yankee Stadium shooting) are carrying a LOT of weight in this model. It is the one fundamental building block from which a chain-of-trust can plausibly be built.
I am surprised that neither asked: How much of this particular type of “trustable-news” in this Bounded Distrust model does any given flow of information contain? I would argue not much, in fact nearly none.
“Physically visible disaster or pre-defined social event happening physically in the real world right now, on a short time scale (days at most), with visible human or material consequences, large enough that many people and some news outlets will get their cameras on this unambiguous thing” is the only example I can actually think of, but I would like to hear more examples. Natural disasters, sporting events.
If that represents <1% of "the news", it seems to me "Bounded Trust" would be a better heuristic, where you assume absolutely everything is a lie until you've built your own web of trust up to it. (I think this is the Zvi model – Scott's seems woefully naive.) Trust in True Physical World News seems to be the 99-to-1 exception.
2. Lack of discussion of the Meta Game or Theory of Mind. Both Scott and Zvi speak to a VERY rarefied audience of super high intelligence people, and even still the Twitter takes yesterday by THESE EXACT PEOPLE were mostly "Here Scott Alexander says why you should mostly trust the news." Which is exactly not what he said. Irony of ironies, farming out your matrix, etc…
Most people only read the headlines. Maybe the lede.
We all acknowledge the headlines are all lies.
This should be the end. Game over.
Most people in the real world are reading all lies and using this as their basis of "reality" updating. That is "Trust Blindly".
Your models rely on 6,000 words each about very carefully parsing each and every sentence for what was said or conspicuously left out and by whom – which nearly no one does, and even the people that supposedly do will still often tweet the link with the exact opposite message.
Yes – we have to focus on "what is"… but I would argue "what is" is that people only read the headline.
I really do hope there are follow up posts.
3. Question: At what point in our current world does the game theory shift and somebody resets the trust-o-meter away from "glorious harvest" and back to "terrible harvest"?
Surely at some point down this spiral it is valuable to do so, no?
1. I think it’s not quite as bad as all that, but it’s much worse than Scott was thinking it was, and that you can still build a lot off that by assuming everyone is playing well – it’s like “How did you know he had exactly AQ of clubs?” and the answer is you watched the hand.
2. Well, yeah, except game isn’t over, it’s never over, got to keep playing. But also I do think SA was telling people to mostly trust the news in ways that are not a good idea, which is a lot of why I wrote this. And my method is supposed to involve knowing when to raise the alarm and when not to more than always being in alarm mode, hopefully, for most people most of the time (and was also aimed at the people who would actually read it).
3. At some point, yes. I think it’s a valid play in some situations now.
To what extent do you consider agenda setting as part of this whole trustworthiness question? Take the focus on wealth inequality, for instance. I don’t know if many people are lying about the facts there, and I don’t think they are. But just that they are so willing to go on and on about them as though they’re a uniquely damning indictment of capitalism, without focusing on all the ways in which capitalism (and related institutions) have hugely improved our lives, feels fundamentally dishonest.
Agenda setting is vital to knowing why someone would choose to say the things they are saying and not other things, and therefore to know what other options they had and the state of the game board they’re acting from. You can presume they’re doing maximum framing at a minimum in that spot.
“In my case, if I believed the local wrongthink, I would avoid lying by the strategy of being very very quiet on the whole topic because I wouldn’t want to cash in this type of One Time on this particular topic and risk this being a permanent talking point whenever my name comes up. Wouldn’t be Worth It.”
I just noticed one popular topic you’ve never said anything about. Hmmmm… are you possibly not being quiet enough? — OTOH, I tend to do the same thing in my own more limited social context of asking myself ‘is this a hill I want to die on’ and then answering ‘no, not really’.
There are actually many particular topics Zvi has never said anything on, so I think we should assume he has the maximum possible unpopular opinions on all of them until proven otherwise!
Zvi, why do you think that the sequels are the best Star Wars movies?
Because A New Hope had to do a lot of foundational work first, The Empire Strikes Back and Return of the Jedi could do more with the world.
I’m interested in how you think about publications’ profit motive vs. their desire to bolster certain narratives.
Do they behave as they do because they’re trying to maximize profits, and the way to do that now is to follow the playbook you described?
Or do they behave as they do because their objective function includes spreading a particular narrative, which either perfectly overlaps with or sometimes displaces their motive of self- and shareholder-enrichment?
I enjoyed the article and agree with most of it, but I don’t understand the tone of outrage surrounding the mask guideline “lies”. I am aware this is a frequently trotted-out example of “experts lie”, but I’ve never quite understood the spirit of this particular example. It seems like the kind of normal, mundane, and expected white lie that we have seen in politics since the beginning of time, one that reflects the usual tensions and compromises between the experts on the one hand and the political/logistical/utilitarian/moral concerns on the other, in deciding how best to simplify and convey public policy to the masses in the most effective way, given various concerns about potential hoarding, historically poor public understanding of uncertainty and nuance, etc. What is your own theory of what happened? What were the internal deliberations/justifications for the “lies” about masking, and why do you find those justifications outrageous?
I’m not playing at outrage here, I’m simply using it as an obvious clear example of lying. I think this had a variety of pretty terrible effects, and also they’re still lying about it. Most white lies don’t keep changing followed by trying to pretend they didn’t change, they’re just one-time lies.
If they are not white lies, you are implying (as best I can tell) something nefarious. What is its design? Again — what is your explanation for the lies?
I don’t get why you are focused on the word ‘white’ here. They wanted to advance some agenda they thought was good for The People, ok, I don’t see how that’s central but if true that doesn’t exactly put it in a non-scary reference class.
To reply to your last comment, I think whether it’s in a non-scary reference class depends absolutely critically on the motivations. Please see my comment here to see my reasoning: https://thezvi.wordpress.com/2022/02/03/on-bounded-distrust/#comment-17719
I’ve seen a bunch of climatologists push back on the media/left wing’s extreme doomerism about climate change, but that’s only because I’m a weather nerd who follows them for other reasons. Even someone who’s famous, for a scientist, like Michael Mann, doesn’t have anything like the reach of a national media outlet.
I think there is a charitable case to be made for some model of doomerism (which I think more broadly applies to much of what Zvi criticizes here), which is the following dilemma: what is a scientist to do when they know and/or experience public apathy in the face of data that is superficially comforting to a layperson but is plausibly terrifying to many experts? It seems natural and morally defensible that they would choose the “One Time” option described by Zvi, and hope for the best. The global warming example is politically charged, so let me give a different but I think representative example: suppose that scientists discover an asteroid with a 1% chance of destroying Earth, and the error bars on the “1%” number are negligible. I think it is entirely rational for an expert to conclude that the situation is so dire as to merit the marshalling of a large fraction of the world GDP in trying to deflect it. And yet the public, who are terrible with statistics, are unlikely to respond rationally to this threat (even putting aside the issue becoming politicized and/or threatening the profits of powerful and politically-connected global interests). What do you, as an expert, do? You are in a lose-lose situation: you can either ruin your credibility by being a doomer who is 99% likely to be wrong and made fun of, or you can do nothing while the Earth has a 1% chance of being destroyed. I think the optimal and moral strategy is somewhere in the middle: try to do your best to convey the urgent threat in ways that use a translation matrix that attempts to honestly “re-weight” the public’s poor statistical reasoning skills (for example by describing the 1% threat as a 10% threat, or something, at the cost of some diminishment of trust in expertise). I don’t know, there are no good answers here, and I think we need to have some empathy for the difficult moral tradeoffs that are typically made in these situations.
Sure, that’s how they get you.
First you agree that in principle you’d be willing to claim a 1% existential risk is 10%, and then you agree that you’d claim a really big risk is basically existential. Then you agree that you don’t need to have the error bars that firm so it can really be a -5% – +10% likelihood really big risk and still be worth lying about. Add a few more steps like that, throw in some personal career risk, and pretty soon you’re arguing that Gain of Function research actually makes the world safer and lab leaks should be covered up, lest ‘dangerous conspiracy theories’ stop your funding.
It’s understandable. Doesn’t mean it’s good for society, and especially doesn’t mean I should listen to you after you’ve started down that path.
You are claiming a slippery slope (uncharitably — I for one don’t go down it) in order to avoid answering the question about the concrete example given. Seriously: if you were in the hypothetical, what would you do? (BTW I don’t think the situation is good for society either, but I think much of the blame lies with those who don’t think rationally about low probability high consequence existential risks — sure the doomers are wrong most of the time, but they only have to be right once to be totally vindicated, but at what cost).
I find it weird to deny the slipperiness of the slope here, even if one thinks oneself capable of not sliding down it, given the motivating example’s slope has proven rather slippery indeed, to the point where – literally – a large fraction of the population expects short-term human extinction. What’s more slippery than that?
The weather thing hasn’t gotten completely out of hand, but I definitely notice the slope and it seems slippery to me despite everyone’s mostly good intentions.
If I was Leo in Don’t Look Up but the probability of disaster was 1% if nothing was done, I predict I would in fact not lie about it, but I also predict that if someone else said 10% I wouldn’t go smacking them down for their motivated error.
It’s a silly hypothetical, which only exists to prime the intuition for that slippery slope. There’s no real-world analogue that doesn’t change it. Not even asteroids. A ‘1% chance with low error bars’ can’t exist for inductive reasoning about something that hasn’t happened before. You only get that probability distribution in a casino, or for things that are so small they’re observed all the time. Existential stuff is always going to be 1% with large error bars, plus a chance of ‘chose the wrong model, assumptions, or made a typo.’ That’s certainly the case with an asteroid; we’re never going to be hit by 1% of an asteroid, the model uncertainty always comes from not having enough observations to precisely determine its orbit.
In which case the appropriate thing to do is to do the work and collect the observations to change your 1% with large error bars into yes or no. And it’s the time to be scrupulously honest, because you’re going to end up with the whole world checking your math before they believe you.
I remember reading in The Signal and The Noise by Nate Silver about weather forecasters doing this. Most of their percentage chances of rain were accurate on a short enough time scale, but they would round 5% up to 10%, because people don’t understand probability. I don’t remember what specific numbers he used.
Pingback: Covid 2/3/22: Loosening Bounds | Don't Worry About the Vase
> you should expect to see “we can’t know for sure that wet ground causes rain, but we do know that there’s a strong correlation, and where wet ground you can usually look up and see the rain coming down”
I would also expect to see a name-drop of “hydrological cycle”, an extremely condensed explanation thereof (length permitting), and an insinuation that anybody who doubts the evaporation-condensation-precipitation theory/model questions -o-b-s-o-l-e-t-e-; pardon, wrong word; *long-established* Science(TM).
> It counts as ‘scientific misconduct’ for you to not be able to justify how your research would ‘reduce exclusion and improve integration.’
> Which is odd.
> It means it is official policy that wrongfacts are being suppressed to avoid encouraging wrongthink and wrongpolicy.
My reading is more along “you are still allowed to publish any fact, even those that would _without commentary_ count as a soldier for the wrong side, as long as you surround it with the commentary that weaves said fact into the right narrative”. (Off the top of my head: “the elevated rate of offense is evidence of previously having been / currently being excluded — indeed victimized — by the bulk of society, therefore the correct policy response is to redouble efforts to integrate”.) Also known as “[…] that their discussion of what it means is in good faith”. (Obligatory reference to Kolmogorov complicity and the parable of lightning.)
> Mostly, presume that politicians are incapable of means-ends reasoning or thinking strategically
This is one reason why I like turning the simulacra levels into a 2×2 grid. It’s obvious that successful politicians exert a lot of optimization pressure somehow, therefore they must do something a lot like means-end reasoning in some way. They just do it with the mental toolset of LARPistemology.
Incidentally, this raises the possibility of further levels. If the act of speaking publicly inherently carries the options of throwing out Schelling points, statements with baked-in assumptions (where unless somebody burns status on responding not to the statement but with an explicit challenge to the assumption, the assumption is assented to and thus provisionally becomes a part of common knowledge), etc., then it’s fair to abstract this and say “speech is power”, and treat speaking/interrupting/etc. as a dominance contest.
My understanding of such framing defenses is that it’s complicated and context-dependent. Sometimes you get away with it, but the prosecution can choose to bury you anyway if they feel like it, and sometimes they do that. So it’s risky and I avoid it. Even if it’s fine for now, it creates attack surface.
Great post ! I look forward to Scott’s response as well.
Thoughts optimized for speed. Some things in sub-comments to de-clutter and for ease of deleting if needed.
The NYTimes piece on Scott Alexander is especially interesting because I don’t think that the Times realized that this would be their One Time for so many people. Even the top players of the game get surprised by the rules.
Scott Alexander, in his Why Do I Suck? post: “I think the media has genuinely improved!” (Or he’s improved his sources.) This seems like a major point of disagreement between you.
Abdullah Abdul from Saudi Arabia is white under US law. There was a Supreme Court case in 1915 (Dow v. United States) that decided that Arabs are white. Dow was a Syrian Christian – if you want a Muslim from Arabia, the explicit precedent is Ex Parte Mohriez in 1944.
If you (or someone else) wants to do a more detailed look at the Big Lie narrative (not here), I recommend looking at Georgia’s elections since 2018. Link to a less spinny piece on some of the events: https://www.npr.org/2020/11/18/935734198/trump-hasnt-conceded-georgia-neither-did-stacey-abrams-what-changed
Yeah, that all seems right, although Abrams was an outlier. I don’t see her actions as great, but I do see important differences there – she agreed that Kemp was going to be certified the winner and that as a legal matter that was that, and to settle the matter in the next election. But to avoid getting into politics, I’ll stop it there.
What I find interesting here is the extent to which similar actions (refusing to concede an election, or voting in Congress to reject the results of a state’s presidential vote) can be used as evidence that that person is either pro-democracy or anti-democracy.
I was surprised to see the claim the media has improved. I can see the argument that the media improved in its coverage of scientific papers – they can do less of the ‘wet ground causes rain’ there because there are more people more willing to more loudly call them morons for it, so they put in more of an effort.
I do notice it echoes my position on many other things, which is that in many ways the world has gotten much better, but our standards for what is acceptable have risen faster and thus we think things are terrible instead of unusually awesome (and of course, they are both unusually awesome and importantly unacceptable, both at the same time, more work is to be done, all that, except sometimes when the expectations are just dumb or impossible).
But in some ways I think it’s clear things are worse, and before I explored further I’d be curious to hear Scott say in which ways he thinks they are worse.
Let me first of all apologize for not really substantiating my claims below.
Your turn of phrase “dripping with contempt and hatred” took me back to 2014, when I was paying attention to, and later lending moral support to, what I believe was the unfairly maligned underdog faction in a… minor internet flamewar. It’s been a long time since, but I believe this lens about media incentives could be applied to the article that ignited Gamergate, and result in some enlightening insights: https://archive.fo/BiMxb
To save the effort of reading it, I’d sum up the insights I’m alluding to thus: I have reason to believe the slippage of standards was well underway by 2014, at least in enthusiast press like gaming journalism. I can’t claim it about other niches, but I expect it to generalize. The article itself is an editorial, so all bets being off also applies. But to me it seems like a clear case of a Narrative Update being pushed out. I’ll also note it’s in response to the precursor of the controversy already brewing, so it cannot be called unprompted. In closing: I think gamers already got a demo version of simulacrum level 3 press about 8 years ago.
I’d be worse off if I didn’t have these posts to explain everything and build up week after week a bigger “bank” of well-investigated examples on which to train the translation matrix. This is excellent and thank you!
Great article! The problem with unbounded distrust is that it’s a lot of work, and you’re likely to be less wrong if you follow bounded distrust on average than if you don’t. People have a tendency to fall into rabbit holes and conspiracy theories, and bounded distrust mostly prevents that. It’s hard to fault as a strategy.
>But there’s a bunch of red lines I thought they had (and that I think previously they did have) that they’ve crossed lately, so how confident can we be?
> Can we say ‘don’t lie directly about specific actual physical world falsifiable claims?’
> I mean, no. We can’t. Because they did and they got caught.
The thing that shocked me most, to be honest, was lying about their subjective probabilities of a lab leak. We now have declassified statements of scientists (Fauci, Collins, etc.) in calls discussing that they personally didn’t find a lab leak unlikely, and then publishing papers saying the opposite the next day. These calls included scientists I had previously thought of as being “honest,” who in the next days went on to claim that the lab-leak scenario was not something any virologist thought was plausible. I previously thought this would have been a red line. I’ve been in Academia, and where I was, this kind of clear, falsifiable lying did not happen. People dressed up their results, even knowingly, but they did not lie. I expected it to be the same in other fields too; now I am no longer sure. I get that any statement to the effect of “probably not, but yeah maybe it was a lab leak, but we can’t ever really know” might have provoked chaos and anti-scientific sentiment, but still….
Pingback: Convoy | Don't Worry About the Vase
Oh this is the meaning of depressive realism. The more I learn about the world the less I like it.
Zvi, I feel like you exhibit a bounded distrust of prediction markets that you haven’t explained. There are times when you will take prediction market odds as ground truth, or at least a good prior to start your updates from. There are other times when you note that the markets are completely insane and there are huge piles of money lying on the ground. I have no way of telling whether I should expect a prediction market to be accurate or powered by the same algorithm that produced a mid-November prediction of “Trump is 10% to win”. What is your method?
Excellent question. The basic answer is:
1. Markets are not accurate until they have eyes and volume. So if the market is just some guy’s guess, then treat it as such.
2. If there’s been a bunch of volume and attention, trust them more.
3. People won’t move the odds all the way to their full fair value, so there’s some amount of inertia/lag.
4. They have known biases you can take advantage of (e.g. overvaluing large underdogs).
5. They have known other ways to get distorted (e.g. people won’t tie up money without enough return, Trump voters gotta Trump, etc)
6. Chesterton’s fence: if I think I can outthink the people thinking, great. But I gotta know why I think that, slash know what I think they’re missing.
7. Calling the odds terrible and claiming free money is often fully compatible with saying they have great info. E.g. in a baseball game they might say 54% to win and I say 57%, and I call that ‘terrible’. Or they say in an NFL game 37% to win and if I hadn’t seen the odds I’d say 67%, so I call that line ‘terrible’ and move my own fair to… 41%.
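Point 7 can be sketched numerically (a sketch only: the 0.9 weight on the market is an assumed illustrative parameter, not a formula Zvi states):

```python
def blended_fair(market_prob, my_prob, market_weight=0.9):
    """Blend a liquid market's line with a private estimate.

    The 0.9 weight on the market is purely illustrative (an assumed
    parameter, not a stated method): it encodes the idea that a line
    with real eyes and volume on it carries far more information than
    one observer's blind read, even when that line looks 'terrible'.
    """
    return market_weight * market_prob + (1 - market_weight) * my_prob

# Baseball example: market says 54%, my read says 57%.
print(round(blended_fair(0.54, 0.57), 3))  # 0.543

# NFL example: market says 37%, my blind read says 67%;
# the blend lands near the 41% fair described in point 7.
print(round(blended_fair(0.37, 0.67), 3))  # 0.4
```

The exact weight would vary with volume and attention (points 1 and 2 above); what matters here is only the direction: the market line dominates, so "terrible odds" and "great info" can coexist.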
That’s helpful. I’d read you stating most of those things in some prediction market post, but the way you write your average reference to prediction markets doesn’t make it clear that that’s the trustworthiness algorithm being applied (contrast your Covid reporting, where you do make it very clear why you do or don’t trust a claim).
Why do you think the underdog bias exists? My own model is something like “Very Online people tend to throw money at their preferred outcome without modeling the world, markets often make it hard to profit off correcting a 90-10 split that should be 99-1.”
Yeah, mostly that. If you bet at 90% odds you’re laying 9:1, and if there’s a delay you can only do so well, and also there’s fees on top which make it even worse. So especially when there’s other things to do, why bother? Not worth the effort. And you need $9 tied up that way for every $1 at 1%. For something that’s only 1% to happen (Ron Paul!) this gets that much worse. The strangest part of course is that it is not fully symmetric, and the midpoint is 40% not 50% for some reason…
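The capital-lockup asymmetry in this reply can be made concrete (a minimal sketch: the prices come from the comment, the function name is mine, and fees and settlement delay are deliberately left out):

```python
def locked_per_dollar_of_upside(price):
    """Buying 'yes' at `price` (dollars per $1 contract) means risking
    `price` to win `1 - price`. Returns how many dollars you must tie
    up per dollar of maximum profit. Fees and settlement delay, which
    make the longshot-correcting trade even less attractive, are not
    modeled here."""
    return price / (1 - price)

# Correcting a 90-10 market that 'should' be 99-1 means buying the
# favorite at 0.90: $9 locked up per $1 of upside.
print(round(locked_per_dollar_of_upside(0.90), 2))  # 9.0

# Pushing the line the rest of the way, buying at 0.99, ties up
# $99 per $1 of upside, so the mispricing tends to persist.
print(round(locked_per_dollar_of_upside(0.99), 2))  # 99.0
```

This is why the correction is so much cheaper for the longshot bettor than for the favorite bettor: the capital requirement grows without bound as the price approaches $1.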
“That and the stakes involved are why I broke my usual no-unnecessary-politics rules in the post after the election was clearly decided to be very explicit that Biden had won the election – it was a form of cashing in one’s One Time in a high-leverage moment, bending one’s rules and paying the price.”
What leverage exactly? The obvious reading of this is that you were trying to appeal to your authority as a smart respected guy so that your right-wing audience would trust the election, but that does not make sense to me. Your 2020 statement was a bare assertion of the Official Narrative (neither false nor misleading, but this post just got done analyzing narratives vs evidence, and I think you were obviously Supporting The Narrative), and that doesn’t seem like what you would write if your goal was to make anyone update towards election legitimacy. In retrospect, do you think it worked? Was it a +EV play?
We will never know – 99%+ of my readers never leave a comment in any case. I do think that making it clear I was willing to break my usual rules and say something at that moment was a reasonably costly/strong signal, and that I’d attacked the Official Narrative repeatedly enough by that point that it was clear I wouldn’t do that unless I believed it strongly and for reasons. It wasn’t going through the evidence as such, but it was a very clear “I have looked at the evidence and found it highly convincing,” and in context I would take such views as evidence and update in that way. I was hoping I’d accumulated leverage in this way and that I could use some of it to, in some small way, lower the likelihood of very bad outcomes.
Did people update? No idea. Would I do it again in that spot? Yes, and if anything I’d be more explicit about why rather than saying less.
I think you would have been more persuasive replacing what you wrote with a single sentence to the effect of “I think the election was definitely legitimate, if you respect my judgement please update accordingly”. Five paragraphs of bare assertions supporting the narrative pattern-matches to the Facebook shitposts of a mindkilled partisan whose judgement shouldn’t be respected. I imagine the primary update you caused in your target audience was approximately “I see that despite being generally reasonable, Zvi has the capacity for partisan shitposting and should be disregarded on this topic.” That was the update I made and I agreed with you about the facts!
The ‘One Time’ joke is literally the funniest joke in the world to me, probably to the detriment of the people at the tables I play, so I deeply appreciate how much it’s embedded into this piece that I would otherwise find depressing.
Ezra Klein starts to get it: https://twitter.com/ezraklein/status/1490741768425394176
Not the rationalist approach to evaluating trust levels, but at least he is starting to see the problem.
This is excellent work. It greatly clarifies what Alexander made a start at.
The “no politics” rule is a strange one to impose in this context. The distrust problem arises from two things: politics and incompetence. Obviously, one distrusts the incompetent, regardless of politics. As to the competent purveyors of public words and images, the only question is the political one. The tactics mentioned as useful in bounding one’s distrust of these purveyors are all counter-political tactics, intended to compensate for perceived political bias and the strategies utilized to achieve political ends. This being so, one *must* discover the political ownership of each relevant information source, preferably starting at the foundation of the information stream, although the foundation is generally carefully obscured.
I once thought it feasible for the intellectual elite to play the “bounded distrust” game with hostile info sources. Experience has punished this assumption–other than for trivial news stories (like shootings). Either you’re an expert or you’re not. If you’re not, you’re a gambler in someone else’s casino, probably playing a machine. If you are an expert, then the question resolves down to whether you have adequate time and information access (eg, maybe not for bio-weapons programs).
The Powers seem to be most motivated to lie boldly on whatever the most important issue is at that time: the justification for the 2003 Iraq invasion, the condition of the financial system in 2007-8, the response to Covid in 2020-21. The ruling class as a whole and its media system don’t have just “One Time” to burn. They have at least one per decade.

Given this, would they tell the necessary lies if the 2020 election had been, in some important sense, stolen? Why not? Do they think of “American democracy” as anything other than a convenient fiction to justify their power? You might argue it’s a stupid risk to take, given that most of their power is not dependent on democratic mechanisms anyway. The media, Wall Street, the monopolies and oligopolies, the NGOs, the Pentagon–none of these are democratic institutions. It’s reckless to trifle with the small corner of power the people think they control.

The big lies come when they greatly benefit the elite, which coincidentally tend to harm the people–and sometimes non-Americans. One ought to consider whether a given piece of information might be a lie that will be of great benefit to the elite. This is related to the point about the trust relationship being anti-inductive.

Also: I think they could do Iraq, the GFC, and Covid again, surprise most people again, and get away with it again. I think “One Time” is a myth as applied to the elite; they have infinite times for so long as they remain in power. Maybe, if overused, their power would collapse. Or: if they used their One Time–for once–to substantively succeed at something, maybe it would be excusable. Instead, we get failure after failure–but only for us, not for them.
Then there are the chronic lies, too many to enumerate. I’ll just mention the Afghanistan occupation and American free trade. There was never much hope of progress in Afghanistan, but the lies about it never stopped–still haven’t. Free trade: it’s not free, comparative advantage must be understood in context, mercantilism works when competently executed, China is consequently winning and America is losing. America’s bipartisan economic decline continues in part due to this truly stupid, very-much-alive ideology. I don’t think too many non-experts understand that free trade ideology is a ruse. These are “One Times” that go on for decades. Maybe I’m using the phrase too broadly, but these are major failures, in my opinion, underwritten by huge lies which ought to burn credibility down.
Do public intellectuals ever lose their posts due to being wrong over and over or on major questions? They also seem to have infinite times, proving, to my sense, their role as spokesmen for the ruling class.
I remember someone analogizing official regime journalism, like the Times or the Post, to the internal memo system of the ruling class. This internal memo dynamic is one of the guard rails against certain types of lies. The outgroup must become fluent in ingroup cant in order to grasp the informational subtleties of this system–a task neither easy nor pleasant.
As to how old the old rules are:
The NY Times infamously and quite deliberately buried the Holodomor in the early 30s. I’m not sure when those 5 million bodies were finally dug up for the edification of the American public, but it was definitely post-1945. And the Holodomor began before Hitler took power–so that excuse is entirely lacking. In 1965, the people involved in passing the Immigration Act promised it would not result in changing the ethnic composition of the nation: a very bold, consequential, and deliberate lie. The official press has never called them on it. It’s in the old memory-hole.
“But once again, I don’t think Scott’s stated algorithm is a good one, although once again I happily don’t think Scott is using his stated algorithm in practice.” The version of bounded distrust applicable to Scott Alexander: do what he does, not what he says.
Your practical advice at the end is also excellent. It took me many years to come up with a similar approach. I wonder if this degree of realism/cynicism is digestible among people under 30 or so. I had to be burned a couple of times by current events, not history, before I could grok it. Yet, even I have been somewhat surprised by the mismanagement of Covid. Of course, the more we lower our expectations, potentially the more degrees of diabolical freedom the rulers perceive to be available, a vicious circle.