The Twitter Files: Covid Edition

At long last, this week we got the Covid Edition of The Twitter Files.

I focus here only on this edition, not on Musk or the Twitter Files more generally. Is there a there there?

Seems like the answer is, yes, but the there that was there seems not so out there?

I’ll walk through the thread.

Go on.

I am shocked that governments express preferences. Go on.

I sympathize. The first rule of panic buying is you do not talk about panic buying. It is the job of The New York Times, let’s say, to have the headline be “Widespread Panic Buying Across Nation” when there is widespread panic buying across the nation. You got to let journalists journalist. We can also realize that doing this is Not Helping. It is individually rational and socially harmful to panic buy, it is valuable information and big news that there is panic buying happening, and the government has a clear interest in hiding the existence of panic buying.

That does not justify censorship or labeling true information ‘misinformation’ because we prefer people not know. But I understand. This is kind of like shouting fire in a crowded theater, which a lot of people keep saying is illegal (but which, actually, is totally legal if there is an actual fire.)

Certainly if an individual speaker decided it was better not to mention something like panic buying on the internet, that is often a reasonable or even wise decision. Asking others politely to consider not mentioning or amplifying this news more than necessary also seems reasonable. I certainly understand, especially if they did not do this with other later issues that did not involve threats to public order.

The thread then skips directly to the Biden administration.

This speaks well of Twitter. A common dunk on The Twitter Files has been ‘oh look at these records that show a reasonable company acting completely reasonably.’ That is not a dunk. It is good to know how such things worked, including when actions were indeed reasonable or even close to ideal. That, too, is useful information. That we are given info when it makes Twitter look good, not only when it makes Twitter look bad, is to the credit of the whole project.

I really don’t know what anyone was expecting. There’s no way to deal with the volumes involved without bots, and the bots are going to make mistakes, more mistakes than even a not very skilled human would make.

Thing is, that’s… fine? I don’t get, or at least don’t agree with, the purity reactions of ‘this is punishment so it can’t be making mistakes.’ Punishments with major consequences, years in jail, ruined reputations and livelihoods and other things like that, require public trust and reliability. When it is small time stuff like ‘this Tweet needs to be deleted and you need to spend 24 hours not posting to this hell website’ I am less sympathetic. So what?

[Image: example template—deactivated after Musk’s arrival—of the decision tree tool that contractors used]

I reproduced the image in the original, which requires squinting. It basically asks what topic the Tweet was about, what was said, and what message and position were expressed.

Again, I do not see what reasonable alternative is being proposed. Start with bots for monitoring in the background and dealing with low-level issues. When things get more serious, use humans and decision trees. What else could you possibly do? There aren’t enough ‘experts’ to go around unless you save them for when they are most needed.
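The tiered setup described here – bots in the background for low-level issues, contractors with decision trees in the middle, scarce experts reserved for escalations – can be sketched in a few lines. This is purely illustrative; the routing function, tier names, and severity thresholds are my assumptions, not Twitter’s actual system.

```python
# Hypothetical sketch of tiered moderation triage. The severity scale
# and cutoffs are invented for illustration only.

def triage(severity: int) -> str:
    """Route a flagged item to a review tier by severity (0-10)."""
    if severity <= 3:
        return "bot"          # background monitoring, automated handling
    if severity <= 7:
        return "contractor"   # human reviewer using a decision-tree tool
    return "expert"           # escalation to senior staff for a judgment call
```

The design point is just resource allocation: you spend your rare experts only on the cases the cheaper tiers can’t settle.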

This is, again, exactly what you would want? The buck stops somewhere, people can escalate, if they did then some higher level people thought about the situation and made a decision. What is the alternative, other than not moderating at all?

This can go well or badly, depending on who makes the decisions and how.

[Image]

I find it quite hard to get worked up about the labeling and restrictions, as opposed to the strikes that accumulated and led to his suspension, which is where we should focus. The direct local punishment here was the Tweet couldn’t be interacted with and had a sign saying ‘misleading.’ The Tweet continued to exist. Note that this person is not saying ‘in my opinion’ or even ‘in the opinion of Dr. Science, who knows more than you do’ he is stating his position as fact.

Also, his statement is pretty false and definitely misleading. The idea that ‘everyone’ should be vaccinated (air quotes because ‘everyone’ presumably in context means everyone over 18, or at least not young children, given this is 2021) is very reasonable. You could make a case for it in each instance, or for it as a general policy to keep things simple, or make a case against vaccinating some subpopulations. Whereas this claims that it is as scientifically flawed as saying nobody should be vaccinated. Which was by this time straightforward Obvious Nonsense. In the common sense of the word, drawing this parallel, in this way, was clearly misleading.

Even an unreasonable policy that goes too far too often will involve plenty of cases of going the right distance, so saying ‘this particular case seems fine’ would show little if picked at random. If chosen as part of an expose, it’s an odd choice.

18K isn’t nothing. It also isn’t, and I speak from experience, a lot. I love the ‘self-proclaimed’ in public health fact checker here.

So the first thing I notice is that it is called the leading cause of death from disease in children. It does not seem like cherry-picking to then ignore things that are not diseases. That is the category. The start date is cherry-picked, sure, but the end-date was ‘today’ at the time, and that seems like a relevant period to talk about.

So essentially this person is saying that the people saying Covid-19 is dangerous to children are spreading misinformation – as indeed Zweig is saying here – and is using call-outs of the exclusion of things outside the category in question, together with tone, to give the impression that the person in question is lying to us.

Whether the original is technically true depends on whether disease means ‘infection you can catch’ or it includes non-infectious diseases, and to what extent ‘this has been the most dangerous thing since time X’ is a valid construction in context. I can see these either way.

Also, seriously, note that the Tweet being replied to has exactly one reply – the one from Kelly – and two likes, total. This is not exactly punching up at The Man or Speaking Truth to Power.

Nothing happened to Kelly other than the tweet not being shown to all that many people and not being liked or replied to.

Take my likes, take my replies, call my takes misleading lies. That’s okay, I’m still free. You can’t take the blog from me.

ChatGPT finishes:

Look at all the views I’ve got, I’m a star they say a lot. Don’t matter if they disagree. You can’t take the blog from me. People post their thoughts and such, I keep writing with a touch. I’m not gonna let it be. You can’t take the blog from me. I’ve got readers from far and near, nothing’s gonna make me fear. Not for one second, you see. You can’t take the blog from me. My writing’s my own identity, I’ll keep writing ‘til infinity. So don’t you try to flee. You can’t take the blog from me.

Back to the broader pattern. Was it true that things aligned with power and the narrative would stand, while things not aligned would sometimes get these labels? Yes. That definitely happened, and altered somewhat the flow of debate. It still seems rather restrained if these are the best examples David could find.

[Image]

I notice once again that the person being censored is trying to tell people not to vaccinate, and that once again the misleading tag says ‘learn why health officials consider Covid-19 vaccines safe for most people.’

Is this Tweet misleading in practice, giving the impression the vaccines are going to kill young people? It seems at least reasonable to say yes – the label doesn’t say false or misinformation, it says misleading.

Now, a policy against misleading information is a very different policy than one against misinformation. Misleading information is often true, hence the saying ‘the truth is the best lie.’ A policy against misleading information is far more broad and dangerous than one against misinformation. Way scarier.

Do I want things like this censored or labeled? Hell no. I do get it.

Would have been an excellent final result, except that it took months to get the review done. That’s a long time. I’d say the above Tweet is technically fine. I’d also say it is, shall we say, FUD central and not in the best of faith. It is a false positive, but an understandable false positive if the error is corrected when reviewed.

So what about the one that was ruled to be in violation after review?

I see exactly what Andrew is doing here. You probably do as well. This still seems clearly not in violation. There are two inequalities. The data says both are true (the triple arrow is a bit much, but whatever.) They both are valid points in favor of the conclusion Andrew wants to argue for. You have to let someone make that argument.

Which means the real policy is one under which this can be found in violation if Power wants it to be, although I would expect most similar cases not to have been found in violation in practice.

This is another case of Twitter acting correctly, where I have no idea what exactly Zweig thinks went wrong. Someone at Twitter had the very understandable human reaction: to think that Trump’s tweet was going to get a bunch of people killed, to want to do something about it, and to ask why it wasn’t a violation of their policy. The head of safety explained that it didn’t violate the guidelines. No actions were taken against the Tweet or account. That’s exactly what should happen.

Yes, in a perfect world you’d have such amazing pro-speech employees that no one at any level even suggested or asked about doing anything here. That does not seem like a reasonable ask.

This is biased. It is also the only reasonable or possible course of action. If there are real world negative consequences to something, more attention is warranted.

I would love to also have our rank ordering prioritize things that would cause other negative consequences, such as shutting down life or economic activity or causing fear. I also recognize that this is not practical.

I also notice that in terms of Twitter’s moderation, I am happy about this failure of reasoning. Consider the counterfactual. Do you want someone looking for Tweets that will cause negative economic consequences and shutting them down?

As we all know, this was not primarily Twitter unbalancing the playing field. It was the Serious People and Public Health Authorities and Responsible Adults acting as one, and then that pressure being applied to Twitter, both directly and through influencing the views of employees.

Who then, if this thread is any guide, showed remarkable restraint in dealing with the situation. Compared to almost all other times and places, we should be grateful.

These problems are hard. Here’s an example from this week. What do you do about the quoted tweet here, from an account with 30k+ followers?

The video clip’s complete quote is ‘no study in the world shows that masks work that well’ in comparison to air filtration. This is clipped (and grammar is butchered slightly) to say ‘no study in the world that show that masks work.’ What do you do? It is clearly a clip designed to create a false impression. It is also quoting words that were said by a person, and also providing a clip that gives the context. The intent to deceive here is clear. I wouldn’t intervene. Do I understand someone thinking intervention is called for? Yes, I do.

Thumbs were still clearly on the scale. If you wanted to explain that people were being freaked out over risks to children that were quite small, you could get action taken against you. If you highlighted potential vaccine risks, you could get action taken against you. There were cases that aren’t listed here that seem worse to me than the ones chosen, including when people pointed out things like the effectiveness of masks (including, early on, that they worked, and then later that they often didn’t), or the safety of being outdoors. The chilling effect of potential sanctions matters, including to people who were never in serious danger, because they don’t know that they are not in serious danger.

So mostly I think this offers additional confirmatory evidence that the moderation was asymmetric in favor of things preferred by power, that mistakes got made, and that disapproved speech, especially against the vaccines, was somewhat chilled or downweighted – while also confirming that it did not go terribly far and that the individual decisions that mattered were mostly pretty reasonable. These were supposed to be Bad Examples, and were not so bad.

Compare this to the world that some aligned with power would clearly prefer, e.g.:

Yes. The whole idea of free speech and free expression and a town square and a marketplace of ideas is exactly the stuff you find most reprehensible. If you are making a real effort to get it right, roughly half the time you’ll be too hospitable, half the time you won’t be hospitable enough. If you think the answer is ‘not hospitable in any way whatsoever, show them the business’ that is a position as well, with… implications. I disagree.

I want to keep this focused on Covid decisions. It does seem important to mention that Musk acting like a petty tyrant man-child and messing these issues up in various ways does not excuse actions that came before, if Musk had the reasonable option to not be a petty tyrant man-child. I also notice that I mostly do not care if Musk acts in such ways and starts banning people who, on his own platform, share his private info or insult him or attack his company, if it stays contained to that – one should worry more about him enforcing a CCP line to maintain good relations with China, or taking on a more general terrible censorship regime in various ways, such as briefly banning sharing of info on other social media sites, which was quite bad.

Conclusion

Back on Covid, what happened seems mostly clear.

Individual posts were sometimes flagged as ‘misinformation.’ Mostly this was because they contained false statements, or statements power believed could be safely called false, in the direction power didn’t like.

When this happened, the Tweet would not be seen by that many additional people. If it was seen as sufficiently out of line it would get deleted and the account might be suspended, and mostly that was reserved for cases that plausibly deserved it.

The larger your account, the more attention it would get and the more likely it was that action would be taken against things that were kind of borderline.

If action got taken five times, your account got suspended, which was the real threat. Five chances is enough that you have a good idea that power doesn’t like what you are doing, and you decided to keep doing it anyway, and it really is only a Twitter account. Some people, I think, didn’t mind too much getting suspended. Others did.
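The strike mechanic described above is simple enough to state as code. A minimal sketch, assuming the policy works the way the thread describes (one strike per enforcement action, suspension at five); the class and field names are mine, not Twitter’s.

```python
# Hypothetical sketch of a five-strike enforcement policy.
# Details (class name, threshold constant) are illustrative.

class Account:
    SUSPEND_AT = 5  # strikes before suspension, per the described policy

    def __init__(self) -> None:
        self.strikes = 0
        self.suspended = False

    def record_strike(self) -> None:
        """Log one enforcement action; suspend at the threshold."""
        self.strikes += 1
        if self.strikes >= self.SUSPEND_AT:
            self.suspended = True
```

The point of such a design is graduated deterrence: each individual action is small, but the accumulating count is the real threat.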

(The issue of account-level shadow-banning was not mentioned in this particular Twitter file, so I’m disregarding it here.)

The asymmetries are annoying – if you said something false, or widely believed false, that power did like, and it wasn’t too egregious, they did not take action. And a lot of the effects were chilling effects, or amplification effects that shape the discussion without direct censorship. The thing people were pointing at when they talked about ‘shadow banning,’ that Twitter claimed it didn’t do, was something Twitter did quite a lot, and it works.

There are counterfactual worlds, perhaps where Jack remained firmly in charge and stood up for his beliefs, where we get a much better regime that told the government where the door was located.

We have known for a long time that we don’t live in that timeline. We live in this one, where Twitter tried to balance having rules that it followed, and not shutting down speech, with what it saw as its job to help shape discussion of vaccines and other issues both related and unrelated to Covid, but especially vaccines, to avoid giving people what everyone at Twitter thought were the wrong ideas that would do harm.

I would prefer the Jack-dominated world of full free speech, even at risk of provoking a crackdown that might make things much worse than status quo.

Compared to the worlds I feared we were in, or the worlds many suspected, this edition of The Twitter Files reveals that our real situation regarding Covid was not so bad, and closer to the Jack world than we realized. Cool heads largely prevailed, procedures took the best form they reasonably could, and people did their best to do the right thing as they saw it.

I hope we can do better in the future. I’d still be happy if we avoid doing worse.


8 Responses to The Twitter Files: Covid Edition

  1. David W says:

    I don’t see the distinction between your point here that Power often gets what it wants, and your objection to the Jones Act. Or your objection to anything that McKenzie ran into with VaccinateCA. If you’re just talking about how the world is, c’est la vie, fine, this is consistent. Bad things often happen because Power would rather hide problems rather than fix them.

    But if you are talking about how things should be, inserting some random Filipino time server or Twitter programmer into a scientific debate about saving lives under uncertainty does not help us figure out what is true. It means that we systematically make suboptimal decisions, and come to mistaken conclusions.

    • magic9mushroom says:

      I think his point is “this is only moderately bad and/or of comparable badness to what we already thought Twitter was doing”.

  2. Max More says:

    “I would love to also have our rank ordering prioritize things that would cause other negative consequences, such as shutting down life or economic activity or causing fear. I also recognize that this is not practical.” Why is this any more impractical than patrolling for complex medical and epidemiological claims? It seems REALLY clear that forcing businesses to shut down is bad for the economy. It is certainly more clear than whether several types of mask work and in what circumstances, for instance.

  3. J says:

    I think you’re treating this as not a failure mode because it’s such an obvious and foreseeable failure mode.

    As you say at the end, your frame is that conditional on being in a world where Twitter fights misinformation, they seem to have done that in a not totally insane way. I think that’s conceding most of the harm in the premise.

    Twitter and other big companies and the government all loudly espoused the virtues of free speech not long ago, particularly during Arab Spring when it was destabilizing somebody else’s societies. “The algorithm” wasn’t really a thing; I mean, there were algorithms, but they weren’t political footballs yet, and the companies were trying to get as many people as possible onto their platforms, and controversy helped.

    Once the networks started to saturate, their incentives shifted to maximizing value from each user rather than drawing in as many as possible. Ideally we’d spend all day encouraging each other to buy things from their advertisers. Obviously that’s not good for the users, so they can’t do that directly. They also attracted attention from the government, which is like an advertiser with guns who pays in threats.

    BLM and covid presented a triple opportunity. They could make unprofitable users feel less welcome, feel self-righteous and satisfy their own authoritarian impulses by controlling the narrative, and share some of that control with the government.

    All the obvious objections are relevant, and all the reasons we valued their earlier free speech principles are relevant. They responded with lots of placation about how limited their interventions would be and how they were merely defending clearly True things. That wouldn’t have been enough to avoid corruption, and turns out not to be true in the first place.

    So when it comes out that they’re doing about as well as one would expect, that’s *precisely* the damning evidence:

    – They flagged lots of gray area things that are just as gray as the ones coming from the MSM. Turns out arbitrating truth on society’s biggest controversies is really hard just like we said it was when we said they shouldn’t do that.

    – They made a bunch of levers behind the scenes and lied about using them, because that’s what you do when you don’t want to admit the compromises you’re making when you try to be the arbitrator of truth, and also because you don’t want to admit you’re allowing your own biases to influence those compromises, which is exactly why they shouldn’t have gone down that path in the first place

    – While people were saying “it’s only censorship when the government does it”, the government was doing it with lots of dedicated employees, special portals, and lots of saber rattling about regulation.

    tldr: Having discarded the sacred garment of free speech and opened the screaming blood-drenched portal marked “anoint yourself the source of truth”, they proceeded to make locally reasonable choices and get the predicted outcomes. This does not absolve them, and the right course is to close the portal, give up the shiny baubles that led us to the portal, repent, and copy lines from the sacred texts until we can clearly articulate what was so important about that sacred garment in the first place.

  4. Glen Raphael says:

    Regarding #20, Dr. Martin Kulldorff’s tweet, you wrote: “‘everyone’ presumably in context means everyone over 18 or at least not young children, given this is 2021”…but if you read the question Dr. Kulldorff is ANSWERING (rather than just reading his answer) it is clear that ‘everyone’ does NOT in context mean that.

    In context – given the context of the specific question this tweet was a response to – Dr. Martin Kulldorff’s tweet is neither false nor misleading but simply straightforwardly correct. The question the doctor was ASKED was essentially “What about young kids or people who’ve just gotten and recovered from covid? Do even THEY need to be vaccinated?” Thus, the meaning of “everyone” in his reply was “everyone INCLUDING very young kids and people who have natural immunity”.

    In 2021 it was clear given THAT definition of “everyone”, it is scientifically unsupportable that “everyone” needs to be vaccinated. He was fighting for the moderate middle ground idea that it’s okay to say that SOME people need to be vaccinated now – vaccination doesn’t have to be all or nothing. If you have a choice between:
    (a) vaccinate nobody,
    (b) vaccinate everybody (including little kids and the naturally immune),
    (c) vaccinate just the people for whom there is strong evidence that it helps,

    choice (a) and choice (b) are BOTH bad ideas given that the vaccine is still in limited supply. (c) is a better answer than EITHER of those options.

    His parallel statement – saying (b) was AS scientifically flawed as (a) – was meant to highlight that they were BOTH – as you put it – “obvious nonsense”. If somebody read that tweet and assumed away the context, assumed “everyone” couldn’t possibly include, you know, *people who shouldn’t at this time get vaccinated*, that person was not reading carefully.

    …which is the whole point. This was a doctor saying a thing in his own area of expertise which ought to be part of the discussion, answering a direct question with a direct reply. By slapping a “misleading” tag on it Twitter is arguably MAKING the tweet misleading, making Power more likely to misinterpret what is being said and increasing partisan discord.

  5. Steven says:

    Appreciate the summary. I think, though, that there is too much generosity being shown toward twitter here.

    Having adopted various censorship policies they did try to have some reasonable limits to them. Some of the flagged tweets were engaging in hyperbole that arguably crossed the line. However, the line should not have existed in the first place.

    For example, in the Trump ‘optimistic’ tweet it seems to me indeed a problem that the tweet was discussed even though they ultimately did not decide to take action. The issue is the temptation they constantly felt to meddle in the discussion. Every time a discussion arose that went against their bias, there had to be a discussion and inevitably sometimes they took unjustified actions.

    Likewise, while the government may have an opinion and may advocate for their viewpoint, it is wrong for the government to try to suppress alternative opinions.

    In my view free speech should be very highly valued. The existence of these discussions is indictment enough of Twitter’s policies, regardless of the disposition of individual tweets.

    • rosencrantz says:

      > For example, in the Trump ‘optimistic’ tweet it seems to me indeed a problem that the tweet was discussed even though they ultimately did not decide to take action.

      Perhaps they could institute a free-speech policy where they clamp down on discussion of tweets. They could even shadow ban those in twitter who discuss the tweets and ensure their views are not surfaced to other twitter employees, and perhaps implement 24 hour workplace bans where those who discuss tweets are sent home to think about what they’ve done.

      • Anonymous-backtick says:

        Obviously he meant “the problem was that they discussed taking action against the tweet” and not “the problem was that they discussed anything about the tweet”, you sophist cretin.
