Previously: On R0

This new paper suggests that herd immunity could be achieved with only about 10% infected rather than the typically suggested 60%-70%.

They claim this is due to differences in connectivity and thus exposure, and in susceptibility to infection. Their best model fits place the herd immunity threshold for four European epidemics at 16%-26% for England, 9.4%-11% for Belgium, 7.1%-9.9% for Portugal, and 7.5%-21% for Spain.

This being accurate would be excellent news.

The 60%-70% threshold commonly thrown around is, of course, utter nonsense. I’ve been over this several times, but will summarize.

The 60%-70% result is based on a fully naive SIR (susceptible, infected, recovered) model in which all of the following are assumed to be true:

- People are identical, and have identical susceptibility to the virus.
- People are identical, and have identical ability to spread the virus.
- People are identical, and have identical exposure to the virus.
- People are identical, and have contacts completely at random.
- The only intervention considered is immunity. No help from behavior adjustments.

All five of these mistakes are large, and all point in the same direction. Immunity matters much more than the ‘naive SIR’ model thinks. Whatever the threshold for immunity might be for any given initial reproduction rate, it’s *nowhere near* what the naive SIR outputs.
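The naive threshold is just 1 - 1/R0: once fewer than 1/R0 of the population remains susceptible, the average case infects less than one new person. A quick sketch (the R0 values are the commonly cited range, not measurements):

```python
# The naive SIR herd immunity threshold: spread slows once fewer than
# 1/R0 of the population remains susceptible, i.e. once 1 - 1/R0 are immune.
def naive_hit(r0):
    return 1 - 1 / r0

for r0 in (2.5, 3.0, 4.0):   # commonly cited range of initial R0 guesses
    print(f"R0 = {r0}: naive threshold = {naive_hit(r0):.0%}")
```

For R0 of 2.5, 3 and 4 this gives 60%, 67% and 75%, which is exactly where the commonly quoted range comes from.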

Often they even take the number of cases with positive tests to be the number of infections, and use that to predict forward or train their model.

This naive model is not a straw man! Such obvious nonsense models are the most common models quoted by the press, the most common models quoted by so-called ‘scientific experts’ and the most common models used to determine policy.

In practice, the response is some combination of these two very poor arguments:

- “Until you can get a good measurement of this effect we will continue to use zero.”
- “Telling people the threshold is lower will cause them to take fewer precautions.”

Neither of these is how knowledge or science works. It’s motivated cognition. Period.

See On R0 for more details on that.

So when I saw this paper, I was hoping it would provide a better perspective that could be convincing, and a reasonable estimate of the magnitude of the effect.

I think the magnitude they are suggesting is very reasonable. Alas, I do not think the paper is convincing.

### The Model

The model involves the use of calculus and many unexplained Greek letters. Thus it is impressive and valid.

If that’s not how science works, I fail to understand why they don’t explain what the hell they are actually doing.

Take the model description on page four. It’s all where this letter is this and that letter is that, with non-explicit assumption upon non-explicit assumption. Why do people write like this?

I tried to read their description on page 4 and their model made zero sense. None. The good news is it made so little sense that it was obvious that I couldn’t possibly be successfully making heads or tails of the situation, so I deleted my attempt to write up what I thought it meant (again, total gibberish) and instead I went in search of an actual explanation later in the paper.

All right, let’s get to their actual assumptions on page 19, where they’re written in English, and assume that the model correctly translates from the assumptions to the result because they have other people to check for that.

They believe the infectivity of ‘exposed individuals’ is half that of infectious ones, and that this period of being an ‘exposed individual’ takes four days to develop into being infectious. Then they are infectious for an average of four days, then stop.

That’s not my model. I don’t think someone who caught the virus yesterday is half as infectious as they will be later. I think they’re essentially not infectious at all. This matters a lot! If my model is right, then if you go to a risky event on Sunday, someone seeing you on Monday is still safe. Under this paper’s model, that Monday meeting is dangerous. In fact, given the person has no symptoms yet and the person they caught it from still doesn’t, it’s very dangerous. That’s a big deal for practical planning. It makes it much harder to be relatively safe. It makes it much harder to usefully contact trace. Probably other implications as well.

What it doesn’t change much is the actual result. These are not the maths we are looking for, and their answers don’t much matter.

That’s because they’re controlled for by assuming the original R0, slash whatever assumption you make about the mean level of infectivity and susceptibility.

Technically, yes, there’s a difference. Everything is continuous, and the exact timing of when people are how infectious changes the progression of things a bit. A bit, but only a bit. If what we are doing is calculating the herd immunity threshold, you can pick any curve slope you want for exactly when people infect others. It will affect *how long it takes* to get to herd immunity. Big changes in average delay times would matter some (but again, over reasonable guesses, I’m thinking not enough to worry about) for how far we can expect to *overshoot* herd immunity before bringing infections down.

But the number of infected required will barely change. The core equation doesn’t care. Why are we jumping through all these hoops? Who is this going to convince, exactly?
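This is easy to check numerically: in a homogeneous SIR model the final size depends on R0 but not on the timing parameters. A minimal sketch (my own, not from the paper), comparing two infectious-period assumptions at the same R0:

```python
def sir_final_size(r0, gamma, dt=0.01, days=2000):
    """Forward-Euler SIR; returns the final recovered fraction."""
    beta = r0 * gamma              # transmission rate implied by R0
    s, i, r = 1 - 1e-5, 1e-5, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return r

fast = sir_final_size(r0=2.5, gamma=0.5)   # 2-day infectious period
slow = sir_final_size(r0=2.5, gamma=0.1)   # 10-day infectious period
print(fast, slow)                          # nearly identical final sizes
```

The two runs peak weeks apart, but end in essentially the same place. Only the timescale changes.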

This is actually good news. If all the assumptions in that section don’t matter, then none of them being wrong can make the model wrong.

Second assumption is that acquired immunity is absolute. Once you catch Covid-19 and recover, you can’t catch it again. This presumably isn’t *strictly* true, but as I keep repeating, our continued lack of large scale reinfection makes it more approximately true every day.

Third assumption they suggest is that people with similar levels of connectivity are more likely to connect, relative to random connections between individuals. This seems obviously true on reflection. It’s not a good full picture of how people connect, but it’s a move in the right direction, unless it goes too far. It’s hard to get a good feel for how big this effect is in their model, but I *think* it’s very reasonable.

Fourth assumption is that there is variance in the degree of connectivity, and social distancing lowers the mean and variance proportionally (so the variance as a proportion of the mean is unchanged). They then note that it is possible that social distancing decreases differentiation in connectivity, which would affect their results. I don’t know why they think about this as a one-way issue. Perhaps because as scientists they have to be concerned with things that, if true, would make their finding weaker, but can ignore speculative things that would make it stronger. They suggest a variation where social distancing reduces connectivity variance.

I would ask which directional effect is more likely here. It seems to me more likely that social distancing *increases* variance. If R0 before distancing was somewhere between 2.6 and 4, and distancing cuts it to something close to 1, that means the average person is cutting out 60% to 75% of their effective connectivity. By contrast, at least half the people I know are cutting more than 90% of their connectivity, and also cutting their physical exposure levels when connecting, on top of that. In many cases, it’s more than 95%, and in some it’s 99%+. If anything, the existing introverts are doing larger percentage cuts while also feeling better about the lifestyle effects. Whereas essential workers and kids who don’t care and those who don’t believe this is a real thing likely are not cutting much connectivity at all.

I’ve talked about it enough I don’t want to get into it beyond that again here, but I’d expect higher variance distributions than before. The real concern is whether the connectivity levels during distancing are no longer that correlated to those without distancing, because that would mean we weren’t getting the full selection effects. The other hidden variable is if people who are immune then seek out higher connectivity. That effectively greatly amplifies social distancing. Immunity passports two months ago.

Fifth, they modeled ‘non-pharmaceutical interventions’ as a gradual lowering of the infection rate. This is supposed to cover masks, distancing, hand washing and such. They said 21 days to implement distancing, then 30 days at max effectiveness, then a gradual lifting whose speed does not impact the model’s results much.

They then take the observed data and use Bayesian inference to find the most likely parameters for their model.

To do that, they made two additional simplifying assumptions.

The first was that the fraction of cases that were identified is a constant throughout the period of data reported. This is false, of course. As time went on, testing everywhere improved, and at higher infection rates testing gets overwhelmed more easily and people are less willing to be tested. They are using European data, which means there might be less impact than this would have in America, but it’s still pretty bad to assume this is a constant and I’m sad they didn’t choose something better. I don’t know if a different assumption changes their answers much.

The second was that local transmission starts when countries/regions report 1 case per 5 million population in one day. An assumption like this seems deeply silly, like flipping a switch, but I presume the model needed it and choosing the wrong date to start with would be mostly harmless. If it would have a substantial impact, then shall we say I have concerns.

They then use the serological test in Spain and used it to calculate that the reporting rate of infections in Spain was around 6%. That seems to me to be on the low end of realistic. If anything, my guess would be that the serological survey was an undercount, because it seems likely some people don’t show immunity on those tests but are indeed immune, but the resulting number seems relatively low so I’ll accept it.

They then use the rate of PCR testing relative to Spain in the other countries to get reporting rates of 9% for Portugal, 6% for Belgium and 2.4% for England. That 2.4% number is dramatically low given what we know and I’m suspicious of it. I’m curious what their guess would be for the United States.

Then they took the best fit of the data, and produced their model.

### Anyone Convinced?

Don’t all yell at once. My model doesn’t think anyone was convinced. Why?

The paper doesn’t add up to more than its key insight, nor does it prove that insight.

Either you’re going to buy the core insight of the paper the moment you hear it and think about it (which I do), in which case you don’t need the paper. Or you don’t buy the core insight of the paper when you hear it, in which case nothing in the paper is going to change that.

The core insight of the paper is that if different people are differently vulnerable to infection, and different people have different amounts of connectivity and exposure, and those differences persist over time, then the people who are more vulnerable and more connected get infected faster, and thus herd immunity’s threshold is much lower.

Well, no shirt, Sherlock.

If the above paragraph isn’t enough to make that point, will the paper help? That seems highly unlikely to me. Anyone willing to think about the physical world will realize that different people have radically different amounts of connectivity. Most who think about the physical world will conclude that they also have importantly different levels of vulnerability to infection and ability to infect, and that those two will be correlated.

Most don’t buy the insight.

Why are so few people buying this seemingly trivial and obvious insight?

I gave my best guess in the first section. It is seen as an argument, and therefore a soldier, for not dealing with the virus. And it is seen as not legitimate to count something that can’t be quantified – who are you to alter the hard numbers and basic math without a better answer you can defend? Thus, modesty, and the choice of an estimate well outside the realm of the plausible.

Add in that most people don’t think about or believe in the physical world in this way, as something made up of gears and cause and effect that one can figure out with logic. They hear an expert say ‘70%’ and think nothing more about it.

Then there are those who do buy the insight. If anything, I am guessing the paper *discourages* this, because its most prominent effect is to point out that accepting the insight implies a super low immunity threshold, thus causing people to want to recoil.

Once you buy the insight, we’re talking price. The paper suggests one outcome, but the process they use is sufficiently opaque and arbitrary and dependent on its assumptions that it’s more proof of concept than anything else.

It’s mostly permission to say numbers like ‘10% immunity threshold’ out loud and have a paper one can point to so one doesn’t sound crazy. Which is useful, I suppose. I’m happy the paper exists. I just wish it was better.

There’s nothing especially obviously *wrong* with the model or their final estimate. But that does not mean there’s nothing wrong with their model. Hell if I know. It would take many hours poring over details, and likely implementing the model yourself and tinkering with it, before one could have confidence in the outputs. Only then should it provide much evidence for what that final price should look like.

And it should only have an impact then if the model is in practice doing more than stating the obvious implications of its assumptions.

If this paper did convince you, or failed to convince you for reasons other than the ones I give here, I’m curious to hear about it in the comments.

### How Variant is Connectivity Anyway?

I think very, very variant. I hope to not repeat my prior arguments too much, here.

Out of curiosity, I did a Twitter poll on the distribution of connectivity, and got this result with 207 votes:

Divide USA into 50% of non-immune individuals taking relatively less Covid-19 risk and 50% taking relatively more. What % of total risk is being taken by the safer 50%?

- Less than 10%: 27.5%
- 10%-15%: 22.2%
- 15%-25%: 19.3%
- 25%-50%: 30.9%

I would have voted for under 10%.

This is an almost exactly even split between more or less than 15%, so let’s say that the bottom 50% account for 15% of the risk, and the other 50% account for 85% of the risk.

If we assumed the nation was only these two pools, and people got infected proportionally to risk taken, what does this make the herd immunity threshold?

Let’s continue to be conservative and assume initial R0 = 4, on the high end of estimates.

For pure SIR, immunity threshold is 75%.

With two classes of people, immunity threshold is around half of that.

Adding even one extra category of people cuts the herd immunity threshold by more than half, all on its own.

If this 85/15 rule is even somewhat fractal, we are going to get to herd immunity *very* quickly.
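Here is one way to formalize that two-group calculation (my own sketch, not the spreadsheet version; it assumes connectivity counts for both catching and spreading, and each group is depleted in proportion to its exposure):

```python
import math

R0 = 4.0
groups = [(0.5, 0.3), (0.5, 1.7)]  # (population share, relative connectivity)

def r_eff(exposure):
    """Effective R after cumulative exposure, assuming each group's susceptible
    fraction is exp(-c * exposure) and transmission weight scales with c^2
    (connectivity counts for both catching and spreading)."""
    num = sum(p * c * c * math.exp(-c * exposure) for p, c in groups)
    den = sum(p * c * c for p, c in groups)
    return R0 * num / den

# Bisect on cumulative exposure until R_eff = 1, then read off how many
# people have been infected at that point.
lo, hi = 0.0, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if r_eff(mid) > 1:
        lo = mid
    else:
        hi = mid
threshold = 1 - sum(p * math.exp(-c * lo) for p, c in groups)
print(f"naive: {1 - 1 / R0:.0%}  two-group: {threshold:.0%}")
```

With these particular assumptions it lands somewhat above the ‘around half’ figure above; the exact number is sensitive to exactly how risk enters, but every reasonable version lands far below the naive 75%.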

Hopefully this was a good basic intuition pump for how effective such factors can be – and it seems more convincing to me than the paper was.

Daniel’s simulations from the comments show a smaller effect from some continuous functions, and motivate me to do more explorations here at some point in the coming week. There are a lot of different factors to consider and no alternative to getting one’s hands dirty and writing the code.

### Can We Quantify This Effect?

Yes. Yes, we can.

We haven’t. And we won’t. But we could!

It would be easy. All you have to do is find a survey method that generates a random sample, and use it to measure the distribution of connectivity. Then give everyone antibody tests and examine the resulting data. For best results on small samples, also give the survey to people who have already tested positive.

This is *not *a hard problem. It requires no controlled experiments and endangers no test subjects. It has *huge *implications for policy. Along the way, you’d also be able to quantify risk from different sources.

Then you can use that data to create the model, and see what threshold we’re dealing with.

*That’s *the study that needs to happen here. It probably won’t happen.

Until then, this is what we have. It’s not convincing. It’s not making me update. But it is a study one can point to that supports an obviously correct directional update, and comes up with a plausible estimate.

So for that, I want to say: Thank you.

I’m not buying it: not their model, not your model.

I just want to run some computer code and do a bunch of simulations because too much of this seems like gut instinct reasoning which I don’t like.

I hope you get a chance to run the computer code and do the sims, or build a simpler model on a spreadsheet, which is generally the way I prefer to roll.

Assuming this conclusion is true, and herd immunity can be achieved with 10-20% infected, what would the optimal policy response to COVID-19 be?

We’d want the low connectivity people to keep playing it safe while the virus burns through the high connectivity population. And we probably also want to balance the speed at which high connectivity people get infected: faster spread means herd immunity is reached more quickly, and even low connectivity people can get back to normal soon; but faster spread also raises the risk of an overwhelmed health system.

I wonder if requiring masks in most circumstances, plus quarantine of inbound travelers, would be the right choice. Low connectivity people would continue taking voluntary steps to stay uninfected. The masks wouldn’t reduce spread all the way so high connectivity people would still get the virus, but spread might be slowed enough to allow health systems to keep up. And once herd immunity is achieved and low connectivity people become medium connectivity people, quarantining visitors would prevent further flare-ups.

At this point, if we think we’re approaching the threshold already, it would make sense to be aggressive with masks anyway. They reduce severity of infection. If we’re close to threshold, the policy *should* be to manage getting to threshold quickly.

Presuming you cannot go for the New Zealand approach of 0 cases, the optimum approach is to only lock down once you suspect that your hospitals/ICUs might go above 100% capacity. Otherwise don’t do anything besides asking people to wear masks and socially distance themselves. What happened instead in many countries/states was a situation where the initial lockdown happened way too early, before any herd immunity could be built up, while also failing to reach the 0 cases threshold. As a result you get a massive flare-up as soon as you reopen, which means your lockdown has gone to waste.

Israel and Argentina would be an example of a bad policy – they’ve locked down too hard and too fast (remember that video of policemen chasing people around in a park in Buenos Aires?), so now they’re in a massive wave that might threaten their hospital system. NYC is on the other side of the spectrum – locked down way too late, so their ICUs failed to meet the demand at the peak. We’ll probably see a similar effect in countries like Canada or Norway, sometime in September or October.

Sweden is a good example – did a little bit of interventions, just barely enough to keep the hospitals from overstretching at the peak. Washington state in the US is a good example too – the death rate has been constant since March, spreading the impact evenly. The absolute best policy (short of NZ-style elimination) would keep ICU usage at ~90% for three to four months, after which the pandemic will reach its end, but this is obviously difficult to execute in practice.

“They then use the serological test in Spain and used it to calculate that the reporting rate of infections in Spain was around 6%. That seems to me to be on the low end of realistic. If anything, my guess would be that the serological survey was an undercount”

I’m a little confused by this sentence. If the serological survey is an undercount, shouldn’t the true reporting rate of infections be lower than 6%? On the other hand, you’re saying it’s on the low end of realistic, meaning that you personally believe this number to be higher than 6%?

I’m saying they took the numbers at face value in their model, whereas I think the number was likely somewhat low.

“Until you can get a good measurement of this effect we will continue to use zero.”

Can Sweden be used as proof of what the true herd immunity threshold looks like, given that they’ve been in exponential decay of weekly deaths since April? They’re still rejecting the use of masks and their mobility index has been above January levels since early May. Or how about NYC, which was able to maintain 1% positivity rates for so long now, despite the reopenings, the protests and the increase of mobility?

Sweden is suggestive, but could also be a case of interventions other than herd immunity, or a combination of the two. It’s helpful but not a slam dunk.

My naive question: Doesn’t saying we’re at herd immunity change behaviors, changing the way that 35% calculation works? Or does saying we’re at herd immunity imply the virus has nearly died out in the population at that point, so people changing their behaviors won’t influence the growth rate?

I notice that Swedish case rates seem to be in slow decline again, after rising in June. Is there an explanation for that *other* than partial herd immunity? I suppose they could be social distancing more but my intuition says they’re probably distancing less as time goes on.

I’ll try reading this paper when I’m not also trying to go to sleep.

But I ran my own simulation and found that the effects just aren’t that big:

A bunch of people proposed exponential distributions with left cliffs, but those didn’t change the results interestingly.

Is your model different?

I did something more simplified/basic in Excel and solved for the point where R=1 for one currently infected person, rather than running a sim, in order to get this out quickly and because I didn’t think about actually writing code. I did it as two groups with constant percentages, to make the math easier and to justify the assumption from the poll. It’s possible that this was kind of a ‘best-case’ division, because most of the risky side gets sick and not much of the safe side, which gets you to a 75% reduction; it’s kind of a semi-clean cut.

What you are doing there is in many ways more interesting and useful! If I have some time I’ll write some code myself and do something similar, since this seems really simple to actually code now that you point it out. I’ll likely start from scratch so I can use C#, because when I code in Python the code ends up pretty bugged, and it seems worth trying a bunch of variations. Dunno if I’ll get to this today.

Questions I’d ask about these distributions to get a better feel for what they mean are things like: “What % of connectivity comes from the safest 50%? The safest 25%?” I’d also want to know the actual sigma. I think the paper was thinking something like 5?

The biggest weird thing is that the 4th sim and 5th sim have similar turning points but substantially different *final* infection rates. That’s interesting.

My 5-minute guess is that the most important variable is the number of people whose connection level is well below 1. Extreme tails don’t matter much – mostly immune isn’t that different from immune here, and ‘catches it on day 1’ versus ‘catches it by week 3’ also doesn’t matter much. Which explains how I got a stronger outcome using a bi-modal distribution. Interesting.
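To make that guess concrete, here’s a quick final-size calculation (my own sketch, not Daniel’s simulation or the paper’s model; it assumes connectivity scales both susceptibility and infectiousness under proportionate mixing, with R0 matched across scenarios):

```python
import random
from math import exp

R0 = 2.5

def final_size(samples):
    """Iterate the heterogeneous final-size fixed point
    L = tau * mean(c * (1 - exp(-c * L))), with tau chosen so the
    connectivity-weighted early reproduction number tau * E[c^2] / E[c]
    equals R0. Returns the overall attack rate."""
    n = len(samples)
    mean_c = sum(samples) / n
    mean_c2 = sum(c * c for c in samples) / n
    tau = R0 * mean_c / mean_c2
    big_l = 1.0
    for _ in range(100):
        big_l = tau * sum(c * (1 - exp(-c * big_l)) for c in samples) / n
    return sum(1 - exp(-c * big_l) for c in samples) / n

random.seed(0)
homogeneous = final_size([1.0] * 1000)
bimodal = final_size([0.3, 1.7] * 500)        # the two-group poll split
exponential = final_size([random.expovariate(1.0) for _ in range(20_000)])
print(homogeneous, bimodal, exponential)
```

The gap between the homogeneous number and the other two is the point; exactly how far a given distribution drops the final size depends on how much of its mass sits well below the mean, which matches the intuition above.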

I think this model might be too simplistic. Let’s imagine a very basic social network:

Group A: 5 individuals (A1-A5) connected to each other

Group B: 5 individuals connected to each other. B1 is connected to A1

Group C: 5 individuals connected to each other, C1 is connected to B2

In this configuration, if we first infect A2, the infection could only spread to group C if A2 infects A1 who in turn manages to infect B1 who infects B2 who infects C1. But in the simplified model no such connections are taken into account and group C can always get infected on the next step.

This would roughly match my own personal situation: excluding locations where I’m wearing a mask (e.g. the grocery store), the only people I spend long periods of time with indoors are part of my close friend group. And I suspect the situation is similar for most other people, although friend group sizes would vary from 0 to 100 based on the individual. And of course there would be a few superspreaders with a “friend” group in the hundreds like that lady from South Korea who spread it around her church, plus hugely interconnected groups of thousands of people, like meat packing plants.

I haven’t done the research, but I suspect there should be software already for generating realistic social network graphs. After that’s done, modeling the pandemic should be fairly straightforward.

The somewhat-valuable part of the paper is page 18, although I don’t have an intuitive sense of what’s a plausible CV for a gamma distribution.

The problem is that they try to fit their model to actual observed epidemics, and there are just too many degrees of freedom. What drives this home for me is on page 6, where they note that HIT estimates are robust to changes in model. I’m pretty sure this is because the empirical HIT has to match the percent immune/infected when total cases peak, and so is entirely driven by your estimate of that number. (Well, estimated effectiveness of social distancing will play into it also)

That certainly seems right. Trying to fit to the observed epidemic, when the observed epidemic isn’t even the full epidemic, seems pretty doomed with this much freedom.

Also, I can explain the equations on page 4. Note that the left-hand sides of the equations are using the [dot above the function](https://en.wikipedia.org/wiki/Notation_for_differentiation#Newton's_notation) to indicate the time derivative. x stands for a level of innate susceptibility, so e.g. S(x) is the number of people with innate susceptibility level x who are currently susceptible (i.e. not exposed, infectious, or immune). I’m going to write e.g. S’ where they write S-dot and substitute latin for greek letters, for ease of writing.

(1) S'(x) = -L(x) * x * S(x)

(My L is their lambda)

S(x) decreases over time as susceptible people are exposed. The number of people of susceptibility x who get exposed per unit time is proportional to S(x), x (because x is a level of innate susceptibility), and the “force of infection” L(x). They define this as

L(x) = b/N * integral((r * E(y) + I(y)) dy)

(my b is their beta, my r is their rho)

b is a parameter for how infectious the virus is, r is a parameter for how infectious exposed people are on average relative to infectious people, N is total population. The integral is over level of susceptibility; the result basically means ‘the magnitude of the infectious population, weighted by infectiousness’. So L(x) is basically (the proportion of the population which is currently infectious) * (a parameter representing how infectious a given infectious person is).

Sidenote: L(x) doesn’t depend on x. This contradicts the ‘third assumption’ in your post. This is okay because that assumption was something they explored in their appendix, not part of the main model. The fourth assumption was also an appendix-exploration thing FYI.

(2) E'(x) = L(x) * x * (S(x) + s * R(x)) – d * E(x)

(My s is their sigma; my d is their delta)

E(x) increases over time as susceptible people become exposed and decreases as exposed people progress to the ‘infectious’ stage. d is a parameter for the rate at which that progression occurs, and s is a parameter for reinfection rate (most of their analysis sets this to zero). Note that the first term is just -S'(x) if s = 0.

(3) I'(x) = d*E(x) – g*I(x)

(d is delta, g is gamma)

I(x) increases as exposed people progress to infectious and decreases as infectious people progress to recovered-or-dead. The respective parameters express the rate at which these progressions occur.

(4) R'(x) = (1 – f) * g * I(x) – s * L(x) * x * R(x)

(f is phi, g is gamma, s is sigma, L is lambda)

R(x) increases as infectious people progress to recovered-or-dead. f is a parameter for the fraction who die, hence the factor of (1 – f). R(x) decreases as recovered people are reinfected, if that’s a thing; note that the second term is basically equation 1 but with R(x) in place of S(x) and an extra factor of s to allow for partial immunity.

Overall, this model makes sense to me, and I trust the results they derive from it analytically (though I haven’t replicated the derivation). The model-fitting part, not so much.
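For anyone who wants to poke at those four equations, here is a minimal forward-Euler discretization over a discrete set of susceptibility levels x (parameter values are illustrative guesses, not the paper’s fitted ones):

```python
def run_model(levels, weights, beta=0.4, rho=0.5, delta=0.25, gamma=0.25,
              sigma=0.0, phi=0.0, days=500, dt=0.05, seed=1e-5):
    """Forward-Euler integration of equations (1)-(4) over a discrete set of
    susceptibility levels x, with total population N = 1 (weights sum to 1)."""
    n = len(levels)
    S = [w * (1 - seed) for w in weights]   # susceptible
    E = [w * seed for w in weights]         # exposed
    I = [0.0] * n                           # infectious
    R = [0.0] * n                           # recovered
    for _ in range(int(days / dt)):
        # force of infection: L = beta/N * integral(rho*E(y) + I(y) dy)
        lam = beta * sum(rho * e + i for e, i in zip(E, I))
        dS = [-lam * x * s for x, s in zip(levels, S)]
        dE = [lam * x * (s + sigma * r) - delta * e
              for x, s, r, e in zip(levels, S, R, E)]
        dI = [delta * e - gamma * i for e, i in zip(E, I)]
        dR = [(1 - phi) * gamma * i - sigma * lam * x * r
              for x, i, r in zip(levels, I, R)]
        for j in range(n):
            S[j] += dt * dS[j]
            E[j] += dt * dE[j]
            I[j] += dt * dI[j]
            R[j] += dt * dR[j]
    return S, E, I, R

# Two susceptibility levels, equal shares, no reinfection (sigma=0), no deaths
S, E, I, R = run_model([0.3, 1.7], [0.5, 0.5])
print(sum(R))   # final attack rate
```

With sigma and phi at zero the four compartments conserve total population exactly, which is a handy sanity check on any reimplementation.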


Cochrane seems to think a high percentage will be necessary.

https://westhunt.wordpress.com/2020/05/13/how-far-weve-come/

Where and why do you disagree? (Specifically on the point he makes of the models working in the past.)

Your link isn’t to Cochrane so I can’t use it to know what their logic is. Did you mean to link to something else?

*cochran. West hunter.

The link is right.

Is cochrane someone else?

Regardless, the post linked is the one I’m interested in seeing a response from you on.

Especially your take on SIR models working in the past