Paper is True

Postscript to: Rock is Strong

Always choosing Rock, broadly construed, is freerolling on the efforts of others and/or you are lying. You are, in some important sense, defecting. The problem of an information cascade is real. You are not contributing new true information, instead reinforcing the information and heuristics already out there.

Often this is the most accurate option available to you, or at least the most efficient. Doing the work yourself is hard. Many can’t do better that way and even for those who can it is a lot of work. On average those who disagree are more wrong and end up doing worse. Other times, accuracy is not what is desired. None of that makes it easy to pick the correct rock, but often it is straightforward enough.

The defection that comes from not doing your part for the long-term epistemic commons is often made up for not only by your lack of a practical option to do otherwise, but also in some cases by the gains from accuracy.

Rewarding those who worship the rock, like rewarding any form of defection, is also defecting. The Queen causes everyone to die in a volcanic eruption because she set up bad incentives and is encouraging everyone to spend down the commons.

This can also be thought of partially as explore versus exploit. The majority of actions should be exploit, and most people should mostly be exploiting. There is more variance in ability to explore than exploit, so most people should explore even less than average and let others explore and report back. Exploration is a public good, so it needs to be rewarded beyond its private payoff or there won’t be enough of it. When competition is sufficiently intense or other factors reward exploitation too much, exploration can die out entirely.
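The explore/exploit tradeoff can be made concrete with a toy multi-armed bandit. This is an illustrative sketch only; the arm payoffs and exploration rates below are invented for the example, not taken from anything in the post:

```python
import random

def run_bandit(epsilon, true_means, steps=10_000, seed=0):
    """Epsilon-greedy bandit: with probability epsilon pull a random arm
    (explore), otherwise pull the arm with the best observed average
    (exploit)."""
    rng = random.Random(seed)
    counts = [0] * len(true_means)
    est = [0.0] * len(true_means)
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon or not any(counts):
            arm = rng.randrange(len(true_means))  # explore
        else:
            arm = max(range(len(true_means)), key=lambda a: est[a])  # exploit
        reward = rng.gauss(true_means[arm], 1.0)  # noisy payoff
        counts[arm] += 1
        est[arm] += (reward - est[arm]) / counts[arm]  # running average
        total += reward
    return total / steps

means = [0.2, 0.5, 1.0]  # hypothetical payoffs of three heuristics
never_explores = run_bandit(0.0, means)   # a rock: locks in its first pick
explores_a_bit = run_bandit(0.05, means)  # mostly exploits, explores 5%
```

With epsilon at zero the agent is effectively a rock: it commits to whatever it tried first and never learns the best arm exists. A small amount of exploration, amortized across many pulls, typically recovers most of the value of the best arm, which is the sense in which exploration is underprovided when everyone exploits.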

This is the context for a Marginal Revolution post that nagged at me enough to give me the initial motivation to write this whole thing. Quoted in full, and this is not about the examples, it is about the method:

No, you don’t always have to agree with the majority of the educated people, but I would say this.  For whatever set of views you think is justified, try to stick to the versions of those views held by well-educated, reasonable, analytically-inclined people.  You will end up smarter over time, and in better places.  Peer effects are strong, including across your ideological partners.

When I hear that a particular group defends liberty, such as the Ottawa truckers’ convoy, while this is partially true it makes me nervous.  As a whole, they also seem to believe a lot of nonsense and to be, in procedural terms, not exactly where I would want them on scientific method and the like.  Fair numbers of them seem to hold offensive beliefs as well.  Whine about The Guardian if you like, but I haven’t seen any rebuttal of this portrait of the views of their leaders.  Ugh.

I recall taking a lot of heat for my 2007 critique of Ron Paul and his movement, but that example illustrates my points perfectly.  Those people did defend liberty in a variety of relevant ways, but so many of them have ended up in worse spaces.  And that is exactly what I predicted way back when.

Look for strong analytical abilities, and if you don’t see it, run the other way.

Here is a defense of the Freedom Convoy.  You can read it for yourself, but it doesn’t change my mind.  Here is I think a wiser account.  I’ll say it again: “Look for strong analytical abilities, and if you don’t see it, run the other way.”  I’m running.

When I saw this, it set off loud alarm bells in my head. Something felt deeply wrong.

Essentially it was saying that if the Wrong Kind of People, who expressed Wrong Views that oppose Narrative, were advocating or backing something, you needed to run away even if they were right this time. Either the Covid restrictions need to be lifted or they don’t. ‘The people who advocate lifting restrictions hold offensive other views’ doesn’t even have an obvious sign on its direction in terms of what it makes more likely to be true. On the one hand, you could argue those people generally have worse epistemics. On the other hand, there are a lot of offensive views that are true, so anyone who has no offensive views also does not have great epistemics. If the objection is that such folks let others know they hold offensive views, that too has its impacts on epistemics.

And since the argument ‘people who are known to hold offensive views hold this view’ is used to argue this view too is offensive or wrong, there is an obvious bias in the discourse against such views. So even if you think that offensive-view-holders are more often wrong about other things, you need to ask if the marketplace of ideas is updating too much versus not enough on that information before you too can update. Right now, it seems like there is too strong a bias against offensive-view-holders, stronger than is justified when seriously examining their other views, and that this is causing many non-offensive views such folks often hold to be wrongly discounted, and sometimes causing them to become offensive views.

Thus it seems more like ‘offensive-view-holders hold this view, therefore this view will likely become offensive even if it is true, and/or holding this view will cause others to lower your status because they think you hold offensive views,’ and advice to therefore run the other way. With a side of ‘if you hold this view it may cause you to adopt other offensive views because of the associations.’

Which is not bad advice on that level, if you care about such things, but as a Kantian imperative it gives far too much power to those who decide what is offensive and to how things look and sound, and too little power to truth.

A similar thing can be thought about Ron Paul. It does seem true that many who supported Ron Paul now are in worse spaces. Even given that this was predictable, why should it bear on the question of the quality of the ideas of Ron Paul? His statements are either true or false, his proposals worthwhile or otherwise, and ‘who supports X’ being a way to judge whether you should support X seems like letting coalitional politics (simulacra level 3 considerations) outweigh physical world modeling (simulacra level 1 considerations). Which always gives me a giant pit of horror.

Yes, there is a reasonable counter that the OP here is simply saying to stick to the smarter versions of these ideas, but that effectively translates to suggesting in-practice opposition to the practical versions that are on offer. Ron Paul.

And also that specific advice is also worded in a burn-the-commons kind of way that raises my alarms: “For whatever set of views you think is justified, try to stick to the versions of those views held by well-educated, reasonable, analytically-inclined people.”

This reads once more as a ‘do not attempt to think for yourself and decide what is true, instead rely on the opinions of others’ although at least there is room to evaluate potential others a bit. Whereas if you are aware and smart enough to figure out who such people are, it seems like you are also aware and smart enough to be one of them.

The argument that one should choose views for their peer effects scares me. I agree that peer effects of this sort are real, but going down the road where one chooses to believe things for that reason seems terrifying with lots of obvious downsides. A person doing so too seriously, in a real sense, does not have a mind.

And all of this seems to entwine the questions of who should be supported or opposed or lowered or raised in status with the question of what beliefs one should hold generally, instead of keeping them distinct, perhaps on the belief that most people cannot keep those distinct and it is foolish to suggest that they try.

One could however flip this. The above is suggesting that the OP calls for the use of a rock, but its central warning is the opposite.

It is saying beware those who are worshiping rocks, for they worship too strongly.

It is especially saying that those who worship rocks that are already giving some crazy answers now are going to give increasingly worse answers over time. Do not hitch your wagon to them.

If you are a strong analytical thinker, what you very much are not is a rock. You are not following a simple heuristic, or at least you will know when to disregard it.

Ron Paul had a mix of good and bad ideas. A more thoughtful and analytical libertarian would also have had a mix of good and bad ideas. One difference would hopefully be a better mix, with more good ideas and fewer bad ideas. The more salient difference here is the decision algorithm generating Ron Paul’s ideas. Thus, even if he happens to have a lot of ideas you agree with, when you see his other ideas or he generates new ones, they’re unlikely to be as good. The moment his ‘FREEDOM!’ rock goes wrong you’re in a lot of trouble, and you know this because you already have examples, unless one disagrees and thinks he was doing something better. That goes double for those whose rock said “RON PAUL!”

One could also say at least Ron Paul has a rock, and thus has predictable thoughts that are roughly consistent and are not corrupted by various political considerations as much as those of his rivals. Whereas the alternatives are much worse than rocks. Or alternatively, that the other candidates all have rocks that say “DO WHAT GETS YOU ELECTED” and you’d prefer an actual analytical thinker but you’ll take “FREEDOM!” over “DO WHAT GETS YOU ELECTED” any day.

Thus, one could evaluate the Convoy in a similar fashion, and assume that things are bound to go off the rails regardless of whether you agree with where they start out.

It is also a good example of how having a rock, in this case perhaps again one that says “FREEDOM!”, provides a simple heuristic but one that is not terribly useful. Like any good rock it can be interpreted any number of ways and depends on the context. Somehow the output of this rock became blocking freedom of movement. Most heuristics that seem simple are not at core simple; you still have to massage the data into the correct format, which is trickier than it sounds and often gets messed up.

It would certainly be a mistake to buy into a highly error-prone intellectual package once one notices that it is full of nonsense. One would not want to become a ‘fellow traveler’ and try to get into Aumann agreement with them or anything, or otherwise link your status wagon to it in the bigger picture, but neither of those need be the relevant standard.

No wagons need be hitched in order to evaluate what is true. Nor should it much matter that some other solution is first best. Not when evaluating claims and ideas. One needs to hold true to the Litany of Tarski, or else paper stops beating rock. Then rock will truly be strong.

Yet sometimes, at least within a given magisterium, one has to effectively write some name down on a rock, even if it does not fully substitute for your brain. If there is no simple heuristic that will work, making sure there is a non-rock at the end of the chain does seem wise.


22 Responses to Paper is True

  1. greg kai says:

    Please tell me there is a scissor post-post-scriptum coming :)

  2. John Schilling says:

    For whatever set of views you think is justified, try to *learn from* the versions of those views held by well-educated, reasonable, analytically-inclined people. But if you want those views to be implemented in reality, you’ll want to become conversant in the versions held by moderately intelligent, passionate, action-oriented people. Because those are the ones who are going to get stuff done, while your “reasonable” friends are still analyzing.

    Possibly by throwing rocks.

  3. Basil Marte says:

    > “For whatever set of views you think is justified, try to stick to the versions of those views held by well-educated, reasonable, analytically-inclined people.”
    > This reads once more as a ‘do not attempt to think for yourself and decide what is true, instead rely on the opinions of others’ although at least there is room to evaluate potential others a bit.

    > […] perhaps on the belief that most people cannot keep those distinct and it is foolish to suggest that they try

    While I don’t know OP well, I’d expect the opposite: that because for him and others in his circles keeping truth and endorsements separate is effortless, he inadvertently generalized from his experience, and while he was writing this it didn’t occur to him that many come to the set of views they think justified through a primarily social process. This is quite excusable, if his readership’s composition is sufficiently unrepresentative of the general population.

    In a social setting, this is “now that we all agree that this model is pretty good, let’s also talk about its limitations” rather than “look at these inaccuracies, this model is pretty bad”.

    • TheZvi says:

      I know OP well enough to know he can and does differentiate between what the bulk of people can and should do, and what those who are capable of more than that should do, and addresses messages in light of the audience – and I don’t think he thinks his audience is high-level enough to make this effortless. Also, I don’t think this is as effortless as you think it is even at a high level.

      • Basil Marte says:

        Perhaps I should have said “routine”. And while this is explicitly evidence that I thought wrong:
        > Finally the idea of “low-status” and “bad and wrong” have merged so fully in my mind that […] I only remember it’s true if […] <
        nonetheless the phrase "bad and wrong" near-immediately struck me as — to make a programming metaphor — a type error, particularly given the context implying it to be one thing, two synonymous words. Especially if we change it to "bad and false". While I don't know to what extent my checking operates lexically and thus what its usefulness' upper bound is, the words we use still matter (kabbalah!), and I was thinking of (a non-lexical version of) this strongly-typed thinking when I said that keeping value judgments and accuracy separate could be routine to the point of seeming effortless.

        (There's the nuance where e.g. "counts-as" is purpose-dependent and thus one could argue that, since there's non-negligible density all the way, imposing clusters like this is a mistake. One could also argue that because the dirt stuck to the lawn chair's leg straddles what people mean by "the chair" and "not the chair", talking about "the chair" is a mistake.)

        (Incidentally, some abstract programming theory: testing establishes an upper threshold of program correctness, while static code analysis establishes a lower threshold. Smart typecheckers are basically theorem-provers. If your code compiles without type errors, then it doesn't contain any mistakes of the kind it looked for.)

        On the other hand, something similar to Scott's surrounding paragraph makes complete sense even if an exaggerated version of my assumption is granted. As in, "people can't just say what they mean", and because the words we use matter, as a habit of not-saying a statement forms, this also feeds into the statement becoming less salient to the point of being forgotten. It is not the same, though; when externally prompted with the statement, this model would have no trouble recalling its truth, whereas Scott says "[…] even then, it’s in a condescending way".

        • TheZvi says:

          Sorry, I thought you meant OP here to mean Tyler rather than Scott. I don’t have that direct knowledge for Scott.

          Saying “bad and wrong” to me implies ‘wrong as in bad’ rather than ‘wrong as in false’ where they do mean the same thing. In many people’s lexicons, they are indeed the same word, and ‘wrong’ simply means what people think is bad. This is one of the examples of Scott giving a useful confession that he has lost track of the mission to some extent, so it’s up to us to keep him on or put him back on track.

          I continue to try very hard to not fall into this trap, and hope I have been successful.

        • Basil Marte says:

          By OP I did mean Tyler. I make no reference to Scott in the top-level comment.

          I know it’s an idiom to say “bad and wrong” and usually mean it to be a synonym-repetition, since the meanings of “wrong” cover the whole range even among, uh, us. That’s exactly why I explicitly substituted “false”, both because Scott repeatedly mentioned truth/falsity in the paragraph, and because trying to interpret his sentence as contrasting “low-status” with “bad” produces nonsense (it doesn’t relate to the rest of the argument plus they do overlap considerably). To make the incompatible choice of words less difficult to notice.

          (The full LARPistemology version is in the last weekly covid post, where the defense of the governor sitting maskless among masked children includes the words “false political attack”.)

  4. waltonmath says:

    >One would not want to … try to get into Aumann agreement with them

    To check, are you saying this because trying to Aumann agree with something that is very far from rational is a disaster? I actually wonder how it would even work … if you try to Aumann agree with something irrational, it seems like it would not follow the right process itself, and one would be able to notice this if one wasn’t blindly following the agreement procedure. I don’t have a rigorous understanding of this stuff, though.

    • TheZvi says:

      You do often see attempts though, or things that are like such attempts – people who will try to get together and agree on all issues to present a unified front, and the unified front is not at all logically consistent.

  5. Nate Dogg says:

    I share your dismay.

    The uncharitable reading of Tyler’s advice is: ‘If you wish to remain cool and popular, you must continue to believe what the other cool and popular people believe, or at least remain silent when you don’t.’

    Appropriate advice for a courtier who wishes to remain in favour, but not helpful for Truth seeking.

  6. As far as I can tell, “you should choose views for their peer effects” is a view that should be rejected for its peer effects. It’s objectionable for other reasons as well.

  7. It seems to me that I already “stick to the versions of those views held by well-educated, reasonable, analytically-inclined people.” By doing so I end up incredibly far from ideological agreement with well educated, reasonable, analytically-inclined people, due to the degree to which such people are following life strategies which depend upon failing the Ideological Turing Test which itself sometimes requires that both sides in a (Kayfabe) conflict preserve the norm of concealing the versions of their ideas held by well-educated, reasonable, analytically-inclined people.

    In fact, by adhering to the versions of views held by well-educated, reasonable, analytically-inclined people, I end up so far from adhering to a socially acceptable ideology that I ultimately end up extremely directly and repeatedly publicly attacked and privately threatened, by people who unmistakably think well of me but are threatened by me, or intimidated by those who are threatened by me, until people end up saying things like “I want to clarify that I don’t dislike Vassar, he’s actually been extremely nice to me, I continue to be in cordial and productive communication with him, and his overall influence on my life personally has been positive. He’s also been surprisingly gracious about the fact that I go around accusing him of causing a bunch of cases of psychosis.”

    This is not the sort of thing that happens, with anything like the frequency with which it happens to me, according to the exoteric worldview of any group of well-educated, reasonable, analytically-inclined people. Where does that leave me?

    • Anonymous-backtick says:

      Obviously you have no obligations to answer this and probably shouldn’t answer this, but as someone well outside the Bay Area who’s been generally enthralled by your comments here, on SSC, and on LW for almost a decade now, I’d be fascinated to hear what kind of psychosis-spreading you could have been accused of.

      • michealvassar says:

        You’ve seen the LW post, right?
        And this older one.

        With a careful reading, the Vaniver and Anna Salamon comments actually spell it out.

        Specifically, in the latter document, Vaniver said
        “When someone has an incomplete moral worldview (or one based on easily disprovable assertions), there’s a way in which the truth isn’t “safe” if safety is measured by something like ‘reversibility’ or ‘ability to continue being the way they were.’ It is also often the case that one can’t make a single small change, and then move on; if, say, you manage to convince a Christian that God isn’t real (or some other thing that will predictably cause the whole edifice of their worldview to come crashing down eventually), then the default thing to happen is for them to be lost and alone.

        Where to go from there is genuinely unclear to me. Like, one can imagine caring mostly about helping other people grow, in which a ‘reversibility’ criterion is sort of ludicrous; it’s not like people can undo puberty, or so on. If you present them with an alternative system, they don’t need to end up lost and alone, because you can directly introduce them to humanism, or whatever. But here you’re in something of a double bind; it’s somewhat irresponsible to break people’s functioning systems without giving them a replacement, and it’s somewhat creepy if you break people’s functioning systems to pitch your replacement. (And since ‘functioning’ is value-laden, it’s easy for you to think their system needs replacing.)” and Anna said “back in 2011, a friend complained to me that Michael would cause EAs to choose the wrong career paths by telling them exaggerated things about their own specialness. This matched my own observations of what he was doing. Michael himself told me that he sometimes lied to people (not his words) and told them that the thing that would most help AI risk from them anyhow was for them to continue on their present career (he said this was useful because that way they wouldn’t rationalize that AI risk must be false).” but later said

        “There’s more to say here, and I don’t yet know how to say it well. But the shortest version is that in the years leading up to my original comment Michael was criticizing me and many in the rationality and EA communities intensely, and, despite our alleged desire to aspire to rationality, I and I think many others did not like having our political foundations criticized/eroded, nor did I and I think various others like having the story I told myself to keep stably “doing my work” criticized/eroded. This, despite the fact that attempting to share reasoning and disagreements is in fact a furthering of our alleged goals and our alleged culture. The specific voiced accusations about Michael were not “but he keeps criticizing us and hurting our feelings and/or our political support” — and nevertheless I’m sure this was part of what led to me making the comment I made above (though it was not my conscious reason), and I’m sure it led to some of the rest of the ostracism he experienced as well. This isn’t the whole of the story, but it ought to have been disclosed clearly in the same way that conflicts of interest ought to be disclosed clearly. And, separately but relatedly, it is my current view that it would be all things considered much better to have Michael around talking to people in these communities, though this will bring friction.

        There’s broader context I don’t know how to discuss well, which I’ll at least discuss poorly:

        Should the aspiring rationality community, or any community, attempt to protect its adult members from misleading reasoning, allegedly manipulative conversational tactics, etc., via cautioning them not to talk to some people? My view at the time of my original (Feb 2019) comment was “yes”. My current view is more or less “heck no!”; protecting people from allegedly manipulative tactics, or allegedly misleading arguments, is good — but it should be done via sharing additional info, not via discouraging people from encountering info/conversations. The reason is that more info tends to be broadly helpful (and this is a relatively fool-resistant heuristic even if implemented by people who are deluded in various ways), and trusting who can figure out who ought to restrict their info-intake how seems like a doomed endeavor (and does not degrade gracefully with deludedness/corruption in the leadership). (Watching the CDC on covid helped drive this home for me. Belatedly noticing how much something-like-doublethink I had in my original beliefs about Michael and related matters also helped drive this home for me.)
        Should some organizations/people within the rationality and EA communities create simplified narratives that allow many people to pull in the same direction, to feel good about each others’ donations to the same organizations, etc.? My view at the time of my original (Feb 2019) comment was “yes”; my current view is “no — and especially not via implicit or explicit pressures to restrict information-flow.” Reasons for updates same as above.”

        In the former post, Vaniver commented “I think there’s something here where people are projecting all of the potential harm onto Michael, in a way that’s sort of fair from a ‘driving their actions’ perspective (if they’re worried about the effects of talking to him, maybe they shouldn’t talk to him), but which really isn’t owning the degree to which the effects they’re worried about are caused by their instability or the them-Michael dynamic.

        [A thing Anna and I discussed recently is, roughly, the tension between “telling the truth” and “not destabilizing the current regime”; I think it’s easy to see there as being a core disagreement about whether or not it’s better to see the way in which the organizations surrounding you are ___, and Michael is being thought of as some sort of pole for the “tell the truth, even if everything falls apart” principle.]”

        In light of these comments, might I please ask you to have a go at filling in the details?

  8. Dzhaughn says:

    The izzes of scissors
    are just what they ares:
    What shreds your fine paper
    into tiny pap-ars.

    Paper sans rock
    becomes incoherent,
    But when foundation’ly sound
    Ms. Scissors can get bent.

  9. Garrett says:

    This post reminds me of the passive vs active management debate in finance. Say there is a market that can be represented by an index, for example the S&P 500. All investors in this market have their performance evaluated using the index as their benchmark. Some managers just replicate the index (passive managers), and some try to outperform the index (active managers). Replicating the index is easy and has very low costs, so the passive managers charge a low fee (0.50% per year). Therefore, the dollar-weighted after-fee returns of the passive managers must be higher than those of the active managers, since the total benchmark-relative positioning of the active managers, and hence their before-fee relative returns, must net out to zero. This is known as the Arithmetic of Active Management.

    Now, one way active managers can “defect” is to become “Closet Indexers.” If, instead of spending a lot of money trying to beat the index, you secretly try to minimize your deviation from the index, you can collect a high fee at low cost. However, doing so amounts to overcharging your clients for the service you are providing, which is freeloading off of the work of other active managers to discover correct prices for securities. Since managers are paid as a percentage of the funds they manage, and funds tend to follow recent relative performance, a profitable strategy might be to pick stocks randomly but construct a portfolio that is very different from the index, which is therefore likely to either outperform or underperform in a big way. Then, if you get outperformance, shift to closet indexing once the funds roll in, since they’ll be sticky for a while as long as you don’t underperform by a lot. If you underperform, then just close your fund and try again.

    Active managers are trying to find incorrect prices and get rewarded with outperformance once those prices correct. Correct prices are a public good that passive managers free ride on. How much trading, or how many active managers, a market needs for prices to be close to their “true” levels is unclear, but the big trend of the last 50 years is that dollars have been shifting from active to passive management. So there are a lot fewer active managers today than there were in the past.
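    Garrett’s “Arithmetic of Active Management” argument (due to William Sharpe) can be sanity-checked with a toy model. Everything below is invented for illustration; the fee and return numbers are assumptions, not figures from the comment:

```python
import random

def simulate_market(n_active=100, passive_fee=0.0005, active_fee=0.01, seed=0):
    """Toy version of Sharpe's arithmetic. Active managers' bets are
    zero-sum relative to the market, so before fees their average return
    equals the market return; after fees, the average active manager
    must trail the passive fund by exactly the fee difference."""
    rng = random.Random(seed)
    market_return = 0.07  # assumed gross market return
    # Random active bets, recentered so deviations from the market net to zero.
    deviations = [rng.gauss(0, 0.05) for _ in range(n_active)]
    mean_dev = sum(deviations) / n_active
    deviations = [d - mean_dev for d in deviations]
    active_after_fee = [market_return + d - active_fee for d in deviations]
    passive_after_fee = market_return - passive_fee
    avg_active = sum(active_after_fee) / n_active
    return passive_after_fee, avg_active

passive, avg_active = simulate_market()
# avg_active trails passive by active_fee - passive_fee, regardless of
# how any individual manager's bets turned out.
```

    Because the deviations net to exactly zero by construction, the gap between the passive fund and the average active manager is precisely the fee difference, no matter how the individual bets are drawn.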

  10. Brian Slesinsky says:

    I like the framework of thinking about this as explore versus exploit. But with respect to your disagreement with Tyler Cowen, I’m having trouble understanding what decisions are being contemplated. Can this be analyzed as a bet, and if so, what’s the bet?

    The trucker convoy is, first of all, an item in the news, so I suppose the first decision is whether to learn more about it (explore) or ignore it (exploit a prior that many news stories can be ignored). I’m not sure what “peer effects” he’s worried about or what exactly he’s warning against. What does “run the other way” mean, practically?

    I’m wondering if he’s saying that interviews of the people involved are unlikely to be intellectually interesting?

    It seems like you’re more concerned with evaluating views than people. In which case, maybe you should look for the steelman version of a view? If someone’s bad at explaining a view, look for a better version of it.

    • TheZvi says:

      It’s more like ‘do one’s own research and be willing to disagree with any given source or most sources especially on details’ versus ‘outsource one’s opinions in broad strokes and/or details to others who seem like good analytical thinkers and do not hold offensive opinions’ or something like that.

      You exploit by relying on others. The fullest version of relying on others, of course, is to ignore the situation entirely, in some sense.

  11. George H. says:

    Zvi, Thank you so much for all your writing. You are a godsend to many. I was also disappointed with Tyler’s comment and reasoning. Besides your points, it also seems he made the mistake of trusting what he reads in the media. I find it very similar to the Joe Rogan situation. If you listen to the media, Joe Rogan is a terrible person. But having listened to Joe for years, I find everything I read about him to be not only wrong, but often backwards. It seems the same is true for the Convoy. I read all these terrible things. I’ve now watched several hours of Viva Frei live streaming from Ottawa, where he talks to people. It’s amazing, heartwarming and heartbreaking. (I start to tear up recalling pieces.) You’ve documented here some of the crazy covid policy, and it’s hard to blame people for losing trust in what they hear. Or even to blame them for who their friends are and who they listen to. We all get caught in our own bubbles. From what I hear the protesters say, they want the vaccine and mask mandates to go away. On a personal level, I feel adrift politically here in the US, but if there was a Canadian Truckers Party, I would join today. Those are my people.
    Finally, if you are going to hold a protest, invite the Canadians. Civility counts for a lot with me, and I wish we had more of it.

  12. Joe says:

    >It does seem true that many who supported Ron Paul now are in worse spaces.

    Interested in why you think this – is it personal experience or data? My own experience showed Ron Paul fans ending up in good places politically! None are MAGA. None are SJW. All are “vaxxed and done”.

    In July 2020 I had people ask me why libertarians were fine with unmarked cop cars throwing protestors in the back… literally every libertarian I know had been incensed by this and tweeted about it. (Including the three biggest names: Rand Paul, Jo Jorgensen, Justin Amash)
    There was a total disconnect between where libertarians actually were, and leftist beliefs on libertarians.
    Might something similar be happening here?

    • TheZvi says:

      I think there’s a thing where people like us know a very particular subclass of libertarian that aren’t that numerous or popular and are far more intellectual and principled – they are indeed strong analytical thinkers, basically, and I would include Amash and probably Jorgensen in that group.

      But I think that what Ron Paul did was he combined that group with another group that was not analytical thinkers, who were some combination of personal brand supporters and people getting there for what we would think are non-sequitur reasons, that this was the bulk of supporters in practice, and that they are now in ‘worse spaces.’ The math says that most of them in fact did not vote for Jorgensen.
