You Have the Right to Think

Epistemic Status: Public service announcement. We will then return to regularly scheduled programming.

Written partly as a response to (Robin Hanson): Why Be Contrarian?, which in turn responds to the book Inadequate Equilibria by Eliezer Yudkowsky.

Warning: Applause lights incoming. I’m aware. Sorry. Seemed necessary.

We the people, in order to accomplish something, do declare:

You have the right to think.

You have the right to disagree with people where your model of the world disagrees.

You have the right to disagree with the majority of humanity, or the majority of smart people who have seriously looked in the direction of the problem.

You have the right to disagree with ‘expert opinion’.

You have the right to decide which experts are probably right when they disagree.

You have the right to disagree with ‘experts’ even when they agree.

You have the right to disagree with real experts that all agree, given sufficient evidence.

You have the right to disagree with real honest, hardworking, doing-the-best-they-can experts that all agree, even if they wouldn’t listen to you, because it’s not about whether they’re messing up.

You have the right to have an opinion even if doing a lot of other work would likely change that opinion in an unknown direction.

You have the right to have an opinion even if the task ‘find the real experts and get their opinions’ would likely change that opinion.

You have the right to update your beliefs based on your observations.

You have the right to update your beliefs based on your analysis of the object level.

You have the right to update your beliefs based on your analysis of object-level arguments and analysis.

You have the right to update your beliefs based on non-object level reasoning, on any meta level.

You have the right to disagree with parts of systems smarter than you, that you could not duplicate.

You have the right to use and update on your own meta-rationality.

You have the right to believe that your meta-rationality is superior to most others’ meta-rationality.

You have the right to use as sufficient justification of that belief that you know what meta-rationality is and have asked whether yours is superior.

You have the right to believe the object level, or your analysis thereof, if you put in the work, without superior meta-rationality.

You have the right to believe that someone else has superior meta-rationality and all your facts and reasoning, and still disagree with them.

You have the right to believe you care about truth a lot more than most people.

You have the right to actually care about truth a lot more than most people.

You have the right to believe that most people do care about truth, but also many other things.

You have the right to believe that much important work is being and has been attempted by exactly zero people, and you can beat zero people.

You have the right to believe that many promising simple things never get tried, with no practical or legal barrier in the way.

 

You have the right to disagree despite the possible existence of a group to whom you would be wise to defer, or claims by others to have found such a group.

You have the right to update your beliefs about the world based on clues to others’ anything, including but not limited to meta-rationality, motives including financial and social incentives, intelligence, track record and how much attention they’re paying.

You have the right to realize the modesty arguments in your head are mostly not about truth and not useful arguments to have in your head.

You have the right to realize the modesty arguments others make are mostly not about truth.

You have the right to believe that the modesty arguments that do work in theory mostly either don’t hold in practice or involve specific other people.

You have the right to not assume the burden of proof when confronted with a modesty argument.

You have the right to not answer unjustified isolated demands for rigor, whether or not they take the form of a modesty argument.

You have the right, when someone challenges your beliefs via reference class tennis, to ignore them.

You have the right to disagree even when others would not, given your facts and reasoning, update their beliefs in your direction.

You have the right to share your disagreement with others even when you cannot reasonably provide evidence you expect them to find convincing.

You do not need a ‘disagreement license,’ of any kind, implied or actual, to do any of this disagreeing. To the extent that you think you need one, I hereby grant one to you. I also grant a ‘hero license’, and a license license to allow you to grant yourself additional such licenses if you are asked to produce one.

You do not need a license for anything except when required by enforceable law.

You have the right to be responsible with these rights and not overuse them, realizing that disagreements should be the exception and not the rule.

You have the right to encourage others to use these rights.

You have the right to defend these rights.

Your rights are not limited to small or personal matters, or areas of your expertise, or where you can point to specific institutional failures.

Congress has the right to enforce these articles by appropriate legislation.

Oh, and by right? I meant duty. 


41 Responses to You Have the Right to Think

  1. benquo says:

    I love the last line, but don’t think you’d happily s/right/duty/g with no other mods. I wouldn’t.

    Agreed on substance that there are duties here; withholding judgment is a choice, and often quite a nasty one.

    • TheZvi says:

I faced a trilemma here: Finish it in reasonable time, make it short and read well the first time while saying everything that needed saying, have it work exactly right when you go s/right/duty/g. I chose to sacrifice the third thing. I’m curious if 2 & 3 together was a doable thing, and if I find ways to edit and get it closer to working, I will do that. I’m sad about the sacrifice!

      I agree that some of these would need a qualification to work as duties as written (e.g. a “when your world model disagrees”, or a “given sufficient evidence”, and often a “when the stakes are high enough”), but I also think that’s how most duties work, they’re duties under appropriate circumstances, and not all of them can have top priority.

      Also, I fixed one that glaringly didn’t work as written; I modified it to read better the first time and didn’t realize how badly it broke under modification (the sharing disagreement one)

      • benquo says:

        I think the thing that gets lost in translation is that you *also* have a right and duty to prioritize. Having finite reasoning capacity, you also have both the right and duty to believe and act based on incomplete information, even when this will predictably lead to sometimes causing harm relative to what you’d do if you withheld judgment or action until you were better informed, so long as in expectation (again, from your limited-information state) it’s the better thing to do.

  2. Quixote says:

    A lot of these things that you remark people have a right to do may be ‘allowed’, but they are not necessarily wise. For example, almost all of
    “You have the right to disagree with ‘experts’ even when they agree.
    You have the right to disagree with real experts that all agree, given sufficient evidence.
    You have the right to disagree with real honest, hardworking, doing-the-best-they-can experts that all agree, even if they wouldn’t listen to you, because it’s not about whether they’re messing up.
    You have the right to have an opinion even if doing a lot of other work would likely change that opinion in an unknown direction.
    You have the right to have an opinion even if the task ‘find the real experts and get their opinions’ would likely change that opinion.”
are usually going to go badly.

    If, every time someone was about to disregard a strong consensus among experts that had worked hard on a problem, I had the opportunity to bet them 100USD that the experts were right and they were wrong, I would swiftly become one of the wealthiest people on earth.

Not to say one should never disagree with experts, but I think scenarios where it’s actually “wise” as opposed to just allowed look a lot more like “I worked in this industry for 10 years and assumptions commonly made in modeling it by academics don’t match actual industry practice” and a lot less like “there is some meta level reason, difficult to empirically test, that indicates I’m right.”

    • TheZvi says:

      If we added a few qualifications I would mostly agree with that in the strong case of strong consensus of expert opinion that is known to you. And of course, I think you get to have an opinion before you learn the expert one, even if you know the expert one would change your opinion, and even if you would be wise to figure out what it was if your opinion on the matter was going to be important.

  3. robinhanson says:

    I don’t get how my discussing doubts on the rationality of disagreement threatens to take away your “rights”. There is a sense in which you have the right to be stupid, and the right to be wrong. But once we are talking about which beliefs are rational, you aren’t entitled to think any belief you want in any context to be rational, right? http://www.overcomingbias.com/2006/12/you_are_never_e.html

    • benquo says:

      It seems to me like part of the problem here is that there are two simultaneous arguments. There’s the epistemological argument about what an ideal reasoner should do, given that there are other reasoners in the world with different opinions and information. And then there’s a social argument about how shameful it should be to disagree with consensus. The end of Eliezer’s recent sequence on modesty is basically claiming that many supposed modesty arguments are in fact just rationalizations for status-regulation emotions and not actual epistemic reasoning. Modesty is not about epistemics, basically.

      You seem to be responding exclusively on the epistemological dimension, and Zvi seems to be trying to respond to the shadow that argument casts on the social dimension. The social dimension is really important here, and worth understanding, though I think it’s also important to keep track of when we’re talking about one, and when we’re talking about the other.

      • TheZvi says:

I think there are actually far more than two disagreements! Although they’re closely linked.

        There are AT LEAST these disagreements, I think:

        There’s what an ideal reasoner would do.
        There’s how to tell how ideal a reasoner you are, and if you should be confident in that.
        There’s what a realistic reasoner should do under practical conditions to get the best estimates.
        There’s what practical conditions actually are, and how modest an approach they justify.
        There’s what a realistic/ideal reasoner should do under practical conditions to get the best results (which, as Robin agrees, may or may not mean maximizing your accuracy).
        There’s what a realistic/ideal reasoner should do, given the correlations between what they and other reasoners will choose to do, to cause the group to get maximum accuracy and/or results/utility.
        There’s what recommendations are best to give to readers slash other persons, when we don’t know how meta-rational or otherwise skilled they are, or other things.
        There’s what such actions we should reward and punish, and advocate for or against, in this realm, in order to maximize group results.
As usual, there are the meta versions of all such debates, but those debates are implied.

        I think Robin is most interested in the epistemic arguments, as my model would have predicted, and I think that they are also the ones he and I should mostly engage in, since the others depend on those and not vice versa, and I think we have more important disagreements there.

      • TheZvi says:

On reflection and after talking in person, I think Benquo’s two-arguments framing is much closer to the truth / gives a better picture of the central situation than saying there are many disagreements. The ‘shadow argument’ is something important and was the primary motivator for writing the original post.

    • benquo says:

      Trying to recast this back onto the epistemic dimension, I get something like:

      There is a type of epistemic modesty you would favor as an error-minimizer. You have a right and duty to employ this sort of epistemic modesty.

      There is a type of epistemic modesty you would favor as an accountability-minimizer (i.e. CYA reasoning). You have failed in your duty to reason well if you are trying to be minimally culpable instead of maximally correct.

      • robinhanson says:

        While I agree that people may want to be modest for status reasons, to protect themselves from status assaults, people can also want to be contrarian for status reasons, as a way to bid for high status. Rather than accuse particular people of particular bad motives, I’d rather talk about when it makes sense to disagree, if you had good motives. That’s what I tried to talk about in my post. You think Zvi instead sees me as making a status attack on his lack of modesty?

      • TheZvi says:

        I agree that one should discourage accountability-minimizing, and think the question of how error-minimizing to be is a hard one (e.g. Robin and I agree that when trying to get big gains, one should not fully error-minimize for that reason). I also think full error-minimization leads to bad incentives for others on many levels, and also destroys information, and often leads in practice to errors similar to the big-gains problem, so I think the right approach to that is really complicated. I’ll get into that elsewhere.

        I definitely don’t think Robin is attacking me for lack of modesty! Everyone here has motives about as good as it gets, and I also agree with Robin that such questions aren’t that interesting and that talking about them is much less productive.

    • TheZvi says:

      I replied (at great and probably too much length, apologies for not having time to make it shorter etc) as a separate comment at the bottom.

    • TheZvi says:

While the long response below shares some (but far from all) of my object-level objections and concerns with Robin’s post, which are important in their own right, only in the opening paragraph and quite weakly does it actually answer Robin’s confusion about “rights” or the implied better question of why I wrote a series of assertions and applause lights rather than an argument. This is indeed quite fishy and weird, and I owe a complete explanation that probably *should* be a post once I edit it properly. Both Ben in person and Sarah with her response emphasize this need, so here we go (I’ll also continue the thread at the bottom, which I also think is useful).

Robin’s post filled me with boiling rage. It sends the unmistakable (to me, anyway) message that one needs a license to disagree, justified by formal argument and outside view, or one is blameworthy and irrational for doing so. That one does not simply get to disagree, is not entitled to disagree, merely because one has considered the evidence and reached a conclusion. The burden of proof automatically goes *to you* when you want to disagree with an expert or otherwise superior person (how they got that title is not the point). You must apply formally to the designated meta-rationality expert authorities with the admissible evidence, at which point they may grant you a limited-scope disagreement license, if they so choose to waive the rest of their endless set of possible objections at their discretion.

      It does this thinking (from my reading) that it’s doing the opposite, which made it that much more maddening. It was saying, Eliezer has identified several things one can put on such an application, and you were willing to grant some (but not yet all) of them. One can use the “do a big thing” exception. One can use the “inadequate equilibrium” exception from the title, and you added the Hansonian “experts think too highly of their own field and are giving motivated answers” exception. Basically, for a local personal practical use case, you were willing to grant the license. But for disagreements between experts, or any general claim based on any evidence whatsoever to claim that one has superior meta-rationality or otherwise go through Eliezer’s claim #3, you say: No, not yet. Perhaps such a claim is true, but Eliezer has not met his burden and likely neither have you, so you are denied such license. You *want* to grant that license, and wish Eliezer had proven his case, but rule that he did not and thus are disappointed and hope we can together so prove it in the future. But for now not even Eliezer gets this one, let alone most readers.

      Which is all, in its own way, quite good! Given that people feel, emotionally and internally, that they need a license, with the voice of Pat Modesto in their heads (he’s internal, of course), or they feel that others will treat them as if they need a license, or others will feel that they need to show that they think others need a license, either to prove that they think they themselves need one or to be seen reinforcing the whole structure, and so on endlessly, it’s very good that people get carve-out exceptions in such places. They need them! And it would be even better if we could get them bigger and better carve outs, if they’re going to go on needing them.

When I saw you engage on the object-level, I saw an opportunity to engage on the object-level, and thought that was important too and more likely to get better engagement here. I do feel like, given that the court is going to exist, the court made a bad ruling – that a lot of the object-level modesty arguments are wrong and can be beaten on the object-level. So I started working on, “maybe I can convince Robin on the object-level, which would be a huge win, because he’d grant a much broader set of licenses, which would carry weight with a lot of people, and also we’d solve some valuable problems in rationality, and that’s a great use of all of our time. So let’s do that.” And indeed, I think that’s a great use of time, and would love to convince, or be convinced.

But my concern is mostly not that! My concern is that people feel they need a license at all. Which is a super double-plus-ungood dystopian nightmare. Yes, giving someone a license helps them, but when you go about the business of deciding who has and hasn’t earned one, or even if you just give one to someone, you are reinforcing the social norm and narrative that the license is needed. And That’s Terrible. It’s especially terrible if people feel they can’t adjust their probabilities *at all* based on evidence that wouldn’t be admitted in this metaphorical license court – that you can’t move 5% of the way away from the expert distribution because who are you to be 5% of the consensus – but it’s also terrible even if used only in the basic sense of ‘not allowed to think they are mostly/probably right about any given thing, and experts mostly/probably wrong.’ I could feel that gauntlet tightening as I read the words of many posts.

And when one engages on the object-level, the risk is that one is implicitly accepting that burden-of-proof is on you, that licenses are needed, and so forth. Overall I thought Eliezer’s book was very good, but I felt its anti-modesty arguments in what you call the second book were way too modest, and conceded too much of the important ground, before the final chapters and epilogue brought things back and stated outright that modest epistemology is not about epistemology. And here we have the original “X is not about Y” guy coming in, and glossing over this without a word, which was super frustrating, in favor of talking about Y and its object-level arguments and details.

Thus I felt the burning rage, and the burning need, that night, while the iron was hot, before I could lose momentum or nerve, or give my brain time to find all the subtle stupid social reasons that I’ve internalized just like everyone else for toning this down or not posting it or sticking to giving valid arguments, to make sure that someone was asserting out loud that yes, you not only can but must use all your evidence and disagree with the things that this causes you to disagree with, or you’re lying to yourself and also defecting on many levels. Someone had to tell people’s internal voices, and make everyone know that everyone else’s voices had been told, boldly and without qualification: yes, it is not only not blameworthy but actually obligatory/praiseworthy to do this when called for. You need reasons, but they need not be reasons you can justify to an outside-viewing judge, however wise and patient. People needed to see that the Overton window, so to speak, included this viewpoint. Even I needed to know that I myself had posted it, rather than being afraid to post it! So I posted it, in that form, because I decided people (including me) needed social permission and proof that nothing bad would actually happen, and they needed Applause Lights and a Rousing Speech; someone had to provide those and no one else would, and they needed them right now. So I wrote one, and threw my cap over the fence to force myself to follow it. I’m very glad I did.

      • robinhanson says:

        Consider again the example of P(A)+P(notA)=1. You could also be incensed that someone criticized you for not satisfying it, if you saw that as an arbitrary status “license” being demanded of you. You might just go out of your way to refuse to satisfy it, just to show them they’re not the boss of you.

Similarly, if you frame criticism of your disagreement as criticism of your low status relative to the high status of those you disagree with, you might be incensed at that implication of your being low status, and then make sure to disagree just to show that they can’t tell you what to do. And the rest of us might applaud because we are supposed to applaud egalitarian moves, of the low against the high status.

But what if we flip the status relation? If you presumed that you are higher status, and the people you are disagreeing with are lower status, then you could see the criticism of your disagreement as people presuming that you are not higher status than they. And be incensed. Now you are offended because you think we should presume your innate superiority. But now you are the villain in a usual story, and story readers will side with those low status experts who you presume you are better than.

        I don’t think there is an obvious natural way to frame disagreeing that makes the disagreer the good guy. There are some ways to frame it that way, but other ways that make the disagreer the bad guy. Better to just set aside all those framings and ask when disagreeing improves accuracy.

      • benquo says:

        I think that part of what Zvi and Eliezer (at least in his epilogue starring Pat Modesto) are trying to say here is Don’t Feed The Trolls. Obviously the trolls can always turn around and say that you’re the real troll, but that doesn’t mean that the case is truly symmetrical and you should always engage with them as though they’re arguing in good faith. Sometimes you just have to notice that they’re not helping you get to the truth, make that common knowledge among the people with the same outlook, and move on, even if you end up with some false-positives.

      • TheZvi says:

Pointing out that someone has P(A)+P(~A) ≠ 1 is very different from saying that someone is disagreeing with an expert or with consensus, or otherwise should be more modest. You’re saying the person has a contradiction which needs to be resolved, and the “expert” they’re arguing with is themselves (or maybe the laws of probability?). I don’t think this is a good parallel for a modesty argument. And we agree on the right response there, which is to notice you are confused and think about how to reconcile your models and data. I’m worried about implicitly conflating the two, as if failing to agree with experts should be assumed to be along the lines of a math error, and want to make sure the distinction is clear.

So, it is a natural thing (especially for Robin!) to think of such talk as being about who is claiming what personal status, or attacking what personal status, and how people emotionally or strategically react to such status claims. And certainly there exist such arguments and reactions, perhaps some along the lines described.

        But I think this framing of my concerns is confused in a few ways. Let status here stand in for a bunch of stuff including expertise and meta-rationality (and also plain old status) to avoid worrying about the word choices – the statements should work with “expertise” substituted for “status” and the grammar fixed where needed.

Rather than frame modesty criticisms as an attempt to assert that a particular person is higher or lower in status, I would frame modesty criticisms of people’s decision to disagree as a claim that a person’s status is *relevant* to the decision at all. I think getting this right is really important. Alice says that Bob’s claim that X disagrees with Carol’s claim that ~X, and Bob hasn’t earned the right to disagree with Carol, because Bob is lower status than Carol, and by claiming to disagree with Carol, Bob is asserting that Bob is higher status than Carol. Or, Bob thinks, Carol says ~X, and my evidence says X despite Carol saying ~X, but I’m lower status than Carol, so I can’t claim X, because if I did that would be claiming to be higher status than Carol. So since Bob can’t believe that, slash has no right to believe it, he feels he can’t, and believes ~X, discarding his evidence of X, which we agree is a mistake.

A mild version of this is your statement: “Remember, to justifiably disagree on which experts are right in some dispute, you’ll have to be more meta-rational than are those disputing experts, not just than the general population.”

        A less mild version says in order to disagree with an expert about something in their field, you are claiming to have superior expertise in that field. Which itself is a common claim.

        What Bob wants to say is not, “you’re not the boss of me” or “I am not low status” (or even ‘I am not high status”) but rather “this is object level and not about status.” When people bring modesty arguments in, they are often effectively bringing status arguments into object-level discussions, and making them into status arguments that now determine beliefs. And That’s Terrible.

It’s really important that people understand that Bob can claim to be right about one particular thing where Carol is wrong, without claiming a status relative to Carol (or at all), and that Bob can knowingly be probably right about this particular thing despite knowing Carol is a better expert in a larger general reference class. Bob can certainly decide between Carol’s view and David’s view in this way by examining the object-level, at least enough to shift probabilities away from the result of measuring the ‘who is more expert/status/etc.’ metric in some form. In some cases, Bob can have enough data, and/or enough compute, and/or enough other evidence on a particular question to be quite confident that Carol is wrong and Bob (or David) is right, and it’s important to note this does not have to mean Carol did anything wrong, is low status or isn’t an expert. Such evidence doesn’t need to fit into one of a series of approved boxes.

        My posts that responded to the book directly (Leaders of Men and Zeroing Out, and hopefully soon More Dakka) have been attempts to point out additional ways to find disagreements where you are likely to be right, without it meaning anyone else is doing anything wrong or is low status, and in particular without saying that you are better than or higher status than they are. To both give a useful tool, and to draw a better and clearer distinction between good object-level claims and their implied status claims. I don’t know if that came through or not. In particular, to shatter this idea that being higher status in general or in an area, in some sense, means you are always higher status in every sub-context, which simply is not true.

Thus, to the extent that there is a ‘good guy’ and a ‘bad guy’ instead of just a bunch of guys, the person who makes it about status at all, and makes status claims or claims that others are making status claims by making object-level claims when they are in fact making object-level claims, is the bad guy, and the person trying to get to the truth is the good guy. Trying to be contrarian or trying to be modest, or accusing others of either, in order to gain status, would make one a bad guy (and calling someone out for doing that would make you a good guy, unless you were also doing it for status, etc etc). Because the point is to stop engaging in these negative-sum status games that destroy data, and get to truth. Yes, of course the fact that Carol is an expert and has studied the larger reference class a lot is data and should be factored in, but disagreeing is still an object-level claim until someone decides that it isn’t.

        The whole point of talking about all this explicitly, or at least one important point, is to try to shift things towards such claims not being seen, internally or externally, as status claims, and moving people’s calculus and debate away from the status/shadow level into the object level. We bring up the fact that modesty is not about epistemology so that we can get people to stop engaging with that part of modesty, and the people making such non-epistemological modesty arguments, or at least engage with it less, and feel able to actually engage in epistemology.

        Hopefully that explains why I find such issues important, and why talking about them explicitly in that format seemed important, and why I think it was a path forward to then improve our accuracy.

  4. Eliezer Yudkowsky says:

    > “You have the right to believe that someone else has superior meta-rationality and all your facts and reasoning, and still disagree with them.”

    Wait, isn’t that going too far? When is that ever a good idea?

    • benquo says:

      Yeah, I’d want to see examples for several of these, including this one.

    • benquo says:

      I guess this is true in the sense that if I’m not smart enough to properly Aumann with you (since that can require a lot of computational power in practice), it’s sometimes better to retain the structure of my own world-model plus a great big unresolved red flag, than just act based on a model-free “Eliezer is right” beliefset.

      • robinhanson says:

        I disagree with the claim that it must require a lot of compute power to take into account the info embodied in others’ views. As always, it takes a lot of compute power to do that task very well. But not to do it quick and dirty, which can be a big win accuracy wise.

      • benquo says:

        I agree that sometimes, for some purposes, one can profit from a quick-and-dirty analogue to Aumanning. But it seems important to distinguish that from the sort of model update that could seamlessly produce the updated belief even if you didn’t remember the existence of the other person.

      • benquo says:

        This is why I’m often responding to things people (such as you) wrote years ago :) I don’t really know how to do it the other way; it feels dirty.

      • benquo says:

        In general I’d expect the quick-and-dirty approach to do well for making rapid trades in a liquid market, and poorly for things like studying a discipline such as physics.

      • TheZvi says:

        There’s definitely an argument that I didn’t get into elsewhere or think enough about along these lines, that one should not lightly make an update that causes contradictions or confusions in one’s world model, or at least that doing so is pretty weird. I could say a lot about this, e.g. in cases where there’s a market price that makes no sense, where I do update that it’s pretty close to accurate on average, but where that doesn’t really make any sense, so I’m left in this weird contradictory state where I go around modifying things every time I’m threatened with an implicit dutch book trying to fix the situation…

      • Error says:

        “If Alice knows that in such situations Bob will fully update to Alice’s estimate, that’s a hell of a temptation.”

        This seems like the epistemic equivalent of a social argument. There’s people who argue to understand, and people who argue to win, and it doesn’t end well when the two meet. Choosing to Aumann with someone is a prisoner’s dilemma of sorts.

        A superintelligent paperclipper would be more rational than me in every way, but if it tells me the sun is out, I’m still going to check if it’s raining.

      • Error says:

        Ugh, that was supposed to be under Zvi@11:32

    • TheZvi says:

I can think of several cases. Let’s stipulate that Alice has superior meta-rationality to Bob, and both Alice and Bob know this. The scenarios below cheat to various degrees.

      Scenario 1: Reverse of Benquo’s idea. Alice does not have sufficient evidence that Bob’s facts and reasoning are worth engaging with, and so Alice does not use sufficient computational power to incorporate Bob’s data. Alice (correctly or incorrectly) decides that Bob is probably wrong, and ignores him, or only superficially engages with Bob’s ideas. So Bob updates on his information towards Alice’s view versus not knowing Alice’s view, but does not fully update.

      That’s cheating because Alice doesn’t *really* have Bob’s information, but it’s an important case because it’s very common. Similarly, Alice could lack confidence in Bob’s motivations, or accuracy, or other things, which *also* could be called Alice lacking Bob’s information, but now “having all the information” has to expand to Bob’s abilities and motives and so forth, and it also has to expand to “has actually considered the information to a sufficient extent and done full implicit find-the-correct-full-Aumann-agreement-point.” Otherwise it’s clear this can break.

      Scenario 2: Bob does not fully trust Alice (we’ve covered Alice not trusting Bob under scenario 1, but it also works as a mirror here). Why should he? Even if he knows Alice is generically better at meta-rationality than Bob, that does not mean Alice’s motives are pure, she’s devoting sufficient or matching effort, she feels free to share her true opinion and reasoning, and so on.

      That’s cheating because it’s not really disagreeing with Alice to not fully believe what Alice *says*; were Bob to be told by Omega what Alice’s real opinion was, and Alice has done all her work, and so on, this objection falls away. But in practice, the fact that Bob is not Alice, and Alice might not be giving her true and full honest opinion, means Bob shouldn’t update fully to what Alice says.

      Scenario 3: Alice and Carol are both superior in meta-rationality to Bob, and gave different answers, and didn’t update after he told them about each other’s answers. Bob has to disagree with one of them.

      Had to throw that one in there. You could say that’s cheating because he shouldn’t disagree with them as a group, he should average Alice and Carol here in some fashion.

      Scenario 4: Alice has generically superior meta-rationality to Bob, but Bob has superior meta-rationality to Alice on this particular question. Perhaps he’s using a lot more compute, or has better intuitions, or what not. I would consider myself to have superior meta-rationality to Eliezer in some contexts, and inferior in other contexts.

Several ways to call this cheating. You could say that Bob’s intuitions and computations are part of Bob’s data, and not passing it to Alice is cheating, but that implies having to transfer large portions of life experience in ways that aren’t really possible – even if in theory I could transfer this much data, we don’t have that kind of time or compute. The point here is that meta-rationality is not a uniform skill. You could say that this violates the stipulation that Alice has superior meta-rationality to Bob, which it sort of does, but also sort of doesn’t.

      Note that in practice, if Alice has superior meta-rationality to Bob and limited compute, since Alice expects Bob to be worse at meta-rationality, her expectation of how much Bob updates should be less than “agree with Alice’s update” even before all these caveats, so she should not be too surprised when Bob fails to fully update, and thus…

      So in practice, I think the following conversation is still often not a mistake by either Alice or Bob (again, despite them both thinking Alice>Bob at meta-rationality):

      Bob: I believe X with probability 0.75 because Y and Z.
      Alice: I have updated on that and still believe X with probability 0.1.
      Bob: Damn. I now believe X with probability 0.25.
      Alice: That’s nice, but I expected that, I’m sticking with 0.1.
      Bob: Damn again. 0.2.
      (they leave)

      or even…

      Bob: 0.75.
      Alice: 0.1.
      Bob: Wow, I expected 0.01! 0.8.
      Alice: That’s my good old Bob. Still 0.1.
      (they leave)
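
(If it helps to make “updates toward, but not all the way to” concrete, here is a minimal sketch – purely illustrative, and the trust weights here are arbitrary assumptions rather than anything Alice or Bob is literally computing – that pools two probability estimates in log-odds space and happens to roughly reproduce the numbers in the first exchange:

import math

def logit(p):
    # log-odds of a probability
    return math.log(p / (1 - p))

def pool(p_self, p_other, w_other):
    # Trust-weighted average of two estimates in log-odds space.
    # w_other is how many times more weight you give the other person's
    # estimate than your own: 0 means ignore them entirely, and a very
    # large w_other approximates "fully defer to them".
    z = (logit(p_self) + w_other * logit(p_other)) / (1 + w_other)
    return 1 / (1 + math.exp(-z))

print(round(pool(0.75, 0.1, 2), 2))  # 0.25: Bob's first partial update
print(round(pool(0.75, 0.1, 3), 2))  # 0.2: a further, still partial, update

The point is only that partially updating is a coherent, parameterized thing to do; it is not a binary choice between ignoring Alice and adopting her number wholesale.)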

      There’s also the fact that not thinking you have the right to disagree at all is highly exploitable. If Alice knows that in such situations Bob will fully update to Alice’s estimate, that’s a hell of a temptation. It will also lead to severe discounting of Bob’s data by Alice.

I am worried about the failure mode where Alice gets to use her “superior meta-rationality” as a club to beat all Bobs into epistemic submission, and think that is bad.

      So I guess if I wanted to reverse the statement, I’d go with something like:

      “If someone else has superior contextual meta-rationality, and all your facts and reasoning including all relevant facts about you on every level sufficient to fully trust you, and has devoted sufficient compute to the problem, and is known to be only motivated by truth seeking, and no one else fits this description, then you have no right to disagree.”

      I owe Robin an extensive reply which I hope to get to later this evening.

    • Doug S. says:

      Obvious case: you suspect that they have an incentive to deceive you. I suspect that the average clothing salesperson is better than me at identifying which clothes look good on me, but that doesn’t mean they’re not going to tell me I look better than I actually do so they can get a commission when I buy something.

  5. TheZvi says:

    [NOTE: This is my reply to Robin’s question above, and the beginning (yeah, I know, long post is long even as a post). I’m putting it as a main comment to make it easier to read. I might edit it into a real post later, might not. I’d rather talk about things we both find more interesting, but to explicitly answer the first sentence, Robin’s post taxes disagreement on various social and epistemic levels, and encourages further such taxes, the power to tax is the power to destroy and I’d rather we subsidize, so I subsidized, etc.]

    Jokes about congress aside, yes I mean rights/duties in the sense of it being the rational and correct thing to do, and mean it this way in this comment as well. (As an aside, I started to type that no one is advocating for legal penalties for disagreement, except that once I typed it I realized that no, actually, a lot of people do exactly that all the time, all over the place, the effects of this are quite bad, and giving such arguments additional power would be very bad, but I agree that you’re not doing that at all.)

    Thank you for the link, it is helpful in clarifying where we disagree. I strongly agree with almost the entire body of that article! The ‘I don’t know’ thing is especially important – one always has a best guess given available information and resources, and people refusing to share that best estimate when it would be helpful, and a much better estimate than I’m capable of, even when I tell them exactly that, is something that drives me insane. It’s worth saying that often ‘I don’t know’ is the right information to give to someone else, and also that ‘I don’t know’ could be the best description of your current epistemic state because you haven’t put any attention into resolving what your best estimate would be. So on reflection you aren’t entitled to say ‘I don’t know’ but you’re entitled to not have yet reflected, or to decide not to.

    So what I’m definitely not saying is that anyone is entitled to an arbitrary opinion, on anything, ever. You emphatically do not have the right to choose whatever opinion you’d like to have. Agreed that (with an understanding that best estimate means best estimate given the information and other resources available to you, not the best estimate that exists anywhere in the world) “There is a best estimate. You are only entitled to your best honest effort to find that best estimate; anything else is a lie.”

    That’s why I say that these aren’t actually rights, these are (with some added caveats) duties. You have a duty, when making an estimate, to make the best honest effort at estimation that you can, and you have a duty to make an estimate when you ask yourself for any information including an opinion. This includes your own observations and reasoning (both object-level and otherwise), and also your knowledge about the opinions of others and their facts and arguments, including experts. You must use all of it.

    So I think we agree on all that, and on such modest modesty arguments.

The thing is, you could call that “You Are Never Entitled To Your Own Opinion” in the sense that you can’t just go around having whatever opinions you feel like, but you could also call it “You Are Never Entitled To Not Have Your Own Opinion”! You have a unique set of data and reasoning. You have reason to trust and value some data more, and others less, than any other person. When you simply adopt the opinion of whatever expert, authority figure or group is lying around, or whoever seems to have the best meta-rationality, you’re discarding data with likelihood ratios that aren’t one, and that’s not allowed. You are not entitled to do that, either, except when you need to conserve compute more than you need to improve your estimate!

    I see Robin’s model as then saying (let me know if this is right): By default, your estimate/model should be that of some combination of experts/consensus/others/outside-view/etc, depending on circumstances and which of those we have access to, typically giving priority to experts. Mostly you should find out what experts say, and believe that.

    Sometimes, one can find a reason to disagree with what experts say. Experts make systematic errors we can identify in their recommendations to us, due to the incentives they face and the status and other games they play. In those situations, correcting for those errors can allow us to beat experts (e.g. less education than educators suggest, less health care than health professionals suggest, less avoidance of local streets than locals would suggest, etc etc, although it’s not clear how to know how much less to use and I’m curious how you think about that question). There’s an obvious extension from advice to beliefs that I think you also agree with (e.g. health care professionals thinking health care saves more lives than it does, as opposed to them advising us to get more health care, and similar). You also grant that if you can act directly, as opposed to acting via social institutions, that can constitute an advantage, but only in cases where you can act directly rather than via social institutions.

[I note that I disagree that if you are acting via social institutions, and they are acting via social institutions, that this means you can’t have an advantage on this dimension, because ‘acting via social institutions’ is not so natural a category. Often one can operate via different social institutions than others can operate via, perhaps even designing social institutions for those aims, applying much more optimization pressure towards a particular target. Many institutions aren’t simply restricted to operating via social institutions, but rather to operating via particular social institutions in particular ways, and you are not so restricted. Social institutions are periodically replaced and out-competed in large part because such institutions are ill-suited to new problems and situations, and we instead design and create new institutions better suited. Details matter a lot.]

    Robin also agrees with Eliezer’s suggestion that those trying to do big things can usefully reject expert views, even when this lowers the accuracy of their models, due to the asymmetry of its potential practical payoffs – such people might usually be wrong but can win big when right, so it’s OK if they (kind of?) lie to themselves.

I think I’m still mostly with Robin on all that, noting the disagreement above, and with more optimism about the accuracy of those aiming big (but also respecting that sometimes the right path for such people is to be, on at least some levels, overconfident, often absurdly so).

    I see Robin’s model as then saying that we should be highly skeptical not only of others’ disagreements with experts (or other best outside views/sources), but of our own disagreements. Most such folk who disagree are wrong. Most such folk who claim important unique data are also wrong. Most such folk who give reasons why their disagreement is special, that seem right to them, are also wrong. Most such folk who claim to have superior meta-rationality, are also wrong. And so forth. Thus, in order to be able to disagree in a way that improves your accuracy, you need evidence that you are superior to that reference class, and superior to the people you are going to disagree with.

You also need evidence that the evidence you cite should count as evidence, since otherwise it’s not evidence. This seems to extend to things like use of Bayes’ theorem. I’m not sure if this point in Robin’s review was meant to claim that the book makes an inadequate argument because the book does not explicitly make the case for use and understanding of Bayes as evidence of superior meta-rationality (in which case, I would reply that perhaps more links would have been a good idea, or even be a good idea to add now, but putting the content in the book would have been strategically unhelpful, and Eliezer has indeed made this case elsewhere at length, although I’m not sure what the best short case he’s made would be, or if such case could be improved, or whether that would be worth it).

    Or perhaps Robin’s claim is that we actually don’t have evidence that this is evidence of superior meta-rationality, either in weak form (it’s evidence, but far from conclusive) or strong form (it’s not evidence at all). The strong form I strongly disagree with. The weak form, that this is one of many skills and pieces of evidence that might apply, and that there exist people who use and understand Bayes that are worse meta-rationalists than others who do not understand or use Bayes, I can’t disagree with, but I do think it’s strong evidence and the likelihood ratio is not small.

    When Robin sees Eliezer (or others) claim to be better at things like meta-rationality, caring about truth and the world, not trying to win status games, make sacrifices, being more knowledgeable or smarter, etc etc, Robin points out that many people make such claims and most of them are wrong, despite many also being sincere about it, so not only shouldn’t others believe him, he should not believe such claims about himself, either. How can he know he is different? Thus, he and others might be advised to believe they are better at some topics (even though this is still unlikely to be true!) but not justified in believing themselves to possess distinguishing skills in general.

    In the general case, I think this is a reasonable response to an unknown person making such claims, if one cannot further investigate. It is indeed true that many make such claims, implicitly or explicitly, and most of them are wrong, even among those who believe their own claims. I do think that people making such claims and believing them are more likely to have such things be true about them, than those who do not believe such claims about themselves, but the evidence is not so strong and many/most are still wrong. In this case, I think Robin and myself are both in excellent position to assess the details of these claims by these particular people, and render our opinions based on our data, although Robin would reply that many people also think that and most of them are also wrong. I would also say that Robin is engaging in miscategorization and reference class tennis in various ways, but Robin would say (I’m guessing) that most people would say that too and be wrong again! And so on. This is a big problem posed by modesty arguments.

    I think there’s a big difference between my ability to make such claims about myself, to myself, and my ability to make such claims to others in a way they should believe, or especially to compactly make such claims to others who do not know me well and have little time to investigate such matters, in a way that they should in turn believe. I think it’s common that my estimate is that I have (attribute) but that others who do not know me well should not believe my claim to (attribute), and those who know me are somewhere in the middle, and I think that’s fine. I also believe that there does exist evidence of all these attributes that should be trusted, and also evidence from others (if you want it) that such evidence is worth trusting, and also evidence that others would consider to be evidence that you can evaluate such evidence, and so on. I think you can and should bootstrap out of (some amount of) modesty in this way, even if you begin as modest with no other way out. This is similar to the idea of testing out cases where you want to disagree and seeing how often you’re right, as suggested by Eliezer, which is also a good idea, and should also work.

    More generally, experts agree (I’m pretty sure, stop me if I’m wrong about this) that we can learn and know much about what attributes we possess, so it seems reasonable even under modesty to be able to test and verify such attributes; we’d be discussing whether a particular method is good evidence, but clearly evidence is available if you want to invest sufficient resources. In some cases that evidence should be trustworthy to third parties, other cases less so, but a lot of my view on such things is that I should believe things even if I don’t think I can convince even meta-rational others of those things.

    There are a number of “modesty is self defeating” arguments, some of which I mostly buy and some of which I mostly don’t, but while it can’t possibly be original, it does seem to me that asking ‘how can you know if you’re X or Y?’ is an odd challenge if experts think there are ways you can tell.

A good expert system with limited resources can and should obviously say, there exist people who should disagree with us experts, and who can and do know this, even if we experts shouldn’t (yet) agree with them! Even if they gave us their data (I’ve been using data to equal both reasoning and facts, for compactness). Keep in mind ‘their data’ could reasonably include all of someone’s life experiences, and the accuracy of same.

Meanwhile, to return to Robin’s model: I believe he thinks that even if one could prove one was superior in these ways to most folk, that doesn’t mean you are superior to the best expert, and without confidence in your superior meta-rationality or other skills versus such best expert (who is part of the expert system that agrees) one has no right to disagree with said experts, or pick a side among those experts.

I’m not sure how hard Robin believes such claims, or how much doubt he believes people really should have about their meta-rationality and what one considers valid evidence (I’m definitely interested in exploring this question more as a practical or theoretical question).

    I would say here that I think this does not set the bar correctly. The meta-rationality of a group’s beliefs is not the meta-rationality of its most meta-rational member; we shouldn’t choose which group is right by looking for the single most meta-rational person (even if we could do this, which would imply we could figure out how meta-rational we were as well). That’s a terrible way to settle a dispute, although there are far worse ways. My meta-rationality need not exceed that of every member of group X before I think I have better meta-rationality than group X does, in general.

    It certainly doesn’t mean I can’t have a better model of a particular question I’ve chosen to focus on, and devoted more resources to, even if it could be seen as inside some area of expertise, even if I haven’t spent years gaining such expertise (or further years proving it). Or that I couldn’t adjudicate a dispute between experts better than chance, or better than ‘choose the side with more people on it’ or more people with high impressiveness or high-seeming meta-rationality, or the side with the most impressive or most meta-rational such person, or some other such simple heuristic, or an average of such things. I’m confused why it even would indicate such a thing.

    Two hidden implications of such a prohibition seem, to me, to be a claim that all object-level reasoning (or more strongly all data) is priced correctly into everyone’s opinions, and that expert opinion isn’t factoring in anything other than truth seeking on any level. This second implication has obvious exceptions, such as Robin’s point that people will think too highly of their own field, so you can correct for that. This observation generalizes quite well! I don’t see why one should assume that biases are being corrected for, either general human ones or those particular to a group. Correcting for them seems like a good idea, if I care enough, as does correcting for people’s motives. The generalization of ‘health care professionals recommend too much health care’ is ‘health care professionals over-believe anything that is good for health care professionals and under-believe anything bad for them.’ I think such exceptions are universal and blatant, and it would be weird to assume otherwise. You should adjust your models and estimates accordingly.

The first implication is just wrong. Many people hold their opinions via information cascade or other modest methods, others reached conclusions from some but not all of the object-level data, others used methods that looked at meta-level arguments only and didn’t check, and so on. There are possible errors everywhere, and the incentives to correct them are not so strong. So some facts and arguments will be over-considered, some other facts and arguments will be under-considered, both for essentially random reasons, and for motivated or predictable reasons.

    I mean, of course you can beat all that. There are lots of ways to beat that. There are simple such ways to beat even well-functioning two-sided markets (I mostly affirm Eliezer’s claim about 5% moves in the most liquid stocks versus other most liquid stocks within three months, but I will note that if some of those details were relaxed, and not all that much, I would not so mostly affirm). In the coming months I hope to share some simple ways to beat sports prediction markets – real ones that still work – as examples, and probably no one will use them but maybe someone will. And that would be cool. Also, some of them are commonly known and work anyway, although others are not. There’s also tons of examples of sports teams that seem highly motivated to win games doing things that clearly don’t win games, for no good reason, and experts not agreeing with this for a long time, for no good reason, and so on, in ways that amateurs could and did identify and scream about. And cases where that’s still happening. You know those gamblers that I claim are so easy to beat? They’re way, way better at this than the other ‘experts’, same as the traders of Nikkei futures are collectively better experts than the Bank of Japan (and the S&P over the Fed, this is not us picking on the poor BoJ). I’m guessing most any expert who knows the history of any field can point to many such examples.

    A basic flaw, I think, is the idea that in order to improve upon a system, I have to be able to beat the system (see Leaders of Men and Zeroing Out, which were designed to make this point). I’m not better than all of academia or all experts. But I don’t need to be, either. They are offering an endless series of claims, many for motivated reasons, and all I have to do is challenge one (or more) of them, leaving most of them in place. I don’t see why this need be hard, even if the majority of people who do it end up being wrong.

    Also, on an unrelated note, I owe Robin some writing about prediction markets at some point in the next four months (to give myself a deadline I can be called out on), since I have a lot of practical experience I should share regarding what makes them function better or worse as markets. I hope this can improve the practicality of proposals in that space, since I want to see real progress there.

    • robinhanson says:

      “When you simply adopt the opinion of whatever expert, authority figure or group is lying around, or whoever seems to have the best meta-rationality, you’re discarding data with likelihood ratios that aren’t one, and that’s not allowed. … I see Robin’s model as then saying (let me know if this is right): By default, your estimate/model should be that of some combination of experts/consensus/others/outside-view/etc, depending on circumstances and which of those we have access to, typically giving priority to experts. Mostly you should find out what experts say, and believe that.”

      Most rationality analysis is diagnostic, not constructive. The rule that beliefs should satisfy P(A)+P(notA)=1 does NOT say that if you ever see a violation of this rule, you should immediately set your new P(A) to be equal to your old value of 1-P(notA). That would indeed discard useful info that went into your old P(A). You should instead reconsider your whole system, and keep doing so until the rule violation goes away.

      Similarly, the observation that rational agents would not knowingly disagree tells you that something is going wrong in your or others’ belief-setting processes. It doesn’t tell you to apply any simple mechanical rule to make it go away. But it does tell you there’s a serious problem somewhere, and to look honestly for where it might be. You have to consider that it might be a problem in you, and not just in others. That is the main issue in the rationality of disagreement, and my main point was that Eliezer hadn’t made much progress on it in his book.

      • TheZvi says:

        Ah. That makes sense. Stated that way, I agree completely, and I agree that Eliezer doesn’t make much formal progress on that in the book, that such progress would be valuable, and that we should try to make more of it. I do think that a lot of his arguments for things one can do are stronger than you think they are. Part of that may be me reading into the book stronger arguments that he did not make, and much stronger immodest positions that I hold but that he was unwilling to assert (whether or not he’d agree with me if asked). I was disappointed in much the same places you were, and said as much, because I felt like he was making way too modest arguments against modesty and leaving out the stronger evidence available, some of which he makes up for later (I think of the book as three parts: your two parts, and then the “Modesty is not about epistemology” part at the end, which is important but does not advance the art of actual epistemology as such).
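
        To restate your P(A) example concretely – a toy sketch, with made-up numbers of my own rather than anything from your comment or the book – the diagnostic-versus-constructive distinction looks like this:

        ```latex
        % Toy numbers, made up for illustration: an incoherent belief state.
        \[
          P(A) = 0.7, \qquad P(\neg A) = 0.4, \qquad P(A) + P(\neg A) = 1.1 \neq 1 .
        \]
        % The mechanical "fix" P(A) := 1 - P(\neg A) = 0.6 restores coherence, but it
        % silently privileges the 0.4 and discards whatever evidence produced the 0.7.
        % Reconsidering the evidence behind both numbers could just as well end at
        \[
          P(A) = 0.75, \qquad P(\neg A) = 0.25 ,
        \]
        % with the violation gone because the whole system was revisited, not patched.
        ```

        The rule flags that something is broken; it doesn’t say which number is the broken one.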

        The problem is that your post doesn’t read, at least to me, like it’s stating that. It feels like it’s mostly stating the other thing: that you should (mistakenly, in my view) mostly throw out the data when confronted with experts/consensus, except insofar as you can find a good reason not to – the implied ‘one needs a license’ thing. Although you do point out many cases where you would grant such a license. And many other people’s statements and posts and views seem to also be saying this second thing, not your first thing. The whole frame is in support of the second thing, not the first thing. I read Eliezer’s book as trying to get people to move from the second thing to the first thing, to give people some tools for identifying where their own data, or gathering their own data, might be worthwhile as opposed to looking to others, to point to examples of that, and to explain *why* such a technique would work and how to reconcile the facts in some situations. But the main thing is to move people away from thinking that they need a formal justification not to throw out the previous data that informs their private, without-looking-at-others’-opinions-in-some-important-way P(A). And in particular, to let them use that evidence to do his cases #1-3, especially the big things.

        I think Eliezer was doing important work that urgently needed doing (although I agree his results in the second part of his book were less good than I was hoping for, given the author), and that he was right to prioritize it over the work you were hoping to see, but I would love to do more work on advancing the object-level questions about the rationality of disagreement. They’re important too, and it seems clear we have very different approaches there, but they’re not what most readers needed most, in my (and I think Eliezer’s) view.

      • robinhanson says:

        For some reason I can’t respond to Zvi’s response below directly, so I’ll put my response here.

        I like my blog posts to be about one main thing, so I avoided engaging the “modesty is about status” argument. Yes, some people may invoke modesty primarily to regulate attempts by some to rise in status via disagreeing. But others may disagree primarily to try to raise their status. We can’t conclude from all this whether there should be more or less disagreement. To evaluate that, we have to go to accuracy arguments.

      • TheZvi says:

        You can’t reply directly because I limit comment nesting to two levels for readability reasons (I replied to myself in the other thread for the same reason). If anyone reading this knows a better way to do WordPress comments, I’m open to suggestions, but I think this is better than what SSC does, where after more levels you get comments that are several pages of very narrow text. Ideally I’d want something close to what LessWrong 2.0 is doing, with great use of space that allows threading to go deep (but without the karma, obviously).

        I sympathize with the idea that blog posts should be about one thing; I share that desire. Here the topic was, to me, ‘review of book’, so leaving out one of the book’s three major sections, without even devoting a sentence to noting it is there, seems rather odd, especially when it is a central reason why one of the other sections is there at all, and is (to me) the central point of the book. But, especially if you’re planning a second post addressing that argument, I do get it, and if you had a sentence saying “he also makes this other claim that I’ll get to later,” or even just “this claim is interesting but this post does not have scope to discuss it,” I would fully approve. I’m guessing you framed your post as discussing the question of the rationality of disagreement rather than the book, which explains why you didn’t feel this was needed.

        I realize that if you were writing a full post you would make a longer and stronger argument, but I think the short argument you make here is quite wrong. It treats the two sides as equal, or at least hard to distinguish, in impact: those who use modesty to regulate disagreement because disagreements are attempts to raise status, and those who use disagreement to raise status.

        This accepts the central frame that disagreements are attempts to raise status, and that disagreeing is largely a route to higher status, which is exactly the frame I think is bad (damaging) and also incorrect, and want to weaken. I explain this also in the other thread, but basically: when Bob disagrees, he is mostly making an object-level claim, unless Bob has become so hopelessly corrupted that he thinks epistemic claims are primarily status claims. This is especially true when Bob is choosing what to believe internally, and his inner voice treats this as a status claim and tries to regulate it.

        When folk think that experts are wrong, they are (I would claim) mostly not broken in this way. Mostly, folk who disagree with experts/consensus actually believe that the expert/consensus is wrong on the object level. They are then reluctant to disagree, because others will smack them down for what is seen as a status claim, or they even smack themselves down (this gets internalized over time) to avoid this problem in case anyone ever realizes that they so disagree.

        On the contrary, I claim that when someone disagrees with consensus/experts, this is *bad* for their status situation most of the time, especially if they do so without concern for their status. In many cases, others will simply consider them lower status for believing something stupid, or something different from the group or from the experts. In other cases, they will be actively smacked down because they seem to be making a status claim (or because this particular disagreement actually is one, in addition to its object-level truth, such as “I can do X” rather than “X is true”). In general, most of the time folks *don’t* want to make status claims non-strategically, because we’ve evolved defenses against status claims to prevent this, and thus we learn to only make such claims when they are likely to succeed.

        Occasionally, yes, there will be times when someone disagrees because it will gain them status, and times when others see this and try to lower their status for trying it, which they need to do to prevent such moves from happening more often. Ideally, we would be at an equilibrium where this happens mostly when the person disagreeing is confident they are right, and will gain status when right and lose it when wrong, at which point we would want them to do so, and they would have earned their new higher status.

        But I think that the vast majority of the time, status and social concerns weigh *against* voicing disagreement or even having disagreement. We want to conform to norms, and that includes belief norms, and we want to reward those who hold and uphold/enforce such norms, and punish those who question or defy them. We get far, far less disagreement than we would get without such pressures, which are almost entirely one-sided, in addition to the relatively smaller interplay of people trying to gain status via contrarian/disagreement claims. And recently, we’ve seen stronger modesty arguments and stronger beliefs in modesty, which leads to more folks (especially folk in our circles) smacking down other folk for disagreement, or restricting what forms of disagreement don’t get so smacked down, and enforcing more norms against disagreement, such as the requirement to justify such disagreement via an outside view.

        If people are, as I claim they are, vastly more often suppressing disagreement due to status and social concerns than they are creating disagreement due to those concerns, we can safely say that we probably want much more disagreement, even if we cannot distinguish between such cases. Saying there’s no difference in magnitude, and that we can’t learn anything here, would be the fallacy of grey. We could also say that it’s not that hard to tell when disagreement would cost one status versus when it could potentially gain or lose one status, that in the first case we obviously have much less disagreement than is ideal, and that we should attempt to move norms to encourage more of the first type even if we can’t judge the second type. To use my terminology, we can ‘zero out’ the accuracy question, note that after people judge the accuracy of claims they are massively suppressing disagreement to conform to norms, and wish they’d do less of that, confining such suppression to places where we see value in norms that actively suppress disagreement (which is a whole different question, and I’m willing to entertain arguments that there exist larger or smaller such areas).

        Of course none of that excludes accuracy arguments from validity. I’m all for accuracy arguments. My original post does make the object-level claim that it is sometimes correct, in all the situations mentioned, to disagree, on a pure likely-to-be-correct level, even if the situation’s reference class as framed makes one unlikely to be right before considering what evidence one might have. And that’s before considerations like doing-big-thing, or don’t-destroy-your-data-and-hurt-group-epistemics, or experiments-are-good (see Sarah’s post for more along these lines), or whatnot. In fact I made the strongest such claims I felt were true in practice, to make clear that even in such situations one can have reason to disagree (and that in terms of exact probability, one should often disagree a little, but also that sometimes one should actively disagree in large ways). That does not mean that everyone who reads this is disagreeing too little with experts/consensus/others (as always, consider reversing all advice you hear), but I think that in 2017, given who reads my posts and Eliezer’s posts/book, it is true of most of the reference class of people who will read this. And even for those who do not need to disagree more, it is good that they adopt and encourage norms under which people feel freer to disagree.

  6. Pingback: The Right to Be Wrong | Otium

  7. Pingback: More Dakka | Don't Worry About the Vase

  8. Pingback: I Vouch For MIRI | Don't Worry About the Vase

  9. Pingback: as alive as I can

  10. Pingback: In the presence of disinformation, collective epistemology requires local modeling – Unstable Ontology

  11. Pingback: Book Review: The Elephant in the Brain | Don't Worry About the Vase
