Why Rationality?

Previously: Responses to Tyler Cowen on Rationality

Recently: Yes, We Have Noticed the Skulls; When Rationalists Remade the World

Repeat as Necessary: Principle of Charity

A lot of people do not understand the appeal. I was surprised to find that Will Wilkinson was among them. Tyler shared a tweet-storm of his, which I will reproduce below:

1. Some thoughts on the controversy over the rationality community sparked by @tylercowen’s offhand comments to @ezraklein
2. Avoiding the most common errors in thought has surprisingly few personal benefits. Why try?

3. Vigilance against bias and fastidious Bayesianism has to appeal to something in you. But what? Why are you so worried about being wrong?

4. Some rationalists talk a lot about status signaling and tribe-based confirmation bias. What’s the theory of shutting this off?

5. Plato’s right. Will to truth is rooted in eros. Longing for recognition must be translated into a longing for a weird semantic relation.

6. Hume is right. Good things, like reason, come from co-opting and reshaping base motives. Reason’s enslaved to the impulse that drives it.

7. I see almost no interest among rationality folks in cultivating and shaping the arational motives behind rational cognition.

8. Bayes’ Law is much less important than understanding what would get somebody to ever care about applying Bayes’ Law.

9. If the reason people don’t is that it would destabilize their identity and relationships, maybe it’s not so great to be right.

10. Aristotle says a real philosopher is inhuman.

11. If the normal syndrome of epistemic irrationality is instrumentally rational, then you’ve got to choose your irrationality.

12. The inclination to choose epistemic rationality is evidence of being bad at life.

13. It’s understandable that these people should want a community in which they can feel better and maybe even superior about this.

14. We all need a home. If that desire for safety/acceptance/status produces more usefully true thoughts, that’s awesome.

(His later replies in the thread, which mostly happened after I read it the first time, are also worth looking at, and indicate the view that what has been done is good, but inadequate to the tasks at hand.)

At first this made me angry. I read this as a direct attack not only on my community, but even on the very idea of truth. How dare we pay careful attention to truth? We must have baser motives we are ignoring! We must be losers who are bad at life! We only seek truth so that we can meet up with other truth seekers and feel superior to the evil outgroup that does not sufficiently truth-seek!

Who knew the post-truth era started with Plato? I mean, I can see why one might pin the blame on him, but this is a little extreme.

Truth? You want the truth? You can’t handle the untruth! So all right, fine, I guess you can go seek truth, but it won’t help you much. Let us know if you come up with anything good.

Then I calmed down and had some time to think about it, and I realized that if one applies the aforementioned Principle of Charity, Will is asking good questions that deserve real answers, and it is not reasonable to say ‘it is all there in the sequences’ any more than it is fair for philosophers to answer your questions by telling you to read the following five books – it might be good advice in both cases, but it is not especially practical and acts to prevent dialogue.

Then Scott wrote his post and a lot of people commented, I had a day to think about things, the thread expanded, and I realized that Will too is offering us high praise. There is no enemy anywhere. A link today described Scott’s post as ‘a defense of the Rationalist community,’ then about ten minutes after I wrote that, Noah Smith posted a link saying simply that he ‘defends rationalists’ and I suppose all that is accurate – but no one is attacking!

Once again, we are being called to a greater mission. Once again, it is an important mission, and we should choose to accept it. The twist is, we might not agree on some of the details, but we are already on this mission, and have been for years.

So let’s take it from the top.

2. Avoiding the most common errors in thought has surprisingly few personal benefits. Why try?

In response to someone else’s rude and snappy answer, Will replied “Guess I should have done that 7th year of philosophy grad school. Sigh.” I would ask, what was Will doing in philosophy grad school in the first place? Did he not notice that going to graduate school in philosophy has surprisingly few personal benefits? Why go? I hope it was not for the generous benefit packages. I hope and believe that it was because Will wanted to seek knowledge, and use that knowledge for good (and to figure out what good is). If you want to be a top expert on human happiness as it relates to public policy, knowing how to correct for bias and figure out what is likely to be true seems rather important!

Every philosophy student I have met (sample size admittedly not that large, but at least three) gives some version of the same answer, which is that they want to think about important things for a living. Thinking about the truth, the good, ethics and other similar things seems like some combination of the most important thing they could be doing (either for their own sake, for their usefulness, or both), and the most fun thing they could be doing. I agree! That sounds pretty great. I even considered going, up until that part where I realized the surprising lack of personal benefits. True story.

So here are at least some of my reasons:

I practice “vigilance against bias and fastidious Bayesianism” (hereafter I will say rationality), in part, because it is to me the most interesting thing. It is fun. 

I practice Rationality, in part, because it is to me an intrinsic good. I value epistemic rationality and truth for their own sake, and I think everyone else should too. Truth seeking, in my opinion, is an important virtue. I doubt Plato or Hume would disagree.

I practice Rationality, in part, because it is to me a useful good. Instrumental rationality has helped me immensely. It has helped the world immensely. Good thinking is a necessary condition for nice things. Producing more useful true thoughts really is awesome! For certain problems, especially related to AGI, FAI and the AI Control problem, at least the level of rigor we are currently seeking is necessary or all is lost. Rationality is and has been very useful to me personally, and very useful to the world. I agree that in general it has surprisingly few personal benefits, but part of that is that the benefits are highly situation-dependent (in a way I will talk about more below) and part of that is because one would naively expect the benefits to be so amazing. You can get less than you would have expected, and still get far more than you paid for.

I practice Rationality, in part, because it is my community and my culture. When we talk to each other, online or in person, interesting conversation results. When we get together, we have a good time and form deep and lasting friendships. Most of my best friends, I met that way. I met my wife that way! If someone else also thinks being right is the most interesting thing, chances are very good we’ll get along. These are amazing people.

I practice Rationality, in part, because it is my comparative advantage. I am better at going down this path, relative to my ability to go down other paths, compared to others. I do not think hardcore study of Rationality is for everyone, or is for everyone who is ‘smart enough’ or anything like that. Some of you should do one thing, and some of you should do the other. I even have thoughts on who is who.

I even occasionally practice rationality because something is wrong on the internet. I am not especially proud of that one, but there it is.

3. Vigilance against bias and fastidious Bayesianism has to appeal to something in you. But what? Why are you so worried about being wrong?

The instinctive reaction is to say “I’m not afraid of being wrong! There are plenty of other good reasons!” and there are lots of other good reasons, but I’m also rather afraid of being wrong, so among other reasons:

I am worried about making a bad trade or placing a bad bet and losing my money, and it also being my fault. I have done these things while trying to make a living, and they involved me being wrong. It sucked. Being less wrong would have helped.

I am worried about making mistakes and therefore losing when I compete in competitions, such as Magic: The Gathering tournaments. I am also afraid of this loss therefore being my fault. I have done this a lot. I prefer the alternative.

I am worried about making bad choices in life in general and therefore losing expected utility. I do this every day, so it seems like a reasonable thing to worry about. I would rather make better decisions.

I am worried about being wrong because being wrong is bad. I attach intrinsic value to being less wrong, partly because I decided a long time ago that doing so would lead to better outcomes and I am a virtue ethics kind of guy, and partly because I was brought up in a conservative Jewish tradition and I have always felt this way.

I am worried about being wrong because others are counting on me for information or advice, and I do not want to let them down, and also I want a reputation for not being wrong about stuff.

I am worried about us collectively being wrong because some time in the next century someone is likely going to build an Artificial General Intelligence and if those who do so are wrong about how this works and try to get by with the sloppy thinking and system of kludges that is the default human way of thinking, we will fail at the control problem and then we are all going to die and all value in the universe is going to be destroyed. I would really prefer to avoid that.

Note that each of these, and especially this last one, while certainly sufficient motivation for some, is in no way necessary. Any of the reasons will do fine, as will other motives, such as simply thinking that trying to be less wrong is the most interesting thing one can do with one’s day.

4. Some rationalists talk a lot about status signaling and tribe-based confirmation bias. What’s the theory of shutting this off?

These are some pretty big problems. Shutting them off entirely is not practical, at least not that I can see, but there are reasonable steps one can take to minimize the impact of these problems, and talking a lot about them is a key part of those plans.

Tribe-based confirmation bias, like all other biases, can be consciously (partly) corrected for, if one is aware of what it is and how it works. The internet is full of talk about how one should go out into the world and read the people who disagree with you politically, advice which a sadly small number of people take to heart, and which should if anything extend beyond the damnable other political party. The thing is, the trick kind of works, if you sample a lot of opinions, consider other perspectives, and think about why you and your tribe believe the things you believe. It works even better when one of your tribe’s values is pointing out when this is happening. If you get into the habit of looking for status-quo bias, and thinking “status-quo bias!” whenever you see it, and others around you help point it out, you don’t kill it entirely, but you get to catch yourself more often before falling prey, and do at least partial corrections. Tribe-based confirmation bias is not any different. Awareness and constant vigilance, and good norms to promote both, are the best tools we know about – if you can do better, let us know! If anything, we as a species are not talking about it enough, because it seems to be one of the more damaging biases right now.

Status signaling is important to discuss on several levels.

As Robin Hanson never tires of pointing out, status signaling is a large part of the motivation of human activity. If you do not understand that this game exists and that everyone is playing it, or you do not understand how it works, you are really screwed.

In terms of your own actions, you will lose the game of life, and not understand how or why. Yes, most humans have an instinctive understanding of these dynamics and how to navigate them, but many of us do not have such good instincts, and even those with good instincts can improve. We can learn to see what is going on, be aware of the game, play the game and even more importantly play the better game of avoiding playing the game. Humans will make status moves automatically, unconsciously, all the time, by default. If you want to avoid status ruining everything, you have to think constantly about how to minimize its impact, and this pretty much has to be explicit. Signaling cannot be completely defeated by the conscious effort of those involved, but the damage can sometimes be contained.

There is also the fact that it is bad for your status to be too obviously or crassly signaling your status. Talking about status can raise the costs and lower the benefits of status signaling. In some cases, talking about signaling can make the signals you are talking about stop working or backfire!

You can also design your systems, and the incentives they create, knowing what status signals they enable and encourage, and do your best to make those positive and productive actions, and to make the signals yield positive outcomes. This is a lot of what ‘good norms and institutions’ is actually about. Your status markers are a large part of your incentive structure. You’re stuck with status, so talk over how best to use it, and put it to work. It can even be good to explicitly say “we will award status to the people who bring snacks” if everyone is used to thinking on that level. We say “to honor America” before ballgames, so it isn’t like regular people are that different. It may sound a little crass, but it totally works.

If one does not think hard and explicitly about status signaling and related issues when thinking about the happiness impact of public policy, this seems like a huge mistake.

These two problems even tie together. A lot of your tribe-based bias is your tribe’s opinion of what is higher or lower in status, or what should be higher or lower in status, or what signals raise and lower status. Talking explicitly about this allows you to better see from the other tribe’s perspective and see whether they have a point. Treating that all as given goes really badly. So talk about all of it!

If there are ideas on how to do better than that, great, let’s talk about those too. These are hard and important problems, and I don’t see how one can hope to solve them if you’re not willing to talk about them.

Alternatively, if the question was, why would shutting this off be good, I can cite the SNAFU principle, or the lived experience that containing the status problem leads to much happier humans.

A last note is that under some circumstances, being unaware of status is a superpower. You get to do the thing that cannot be done, because you do not realize why you cannot do it. I have even taken steps to protect people from this knowledge on occasions when I felt it would do harm. Knowing and not caring at all would also work, but that is rather inhuman (see point 10!). But this is the exception, not the rule.

5. Plato’s right. Will to truth is rooted in eros. Longing for recognition must be translated into a longing for a weird semantic relation.

I disagree, unless we are taking the ‘everything is about eros’ model of the world, in which case the statement does not mean much. I never really ‘got’ Plato and ended up not reading much of him; I felt he was frequently portraying Socrates using definition-based traps on his less-than-brilliant debate partners. But I do feel like I get the Socratic method in practice, and I do not understand why eros is involved – I notice I am confused and would appreciate an explanation. Will to truth can be and often is practical; it can and should be the curiosity and playfulness of ludus, and eros is often truth’s enemy number one.

The times when my mind is focused on eros are, if anything, exactly the times it is not especially worried about truth, and most media seems to strongly back this up.

When it comes to which type of love is the love of truth, of the six Greek words for love I would put eros at least behind ludus and philia, and probably also agape. I could be convinced to put it ahead of pragma, and it presumably beats out philautia, but all six seem like they work.

The more time I spend thinking about this point, the more I notice I am confused by Plato’s perspective here, and I would love some help understanding it, as my instinctual and reflective readings of his point do not seem to make any sense.

I also don’t see this as a problem if Plato is right, since I already know about how much and how well people reason, so it wouldn’t be bad news or anything. But I am definitely confused.

6. Hume is right. Good things, like reason, come from co-opting and reshaping base motives. Reason’s enslaved to the impulse that drives it.

I agree that this is the default situation. I view it as something to overcome.

7. I see almost no interest among rationality folks in cultivating and shaping the arational motives behind rational cognition.

It is not reasonable to expect outsiders to know what insiders are up to before criticizing them, so I will simply point out, as Scott did, that this takes up a huge percentage of our cognitive efforts. Someone not noticing is a fact about their failure to notice, rather than a fact about our lack of interest or lack of action.

We are very aware that we are running on corrupted hardware. Attempting explicitly to get yourself to do what you logically conclude you should do, and often failing, helps a lot in recognizing the problem. A lot of us work hard to battle akrasia. Avoiding rationalization is an entire sequence. We study evolutionary psychology. We recognize that we are adaptation executors, so we seek to figure out good habits to install, and then to install good habits that will help us achieve our goals, only some of which involve being rational. There is a lot of implicit recognition of virtue ethics following from consequentialism, even if 60% identified as utilitarian on the Less Wrong survey.

If anything, I see our community as paying much more attention than most others to how to shape our arational motives, rather than less attention. That applies to sources of irrational action and cognition, so they can be overcome, and also to sources of rational cognition, so they can be encouraged, enabled and improved.

8. Bayes’ Law is much less important than understanding what would get somebody to ever care about applying Bayes’ Law.

Seriously? I personally have tried to figure out what would get people to care about Bayes’ Law, as well as trying to get people to care about Bayes’ Law (and trying to teach Bayes’ Law). We are pretty much the only game in town when it comes to trying to get people to care about Bayes’ Law. Not everyone cares simply because this is what probability and truth actually is. You need a better argument than that. We teach and preach the good word of Bayes’ Law, so to do that we try to write new and better explanations and justifications for it. We kind of do this all the time. We do it so much it sometimes gets rather obnoxious. I challenge anyone to name three people outside the Rationalist community, who have put more effort into figuring out what would get somebody to ever care about applying Bayes’ Law, or more effort into getting them to actually care, than Eliezer Yudkowsky, and if you do, provide their contact information, because I want to know how it went.

If anyone has good ideas, please share them, because I agree this is a really important problem!

(It also seems logically impossible for Bayes’ Law to be less important than getting someone to care about applying it, but yes, getting someone to care can be, and might well be, the harder part of capturing the benefits.)
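For anyone who has preached the good word of Bayes’ Law without ever seeing it run, here is a minimal sketch of applying it to the standard medical-test example. The numbers are illustrative assumptions of mine, not anything from the post:

```python
# Bayes' Law: P(H|E) = P(E|H) * P(H) / P(E)
# Illustrative numbers: a condition with a 1% base rate, a test with
# 90% sensitivity and a 5% false-positive rate.
prior = 0.01            # P(H): base rate of the condition
sensitivity = 0.90      # P(E|H): positive test given the condition
false_positive = 0.05   # P(E|~H): positive test without the condition

# Total probability of a positive test, by the law of total probability.
p_evidence = sensitivity * prior + false_positive * (1 - prior)

# Posterior: probability of the condition given a positive test.
posterior = sensitivity * prior / p_evidence
print(round(posterior, 3))  # prints 0.154
```

The punchline, and the reason people might come to care: even with a positive result from a 90%-sensitive test, the odds the condition is present are only about 15%, because the base rate dominates. Most people, including most doctors, guess far higher.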

10. Aristotle says a real philosopher is inhuman.

Funny you should mention that. There is a good chance that soon, we will have Artificial General Intelligence, and if we do, it better be a real and damn good philosopher. The alternative, if we fail to achieve this, is likely to be a universe with zero value. This is why we sometimes say that philosophy has a deadline. Alternatively, we are busily creating automated systems now that, while only narrow AIs, are optimizing our world for things we may not want, along with things like public policy. If our real philosophers, our most important philosophers, are not human, it seems important that they get their philosophy right, and that means we have to do this work now. Thinking about these problems very carefully is really important.

I would also say that if you are a human doing philosophy, and it results in you becoming inhuman, you probably need to do better philosophy!

9. If the reason people don’t is that it would destabilize their identity and relationships, maybe it’s not so great to be right.

11. If the normal syndrome of epistemic irrationality is instrumentally rational, then you’ve got to choose your irrationality.

12. The inclination to choose epistemic rationality is evidence of being bad at life.

YES! AGREED! The Way is not for everyone. It is for those who wish to seek it, and for those who wish it. It is not a universal path that everyone must adopt, for it is a hard path.

It is the writer who tells the class that if you can imagine yourself doing anything other than writing, then you should go do that other thing, and not be a writer.

It is the master who looks at the student and tells her to go home, she is not welcome here, until she persists and convinces the master otherwise.

Those of us who are here, and have come far enough along the path, are not a random sample. We were warned, or should have been, and we persisted.

Rationality is a distinct path for figuring out how the world works. Without it, one’s brain is a system of kludges that mostly get reasonable answers. To improve that on the margin, you add another kludge. Some of those even involve learning Bayes’ Rule, or learning about some of the biases, in ways we have found to work on their own, complete with tricks to get better outcomes.

To make ourselves all that we can be, you need to dismantle and throw out large parts of the code base and start again. That is not a fast solution. That is what rationality does.

It says, we are going to figure things out from first principles. We are going to logically deduce what is going on. We will say explicitly what others only say implicitly, so we must have the freedom and a space to do so. Question everything. Your punches might still bounce off everything, but the important thing is to work on your form until your hand bleeds. Rather than instinctively playing status games, or aiming the bat vaguely at the ball, think about the underlying mechanisms. Learn the rules, learn the rules for learning the rules, and, if you need to, number them. When you get it wrong, figure out why. When you get it right, figure that out too. Make becoming stronger your highest virtue. Go meta all the time, perhaps too much (better too much than not enough, but if you disagree, let’s talk about that…).

Eventually, you come out stronger, faster, better, than the ones who never rebuilt their systems. But the process of upgrading is not fast. For many it kind of sucks, although for those for whom figuring out how things work is the interesting thing, you get to enjoy the process.

Think of this as a version of the resource curse. You go through that process because the old model wouldn’t cut it. If you can’t chop wood and carry water, perhaps it is time to seek enlightenment.

If your life is already going pretty much OK, having an illogical thinking process does not especially bother you, and you do not have something to protect, why would you go through all that?

You wouldn’t. And you shouldn’t! The normal syndrome of epistemic irrationality is a good enough approximation for you. Pick up a few tricks, some defenses against those who would exploit you, a link to how to buy low-cost ETFs or mutual funds, and get on with your life! You’ve got things to do and slack in the system.

If taking the already long and hard path would also destroy your relationships or threaten your identity, that’s also a damn good reason to turn aside. Maybe it’s still worth it. Maybe you think truth is more important. Maybe you have something to protect. Probably not, though. And that is fine!

13. It’s understandable that these people should want a community in which they can feel better and maybe even superior about this.

14. We all need a home. If that desire for safety/acceptance/status produces more usefully true thoughts, that’s awesome.

Amen, brother. Truth requires good institutions and norms supporting it. If every time you choose the truth over the convenient, the weird over the normal, you took a social hit, your good habits are in a lot of trouble. Is it right for us to feel superior?

Well, yeah. It is. That is how you build norms and institutions that reinforce the things you want. You value them, and the people who have them get to feel superior, thus encouraging those things. And presumably you think more of those things is superior, so feeling that way seems pretty reasonable.

Then everyone else gets all the positive externalities that come when people work hard to figure things out, and then work hard to tell everyone all their findings.

Others, of course, have said a mix of things including many that are… less friendly. Of course some people pull out Spock, or point to some arrogant thing Eliezer Yudkowsky said once (but, to be fair, probably actually believes), or cite an example of utilitarian math giving the ‘wrong answer’, or any number of other things. A lot of it isn’t constructive or fair. So that means we can take our place among groups like men, women, gender non-binary people, gays, straight people, black people, white people, Hispanics, Asians, Jews, Muslims, Christians, Atheists, Buddhists, Democrats, Republicans, Libertarians, Socialists, Communists, Fascists, Astronomers, Astrologers, Architects, Teachers, Cops, Economists, and fans of various romantic pairings on many different television series, among many many others, in the category of groups that the internet said a ton of completely ignorant, false, mean and unhelpful things about today.

I am fine with that. Bring it on. I have the feeling we can handle it.


4 Responses to Why Rationality?

  1. Doug S. says:

    ::applauds::

  2. benquo says:

    >I never really ‘got’ Plato and ended up not reading much of him, I felt he was portraying Socrates frequently using definition-based traps on his less-than-brilliant debate partners

You’d have to read the dialogues as intellectual dramas, not hamhandedly rhetoricized treatises, to get anything important out of them. The Platonic dialogues are largely an attempt to portray the process of leading people to philosophy, not an attempt to lay out a set of arguments for a single coherent worldview. (Plato pretty directly says this – and that the philosophy can’t be taught by writing down the content – in his 7th Letter, and puts similar words into Socrates’s mouth in Phaedrus.) Often Socrates sneaks bad arguments past people, or he gets people to accept premises we would see as strange. This is, as far as I can tell, often intentional; in those cases, ask why his interlocutor accepts the premise.

    Bloom’s introduction to his translation of Republic is the only translator’s introduction to a great work of philosophy that I’d actually recommend reading, and gets across a bit of the sense of how to properly read Plato.

    • TheZvi says:

First time I’ve heard about this. It is a great example of something that is huge if true, and which no one ever told me, including prior to assigning me multiple works by the man in college. It also raises the question of why the way to lead people to philosophy is to hammer them with bad arguments. Is it to show them that you can convince them of pretty much anything, and here’s how I’m doing it, so they had best learn some philosophy? Or is it just a skill-level thing: you want to let people learn by seeing why they’re being tricked, and so actually sound arguments would be bad?

      More to the point, now that I am aware of this, given what you know about me, do you think that I should try to read Plato again, and if so which work? (Also interested in other recommendations, of course)

  3. Pingback: Bad Religion – Put A Number On It!
