Against Facebook

Leads to: Against Facebook: Comparison to Alternatives and Call to Action, and Help Us Find Your Blog (and others)

Note: WordPress seems to be eating line breaks. I hope I have them all fixed at this point.

Epistemic Status: Eliezer Yudkowsky writing the sequences. They sentenced me to twenty years of boredom. Galileo. This army. Chris Christie to Marco Rubio at the debate. OF COURSE! A woman scorned. For great justice. The Fire of a Thousand Suns. Expelling the moneylenders from the Temple. My Name is Susan Ivanova and/or Inigo Montoya. You killed my father. Prepare to die. Indeed. It’s a trap. Tomfidence. I swear on my honor. End this. I know Kung Fu. Buckle up, Rupert. May the Gods strike me down to Bayes Hell. Compass Rose. A Lannister paying his debts. The line must be drawn here. This far, no farther. They may take our lives, but they will never take our freedom. Those who oppose me likely belong to the other political party. Ball don’t lie. Because someone has to, and no one else will. I’ve always wanted to slay a dragon. Persona!


This post is divided into sections:

1. A model breaking down how Facebook actually works.
2. An experiment with my News Feed.
3. Living with the Algorithm.
4. See First, Facebook’s most friendly feature.
5. Facebook is an evil monopolistic pariah Moloch.
6. Facebook is bad for you and Facebook is ruining your life.
7. Facebook is destroying discourse and the public record.
8. Facebook is out to get you.

A second shorter post will then lay out what I believe is the right allocation of online communication. Some readers will want to skip ahead to that one, and I will understand.

I felt I had to document my explorations and lay out my case, but I trust that most of you already know Facebook is terrible and don’t need to read 7000 words explaining why. If that is you, skip to the comparison to alternatives or the call to action at the end. I won’t blame you.

A Model Breaking Down How Facebook Actually Works


Facebook can be divided into its component features. Some of these features add value to the world. I will start with those, because they form the foundation of the trap. These are the friendly parts of the system, the parts that are your friends. They are not out to get you. If the rest of the system were also not out to get us, or we had it under control, I would use the good and mixed parts more.
The good:


Contact Information


Facebook’s best reason to exist is as a repository of contact information. If you know someone’s name, you have a simple way to request access to their email and their phone number. If you are already friends with them, that information is already waiting for you without having to ask. Effectively we have a phone book that only works at the right times. This is a very good thing.


Event Planner

The Event Planner is quite handy. Note the structure that it uses, because it will contrast with other sections. If you are invited to an event, it is easy to find under events or your notifications, as it should be. If you go to the event page, it prominently contains the key things you need most, allowing you to easily see name, time, location, who is going and details in that order (I would swap details and who is going, but this is a quibble). There is a quick button to show a map. If you want to search for events with similar descriptions, or at the same location, that’s a click away, but it is not forced upon you. Related events are quietly and politely listed on the right side.




The only downside is that there are people who feel it is appropriate to invite hundreds or thousands of people to their event without checking to see if they even live within a few hundred miles or might plausibly be interested. Facebook seems to lower the psychological and logistical barriers to doing this, but it also makes it easier to turn an invitation down, without asking people to adopt yet another similar planning system.
Overall, good stuff, and I wish I felt comfortable using it more.
The bad:


Messenger Service


Facebook’s messenger service is perfectly serviceable in a pinch. I strongly prefer to use other services, because they are not associated with the evil machine, but that is the only real way (other than Signal’s encryption, or wanting to move to video) in which this is effectively different from chatting over text, Skype, Google, WhatsApp, Signal or anything else. On my phone, I use Trillian to unify a whole bunch of such services, which I used to use a lot, but I no longer find this worth bothering with on desktop.




Groups

Groups are a good idea. Who doesn’t like groups?


The first problem is that literally anyone on your friends list can add you to any group at any time unless you explicitly block them group by group. This is our first (mild) hint that Facebook might be out to get us. A system that was not out to get us would simply ask us, do you want to join? There would be a button marked “Yes” and a button marked “No.” Instead, the system presumes you want in, so there will be more content to throw at you.


The second problem digs deeper, and is a less-bad version of the problems of the News Feed: the groups are horribly disorganized. All you have is a series of posts you can try to endlessly scroll through. If people want to comment on something, there are unthreaded comments on the posts, where it is not obvious what is and isn’t new.


If your goal is something like hosting the discussions of a Magic team, you’re screwed. You have to constantly go check for new things. Even if you do, you have little confidence that any new thing will be noticed. If there are types of things you care about, you have to scroll through and pick them out of the scroll of fully expanded items, like this is the Library of Alexandria, scanning for new comments.


Except wait. Even then, you are still screwed. See this thread. 

This means:
You cannot count on the posts being in chronological order.
You cannot count on the posts being in the same order as last time.
There is no depth of search that assures you that you have seen all the posts.
There is no depth of search that assures you that you have seen all the new comments.
Each post you see needs to be carefully scanned for new comments, since the order does not tell you if any comments are new. If you don’t remember every comment on every post, good luck not wasting tons of time.
There is no way to know that your friends have seen a post or comment you make, no matter what procedure your friends commit to doing.
Because Facebook is willing to silently change such rules, other than maybe carefully scanning the entire archive of the group, you cannot count on anything at all, EVER. Even if you did find a solution, you could not assume the solution still worked.


This may sound like a quibble. It is not. When my Magic team Mad Apple teamed up with some youngsters, we agreed to try their method of using a Facebook group for discussion instead of using e-mail. This was a complete and utter disaster. I spent a stupid amount of time checking for new comments, trying to read the comments, trying to see answers to my comments. When I posted things, often I would refer to them and it was obvious others did not know what I was talking about. Eventually I gave up and went back to using email, effectively cutting discussion off with half of my team, because at least I could talk to the other half at all. I did not do well in that tournament.


I even heard the following anecdote this week: “When browsing a group looking for a post, I have even seen the same post multiple times because there was enough time while scrolling for Facebook to change its algorithm.”


How did things get so bad? I have a theory. It goes something like this:


Facebook uses machine learning in an attempt to maximize the number of posts people will view, because they think that ‘number of posts viewed’ is the best way to measure engagement, and that it determines the number of advertisements they can display. At first glance, this seems reasonable.

They then run an experiment where they compare groups that are in a logical order that stays the same and is predictable, to groups that are not in a logical order and are constantly changing.


Some people respond to this second group by silently missing posts, or by only viewing a subset of posts anyway; those people barely notice any difference. Other people are using groups to actually communicate with other people, and notice. They then feel the need to scroll a lot more, to make sure the chance of missing anything is minimized. They might want to change group platforms, but groups are large and coordination is hard, so by the time some of them actually leave, the algorithm doesn’t think to link it back to the changes that randomized the order of the posts – by now it’s changed things ten more times.


The more the algorithm makes it hard to find things, the more posts people look at. Thus, the algorithm makes posts harder and harder to find, intentionally (in a sense) scrambling its system periodically to prevent people from knowing what is going on. If people knew what was going on, they would be able to do something reasonable, and that would be terrible.


To be fair to Facebook, this is not automatically a problem. It is only a problem if you want to reliably communicate with other people. If you do not care to do that, it does not really matter. Thus, if your selected group is “Dank EA Memes” then you could argue that this particular problem does not apply.
The high ad ratio applies.
The problem of ‘you have to look at entire posts and can never look at summaries’ applies.
The problem of ‘your discussions have no threading’ applies.
The problem of ‘tons of optimization pressure towards distorted metrics that destroy value’ applies.
The problem of ‘Facebook is evil’ still, of course, applies.
The problem of ‘They have made efficient navigation impossible’ though, is one that this type of group can tolerate. I will give them that.
We’ll talk about those other problems in other sections, since they all apply to the News Feed.
Games and Other Side Apps
Technically Facebook still offers games and other side apps, but my understanding is that people have learned not to use them, because they are the actual worst, and for the most part Facebook has learned that everyone has learned this, and quit bothering people on this front. I will at least give the site credit for learning in this case.



The News Feed
The News Feed is the heart of Facebook. When we talk about Facebook, mostly we are talking about the News Feed, because the News Feed is where everything goes. You post something, and then Facebook uses a machine learning based black box algorithm to determine when to show the post and who to show the post to. When composing, you think about the box. When deciding whether to respond, you think about the box. You click boxes all over the place trying to train the algorithm to give you the information and engagement you want, but the box does not care what you think. The box has decided what is best for you, and while it is willing to let you set a few ground rules it has to live by, it is going to do what it thinks is going to addict you to the site and keep you scrolling.


There is one feature that actually kind of works, which is the “See First” option you can select for some people. Facebook will respect that and put their content first, allowing you to (I think) be reasonably confident that if they post, you will have seen it the first time, and see it before other things. That does not give you any reasonable way to keep tabs on ongoing discussions, but it does at least mean you won’t miss anything terribly important right off the bat.


Beyond that, the system does not respond well to training, or at least to my attempts to train it, as this will illustrate.


This is a random sample of my news feed. Before I write the rest of this I pre-commit to cataloging the next 30 things that appear after the ad I just saw (to start at the beginning of a cycle). I will censor anything that seems plausibly sensitive.


1. Nathanial Mark Price was tagged in a photo.
Facebook thinks that when Ben Baker, who I have never heard of, posts a photo containing one of my thousand friends, that I should see this. My attempts to teach Facebook that I could not possibly care less (e.g. actively clicking to hide the last X of these where X is large) do not seem to work. It thinks the problem is Ben Baker, or the problem is Nathanial Mark Price. Neither of them are the problem. Is this pattern really that hard?
Seriously, unless there is something I am missing about why a machine learning algorithm can’t figure out that some people generally don’t like to see photos of their friends that are posted by people who are not their friends, when those people are explicitly labeling examples for it, I can only conclude that the algorithm does not want to figure this out. If there is an actual reason why this might be hard, please comment.

2. Diablo: In case you missed it: Patch 2.5.0 is live!
Useless, since I finished playing a long time ago, but I did follow them at one point when I was trying to use the site. Or at least, I’m assuming that this is true. All right, my bad, I’ll unfollow. Oh wait, there is no unfollow button? So either I wasn’t following and they put this here anyway, in which case either this was an ad that pretended not to be or Facebook actively thinks I would want to know about a patch to a game I bought several years ago (I’ll give it credit for knowing I own Diablo III), or I was following and they didn’t give me an unfollow option. Instead I chose to hide all posts from Diablo, so if they announce Diablo IV, I’ll just have to figure that out one of twenty other ways I’d learn about it. I can live with that.

3. John Stolzmann was tagged in this. (This is a photo and video by Beryl Cahapay, who I have never heard of, called ‘Day at the races’).
Facebook seems to believe that being tagged in a photo is an example of a post being overqualified to be shown to me. All you really need is that one of my friends was tagged. That friend being John Stolzmann. Who? Since I did not actually remember who he was before I Googled, I unfollowed John Stolzmann, although normally I prefer to wait until the person actually posts something before doing that.

4. Nicole Patrice was tagged in a photo.
Note that the photo does not, in fact, contain Nicole Patrice. The photo was posted by Nora Maccoby Hathaway, who is not even listed as having mutual friends with me when I hover over her name. Great filtering, guys.

5. An actual post by a friend! Giego Calerio says: “Given cost 3.5 G-happy-mo’s…what’s the Exp life gain of freezing bone marrow now?”
I decline to click on the link because if I do that, Facebook’s algorithm might get the wrong idea, but I’m not sure how it could get much worse, so maybe I worry too much. Giego is at least asking a valid question. He does seem to be making some bad calculations (e.g. he is treating all hours of life as equal, when youthful hours should be treated as more valuable than later hours, from a fun perspective) and is considering a surgical procedure where his expected ROI is 3.6 months of life in exchange for 3.5 months lost, which to him says “obvious yes” and to me says “obvious no” because you don’t do things like let someone do a costly surgical procedure unless you think you are getting massive, massive gains due to model error, risk/reward of being right/wrong, precautionary principle and other similar concerns. It is certainly not a ‘no brainer.’ But I don’t want to signal boost when someone is being Wrong On The Internet, and also I don’t comment on Facebook, so I say nothing. Except here.

6. Hearthstone Ad
All right, I basically never play anymore, but good choice. Points.

7. Tomoharu Saito says in Japanese, according to the translation: “There’s an American GP in the next camp.”
I think something was lost in translation.

8. Tomoharu Saito says in Japanese, according to the translation: “Rishi, I’m too tired, w. I’m tired, w. I got a barista from RI.”
Either the man is a poet and doesn’t even know it, or more likely Facebook needs to make a deal with Google Translate. Either way, looks like I can’t follow people posting in Japanese.

9. Adrian Sullivan posts he “is now contemplating a new Busta song featuring a zen-like feel, ‘Haiko couplets’, with Russiagate and Michael Flynn as its subject!”
Go for it?

10. Ferrett Steinmetz notes that “It is now officially impossible to preorder Mass Effect Andromeda”
Which makes sense since it was released last Tuesday.


11. Arthur Breitman posts something about shipping apps I saw on Twitter and I don’t know enough technical details to grok.
I’m sure it is thought out, though.
12. Kamikoto Ad for a stainless steel knife at about 85% off!
Swing and a miss.
13. Michael Blume asks: “I think I’m starting to be out of touch – can anyone tell me why people keep photoshopping the same crying person onto Paul Ryan?”
Can’t help you, sorry.
14. Tudor Boloni links to a Twitter post that links to a paper, saying “it’s hard to interpret.”
Oh yeah, that guy. I really should unfollow him. Done. Paper could in theory have been interesting I guess.
15. Robin Hanson posts: Ancient Hebrews didn’t believe in immortal soul, nor do most Christian theologians/philosophers today.
Saw that earlier on Twitter, which makes sense, he likely cross-posts everything. I put him in the See First category anyway just in case since his Twitter posts on average are very good and perhaps the discussions are good here, or some posts are not cross-posted. I guess Points.
16. Mandy Souza posted two updates. One is ‘lingerie model reveals truth about photoshoots by taking ‘real’ photos at home.’ The second is from Thug Life Videos.
I admit that the video was mildly amusing. The article is obvious clickbait. Hid them both.
17. Ferrett Steinmetz posts “Perfect for all your Vegan Chewbacca” needs and a picture.
OK then. Told it to show less from Twitter.
18. Nate Heiss shared The Verge’s video. It seems Elon Musk’s solar glass roofs can be ordered next month.
So, congrats, Elon?
19. Mack Weldon Ad for airflow enhanced underwear.
I’ll get right on that. One for three.


20. Brian David-Marshall thinks he has found the best ice cream scoop for hand to hand combat.
You can always count on Brian for news you can use.


21. Michael Blume retweeting Sam Bowman saying: My politics in a tweet: Use free markets to create as much wealth as possible and redistribute some of it afterwards to help unlucky people.
That idea sounds great. Glad he’s endorsing it, I suppose.



22. Phil Robinson wishes Happy 113th Birthday to Joseph Campbell.
And a very happy unbirthday to you, sir.



23. Teddy Morrow started a tournament on [some poker app]
How obnoxious. Hide all posts from the app, please.



24. Jelger Wiegersma is 6-3 at GP Orlando, shares his deck.
Points. I’m guessing David Williams gave him a B+ on the photo?


25. Teddy Morrow spun the Mega Bonus wheel on [same poker app as #23]
That’s even more obnoxious.



26. “Remarkable” ad for a tablet you can write on like paper.
I guess if you gotta give me ads that’s not obnoxious.



27. Rob Zahra posts a link to “People Are Really Conflicted About This Nude Claymation Video” and says “it’s not sketchy…”
I choose to remain unconflicted.



28. Mike Turian posts: At the Father Daughter dance! Stopping for a quick arts and crafts break!
This one made me smile. Points.



29. Adrian Sullivan is suddenly craving grilled cheese…
He’s in Wisconsin, so I think this will work itself out.



30. Ron Foster posts photo and says “Sculpture seen in downtown Kirkland. Look familiar, Brian David-Marshall?”



The good news is I do remember who Ron Foster is. That’s all of the good news.
So let’s add that up:
Number of posts that got ‘points’: 3, or 10%. I could argue that this should be as high as 4 or 13.3%.
Number of posts I would have regretted missing or provided meaningful news about someone: 0
Number of posts that attempted to provide intellectual value: 4 if you want to be really generous.
Number of posts that provided intellectual value: 0 or 1 depending on if you count duplication
Number of ads: 3 or 4, hard to tell. Not too bad?
Number of posts that 100% I should never see but can’t figure out how to stop: 7 out of 27 non-ads (so 1/3 of posts are this or ads).


That went… better than I would have expected given my other experiences, but I am attempting to be a good and objective scientist, and will accept the sample.
Now think about whether you see that list and think “I want to take something like that, and hide our community discourse inside a list like that, and leave what to display up to a black box algorithm that is maximizing ‘interactions’!”

Great idea, everyone.


Living with the Algorithm


Now that we have seen the algorithm in detail a bit, it is time to ask how the algorithm actually works and what it does. Since it is constantly changing this is not an easy problem. One can do this by observing the results, by theorizing, or by reading up on the problem. My strategy here will be a mix of all three. I’ve already done some theorizing with respect to groups. Similar logic will apply here. I have also taken a sample of the feed and analyzed it, and generally looked through a large number of posts looking for other patterns. This is also where I stopped writing in order to Google up some articles on how the algorithm works, in the hopes of getting a more complete picture that way.
First principles say, and both reading and casual observation confirm, that Facebook’s primary tool will be to use interactions. If you interact with a post, that is good. That means engagement. If you do not interact with a post, that is bad, it means you did not engage. Thus, posts are rewarded if they create interaction, punished if they do not.
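A toy sketch of that incentive in code may help. This is entirely my guess at the shape of the objective, not Facebook’s actual system; the field names and the engagement-rate heuristic are invented for illustration:

```python
def rank_feed(posts, history):
    """Toy engagement-first ranker.

    posts: list of dicts with 'type' and 'interactions' keys.
    history: maps a post type to (interactions, views) for this user,
    i.e. how often they engaged with that type in the past.
    Posts predicted to generate interaction float to the top;
    posts that did not generate interaction sink.
    """
    def score(post):
        inter, views = history.get(post["type"], (0, 1))
        engagement_rate = inter / max(views, 1)
        return engagement_rate * post["interactions"]

    return sorted(posts, key=score, reverse=True)

# A user who rarely engages with photos but often clicks links:
posts = [
    {"type": "photo", "interactions": 100},
    {"type": "link", "interactions": 10},
]
history = {"photo": (1, 100), "link": (5, 10)}
ranked = rank_feed(posts, history)
# The link ranks first despite far fewer raw interactions,
# because this user's own engagement rate dominates.
```

Notice that nothing in the objective measures whether the user was glad to have seen the post, which is the gap the experiment below tries to quantify.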



Time for another experiment! Let’s see how big this effect is. For the next 20 posts, excluding advertisements since those are paid for, let’s record the number of interactions (likes/reactions plus comments) and then compare those 20 posts to the 6th-10th posts in the same person’s timeline (excluding the original post, and counting only posts by the person in question; that second requirement was added after I realized other people’s stuff appears in timelines a bunch). The delay is so that people have time to react and new posts are not overly punished by comparison. Note that in the first experiment, the feed was close to ‘looping around’ to the start of another session, which is why it turned out to ‘improve’ somewhat in the later half, and this is unlikely to be the case here.
While running the experiment, let’s also rate posts by how happy I am to have seen them (on an arbitrary scale where 0 means I would not have missed it at all but I am not actively unhappy to have seen it, -5 means OMG my eyes or fake news, +10 means big win, and +20 means they got married or something). System 1 has final say.


Our prediction is that the interaction numbers will be higher, but with large uncertainty as to how much higher, and that a similar thing will happen for ratings. Note that whose posts are shown is also not random, and we are intentionally taking that out of the equation for now, so sorting is much stronger than this would suggest on its own.



Since I will be evaluating entire timelines, I will not include names. If two posts come from the same person, the second will be skipped.
Also excluded is what I consider ‘Facebook spam’, stuff like ‘reacted to a post.’ Note that the average post in the timeline (even without ads) rates lower than the average rating this system will generate, but it is not hugely lower.


Post 1: 15 interactions. Rating 0. Mildly amusing tweet. Was #8 in timeline.
Timeline posts 6-10: 6 (-3), 1 (-3), 24 (+2), 9 (0), 5 (-3). Negatives here come from person’s need to do constant political commentary.
Post 2: 2 interactions. Rating +1. Mildly amusing video. Was #9 in timeline.
Timeline posts 6-10: 5 (-1), 1 (1), 4 (0), 22 (+2), 2 (-1). Person mostly posts little things intended to mildly amuse.
Post 3: 161 Interactions. Rating +3. Personal message related to actual life event. Was after #10 in timeline.
Timeline posts 6-10: 50 (-2), 34 (+1), 40 (0), 15 (0), 15 (0). Someone figured out how to get people engaged!
Post 4: 9 Interactions. Rating -5. Fake Magic spoiler.
Edit: Well, it is April 1 as I write this. But still. Not cool.
Timeline posts 6-10: 11 (+2), 78 (0), 34 (+1), 39 (0), 9 (-1). Mostly Magic content.
Post 5: 0 interactions. Rating 0. Wikipedia link. Was beyond #10.
Timeline posts 6-10: 0 (0), 5 (+5 for actual intellectual interest), 3 (+1), 1 (+4 again!), 18 (+2).
He posts links to science and philosophy stuff I would otherwise miss and seem worth investigating! No way I would have known if I hadn’t looked at the timeline. Promoted him to See First.
Post 6: 101 interactions. Rating +2. Important life PSA (for others who need it, I did not need it). Was beyond #10.
Timeline posts 6-10: 25 (0), 110 (+1), 28 (0), 70 (+3), 85 (0).
Person lives in The Bay, uses Facebook largely to coordinate events. If I was local and looking to hang out, this would be very good, but I am more of a thousands-of-miles-away person who has met her once.
Post 7: 95 Interactions. Rating +1. Magic preview card. Was before #6.
Posts 6-10. 14 (0), 61 (0), 66 (0), 24 (+3), 42 (+1).
Posts links to his Magic articles and activities.
Post 8: 48 Interactions. Rating -1. Was beyond #10.
3 (-1), 24 (0), 215 (+2), 159 (+1), 30 (+2).
Has interests that do not overlap with mine, also some that do.
Post 9: 14 Interactions. Rating -1. Was #10.
Posts 6-10: 4 (1), 3 (1), 13 (0), 4 (0), 2 (0).
Shares AI-related articles. They do not seem like they are worth reading.
Post 10: 12 Interactions. Rating +1. Was beyond #10.
Posts 6-10: 21 (0), 10 (+3 because F*** California), 21 (+1), 17 (+2), 39 (+3).
Post 11: 51 Interactions. Rating +1. Was #5.
Posts 6-10: 6 (+1), 7 (0), 9 (+1), 19 (+3), 12 (-1).
Placing bets!
Post 12: 15 Interactions. Rating +1. Was beyond #10.
Posts 6-10: 38 (+3), 51 (0), 30 (+1), 12 (0), 10 (0).
Always the jokester.
Post 13: 38 Interactions. Rating -1. Was beyond #10.
Posts 6-10: 9 (0), 44 (-3), 8 (-1), 15 (0), 21 (+1).
Confident opinions, confidently held. Negative is for political echo chambering.
Post 14: 4 Interactions. 0 Rating. Was after #10.
0 (-1), 5 (0), 11 (0), 2 (0), 2 (0).
No interest overlap. Got an unfollow.
Post 15: 6 Interactions. 0 Rating. Was after #10.
14 (-1), 10 (0), 2 (-3), 2 (-3), 10 (-1).
Political screaming.
Post 16: 3 Interactions. 0 Rating. Was #7.
1 (-5), 6 (0), 1 (-3), 2 (-3), 2 (-1).
Video guy.
Post 17: 7 Interactions. 0 Rating. Was #4.
9 (-1), 12 (0), 18 (0), 8 (0), 1 (-2).
A friend who is a lot smarter in person than they appear online, including about politics. Sometimes in these situations I wonder which one is real…
Post 18: 15 Interactions. 0 Rating. Was beyond #10.
4 (0), 8 (+2), 23 (0), 26 (0), 20 (0).
Magic related.
Post 19: 1 Interaction. -1 Rating. Was beyond #10.
2 (-1), 1 (-1), 0 (0), 0 (-1), 0 (-5).
I’ll just say this one is basically on me.
Post 20: 16 Interactions. +1 Rating. Was #6.
10 (+1), 0 (0), 2 (0), 4 (0), 2 (0).



Before examining the data statistically, it seems like the algorithm is not adding much value. It certainly was not adding as much value as some simple heuristics would have, depending on how easy it would be to determine post types. If you wanted to predict interactions, that too seems pretty easy, although I wasn’t studying this so it didn’t show up in the data: The big numbers all revolve around a few types of posts.
If nothing else, the algorithm of “choose all the posts of the top X people” seems like it would crush Facebook’s algorithm if combined with the right amount of exploration, even if you did nothing else to improve it.


The obvious counter-argument is that my refusal to interact with Facebook, other than to tell it what I do not want to see, is preventing the algorithm from getting the data it needs to operate correctly. This seems like a reasonable objection to why the system isn’t better in my case, but it should still be better than random or better than blindly obvious heuristic rules. It certainly does not take away my curiosity as to what the system does in this situation. In addition, Facebook is known to gather information like how long one takes to read a post, so the data available should still be rather rich.


Some Basic Statistics


The average rating of a post was 0.07 if it was not selected by the algorithm, or 0.1 if it was. That’s not a zero effect, but it is a damn small one. The standard deviation of all scores was 1.67 and the difference in average rating here was 0.03, also known as 3% of the difference between “my life is identical to not seeing this post except for the loss of time” (score of 0) and “I found this slightly amusing/annoying” (score of 1 or -1).
The number of interactions was different: 30.65 for selected stories versus 20.33 for non-selected, versus a standard deviation of 33.65. If we use a log scale, we find 2.66 vs. 2.34, with a standard deviation of 1.25, so this effect is not concentrated too much in very large or very small numbers.
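For those who want to replicate the arithmetic, here is a minimal sketch of the summary statistics, run over invented sample records rather than my full dataset (I use `log1p` so zero-interaction posts don’t blow up the log scale, which will differ slightly from a plain log of interactions):

```python
import math
from statistics import mean, pstdev

def summarize(posts):
    """Compare algorithm-selected vs. unselected posts.

    posts: list of dicts with 'interactions' (int),
    'rating' (my -5..+20 happiness scale), 'selected' (bool).
    """
    sel = [p for p in posts if p["selected"]]
    non = [p for p in posts if not p["selected"]]
    return {
        "mean_rating_selected": mean(p["rating"] for p in sel),
        "mean_rating_unselected": mean(p["rating"] for p in non),
        "rating_sd_all": pstdev(p["rating"] for p in posts),
        "mean_interactions_selected": mean(p["interactions"] for p in sel),
        "mean_interactions_unselected": mean(p["interactions"] for p in non),
        "log_interactions_selected": mean(
            math.log1p(p["interactions"]) for p in sel),
        "log_interactions_unselected": mean(
            math.log1p(p["interactions"]) for p in non),
    }

# Illustrative sample, not the real data from this post:
sample = [
    {"interactions": 15, "rating": 0, "selected": True},
    {"interactions": 161, "rating": 3, "selected": True},
    {"interactions": 6, "rating": -3, "selected": False},
    {"interactions": 24, "rating": 2, "selected": False},
]
stats = summarize(sample)
```

The point of reporting both raw and log-scale interaction means is the one made above: checking that the selection effect is not driven entirely by a few huge outliers.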


What happens if we use the algorithm “show the 20 posts with the most interactions, from anyone”? We see 20 posts with a mean of 80 interactions versus 10 for unselected, and we see a much more dramatic rating differential: 0.6 average rating for selected posts, -0.03 for unselected! At first glance, it looks like not only is the algorithm not doing much work, if you control for number of interactions, it is doing negative work! Even if you need to take half your posts from the non-interaction section in order to figure out what posts people interact with, that’s still a much better plan.


What about if we use “show the top interaction-count post from each of the 20 people”? Now the posts shown will average 51 interactions (vs. 16 for other posts), and still have a 0.6 average rating. That is an even stronger result, and it makes sense, because different people have different friend groups and different tendencies for people to interact with their posts.
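The two baselines are simple enough to write down as selection rules. This is a sketch over invented records with hypothetical field names, just to pin down what each rule does:

```python
def top_n_overall(posts, n=20):
    """Baseline 1: the n highest-interaction posts, from anyone."""
    return sorted(posts, key=lambda p: p["interactions"], reverse=True)[:n]

def top_per_person(posts):
    """Baseline 2: each person's single highest-interaction post."""
    best = {}
    for p in posts:
        cur = best.get(p["author"])
        if cur is None or p["interactions"] > cur["interactions"]:
            best[p["author"]] = p
    return list(best.values())

# Illustrative records (invented, not the data above):
feed = [
    {"author": "A", "interactions": 5, "rating": 0},
    {"author": "A", "interactions": 50, "rating": 2},
    {"author": "B", "interactions": 10, "rating": 1},
    {"author": "C", "interactions": 80, "rating": 3},
]
```

Baseline 2 is the per-person normalization: it stops a handful of high-engagement friends from crowding out everyone else, which is exactly why it produced the stronger result here.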


It is also worth noting that within-person ratings were highly correlated, which implies that some combination of the system and my own filters on top of the system needs to get rid of more people who do not provide value, and put more focus on the ones that do. This is a slow process, as like many of us, I have a lot of Facebook friends and they need to be tuned one by one.


Whenever you have a complex multi-factor algorithm, the first step should be to test it against simple baselines and see if it can at least beat those. Here, the system has failed to do that.


Reading Up


I started my reading with this story. It confirms the basic elements of the system, and includes such gems as:


The news feed algorithm had blind spots that Facebook’s data scientists couldn’t have identified on their own. It took a different kind of data—qualitative human feedback—to begin to fill them in.


Really. You don’t say! What is worth noting is not that the algorithm had blind spots in the absence of qualitative human feedback. What is worth noting is that this is something that had to be realized by Facebook as some sort of insight. How could one have presumed this to be false?



This may prove to be part of the problem:

Facebook’s data scientists were aware that a small proportion of users—5 percent—were doing 85 percent of the hiding. When Facebook dug deeper, it found that a small subset of those 5 percent were hiding almost every story they saw—even ones they had liked and commented on. For these “superhiders,” it turned out, hiding a story didn’t mean they disliked it; it was simply their way of marking the post “read,” like archiving a message in Gmail.

Thus, even though hiding is usually a strong negative signal, if you cross a certain threshold, the system now thinks you are no longer expressing an opinion. Or maybe it is this gem that follows soon thereafter:

Intricate as it is, the news feed algorithm does not attempt to individually model each user’s behavior. It treats your likes as identical in value to mine, and the same is true of our hides.

Dude. You. Had. One. Job.
They also do not understand how impact works:

Even then, Facebook can’t be sure that the change won’t have some subtle, longer-term effect that it had failed to anticipate. To guard against this, it maintains a “holdout group”—a small proportion of users who don’t see the change for weeks or months after the rest of us.

Facebook is an integrated system. Keeping a small number of people on the old system isn’t quite worthless, but if the changes you make lead to long term effects that destroy the Facebook ecosystem, or damage the world at large, a reserve will not prevent this.
Thus we get ‘insights’ like this:

The algorithm is still the driving force behind the ranking of posts in your feed. But Facebook is increasingly giving users the ability to fine-tune their own feeds—a level of control it had long resisted as onerous and unnecessary. Facebook has spent seven years working on improving its ranking algorithm, Mosseri says. It has machine-learning wizards developing logistic regressions to interpret how users’ past behavior predicts what posts they’re likely to engage with in the future. “We could spend 10 more years—and we will—trying to improve those [machine-learning techniques],” Mosseri says. “But you can get a lot of value right now just by simply asking someone: ‘What do you want to see? What do you not want to see? Which friends do you always want to see at the top of your feed?’ ”

Yes, it turns out that people actually want to see posts by some friends more than other friends, and it only took years for them to figure out that this might be a good idea. People have strong, simple preferences if you let them express those preferences. The stupidity here is mind boggling enough that it seems hard for it to be unintentional. The reason why they do not let you fine-tune the news feed is not because doing so would not make the feed better. The reason why is because it would make the feed better for you, and they are invested in making it worse for you instead. Everyone knows that a proper Skinner Box needs to avoid giving away too many rewards if you want to keep people pressing the buttons and viewing the advertisements.



Facebook’s case is that this is not what they are up to, because they understand that in the long term people realize they are wasting their lives if they don’t have good experiences doing so:

There’s a potential downside, however, to giving users this sort of control: What if they’re mistaken, as humans often are, about what they really want to see? What if Facebook’s database of our online behaviors really did know us better, at least in some ways, than we knew ourselves? Could giving people the news feed they say they want actually make it less addictive than it was before?
Mosseri tells me he’s not particularly worried about that. The data so far, he explains, suggest that placing more weight on surveys and giving users more options have led to an increase in overall engagement and time spent on the site. While the two goals may seem to be in tension in the short term, “We find that qualitative improvements to the news feed look like they correlate with long-term engagement.”

The author notes that “That may be a happy coincidence if it continues to hold true” which I think is not nearly cynical enough. There is the issue of whether the long-term goals are indeed aligned, but there is the bigger problem that even if Facebook wants in some sense to focus on the long term, the tools it has been given push all parties away from doing so.


What the Algorithm Effectively Does


The algorithm attempts to find those things that promote interaction. It then rewards them with a signal boost, allowing the best to go viral. In response, people got to work optimizing their posts so that Facebook would predict people would want to interact with them, and so that people would in fact interact with them, so that others would see their posts. Professional and amateur alike started caring about approximations of metrics and got to work creating de facto clickbait and invoking Goodhart’s Law.


There is some attempt by Facebook to define interaction in good ways, such as measuring how long you spend off site on articles you click on, and there is some attempt to crack down on the worst offenders. Links to spam sites filled with advertising are being kept down as best they can. Obvious fake news gets struck down some of the time, and so on.

However, there is still a double amplification effect going on here. I choose who I want to follow based on what I think I will like, and then Facebook subfilters that based on what it thinks I will like. No matter how much Facebook wants to stay in control of things, at a minimum I can choose who my friends/follows are on the site. I will attempt to create a mix that balances short term payoff with long term payoff, safe with risky, light with dark. Facebook will then take that mix, and do its best to return the most addictive stuff it can find. I can observe this and ideally adjust, creating a pool of potential posts that is full of deep stuff with only a small number of cute videos, and perhaps that will work, but no one is going to make it easy for me.


Everything anyone writes gets warped by worrying about this. Those who rely on Facebook then get triply filtered. They choose who to follow, those people choose what to share based on what is likely to get traction (as Josh says on Crazy Ex-Girlfriend, got to keep up the LPPs, or likes per post), and then Facebook filters with the algorithm.

See First, Facebook’s Most Friendly Feature

If you must use Facebook to follow certain close friends and family, and chances are that you feel that you do need to do this, there is a solution: See First. See First is a recently introduced feature that turns the news feed from something that is out to get you into something that is not out to get you. This is because when you mark a friend or page as See First, you see everything they post, at the top of your feed, before the algorithm gets to work on the rest.

Facebook is an Evil Monopolistic Pariah Moloch


When I think about posting anything, anywhere on the internet, such as here on this blog, I have to worry about what the algorithm will say. If someone shares my post on Facebook, will anyone see it? Will anyone comment on it?


Then, people comment on Facebook instead of commenting on your post, in order to help ‘signal boost’ the share, which then leads to more comments being on the share. The majority of all discussion of this blog takes place on Facebook right now. The conversation becomes fractured, impossible to find and hard to follow, and often in a place the author does not even know about. We are forced into this ecosystem of constantly checking Facebook in order to have a normal conversation even if we never post anything to Facebook in any way at all.


In the long term, this means that Facebook ends up effectively hosting all the content, controlling what we post, how we discuss it, who sees what information, what memes spread and which ones die. It does this in the service of Moloch rather than trying to make life better for anyone, slowly warping us to value only what it values. Meanwhile, we are then forced to endure endless piles of junk in order to have any hope of seeing what is going on or what any of our friends are doing or talking about.
Well played, Facebook, I guess? Very bad for the rest of us. We cannot permit this to continue.


Facebook is Bad for You and Is Ruining Your Life


I could rattle off a bunch of links, but there is no need. I was going to say that this is the most recent study I have seen and it in turn links back to previous research. Then today I saw this one. I have not examined any of them for rigor, but would welcome others to share their findings if they do examine them. Either way, my opinion here is not due to research. My opinion is due to witnessing myself and others interact with Facebook, and also the opinion all of those people have about those interactions.


Without exception, everyone who uses Facebook regularly, who I have asked, admits that they spend too much time on Facebook. They admit that time is unproductive and they really should be doing something else, but Facebook is addictive and a way to kill time. They agree that it is making their friendships lower quality, their social interactions and discourse worse, but they feel trapped by the equilibrium that everyone else uses Facebook, and that it is there and always available. If anything is on Facebook and they do not see it, they are blameworthy. People still assume I have seen things that were on Facebook until I remind them that I don’t use it. Facebook then hides those morsels of usefulness inside a giant shell of wastes-of-time that you are forced to wade through, creating a Skinner Box. Fundamentally, Facebook is out to get you.


Facebook warps our social lives around its parameters rather than what we actually care about, and wastes time better spent on other things. That is not to discount its value as a way to organize events, share contact information, as a messenger service, or the advantages of being able to stay in touch. That is to point out that the cost of using that last one is that it does a bad job of it and will incidentally ruin your life.

Facebook is Destroying Discourse and the Public Record


Most things I read on the internet are public. When something is public, others can repost it, extend off it, comment upon it and refer back to it. The post becomes part of our collective knowledge and wisdom, and we can make progress. The best thing about many blogs is that they have laid the foundations of the author’s world view, so Scott Alexander can pepper his work with links back to old works without having to repeat himself, and if someone wants to soak up his writing there is an archive to read. When something is especially interesting, I can link or respond to that interesting thing, and see the responses and links from others.


I can’t deny that most words posted to the internet are not great discourse, but some of them are, and those are a worldwide treasure that grows by the day. When we take our conversations to the semi-private realm of Facebook, we deny the world and even our friends that privilege. I have seen a number of high quality posts to Facebook that I would like to link to or build upon, but I cannot, because that is not how Facebook works, and their implementation of comments is rather bad for extensive discussions.


When we look back a few years from now, we will not remember what was posted to Facebook. It will be as if such things never existed. That is fine for posting what you ate for lunch or coordinating a weekend trip to the ballgame, but we need to keep important things where they can be shared and preserved. It is the internet version of The Gift We Give Tomorrow.



Facebook is Out To Get You



Some things in the world are fundamentally out to get you. They are defecting, seeking to extract resources at your expense. Fees are hidden. Extra options you do not want are foisted upon you unless you fight back. The service is made intentionally worse, forcing you to pay to make it less worse. Often you must search carefully to get the least bad deals. The product is not what they claim it is, or is only the same in a technical sense. The things you want are buried underneath lots of stuff you don’t want. Everything you do is used as data and an opportunity to sell you something, rather than an opportunity to help you.



When you deal with something that is out to get you, you know it in your gut. Your brain cannot relax, for you must constantly be on the look out for tricks and traps both obvious and subtle. You can’t help but notice that everything is part of some sort of scheme. You wish you could simply walk away, but either you are already bought in or there is something here that you can’t get elsewhere, and you are stuck.



Their goal is for you not to notice they are out to get you, to blind you from the truth. You can feel it when you go to work. When you go to church. When you pay your taxes. It is the face of both bad government and bad capitalism. When you listen to a political speech, you feel it. When you deal with your wireless or cable company, you feel it. When you go to the car dealership, you feel it. It’s a trap.



Most things that are out to get you are only out to get you for a limited amount. If you are all right with being got for that amount, you can lower your defenses and relax, and you will be in a cooperative world, because they have what they came for. The restaurant wants you to overpay for wine and dessert but it is not trying to take your house. Sometimes that is the right choice, as the price can be small and one must enjoy life.



The art of deciding when to act as if someone or something is out to get you, and when to sit back and relax, is both more complex and much more important than people realize. Most people are too reluctant to enter this mode, but others are too eager, and everyone makes mistakes. I intend to address this in more depth in a future post, and ideally that one would go first, but I want to get this one out there without further delay.



If you remember one thing from this post, remember this: Facebook is out to get you. Big time.



Facebook wants your entire life. It wants you to spend every spare moment scrolling through your feed and your groups, liking posts and checking for comments, until it controls the entire internet. This is the future Facebook wants.



Fight back.


United We Blame

Blame Index Funds: Overbooking and Cross Selling (first story)

Blame The Law: The Deeper Scandal of That Brutal United Video

Blame The Culture of Law Enforcement in Aviation: The Real Reason a Man Was Dragged Off That United Flight and How To Stop It From Happening Again

Blame The Price Cap: A Proposed Regulation; Delta Authorizes Volunteer Offers Up to 10K

Blame Industry Consolidation: The Airline Industry Is a Starving Giant Gnawing At Our Economy

Blame United’s PR Department: United Airlines Offers Refunds as Outrage at a Violent Removal Continues (NY Times, but with that title, could it be anyone else?)

Blame And Sue United: Michaela Aleach on Twitter

Blame The Cult of Low Prices: In Brief: A United Airlines Theory

Don’t Blame Capitalism, Blame Lack of Capitalism: United Is Why People Hate Capitalism

Don’t Blame, Instead Ask Who Blamed Who And Why: It’s Time For Some Game Theory, United Airlines Edition (Marginal Revolution)

Blame YOU, Basically: Why Airlines Are Terrible (Thanks, Vox!)

Blame Reporters and Management: United Passenger “Removal”: A Reporting And Management Fail

Blame Mostly Unrelated Bad Airline Reporting, Not For This, Just In General: I’m All Out of Clever

No, Seriously, Blame United, They’re the Worst: How United Turned the Friendly Skies Into a Flying Hellscape; United Airlines Made Me Abandon My Mobility Device At the Gate; and honestly I could go on for a while.

Part 1: United Is The Worst

It is true. United Airlines is the worst. This is not a recent development, nor is it something we learned in the past week. United Airlines has been Brita-level worst for years. Spirit Airlines may offer worse service, but it has the common decency not to pretend it is anything but the slimeball at-your-own-risk-on-every-level experience. I can respect that. United’s slogan is “come fly the friendly skies.” My ears interpret that like they do the Domino’s ad that tells me I have thirty minutes: As a threat. I was already willing to pay a substantial premium to avoid United. If anything, that makes this a positive for my impressions of United, since they have gone from the airline everyone privately knows is awful to the airline for whom its awfulness is common knowledge.

Scope insensitivity is important to keep in mind, as is the impact of video. One passenger was forcibly removed from one plane. This type of removal is very rare. The chance of being involuntarily denied boarding a flight that actually takes off is quite low. The chance of the flight itself being cancelled outright is much higher, in which case you will most definitely be involuntarily denied boarding. The chance that you will miss the flight because of traffic, airport security and other such considerations is also much higher.

The chance of being involuntarily deboarded – removed from the plane after getting a seat – is much, much lower than even the chance of being involuntarily denied boarding. There are lots of rules designed to make this hard to do and not fun for the airline. Among these are the risk that the situation will turn violent, and in turn the risk that the police will handle themselves abysmally, as occurred here. I am still unsure how culpable United is for what the police did; for the violence we should mostly blame the police (even if I would rather blame United, since again, they’re the worst), but United’s share of the blame is not zero.

The case remains important for two reasons. One is that even though the situation in question is rare, the way it went down, and the way the company handled things afterwards, provides strong evidence and common knowledge of United’s worstness. As Paul Graham notes, at first this could have been one bad gate agent combined with some overzealous Chicago police, but the reaction clearly shows that it is endemic to United the company. The other reason is that it invites conversation about the airline industry, which is also widely known to be the worst even if it actually is not.

Seriously, everyone: Do not fly United Airlines. This is not a ‘boycott’ due to their awful behavior in this one case. This is based on the fact that I used to be a frequent flier, have been on a lot of flights, and I assure you that any discount they may be offering you is not worth the experience you will get. Pay a little extra if you have to, and get someone else.

Part 2: Airlines Are The Worst

The more interesting question to me is why airlines are so bad, if indeed they are so bad, and whether or not there is anything that can be done to fix it. Several of the links at the top are not about the incident at all, but rather about the history of airline deregulation and consolidation, and the pressure to offer low sticker prices at any cost. There is no question that things are far from optimal.

My economic model of the airline industry is that the key elements are high fixed costs, increasing returns to scale, low marginal costs, heavy regulation (when we say ‘deregulation’ of the industry, we refer to something important, but the idea that they are essentially free to put anything they want in the air however they want is downright silly), a combination of unionization and anti-unionization, highly variable consumer surplus and huge preference for low sticker prices over superior service or even lower actual prices. The combination of these factors leads to highly sub-optimal outcomes no matter what policy you use, and solving for the best practical option is tricky.

High Fixed Costs

Having an airline at all requires an expensive infrastructure. Having access to a new airport, or maintaining a hub, also requires additional expensive infrastructure. Maintaining a route that is run every day or two is expensive, and you can’t cancel flights whenever there is not too much demand for them.

Increasing Returns to Scale

The bigger you are, the better you are able to spread many of these fixed costs across many flights. Having an additional hub somewhere massively increases the utility of your airline, as you can offer efficient transfers for both passengers and employees, and invest in more on-the-ground infrastructure. Your slack and ability to cope with situations increases, your airport lounges make more sense. Your frequent flier program looks more appealing.

Low Marginal Costs

Once you offer a flight, filling the seats on that flight costs you essentially zero dollars. An empty seat is an economic disaster. If you charge anything like marginal cost for your seats, you are quickly bankrupt, which is how entire industries can end up losing money for decades. Frequent flyer programs make this worse as well. Getting someone’s business once gives them miles that can then help lock up a customer long term, which means that it makes sense to lose money on a given flight, but if you always lose money, you lose money.

Heavy Regulation

If airlines were free to offer wildly different experiences and services, they could better differentiate their products. If they were able to reasonably offer flights that were not automatically tied to a particular origin, destination, time and place, with a guarantee of service, with tickets needing to be secured in advance for security reasons, then we could get creative about serving people’s actual needs. Instead, we have been dictated to on a structure of what the experience needs to be like, what features must be locked in when and how, what safety measures must be taken and so forth. That is not to say that this regulation is bad or unnecessary. It is to say that the structure that leads to the other elements of the picture has been locked in. While First Class certainly exists, and in some ways it is quite nice, fundamentally it is exactly the same experience as coach with bigger seats and nicer service, which is why it is such an awful deal. We also instinctively blame airlines for a lot of things that are the fault of the TSA and FAA (whether or not they had a good reason).

The other problem with heavy regulation is that it shuts out airlines that want to exist, but that cannot bear the high fixed costs of compliance with the regulations, or are being shut out because we do not domestically play nice with foreign airlines. Regulation destroys competition.

Unionization and Anti-Unionization

Without getting into whether unions are good or bad, they have practical effects that make good service more difficult. In this context, I think of unions as having both a cost effect and a regulatory effect, plus an anti-unionization effect. Union labor costs more, raising costs, but this is limited by the lack of profitability of the companies; even the unionized employees, from what I can tell, have not been doing that great. The other effect is that union employees are subject to union rules. If flights started running late, employees’ shifts would have risked ‘timing out,’ and flights would start getting cancelled outright. This sounds like the kind of thing that could be solved with (relatively) small compensatory payments, leaving everyone better off, but the rules don’t work like that. Union rules make it difficult to adjust to circumstances, and they make it impossible to make Coasian bargains. Some people asked ‘why didn’t United just put their employees in a $300 Uber?’ or on another airline’s flight, and the answer is union rules. By having a rule for everything, you prevent abuse, but you also prevent flexibility. Getting extra labor when you need it becomes especially expensive, as does getting rid of a surplus, as does making any major change.

You can still sometimes get a little progress by spending a lot of Imperial Focus Points, but even when it works the exchange rate is really bad.

You also have the problem of Anti-Unionization. In order to save money, airlines turn to subcontractors and shell companies, and use any means necessary to use non-union labor. The problem (in addition to sucking for the workers) is that doing so complicates the situation even more, splits the available labor into non-compatible sections, and thus takes its own toll on flexibility. The other problem is that this creates such a tangled mess that any control over the quality of work that gets done is severely compromised. The race to the bottom continues.

Highly Variable Consumer Surplus

This problem seems underappreciated. The airlines understand what is going on and attempt to minimize the damage, which causes its own massive dead weight losses and solves only a portion of the original problem.

The average flight I go on costs $500. If there was a 100% additional tax and it cost $1,000, I would still go on the majority of those flights, and the revenue-maximizing tax on me is likely substantially higher than that. The ability to fly is worth a lot!

In situations in which we want to get somewhere in a hurry, or change our flight details, often those last minute arrangements are worth vastly more than anyone could reasonably charge. That flexibility occasionally provides massive consumer surplus.

The same goes for the existence of certain flights to under-served areas. People speak of airlines ‘killing towns’ by cutting off flight service. I believe it. That means either that many flights are much more valuable than their cost, or that the ability to fly when you need to is itself massively valuable.

If there was a surplus of airline seats and flight paths, these massive surpluses would be available to all and flights would be more pleasant, but someone has to pay for them, and the airlines are not especially profitable.

Sticker Price Preference and Search Algorithms

Think about a trip that you might want to take at some point. If you were going to book it, chances are you would go to a site like Kayak, Orbitz or Hipmunk. These sites will allow you to specify number of stops or which airlines you prefer, but once you have told the program what your dealbreakers are, the sorting is purely by price. Price is king.

Price should be king, but here the sticker price is an absolute monarch. If you are not going to choose the cheapest flight, you need a strong reason to overcome that prior. That means that the tricks work. If you transfer fees to hidden places, or give up service to save a few bucks, the experience gets worse, but you get more customers to buy now. The reputational effects might eventually catch up to you, but that takes quite a long time.

The system actively makes me feel bad for not saving every last dollar, even though I know the savings are often not even real.

If the search engines computed an ‘effective price’ that included the fees you would likely be charged for use of the overhead bin, checked bags and other incidentals (using a combination of letting you enter what you need and averages from surveys), they could display that instead of the sticker price. If that was then combined with numbers that showed other features such as seat size, leg room, average lateness and percent chance of being on time, and available Wi-Fi, entertainment and meals, and those numbers were combined into a unified score that determined how results were sorted, you would get a very different set of default choices. Even better, you could also put dollar or percent values on different airlines, or on particular travel times. Customers would choose flights they actually wanted.

The key is to get that information to automatically not only display but to change the search rankings by feeding into a ‘flight rating’ of some kind. If you have one thing that has a number, and other things that do not feed into that number, anything that doesn’t feed into the number gets undervalued. The calculation you have is the one you make decisions with, even if you know it is incomplete, because it feels objective and real. At best, other factors that can’t be easily quantified get considered with their lower-bound values.
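To make the idea concrete, here is a minimal sketch of what such a unified score might look like. The airlines, prices, fee amounts and weights below are all invented for illustration, and `flight_score` is just one plausible weighting, not a proposal for the exact formula:

```python
from dataclasses import dataclass

@dataclass
class Flight:
    airline: str
    sticker_price: float
    expected_fees: float      # bags, overhead bin, incidentals
    minutes_late_avg: float
    has_wifi: bool

def effective_price(f: Flight) -> float:
    """Sticker price plus the fees the average passenger pays."""
    return f.sticker_price + f.expected_fees

def flight_score(f: Flight, value_of_hour: float = 30.0) -> float:
    """Lower is better: effective price, plus a dollar value on
    expected lateness, minus a small credit for Wi-Fi. All weights
    are illustrative."""
    score = effective_price(f)
    score += (f.minutes_late_avg / 60.0) * value_of_hour
    if f.has_wifi:
        score -= 10.0
    return score

flights = [
    Flight("Spirit", 90.0, 75.0, 40.0, False),
    Flight("United", 120.0, 35.0, 25.0, True),
    Flight("Delta", 150.0, 10.0, 10.0, True),
]

# Sort by unified score instead of sticker price.
for f in sorted(flights, key=flight_score):
    print(f.airline, round(flight_score(f), 2))
```

Note how the default choice flips: sorted by sticker price, the $90 flight wins; sorted by effective score, it comes last once its fees and lateness are counted.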

With an integrated but diverse system, the airlines would then go about trying to optimize against the new rankings, rather than purely against the lowest price. That would mean balancing different factors, and having different flyers and websites care about different things, so it would be a much better approximation of ‘be a good airline.’ There would be room for low-cost terrible airlines like Spirit and United, and room for better ones as well.

I call upon the websites to do this for us, ideally as the default sorting system, but at least as available information you can choose to use.

In general, I think ‘force the display of information’ is one of the ways that regulation can provide value rather than destroy value. Requiring airlines to disclose a bunch of hard-to-fake statistical data would help a lot.

Solutions to Improving Air Travel

It is misleading to think of ‘deregulation’ as having been good or bad in an industry as inherently complex and regulated as air travel. One must instead think about whether particular rules are good or bad, and design a system that gives everyone the proper incentives without destroying too much value.

One thing to keep in mind is that making air travel better makes air travel better. If you improve the experience, more people want to fly and are willing to pay more, which means more flights and more competition, which means better air travel. Solving any problem in air travel helps solve every problem!

Changing the Details

Changing the details splits into several categories of things we can improve, without changing the big picture incentive structure problems involving fixed and marginal costs.

The first category of things involves weakening stupid FAA and TSA rules and regulations. A lot of what we hate about air travel is security theater and safety theater. We can cut way down on that. We can also stop treating everything involving a plane as a potential criminal action, and allow airlines to more easily add flights or adjust which plane they use, when demand is high. Everybody chill.

The second category is to improve the process of flight selection and booking. I talked above about how the booking sites could help. Regulation could potentially also help here by requiring anyone selling a ticket to disclose information about the flight and airline being selected, and the nature of any fees. Something as simple as ‘here is how much the average passenger on this flight with that class of ticket pays the airline for anything that isn’t the ticket, and here’s the resulting average trip cost’ could be a big deal.

The third category is pricing, and the tendency of airlines to continuously change prices over time in order to price discriminate. The current pricing system causes massive deadweight losses, but can we do better? The airlines need the extracted revenue rather badly, and a reckless regulation might result in last-minute flights frequently being completely unavailable. I am going to treat these details as beyond the scope of this post, but I find the design of a good system here quite the interesting problem. It is also very hard.

The fourth category is to better handle situations similar to the United flight, and other cases where someone has to get bumped or flights need to be cancelled. Getting better at auctions, and allowing those auctions to go to much higher prices (e.g. Delta’s new rule of going to $9,950) will go a long way. There are flights I have been on where I would have turned down $9,950, because that would have made me miss a Magic Pro Tour, but I think that literally every other flight I have ever been on, including those where I’m going to a Pro Tour but would have still made it in time to play, I would have been thrilled to take less than half that much money. It is enough, and if people do not think it is enough, they have a bigger status quo bias problem than even I can imagine.

Allocating more things by auction, and allowing transfers of tickets and seats via payment, both with customers paying and being paid, would allow prices to be otherwise lower, and give people the feeling that flying was like having a lottery ticket – someone might pay you big bucks for that seat! Similarly, if you cared enough, no flight would ever be sold out, and any time you arrived early, you could bribe your way onto an earlier flight if you cared enough to pay, and cheapskate travelers with time on their hands would reap the benefits. A true secondary market would be even better. You could worry that this would cause scalpers to buy all the tickets, but if the scalpers tried that too early, extra flights could be added, so regular people would have a reasonable window to secure travel. Near the travel date, prices already sometimes go through the roof, and allowing people to cash out when that happens would make this problem less bad rather than worse. There is a concern that this would cut down on the airlines’ ability to extract money via price discrimination, but I think they would more than make it up in other ways.
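The ascending auction for voluntary bumping can be sketched in a few lines. The passenger reserve prices below are hypothetical; the $9,950 cap is the Delta figure mentioned above:

```python
# Sketch of an ascending-price auction for voluntary bumping.
# The airline raises its offer in fixed increments until enough
# passengers volunteer, or the cap is hit. Reserve prices here
# are made-up numbers for illustration.
def run_bump_auction(reserve_prices, seats_needed, cap=9950, step=50):
    """Return (final_offer, number_of_volunteers)."""
    offer = 0
    while offer <= cap:
        volunteers = [p for p in reserve_prices if p <= offer]
        if len(volunteers) >= seats_needed:
            return offer, len(volunteers)
        offer += step
    # Cap reached without enough volunteers; involuntary bumping
    # (or rebooking) would have to cover the shortfall.
    return cap, sum(1 for p in reserve_prices if p <= cap)

# Four seats needed; most people would take far less than the cap.
offer, n = run_bump_auction(
    reserve_prices=[300, 450, 800, 1200, 5000], seats_needed=4)
print(offer, n)  # offer settles at 1200, with 4 volunteers
```

The point of the sketch is that the price stops rising the moment enough people say yes, so the cap almost never binds; it exists for the rare flight where everyone really needs to be on board.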

Changing the Frequent Flier Programs

Frequent flyer programs make everything worse. They exacerbate the problem of high fixed costs and low marginal costs. They seem obviously anti-competitive. They increase competition for someone’s initial ‘loyalty’, but once that ‘loyalty’ is secured, that person is now effectively forced to fly only on that airline, meaning those who fly the most don’t benefit from most of their available options, and airlines get away with giving them bad service. An airline that can’t form a complete set of offerings can’t compete, which results in the big/small pattern we see. If you are big enough, you compete for the frequent flyers in earnest; if you are not big enough, then you have comparative disadvantage in mediocre long routes and get priced out. Even if you could expand to be big enough, the inertia involved in the programs makes it hard to get off the ground as a new big player.

The programs are also essentially frauds. You earn miles that have to be used under rules designed to frustrate you with restrictions and fine print. Those rules can and do change at any time for the worse. The rewards for earning levels can and do change at any time, mostly for the worse. There are some nice rewards, and marginal cost is low, so they are still worth using as a customer. The real rewards are the allocation of zero marginal cost resources like seat upgrades and boarding order, so the system effectively makes the default flying experience worse.

It is not even clear that the airlines want to offer these programs. Frequent flyers are likely to be relatively price insensitive, so giving them a lot of free goodies is the opposite of good price discrimination. If they could all agree to do so, they would likely stop offering such programs, but if one airline went first it would be an obvious disaster for them, and collaborating like this is illegal, so they can’t make a deal.

An outright ban seems reasonable.

Changing the Big Picture

We currently have a small number of large airlines due to economies of scale and large fixed costs, and we have a shortage of less popular routes because the airlines cannot extract the value from those routes. What do we do about this?

There was a long period where there was heavy competition in airlines, plenty of spare capacity and a lot of flights to random places. The problem was that during that time, the airlines combined to lose massive amounts of money. That means that if we want that world to return, we will need to do something to allow airlines to be profitable.

One solution is to simply use taxpayer dollars. The sum of the losses of the airlines over their dark period was about $60 billion, so for a mere $4 billion a year we could solve this problem. That seems very reasonable. Having higher quality, more available air travel at a cheaper price would change the entire atmosphere of the nation. The problem is that giving private companies direct taxpayer payments almost never ends well, and the public would (quite reasonably) not stand for it if it was direct.

What we should do instead is change the relationships involved so that airlines have the incentive to compete slightly more than they would be inclined to, and provide a little service to areas where it makes some sense to provide service. Market signals are still the best signals. If they are distorted, the best response is to introduce a counter-distortion.

One of our basic problems is that the marginal cost of filling a seat is almost zero. We need to raise that cost. If that cost were higher, competition would be much less ruinous, as would flying on routes with unreliable demand. The most obvious solution is to directly raise the marginal cost by doing two things: taxing tickets more (which raises marginal cost directly), and using that money to offer the airlines a virtual customer that is willing to buy those empty seats. The cross-subsidy solution is very American, and I’ve come to appreciate it; it feels just that a subsection of the economy ‘pays its own way’ in this sense, which helps the system be accepted.

This effectively transfers money from full flights to non-full flights. It also happens to be effectively a progressive wealth transfer, which is a nice bonus. There are obvious failure modes, for example if you pay too much too easily, half-empty or even fully empty flights could be profitable by design, or a larger plane that is largely empty could be better than a small one mostly full. Thus, we want to put a cap of some sort on the payments, both in terms of size and quantity. Either the price goes down as more seats are bought, or there is a hard cap on how many, or both. The price paid can be a function of the route and the average ticket price paid, with a cap that ensures that empty seats are never desirable.
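The capped, declining payment rule described above might look something like this. All of the rates, decay factors, and caps here are invented placeholders, chosen only so that paid-for empty seats can never beat selling real tickets:

```python
def virtual_customer_payout(avg_fare, empty_seats, base_rate=0.5,
                            decay=0.8, max_subsidized=20):
    """Total subsidy for a flight's empty seats (all parameters invented).

    Each successive empty seat pays base_rate * avg_fare * decay**i, so
    each seat pays strictly less than a real fare, and at most
    max_subsidized seats are paid at all -- an empty plane cannot profit.
    """
    total = 0.0
    for i in range(min(empty_seats, max_subsidized)):
        total += base_rate * avg_fare * decay ** i
    return total

# Ten empty seats on a $200-average flight pay well under ten real fares.
print(round(virtual_customer_payout(200, 10), 2))
```

Any rule with these two properties (declining per-seat payment, hard quantity cap) closes off the "fly empty planes for the subsidy" failure mode; the exact schedule would be set per route.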

One obvious objection to this is that you would want people on standby or otherwise looking to fly to be allowed to do so at just above marginal cost, whereas now the airline has a reason to refuse service. My response is that the airlines already refuse to bargain in this spot, so as not to incentivize people to wait until the last minute, so the opportunity is not there to be lost. Moving a passenger up from a future flight to the current flight will still have marginal cost of $0 to the airline, so that should not be impacted much.

Note that if a flight was already full, taxing that flight on the margin will not change the ticket price, since willingness to pay did not change. The full cost is then absorbed by the airline. Prices will go slightly up overall from the marginal cost effect (since every flight has some monopoly power), but the increased competition should cause prices to net go down. Even if that proves false, it seems impossible for the deadweight loss from flights not taken to come anywhere close to the deadweight loss saved by increasing competition and available flight paths. The missed flights are marginal flights that the customer was close to indifferent about taking, whereas the flights we care about are the ones where they care a lot.

This system then allows the resulting competition to take its course, with minimal or no net subsidy required, if the details are handled well. Since this would be a government intervention, the details likely will not be handled well, so such a system might end up backfiring as such things often do, but it is the best solution I have been able to come up with.


Many aspects of the current system combine to ensure relative awfulness in air travel. If we are willing to change how we regulate airlines, we can make things better. Ideally we can do this without effectively increasing the true amount of ‘regulation’ in the system by much, while structuring to increase competition, resulting in a de facto more free market than before. Proposed solutions include eliminating or restricting frequent flier programs, having full flights subsidize partly empty flights, and improving information presentation to customers to improve default behaviors. It would also help more than people realize to have less stupid security/safety theater. We can also implement obvious fixes to broken systems like the one that caused the recent United situation, but that has small bearing on the overall picture.

Also, seriously, do not fly United. Ever. They’re the worst.

Posted in Impractical Optimization | 1 Comment

Escalator Action

Epistemic Status: Slow ride. Take it easy.

You Memba Elevator Action? I memba.

A recent study (link is to NY Times) came out saying that we should not walk on escalators, because not walking is faster.
From the article:
The train pulls into Pennsylvania Station during the morning rush, the doors open and you make a beeline for the escalators.
You stick to the left and walk up the stairs, figuring you can save precious seconds and get a bit of exercise.
But the experts are united in this: You’re doing it wrong, seizing an advantage at the expense and safety of other commuters. Boarding an escalator two by two and standing side by side is the better approach.
We will ignore the talk about which method is better for the escalator, which seems downright silly, and focus on the main event: They are explicitly saying that when you choose to walk up the stairs, you are doing it wrong.
Since walking is trivially and obviously faster than standing still, this result is a little suspicious. And by a little suspicious, I mean almost certainly either wrong, highly misleading or both.
Certainly individually, on the margin, for yourself you are quite obviously doing it right.
Consider a largely empty escalator. If Alice gets on the escalator and sits there, it takes her 40 seconds. If she walks up the left side, and no one is in her way, it takes her 26 (numbers from article). Given everyone else’s actions, if she wants to get from Point A to Point B quickly, and I strongly suspect that she does, she should walk up the escalator.
Consider an escalator in the standard style. On the left people walk up, on the right people stand. If there is enough space for all, then nothing Alice does impacts anyone else unless she blocks the left side, so assume there is not enough room. In that situation, demand for the right side almost always exceeds demand for the left side, so Alice is almost certainly going to not only get to the top faster by walking, she is helping everyone else get there faster too. Yay Alice.
Consider an escalator where people are already standing on both sides without walking. Here Alice will hit a wall of people if she tries to walk. She is now faced with either asking people to let her through, and paying that social cost, or not doing so. If she asks and gets turned down, no one moves any slower or faster. If people agree to move, then she gets to walk, and since no one is going backwards, no one gets there any slower. So the worst case is that someone else is a little irritated, but nothing is slowed down.
This seems to cover all cases, so the bailey of ‘don’t ever walk on escalators’ is nonsense, Q.E.D. However, we also want to deal with the motte. Should people stand two by two on the escalator with no one walking at all?
During a non-peak period, meaning any period where reserving the left side for walking would not result in anyone waiting to get on the escalator, clearly people should walk, and the win is substantial. This means that we would need people to vary their behavior depending on the situation, or else accept a big loss in the default case, in order to get a no-walk equilibrium to hold when we want it. Tough crowd.
During the peak period, what matters is throughput. We need to get as many people from Point A to Point B as possible, to reduce the wait to get on the escalator, or even more pressing, to prevent a permanent and ever-increasing line waiting for the escalator, which is a disaster (a disaster I tell you!). The throughput of the right side is fixed, as is its speed, so what matters is the throughput of the left side. How do we maximize that?
There are many ways to analyze this in theory. I think the easiest is to consider multiple possible systems:
System #1 (ideal standing): Everyone stands, no one walks, we use every step. We get one person per step to the top (e.g. one person per 40 seconds per step).
System #2 (ideal walking): Everyone looks at the step above them. If it is free, they walk onto that step. If it is not free, they stand until it is. Is this faster?
If every step is occupied anyway this is just System #1, and we get equal performance.
Let’s now consider the marginal case: one of the steps is empty. Thus, instead of 100 people on the escalator (let’s say), there are 99, but that 99th is currently walking up a step. In exchange, the 100th person is waiting at the bottom. So we win if and only if that one person is going twice as fast as they would otherwise. On a sufficiently slow escalator, this could happen, but in the base case (40 vs. 26) it is not the case, and the missing step is clearly costing time even in the perfect case. If people really only use every third step, that is a disaster.
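The comparison above can be made concrete with a toy calculation, using the article's 40-second and 26-second ride times and the one-walker-per-three-steps spacing; the 100-step escalator length is an arbitrary assumption and cancels out of the ratio.

```python
def lane_throughput(ride_seconds, steps=100, step_gap=1):
    """People per second past the top of one lane.

    Throughput = (people per step) x (steps per second); step_gap is
    how many steps each person occupies (3 for cautious walkers).
    """
    speed = steps / ride_seconds   # steps per second
    density = 1 / step_gap         # people per step
    return density * speed

standing = lane_throughput(40)             # every step filled
walking = lane_throughput(26, step_gap=3)  # one walker per three steps
print(walking / standing)  # ~0.51: the walking lane moves about half the people
```

So even though each walker rides 35% faster, a walking lane at three-step spacing moves roughly half as many people as a packed standing lane, which is the sense in which every-third-step walking is a disaster for throughput.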
Plus, walking sounds like work.
Thus, we conclude that the basic idea that we should put someone on every step is correct, given people are not generally comfortable moving until the following step is clear. So, in general, when demand exceeds supply, the first best case is for people to not walk on escalators.
Looked at another way, this is even more obvious. Suppose you have a full escalator, or just an escalator with someone on step 2. You can choose to get on that escalator at step 1, or you can chose to wait and then get on step 0 (when it becomes step 1), and then walk to step 1. That seems obviously stupid, so why should there ever be a gap on the left side? Why doesn’t the whole thing fill up quickly? How does anyone get the ability to walk in the first place?
They gain that ability because people, in some places, adopt a norm that standing on the left side is not acceptable even when the right side is full. It is worth noting that New Yorkers are too smart for this. If things are busy the entire escalator will be packed. People act the way ‘the experts’ want them to. How do we get this outcome? We get it because people are willing to enter the escalator on the left side without waiting for three steps of room, and/or without intending to walk, and if even a small number of people do this, the result is the standing equilibrium. In fact, it takes a strong norm against standing on the left side to avoid that outcome.
Here’s the thing. A lot of people do not want to walk. When they get to the escalator they choose the right side. Given this fact, and that the right side is packed, all you have to do is make them feel all right about standing on the left side. You do not need, as the article implies, “altruism.” Appealing to altruism can be the right thing to do, but often it’s an unworkable solution and the appeal to it does more to make people feel bad than to accomplish anything, whereas a more simple solution would work great.
So when “experts” say in the article things such as: “Overall I am not too optimistic that people’s sense of altruism can override their sense of urgency and immediacy in a major metro area where the demands for speed and expediency are high” and “In the U.S., self-interest dominates our behavior on the road, on escalators and anywhere there is a capacity problem, I don’t believe Americans, any longer (if they ever did), have a rational button.” I don’t exactly want to claim that they do have a rational button, since I certainly have not seen such a thing, but locally they seem fully capable of reaching the correct collective solution, and also you don’t need some sort of altruism or collective action or superrationality. You don’t even need rationality.
It’s actually even worse than that. The altruistic action is people refusing to go on the left side and not walk. The altruists are almost all of us, and they are ruining it for everyone. All you need is to not have an actively bad social norm where people act altruistically to coordinate against the right answer. Because some people, nay, most people, are lazy and don’t want to walk. Alternatively, people are in enough of a hurry (around these parts, anyway) not to get attached to ludicrous amounts of personal space, and that quickly leads to the same outcome (everyone starts moving in starts and stops, and quickly things slow to a crawl). Talk about your Ineffective Altruism! How irrational!
On the plus side, as Robin Hanson puts it, Hail Most Humans for keeping to the cultural norms even when they have no real personal incentive to do so. Good job, everyone!
There is a counterargument to all this, which attempts to rehabilitate people’s altruism and irrationality, which is that actually people probably should walk on some escalators after all. Let’s do some math:
When 40 percent of the people walked, the average time for standers was 138 seconds and 46 seconds for walkers, according to their calculations. When everyone stood, the average time fell to 59 seconds. For walkers, that meant losing 13 seconds but for standers, it was a 79-second improvement. Researchers also found the length of the line to reach and step onto an escalator dropped to 24 people from 73.
This seems like an extreme case, where we have a very large bottleneck at the escalator even in the good case, but let’s go with it.
Are we sure that the 59 second outcome is worse? Some people have places to go and people to see. Others, not so much. Let us not become too attached to equality. People are self-selecting into the walking group and the standing group. Isn’t that interesting? The walkers take 46 seconds, the standers 138 seconds. That’s a minute and a half (plus two seconds) lost to not walking. So 60 percent of people are choosing, as determined by a time market, not to be willing to walk for a few seconds on an escalator in order to save more than twice that amount of time. Walking must be really aversive to them. A few of them will have actual physical issues, but I have a hard time believing walking up some stairs is an issue for anything like this many people.
Alternate framing of that same thing: 138 seconds and 46 seconds with walkers, 59 seconds for both otherwise. The standers have been delayed by 92 seconds. Those 92 seconds represent time spent waiting for the escalator, or else I am deeply confused. So there is a 92 second line to wait in, in order to stand. Whereas the walkers do their entire path in 46 seconds. Their line is very short. And yet, this is the equilibrium. This is what people chose to do. I know we hate walking, but do we really hate walking that much?
Instead, one could reasonably claim that those who choose to walk at any given moment value their time a lot more than people who are content to stand. Six times as much? Doesn’t that seem like a stretch? Perhaps, but you don’t choose in advance. You choose at the time. Everyone has been on that trip where they absolutely, positively cannot be late. Everyone has also been in the situation where they are transferring to another train that won’t come for ten minutes, so the time does not matter almost at all. A difference of a factor of six seems more than reasonable for the same person on different days. So by using willingness to walk as a form of price discrimination, we have managed to (at a cost to those who cannot physically walk up at a reasonable pace) give people who need the time, either right now or in general, the chance to save a little time when they need it most.
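The factor-of-six claim can be checked directly from the article's numbers:

```python
# When 40% walk: walkers take 46s, standers 138s.
# When everyone stands: 59s for all.
walkers_mixed, standers_mixed, all_stand = 46, 138, 59

walker_gain = all_stand - walkers_mixed    # 13s saved by choosing to walk
stander_loss = standers_mixed - all_stand  # 79s lost by those who stand
print(stander_loss / walker_gain)  # ~6.1
```

For the mixed equilibrium to be efficient, the walkers' 13 saved seconds must be worth about six times as much per second as the standers' 79 lost seconds, which is the ratio being debated above.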
Do I think the math works in this situation? No, but the math is highly suspect. Let’s walk through it. If we are talking about a factor of six, given the people for whom walking is a large cost, this seems definitely worse, but that required not only a general time advantage but a colossal time advantage. It required a backup by a factor of more than two. In order to have a factor of more than two, you need a semi-permanent backup of flow. If a train empties, and everyone takes the escalator before the next train can arrive, then a split escalator must have a throughput of at least 50% of the non-split case (since two split escalators contain a non-split one as a subset), which means that double the time is a strict upper bound. If we use the 26:40 ratio and put someone on every third step, we get about half the throughput from a walking lane compared to a non-walking lane, so the upper bound would be about 33% additional time rather than 100%. If we instead use the 40% number for the share of people who walked (how did they get that experimental result, if that was not roughly the throughput ratio of 4:6?), we get almost the same answer. We violated even the 100% bound, which means that we have a continuous pile-up here: before train #1 can finish getting its passengers out of the station, train #2 arrives and more people get in line behind them, slowly getting worse throughout rush hour. Alternatively, we have a similar case entering the station, and the escalator is not capable of handling all the passengers in its slow throughput state (so this would then be a limiting factor and actively reduce ridership, which I have actually never seen anywhere, but maybe?). Otherwise this case does not seem possible.
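The roughly-33% upper bound above comes out of the same toy model: a split escalator is one standing lane plus one walking lane at the 26:40 ride-time ratio with one walker per three steps, versus two standing lanes. This is a sketch of the bound, not the study's model.

```python
# Relative throughputs (standing lane normalized to 1).
standing_lane = 1.0
walking_lane = (40 / 26) / 3   # faster ride, one-third the density: ~0.51

split = standing_lane + walking_lane   # walk-left / stand-right escalator
both_standing = 2 * standing_lane      # everyone stands, both sides packed

# Worst-case clumped time scales inversely with throughput.
extra_time = both_standing / split - 1
print(f"{extra_time:.0%}")  # 32%
```

So even with maximal clumping (everyone arrives at once), the split configuration costs about a third more time, not the factor of more than two the measured 138-vs-59 numbers imply, which is why those numbers require a continuous pile-up to be possible.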
If we assume you can get train #1 through before train #2, slash we in general have enough throughput on average, we can cap the loss at 33% (since that represents maximum clumping, and less clumping will mean less lost time). At this point, the factor of six above drops dramatically, and we are looking at what is likely a factor of less than two. At that point, I have no trouble believing that the half who care more value their time more than twice as much as the half that care less. Everyone chooses which path to take, so mostly this seems fair, and you have to fall back on a “walking sounds like work and has large disutility” argument to rescue the non-walking argument, even in the case where things are pretty busy.
I conclude four things.
One, that the media and social scientists will do everything they can to spin altruism and acting “rationally” as the solution, and lack of altruism and people acting “irrationally” as the problem, no matter what the data says.
Two, that we should consider having a norm where it is acceptable to stand on the left side of escalators when there is a substantial line to use that escalator, but that unless there is a long-term bottleneck such that the escalator’s throughput is a limiting factor for large time periods and/or across multiple clumping events, it is far from clear that this is a win, especially given the win in other cases from having a walking lane.
Three, score another win for New York Culture and norms, versus other major cities. You gonna be efficient bout this or what?
Four, that London Underground and Washington D.C. Metro need to stop being such cheapskates and put in more escalators.
Posted in Impractical Optimization | 4 Comments

You’re Good Enough, You’re Smart Enough, and People Would Like You

Epistemic Status: Playing the odds

Originality Status: Very low. These are not unique views and this has been said many times before, so apologies if you’ve seen it many times before, but the message never gets through, so we keep trying.

The rationalist community can be intimidating. It is especially intimidating to those who are most suited to contributing. It is exactly those people who understand the need for high epistemic standards, who worry whether their writing skills are good enough to post, who worry if their thinking is good enough or their ideas original and interesting. Less Wrong, or even a person’s personal blog, can feel like a sacred space that one is loath to profane with their unworthy presence.

That instinct is good! Those high standards, that sacredness, are what keep quality high, and make actions like ‘read the entire Less Wrong discussion section once a week and large percentages of the comments sections’ reasonable things to do. Our self-imposed standards allow us to almost skate by without any need for moderation. I am all about self-imposed high standards.

The problem is that our calibration is bad. The Fear has gone too far, and is keeping too many people quiet too often. Calibration is hard, and calibration of your own skill level is very hard. It speaks well of us that we would rather think of ourselves a level below where we are, than think of ourselves as a level above, but getting it right would be better still. We also are too afraid of trying to go one level too high, and not afraid enough of false humility. I am going to suggest that everyone adjust accordingly.

Consider levels on the following (0-5) online scale: Absent-Lurker-Commenter-Poster/Blogger-Organizer-Leader. You could have a similar offline scale: Absent-Silent-Talker-Presenter-Organizer-Leader.

My rule of thumb would be: Assume that you are ready to be one level higher than you think you are ready for.

If you are absent, you presumably are not reading this, but in case you are or someone can pass this advice along, I say that if you are interested in the ideas being discussed, go ahead and lurk. Show up to a meeting and listen; read some blog posts. Some of it won’t make sense. So what? Many of the times I have learned the most are in places where a lot of things did not make sense to me. Slowly, more and more of them did, and if some of it goes over your head permanently, perhaps you gained less, but you also did not lose.

If you have been lurking and observing for a while, but do not feel qualified to start commenting or talking, there is a good chance you are wrong. You are afraid people will point out your mistakes or make you feel stupid. I wish it was easy to say that this simply isn’t a big deal and you shouldn’t worry about it, but it isn’t a big deal and you shouldn’t worry about it. If you ask questions, we don’t think you’re stupid – we think you’re curious and want to learn (sure, some might not, but seriously, screw those people). You should pick your spots carefully, especially if you worry about such things, but you should pick some spots.

It is worth noting that the views to comments ratio on posts I write on my blog, or articles I post on Magic: The Gathering, seems to be about a thousand to one. So if someone looked at three of my posts a day, they would comment about once a year. I would write an article behind a paywall, get five thousand views, and three comments. Even if most people don’t finish reading the article, that is insane. Especially since a few people offer up most of what few comments we do get! You certainly have something worth mentioning, or something worth asking, say, one time in a hundred. In fact, I bet you have something worth saying one time in ten!

I have been there. In the early days of Less Wrong, I was afraid to comment. I felt like I was not up to the level of intellectual discussion taking place on the site. Posting the occasional small comment felt like an act of huge courage. When I attended the first New York meetups in a bake shop on 81st Street, I barely talked, especially when Michael Vassar was there. I felt outclassed. Eventually, I realized I was wrong.

If you have been commenting or talking for a while, especially if it has been years, and you would like to do more, but you do not consider yourself qualified to lead a meetup or write your own posts or your own blog, then you are probably mistaken. It is time to step up!

If it is online, write what you are interested in, and write what you know. The way to get better at writing is to write. That is why we have a national book writing month where (almost) all the books that do get written get permanently put into dark drawers. I look back at my early Magic writing, and my lord is a lot of it awful. Seriously, there is good strategic content there, but a lot of the writing is cringe-level bad. I remember that I was writing for The Dojo while going to college, and my Logic & Rhetoric essays kept getting C+/B- level grades. I now understand that this was because the grades were an accurate assessment of my skill level (although, in a tale for another day, not an assessment at all of the actual essays), but at the time I assumed the grades were dumb, so I kept writing, and slowly I got better. By contrast, I always wanted to write fiction, never did, so I never got any better and don’t think of myself as good enough to do it. Maybe I should start (and if I do, even if I post it here, you are under no obligation to read it or pretend it is any good).

We will respect the attempt. Yes, a lot of the explicit feedback will be negative; the community and the whole internet are like that. Try to not let that get to you, and keep at it, and you’ll improve and learn what works. Advice past that point is beyond scope here, and I am not confident I am worthy to give such writing advice, which is how bad the ‘not worthy’ problem is, but I hope at some point soon to do it anyway.

Doing presentations and leading or organizing meetups is also, in my experience, far easier than people think. It is not easy to do perfectly, but it is surprisingly easy to be pretty good and provide people with value.  If the logistics are a solved problem, and you already have a place to go and an email list to message to get the word out, the hard parts are coming up with an idea, and deciding that the idea and yourself are good enough to go ahead and do it. If the logistics are not a solved problem, you need to start an email list and pick a location, but in a pinch any public place that will allow people to hang out will work so long as it is in the general area you want, if you can’t do better. Email lists are known tech.

How to come up with good topics is a good question, and when I led the NYC group more actively, what that mostly meant was thinking hard about what topics did and did not work, and coming up with one every week – that was the hard thing and over time it can get harder. It continues to be hard for the same reason that picking a movie or making conversation is hard. You get good ideas, but then you use those good ideas until your idea space is no longer so good, and it takes time to replenish it. If you ask me to pick a movie for someone exactly like me except they have never seen movies, I can make a pretty great list. For myself, not so much.

For further help, see here. For ideas on meetup formats to try, check out this as well which contains a ludicrous number of ideas on that front.

If you have been providing help with meetups or posting original content for a while, but not doing any organizing or providing leadership because you do not think you have enough status to do that, or you aren’t smart enough, or it is scary, but it appeals to you, you should go ahead and do it. Yes, Raymond observes correctly that The Bay has too many cooks starting too many projects and not enough helping, but that problem is unique to The Bay (and inherent in the incentive structures there) whereas places like New York have the opposite problem of not enough people to step up to start new things. Even where we have too much eagerness to start new things versus helping with other people’s things, there are leadership roles and projects to do that are supporting others and that need your help.

The important thing to know is that you are likely still underestimating your status and skills and ability to provide leadership, even when you are contributing on a lower level. It is also important to tell people that they do not need to be the best, or even especially great, at rationality relative to the group, in order to step up in this way. There are many skills, there is much need, and willingness to put in the time goes a long way. Our leaders do not need to be the best of us in every way, as different people have different skills and different interests and availability.

Everyone else is thinking what you are thinking. I still think it all the time – I’m not that great at this. I thought it every step of the way. I was afraid to comment at all. Then I was afraid to initiate posts or meetings; even when I was leading the NYC group, I was still scared to post to Less Wrong. I didn’t think I had the chops to talk to key members of the community. And so on. Even with my Magic writing, it got started by accident because I did not dare start writing publicly on purpose! Instead, the publisher of the leading Magic website somehow started getting CCed on all my team’s emails, and started asking if he could publish some of mine. I have no idea if I would have ever stepped up if that hadn’t happened.

I will end with a pledge. If you contact me and let me know that you wrote stuff because of reading this, I will check it out. If it’s good, I’ll keep reading, and I’ll let people know.


Posted in Good Advice | Tagged , , | 8 Comments

Why Rationality?

Previously: Responses to Tyler Cowen on Rationality

Recently: Yes, We Have Noticed the Skulls; When Rationalists Remade the World

Repeat as Necessary: Principle of Charity

A lot of people do not understand the appeal. I was surprised to find that Will Wilkinson was among them. Tyler shared a tweet-storm of his, which I will reproduce below:

1. Some thoughts on the controversy over the rationality community sparked by @tylercowen’s offhand comments to @ezraklein
2. Avoiding the most common errors in thought has surprisingly few personal benefits. Why try?

3. Vigilance against bias and fastidious Bayesianism has to appeal to something in you. But what? Why are you so worried about being wrong?

4. Some rationalists talk a lot about status signaling and tribe-based confirmation bias. What’s the theory of shutting this off?

5. Plato’s right. Will to truth is rooted in eros. Longing for recognition must be translated into a longing for a weird semantic relation.

6. Hume is right. Good things, like reason, come from co-opting and reshaping base motives. Reason’s enslaved to the impulse that drives it.

7. I see almost no interest among rationality folks in cultivating and shaping the arational motives behind rational cognition.

8. Bayes’ Law is much less important than understanding what would get somebody to ever care about applying Bayes’ Law.

9. If the reason people don’t is that it would destabilize their identity and relationships, maybe it’s not so great to be right.

10. Aristotle says a real philosopher is inhuman.

11. If the normal syndrome of epistemic irrationality is instrumentally rational, then you’ve got to choose your irrationality.

12. The inclination to choose epistemic rationality is evidence of being bad at life.

13. It’s understandable that these people should want a community in which they can feel better and maybe even superior about this.

14. We all need a home. If that desire for safety/acceptance/status produces more usefully true thoughts, that’s awesome.

(His later replies in the thread, which mostly happened after I read it the first time, are also worth looking at, and indicate the view that what has been done is good, but inadequate to the tasks at hand.)

At first this made me angry. I read this as a direct attack not only on my community, but even on the very idea of truth. How dare we pay careful attention to truth? We must have baser motives we are ignoring! We must be losers who are bad at life! We only seek truth so that we can meet up with other truth seekers and feel superior to the evil outgroup that does not sufficiently truth-seek!

Who knew the post-truth era started with Plato? I mean, I can see why one might pin the blame on him, but this is a little extreme.

Truth? You want the truth? You can’t handle the untruth! So all right, fine, I guess you can go seek truth, but it won’t help you much. Let us know if you come up with anything good.

Then I calmed down and had some time to think about it, and I realized that if one applies the aforementioned Principle of Charity, Will is asking good questions that deserve real answers, and it is not reasonable to say ‘it is all there in the sequences’ any more than it is fair for philosophers to answer your questions by telling you to read the following five books – it might be good advice in both cases, but it is not especially practical and acts to prevent dialogue.

Then Scott wrote his post and a lot of people commented, I had a day to think about things, the thread expanded, and I realized that Will too is offering us high praise. There is no enemy anywhere. A link today described Scott’s post as ‘a defense of the Rationalist community,’ then about ten minutes after I wrote that, Noah Smith posted a link saying simply that he ‘defends rationalists’ and I suppose all that is accurate – but no one is attacking!

Once again, we are being called to a greater mission. Once again, it is an important mission, and we should choose to accept it. The twist is, we might not agree on some of the details, but we are already on this mission, and have been for years.

So let’s take it from the top.

2. Avoiding the most common errors in thought has surprisingly few personal benefits. Why try?

In response to someone else’s rude and snappy answer, Will replied “Guess I should have done that 7th year of philosophy grad school. Sigh.” I would ask, what was Will doing in philosophy grad school in the first place? Did he not notice that going to graduate school in philosophy has surprisingly few personal benefits? Why go? I hope it was not for the generous benefit packages. I hope and believe that it was because Will wanted to seek knowledge, and use that knowledge for good (and to figure out what good is). If you want to be a top expert on human happiness as it relates to public policy, knowing how to correct for bias and figure out what is likely to be true seems rather important!

Every philosophy student I have met (sample size admittedly not that large, but at least three) gives some version of the same answer, which is that they want to think about important things for a living. Thinking about the truth, the good, ethics and other similar things seems like some combination of the most important thing they could be doing (either for their own sake, for their usefulness, or both), and the most fun thing they could be doing. I agree! That sounds pretty great. I even considered going, up until that part where I realized the surprising lack of personal benefits. True story.

So here are at least some of my reasons:

I practice “vigilance against bias and fastidious Bayesianism” (hereafter I will say rationality), in part, because it is to me the most interesting thing. It is fun. 

I practice Rationality, in part, because it is to me an intrinsic good. I value epistemic rationality and truth for their own sake, and I think everyone else should too. Truth seeking, in my opinion, is an important virtue. I doubt Plato or Hume would disagree.

I practice Rationality, in part, because it is to me a useful good. Instrumental rationality has helped me immensely. It has helped the world immensely. Good thinking is a necessary condition for nice things. Producing more useful true thoughts really is awesome! For certain problems, especially those related to AGI, FAI and the AI Control problem, at least the level of rigor we are currently seeking is necessary or all is lost. Rationality is and has been very useful to me personally, and very useful to the world. I agree that in general it has surprisingly few personal benefits, but part of that is that the benefits are highly situation-dependent (in a way I will talk about more below), and part of that is because one would naively expect the benefits to be so amazing. You can get less than you would have expected, and still get far more than you paid for.

I practice Rationality, in part, because it is my community and my culture. When we talk to each other, online or in person, interesting conversation results. When we get together, we have a good time and form deep and lasting friendships. Most of my best friends, I met that way. I met my wife that way! If someone else also thinks being right is the most interesting thing, chances are very good we’ll get along. These are amazing people.

I practice Rationality, in part, because it is my comparative advantage. I am better at going down this path, relative to my ability to go down other paths, compared to others. I do not think hardcore study of Rationality is for everyone, or is for everyone who is ‘smart enough’ or anything like that. Some of you should do one thing, and some of you should do the other. I even have thoughts on who is who.

I even occasionally practice rationality because something is wrong on the internet. I am not especially proud of that one, but there it is.

3. Vigilance against bias and fastidious Bayesianism has to appeal to something in you. But what? Why are you so worried about being wrong?

The instinctive reaction is to say “I’m not afraid of being wrong! There are plenty of other good reasons!” and there are lots of other good reasons, but I’m also rather afraid of being wrong, so among other reasons:

I am worried about making a bad trade or placing a bad bet and losing my money, and it also being my fault. I have done these things while trying to make a living, and they involved me being wrong. It sucked. Being less wrong would have helped.

I am worried about making mistakes and therefore losing when I compete in competitions, such as Magic: The Gathering tournaments. I am also afraid of this loss therefore being my fault. I have done this a lot. I prefer the alternative.

I am worried about making bad choices in life in general and therefore losing expected utility. I do this every day, so it seems like a reasonable thing to worry about. I would rather make better decisions.

I am worried about being wrong because being wrong is bad. I attach intrinsic value to being less wrong, partly because I decided a long time ago that doing so would lead to better outcomes and I am a virtue ethics kind of guy, and partly because I was brought up in a conservative Jewish tradition and I have always felt this way.

I am worried about being wrong because others are counting on me for information or advice, and I do not want to let them down, and also I want a reputation for not being wrong about stuff.

I am worried about us collectively being wrong because some time in the next century someone is likely going to build an Artificial General Intelligence and if those who do so are wrong about how this works and try to get by with the sloppy thinking and system of kludges that is the default human way of thinking, we will fail at the control problem and then we are all going to die and all value in the universe is going to be destroyed. I would really prefer to avoid that.

Note that each of these, and especially the last one, while certainly sufficient motivation for some, is in no way necessary. Any of the reasons will do fine, as will other motives, such as simply thinking that trying to be less wrong is the most interesting thing one can do with their day.

4. Some rationalists talk a lot about status signaling and tribe-based confirmation bias. What’s the theory of shutting this off?

These are some pretty big problems. Shutting them off entirely is not practical, at least not that I can see, but there are reasonable steps one can take to minimize the impact of these problems, and talking a lot about them is a key part of those plans.

Tribe-based confirmation bias, like all other biases, can be consciously (partly) corrected for, if one is aware of what it is and how it works. The internet is full of talk about how one should go out into the world and read the people who disagree with you politically, advice which a sadly small number of people take to heart, and which if anything should go farther than the damnable other political party. The thing is, the trick kind of works, if you sample a lot of opinions, consider other perspectives, and think about why you and your tribe believe the things you believe. It works even better when one of your tribe’s values is pointing out when this is happening. If you get into the habit of looking for status-quo bias, and thinking “status-quo bias!” whenever you see it, and others around you help point it out, you don’t kill it entirely, but you get to catch yourself more often before falling prey, and do at least partial corrections. Tribe-based confirmation bias is not any different. Awareness and constant vigilance, and good norms to promote both, are the best tools we know about – if you can do better, let us know! If anything, we as a species are not talking about it enough because it seems to be one of the more damaging biases right now.

Status signaling is important to discuss on several levels.

As Robin Hanson never tires of pointing out, status signaling is a large part of the motivation of human activity. If you do not understand that this game exists and that everyone is playing it, or you do not understand how it works, you are really screwed.

In terms of your own actions, you will lose the game of life, and not understand how or why. Yes, most humans have an instinctive understanding of these dynamics and how to navigate them, but many of us do not have such good instincts, and even those with good instincts can improve. We can learn to see what is going on, be aware of the game, play the game and even more importantly play the better game of avoiding playing the game. Humans will make status moves automatically, unconsciously, all the time, by default. If you want to avoid status ruining everything, you have to think constantly about how to minimize its impact, and this pretty much has to be explicit. Signaling cannot be completely defeated by the conscious effort of those involved, but the damage can sometimes be contained.

There is also the fact that it is bad for your status to be too obviously or crassly signaling your status. Talking about status can raise the costs and lower the benefits of status signaling. In some cases, talking about signaling can make the signals you are talking about stop working or backfire!

You can also design your systems, and the incentives they create, knowing what status signals they enable and encourage, and do your best to make those positive and productive actions, and to make the signals yield positive outcomes. This is a lot of what ‘good norms and institutions’ is actually about. Your status markers are a large part of your incentive structure. You’re stuck with status, so talk over how best to use it, and put it to work. It can even be good to explicitly say “we will award status to the people who bring snacks” if everyone is used to thinking on that level. We say “to honor America” before ballgames, so it isn’t like regular people are that different. It may sound a little crass, but it totally works.

If one does not think hard and explicitly about status signaling and related issues when thinking about the happiness impact of public policy, this seems like a huge mistake.

These two problems even tie together. A lot of your tribe-based bias is your tribe’s opinion of what is higher or lower in status, or what should be higher or lower in status, or what signals raise and lower status. Talking explicitly about this allows you to better see from the other tribe’s perspective and see whether they have a point. Treating that all as given goes really badly. So talk about all of it!

If there are ideas on how to do better than that, great, let’s talk about those too. These are hard and important problems, and I don’t see how one can hope to solve them if you’re not willing to talk about them.

Alternatively, if the question was why shutting this off would be good, I can cite the SNAFU principle, or the lived experience that containing the status problem leads to much happier humans.

A last note is that under some circumstances, being unaware of status is a superpower. You get to do the thing that cannot be done, because you do not realize why you cannot do it. I have even taken steps to protect people from this knowledge on occasions when I felt it would do harm. Knowing and not caring at all would also work, but that is rather inhuman (see point 10!). But this is the exception, not the rule.

5. Plato’s right. Will to truth is rooted in eros. Longing for recognition must be translated into a longing for a weird semantic relation.

I disagree, unless we are taking the ‘everything is about eros’ model of the world, in which case the statement does not mean much. I never really ‘got’ Plato and ended up not reading much of him; I felt he was portraying Socrates as frequently using definition-based traps on his less-than-brilliant debate partners. I do feel like I get the Socratic method in practice, though, and I do not understand why eros is involved – I notice I am confused and would appreciate an explanation. Will to truth can be and often is practical; it can and should be the curiosity and playfulness of ludus, and eros is often truth’s enemy number one.

The times when my mind is focused on eros are, if anything, exactly the times it is not especially worried about truth, and most media seems to strongly back this up.

When it comes to what type of love is the love of truth, I think of the six Greek words for love. I would put eros at least behind ludus and philia, and probably also agape. I could be convinced to put it ahead of pragma, and it presumably beats out philautia, but all six seem like they work.

The more time I spend thinking about this point, the more I notice I am confused by Plato’s perspective here, and I would love some help understanding it, as my instinctual and reflective readings of his point do not seem to make any sense.

I also don’t see this as a problem if Plato is right, since I already know about how much and how well people reason, so it wouldn’t be bad news or anything. But I am definitely confused.

6. Hume is right. Good things, like reason, come from co-opting and reshaping base motives. Reason’s enslaved to the impulse that drives it.

I agree that this is the default situation. I view it as something to overcome.

7. I see almost no interest among rationality folks in cultivating and shaping the arational motives behind rational cognition.

It is not reasonable to expect outsiders to know what insiders are up to before criticizing them, so I will simply point out, as Scott did, that this takes up a huge percentage of our cognitive efforts. Someone not noticing is a fact about their failure to notice, rather than a fact about our lack of interest or lack of action.

We are very aware that we are running on corrupted hardware. Attempting explicitly to get yourself to do what you logically conclude you should do, and often failing, helps a lot in recognizing the problem. A lot of us work hard to battle akrasia. Avoiding rationalization is an entire sequence. We study evolutionary psychology. We recognize that we are adaptation executors, so we seek to figure out good habits that will help us achieve our goals, only some of which involve being rational, and then to install them. There is a lot of implicit recognition of virtue ethics following from consequentialism, even if 60% identified as utilitarian on the Less Wrong survey.

If anything, I see our community as paying much more attention than most others to how to shape our arational motives, rather than less attention. That applies to sources of irrational action and cognition, so they can be overcome, and also to sources of rational cognition, so they can be encouraged, enabled and improved.

8. Bayes’ Law is much less important than understanding what would get somebody to ever care about applying Bayes’ Law.

Seriously? I personally have tried to figure out what would get people to care about Bayes’ Law, as well as trying to get people to care about Bayes’ Law (and trying to teach Bayes’ Law). We are pretty much the only game in town when it comes to trying to get people to care about Bayes’ Law. Not everyone cares simply because this is what probability and truth actually is; you need a better argument than that. We teach and preach the good word of Bayes’ Law, so to do that we try to write new and better explanations and justifications for it. We kind of do this all the time. We do it so much it sometimes gets rather obnoxious. I challenge anyone to name three people outside the Rationalist community who have put more effort into figuring out what would get somebody to ever care about applying Bayes’ Law, or more effort into getting them to actually care, than Eliezer Yudkowsky – and if you do, provide their contact information, because I want to know how it went.

If anyone has good ideas, please share them, because I agree this is a really important problem!

(It also seems logically impossible for Bayes’ Law to be less important than getting someone to care about applying it, but yes, getting people to care can be, and might well be, a harder part of capturing the benefits than learning to apply the law itself.)
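For what it’s worth, applying the law itself really is the easy part – a minimal sketch in Python, where the test-accuracy and base-rate numbers are made up purely for illustration:

```python
def bayes_posterior(prior, likelihood, false_positive_rate):
    """Bayes' Law: P(H|E) = P(E|H) * P(H) / P(E),
    expanding P(E) over H and not-H by the law of total probability."""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Illustration: a test with 99% sensitivity and a 1% false positive
# rate, for a condition with a 0.1% base rate.
posterior = bayes_posterior(prior=0.001, likelihood=0.99, false_positive_rate=0.01)
print(round(posterior, 3))  # roughly 0.09 – a positive result is still probably a false alarm
```

The reason to care is in the last line: the posterior of roughly nine percent is wildly different from the 99% most people instinctively report, and that gap is exactly the kind of mistake the community spends so much effort teaching people to notice.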

10. Aristotle says a real philosopher is inhuman.

Funny you should mention that. There is a good chance that soon, we will have Artificial General Intelligence, and if we do, it better be a real and damn good philosopher. The alternative, if we fail to achieve this, is likely to be a universe with zero value. This is why we sometimes say that philosophy has a deadline. Alternatively, we are busily creating automated systems now that, while only narrow AIs, are optimizing our world for things we may not want, along with things like public policy. If our real philosophers, our most important philosophers, are not human, it seems important that they get their philosophy right, and that means we have to do this work now. Thinking about these problems very carefully is really important.

I would also say that if you are a human doing philosophy, and it results in you becoming inhuman, you probably need to do better philosophy!

9. If the reason people don’t is that it would destabilize their identity and relationships, maybe it’s not so great to be right.

11. If the normal syndrome of epistemic irrationality is instrumentally rational, then you’ve got to choose your irrationality.

12. The inclination to choose epistemic rationality is evidence of being bad at life.

YES! AGREED! The Way is not for everyone. It is for those who wish to seek it, and for those who wish it. It is not a universal path that everyone must adopt, for it is a hard path.

It is the writer who tells the class that if you can imagine yourself doing anything other than writing, you should go do that other thing, and not be a writer.

It is the master who looks at the student and tells her to go home, she is not welcome here, until she persists and convinces the master otherwise.

Those of us who are here, and have come far enough along the path, are not a random sample. We were warned, or should have been, and we persisted.

Rationality is a distinct path for figuring out how the world works. Without it, one’s brain is a system of kludges that mostly get reasonable answers. To improve that on the margin, you add another kludge. Some of those even involve learning Bayes’ Rule, or learning about some of the biases, in ways we have found to work on their own, complete with tricks to get better outcomes.

To make yourself all that you can be, you need to dismantle and throw out large parts of the code base and start again. That is not a fast solution. That is what rationality does.

It says, we are going to figure things out from first principles. We are going to logically deduce what is going on. We will say explicitly what others only say implicitly, so we must have the freedom and a space to do so. Question everything. Your punches might still bounce off everything, but the important thing is to work on your form until your hand bleeds. Rather than instinctively playing status games, or aiming the bat vaguely at the ball, think about the underlying mechanisms. Learn the rules, learn the rules for learning the rules, and, if you need to, number them. When you get it wrong, figure out why. When you get it right, figure that out too. Make becoming stronger your highest virtue. Go meta all the time, perhaps too much (better too much than not enough, but if you disagree, let’s talk about that…).

Eventually, you come out stronger, faster, better, than the ones who never rebuilt their systems. But the process of upgrading is not fast. For many it kind of sucks, although for those for whom figuring out how things work is the interesting thing, you get to enjoy the process.

Think of this as a version of the resource curse. You go through that process because the old model wouldn’t cut it. If you can’t chop wood and carry water, perhaps it is time to seek enlightenment.

If your life is already going pretty much OK, having an illogical thinking process does not especially bother you, and you do not have something to protect, why would you go through all that?

You wouldn’t. And you shouldn’t! The normal syndrome of epistemic irrationality is a good enough approximation for you. Pick up a few tricks, some defenses against those who would exploit you, a link to how to buy low-cost ETFs or mutual funds, and get on with your life! You’ve got things to do and slack in the system.

If taking the already long and hard path would also destroy your relationships or threaten your identity, that’s also a damn good reason to turn aside. Maybe it’s still worth it. Maybe you think truth is more important. Maybe you have something to protect. Probably not, though. And that is fine!

13. It’s understandable that these people should want a community in which they can feel better and maybe even superior about this.

14. We all need a home. If that desire for safety/acceptance/status produces more usefully true thoughts, that’s awesome.

Amen, brother. Truth requires good institutions and norms supporting it. If every time you choose the truth over the convenient, the weird over the normal, you took a social hit, your good habits are in a lot of trouble. Is it right for us to feel superior?

Well, yeah. It is. That is how you build norms and institutions that reinforce the things you want. You value them, and the people who have them get to feel superior, thus encouraging those things. And presumably you think more of those things is superior, so feeling that way seems pretty reasonable.

Then everyone else gets all the positive externalities that come when people work hard to figure things out, and then work hard to tell everyone all their findings.

Others, of course, have said a mix of things including many that are… less friendly. Of course some people pull out Spock, or point to some arrogant thing Eliezer Yudkowsky said once (but, to be fair, probably actually believes), or cite an example of utilitarian math giving the ‘wrong answer’, or any number of other things. A lot of it isn’t constructive or fair. So that means we can take our place among groups like men, women, gender non-binary people, gays, straight people, black people, white people, Hispanics, Asians, Jews, Muslims, Christians, Atheists, Buddhists, Democrats, Republicans, Libertarians, Socialists, Communists, Fascists, Astronomers, Astrologers, Architects, Teachers, Cops, Economists, and fans of various romantic pairings on many different television series, among many many others, in the category of groups that the internet said a ton of completely ignorant, false, mean and unhelpful things about today.

I am fine with that. Bring it on. I have the feeling we can handle it.

Posted in Good Advice, Rationality | Tagged | 5 Comments

Responses to Tyler Cowen on Rationality

In his recent podcast with Ezra Klein (recommended), Ezra asked Tyler his views on many things. One of them was the rationality community, and he put that response on Marginal Revolution as its own post.

Here is the back and forth:

Ezra Klein

The rationality community.

Tyler Cowen

Well, tell me a little more what you mean. You mean Eliezer Yudkowsky?

Ezra Klein

Yeah, I mean Less Wrong, Slate Star Codex. Julia Galef, Robin Hanson. Sometimes Bryan Caplan is grouped in here. The community of people who are frontloading ideas like signaling, cognitive biases, etc.

Tyler Cowen

Well, I enjoy all those sources, and I read them. That’s obviously a kind of endorsement. But I would approve of them much more if they called themselves the irrationality community. Because it is just another kind of religion. A different set of ethoses. And there’s nothing wrong with that, but the notion that this is, like, the true, objective vantage point I find highly objectionable. And that pops up in some of those people more than others. But I think it needs to be realized it’s an extremely culturally specific way of viewing the world, and that’s one of the main things travel can teach you.

Julia Galef published a response to this, pushing back on the idea that rationalism is just another ethos or even another religion. Her response:

My quick reaction:

Basically all humans are overconfident and have blind spots. And that includes self-described rationalists.

But I see rationalists actively trying to compensate for those biases at least sometimes, and I see people in general do so almost never. For example, it’s pretty common for rationalists to solicit criticism of their own ideas, or to acknowledge uncertainty in their claims.

Similarly, it’s weird for Tyler to accuse rationalists of assuming their ethos is correct. Everyone assumes their own ethos is correct! And I think rationalists are far more likely than most people to be transparent about the premises of their ethos, instead of just treating those premises as objectively true, as most people do.

For example, you could accuse rationalists of being overconfident that utilitarianism is the best moral system. Fine. But you think most people aren’t confident in their own moral views?

At least rationalists acknowledge that their moral judgments are dependent on certain premises, and that if someone doesn’t agree with those premises then it’s reasonable to reach different conclusions. There’s an ability to step outside of their own ethos and discuss its pros and cons relative to alternatives, rather than treating it as self-evidently true.

(It’s also common for rationalists to wrestle with flaws in their favorite normative systems, like utilitarianism, which I don’t see most people doing with their moral views.)

So: while I certainly agree rationalists have room for improvement, I think it’s unfair to accuse them of overconfidence, given that that’s a universal human bias and rationalists are putting in a rare amount of effort trying to compensate for it.

Bryan Caplan, on the border of the rationality community as noted in the question, also offered a response:

Here’s how I would have responded:

The rationality community is one of the brightest lights in the modern intellectual firmament.  Its fundamentals – applied Bayesianism and hyper-awareness of psychological bias – provide the one true, objective vantage point.  It’s not “just another kind of religion”; it’s a self-conscious effort to root out the epistemic corruption that religion exemplifies (though hardly monopolizes).  On average, these methods pay off: The rationality community’s views are more likely to be true than any other community I know of.

Unfortunately, the community has two big blind spots.

The first is consequentialist (or more specifically utilitarian) ethics.  This view is vulnerable to many well-known, devastating counter-examples.  But most people in the rationality community hastily and dogmatically reject them.  Why?  I say it’s aesthetic: One-sentence, algorithmic theories have great appeal to logical minds, even when they fit reality very poorly.

The second blind spot is credulous openness to what I call “sci-fi” scenarios.  Claims about brain emulations, singularities, living in a simulation, hostile AI, and so on are all classic “extraordinary claims requiring extraordinary evidence.”  Yes, weird, unprecedented things occasionally happen.  But we should assign microscopic prior probabilities to the idea that any of these specific weird, unprecedented things will happen.  Strangely, though, many people in the rationality community treat them as serious possibilities, or even likely outcomes.  Why?  Again, I say it’s aesthetic.  Carefully constructed sci-fi scenarios have great appeal to logical minds, even when there’s no sign they’re more than science-flavored fantasy.

P.S. Ezra’s list omits the rationality community’s greatest and most epistemically scrupulous mind: Philip Tetlock.  If you want to see all the strengths of the rationality community with none of its weaknesses, read Superforecasting and be enlightened.

This provoked a Twitter reaction from Bryan’s good friend and colleague Robin Hanson:

@RobinHanson: @bryan_caplan says group w/ views “more likely to be true” is too open to sf. To him is an axiom that weird stuff can never be foreseen (!)

@RobinHanson: Seems to me “that scenario seems weird” just MEANS “that scenario seems unlikely to me”. Its a restatement of claim, not an argument for it.

@bryan_caplan: How about “seems unlikely to almost all people who aren’t fans of science fiction”?

@bryan_caplan: I said “weird AND unprecedented,” not just “weird.” And replace “never” with “almost never.”

@RobinHanson: if it were “stuff re computers that seems less weird to computer experts”, that would be argument in its favor. Same if computer -> physics

@RobinHanson: Almost everything in tech is unprecedented on long timescale. That’s way too low a bar to matter much.

Now, my response to all of the above.

The first thing to note is that Tyler’s and Bryan’s responses are both high praise. They then offer constructive criticism that deserves a considered response. If some of the reason for that criticism is to balance out the praise, to avoid associating too closely with the rationality community or seeming to endorse it too much, that does not make the criticism invalid, nor does it overwhelm or invalidate the praise. On a fundamental level, I’ll take it all.

I think that Tyler’s criticism goes too far when it uses the term ‘just another kind of religion,’ the same way it is not correct when others call Atheism a religion. That is not what religion means. Tyler knows this, and he was speaking in an interview rather than a written piece. I believe ‘a different set of ethoses’ however is perfectly fair, and the claim that we represent something like ‘the true, objective vantage point’ does tend to get rather obnoxious at times. As Tyler notes, some of us are worse offenders at this than others.

Julia and Bryan both push back hard, and push back well, but I think they are a little too quick to take an adversarial stance. To say that our fundamental principles ‘provide the one true, objective vantage point,’ as Bryan does, goes too far. Yes, Bayesianism is simply how probability and the universe work, and hyper-awareness of psychological bias is key to properly implementing Bayesianism. Any effort that is not Bayesian, or that is unaware of our psychological biases, is doomed to fail hard outside of its training set.

That does not mean that we have it right. On the contrary, we have it wrong. We know we have it wrong! Our community blog is called Less Wrong. When we are being careful, we call ourselves Aspiring Rationalists, since no one can be fully rational. This is why Tyler’s suggestion of calling it the Irrational Community did not strike me as so crazy. We basically do that already! We are the community of people who know and acknowledge our irrationality and seek to minimize it. The name rationalist is one we have debated for a long time. It has big advantages and big disadvantages. At this point, we are stuck with it.

Julia responds, quite reasonably, that at least we are trying to do something about the problem whereas basically no one else is even doing that. Among those who believe there is a right answer, we may be the least overconfident ones around. Yes, there are those whose models treat all perspectives as equal/legitimate, and thus are self-refuting, but I do not feel the need to take that perspective seriously. The fact that you do not know which parts of which perspectives are right, and which ones are better or worse than others, is a fact about you, not about the universe.

What it does mean, and I will stand by this, is the claim that if others are not Bayesian and aware of bias, they have it wrong, too. Other calculation methods won’t get the right answer outside their training sets. That does not mean that other traditions cannot have great metis. They absolutely can, and we have much to learn from them. One series I keep planning to start is called ‘the models,’ covering various models of the world and/or humans and/or social dynamics that I have in my head and pull out when they are appropriate. All of them are, of course, wrong, but they are also useful. You want to get into everyone’s head, to figure out what they know that you do not.

This is where travel, and being culturally specific, come in as well. I think these are definite areas for improvement. Tyler is a huge fan of travel, and judges harshly those who stay in one place. I had the opportunity to travel around the world a lot back in the day, and no question it helped with perspective, although I did not investigate the places I was going as much as I should have. Even living in different American cities (I have resided in Denver, Renton and Belmont, in addition to New York City), or renting houses in others to test them out, can provide perspective.

Wherever you live, the area seeps into your brain. You take in its memes, its values, its ambitions, its industry. Living in Denver, working for a gaming start-up with people from a different social background, on no money, meant everything felt different. Working at Wizards in Renton for seven months was another such experience. Living in a suburb of Boston, working out of an apartment and trying to find connection from that base, was another, even if that did not last so long. Tyler says the year he spent in Germany was the year of his life he learned the most, and I believe him.

This is a lot of why I feel our concentration in the Bay Area has been a mistake. Yes, I have a personal bias here, as I have watched many of my best friends, the core of my community, leave me behind, each drawn by the pull of the last person who decided they couldn’t hack it in the Big Apple, so they would move to the Bay and pretend it was their dream all along (or who just wanted to hang out with a lot of rationalists and work in a start-up). They’re even about to do it again! Yes, I fought that every step of the way. And yes, I find the Bay to be a toxic intellectual environment – again, different places have different attitudes.

What I am objecting to here, however, is not the particular unfortunate choice of the Bay. I am objecting to our choice to concentrate at all! By placing so many of us in the same place, we get to interact more cheaply, which is great, but we also steep most of our best people in the same culture, the same ideas and memes, the same reality tunnel. We need to maintain different perspectives, and to draw from different cultures. Moving us all together to a new place would free us from the Bay, and from its expense, but it would not solve the deeper problem.

We are, as Julia notes, excellent (but far from perfect!) at soliciting and considering criticism and new ideas, but if you keep looking in the same places, you lack perspective. We need to, as a wise member of our community said, eat dirt, to find the things we do not realize that we need.

Tyler Cowen is holding us to an unreasonably high standard. And that is great! This is exactly what we are trying to do: uncover the truth, overcome our biases, get the right answers. We are not trying to be less wrong than others. We are trying to be less wrong than yesterday, the least wrong we can be. Nature does not grade on a curve, and Tyler is challenging us to do even better, to incorporate all the wisdom out there and not only our narrow reality tunnel. Challenge accepted!

Bryan gives us even higher praise than Tyler does, but points to what he sees as two blind spots. On one, I somewhat agree, and on the other I strongly disagree.

His first objection is to Utilitarian/Consequentialist Ethics. Ethics is certainly a hard problem, and I have thought about it a lot but not as much as I would like to. I am certainly not confident I have it right! As Julia notes, we wrestle with the flaws, and there are certainly flaws, although calling them ‘devastating’ counter-examples does not seem right to me. I also think that being able to describe a system in one sentence is a quite legitimate advantage; we should judge simpler theories as more likely to be correct.

I read Bryan’s link, and these objections are good and worth thinking about, but they do not seem ‘devastating.’ They seem to basically be saying “not so fast!” and “solving these problems properly is hard!” – but if there is one group that would readily agree with both of those propositions, we’d be it. Yes, figuring out what actions would have the best consequences is hard! We spend most of our time trying to figure that one out, and have to use approximations and best guesses, and rightfully so. That’s what I’m up to right now. Yes, figuring out how good various things are is hard too. Working on that one as well, and again going with my best guesses. Yes, you have to define utility in a way that incorporates distributive justice, if you care about distributive justice. Yes, people who act according to different principles will tend to produce different future levels of utility, and yes, you need to figure out how to honor obligations to those close to you. The response that other systems reduce to utility seems right for the last challenge.

The reason I somewhat agree with Bryan is that I do not only lack confidence in utilitarianism; I think utilitarianism is the right way to start thinking about things, but I also think act utilitarianism is wrong. I do my best to follow Functional Decision Theory, and I believe in virtue ethics, because I believe this is the way to maximize utility both for oneself and for all. I even view many of the problems that we face in the world, as a community, and with technology, as following from people and systems adopting act utilitarianism and/or causal decision theory, leaving key considerations out of their equations, resulting in poor optimization targets run amok, failure to consider higher-order and hard-to-measure effects, and inability to cooperate. I think this is really, really bad, and fixing it is ridiculously important. I will keep working on how to make this case more effectively, and on how to fix it.

I am very interested in talking about such matters further, but this is not the place, so moving on to Bryan’s second objection. I wish Bryan would give us a little more credit here. I think Robin’s answers are strong, but if anything too kind. These are not ‘carefully constructed sci-fi scenarios.’ In many cases, quite the opposite; if you do not have Hostile AI in your sci-fi world, you need to explain why (and the real reason is likely because ‘it would ruin a good story’)! These are, for the most part, generic technological extrapolations from what exists today. Rationalists tend to think about the future more than others, and consider what technologies might occur in the future. Those who are not even familiar with sci-fi at all largely do not think about potential future technological developments, and would consider any new technologies to be ‘weird.’ Even further than that, most people on Earth (and certainly most people who have died) would consider many current, existing technological developments to be ‘weird.’ It seems very strange to base one’s prior on whether those people would consider artificial intelligence to sound weird or not.

I see Bryan rejecting these possibilities, ironically, exactly for aesthetic reasons, the very reason he accuses us of falling for them – they are weird to him. That does not mean that we do not need evidence for such claims, or that the priors should start out high! It is easy to forget all the evidence and analysis that one has already put in. Everyone who is interested should look carefully at the arguments and the evidence, and reach their own conclusions. Whether or not you consider us to have fulfilled our burden to provide extraordinary evidence, we do indeed work to provide exactly that. Many of us are happy to discuss these matters at length, myself included. In particular, I would be interested in hearing Bryan’s reasons why AI is so unlikely, or why AI is unlikely to be hostile, and would hope that the answer is not just ‘that sounds like sci-fi so I don’t have to take the possibility seriously’.

While I was writing this, a third response came in from Noah Smith, called Are Rationalists Dense?

It starts out describing the discussion as an “interesting food-fight,” which again does not match my view of what is going on – we are not perfect, there’s nothing wrong with calling us out where we need improvement, and it is good and right to push back when we feel the criticism has gone too far. He speculates on why we may be ‘rubbing others the wrong way’ and comes up with three hypotheses: the name, the leaders and the fans.

The name, as has been noted here and many times before, is in some ways unfortunate. I wish it didn’t carry the implication that those not explicitly in our community were therefore irrational, whereas we were already rational; we do not actually believe this, or at least I hope most of us do not. We also, I hope, do not believe that one must care about Effective Altruism and/or A.I. Risk in order to be rational; rather, we hope to spread rational thinking in the hope that some of those who receive it will then realize such causes are important. But again, the name also got and gets us attention, focus and a rallying cry, which is why a lot of groups end up with names that rub some people the wrong way (he notes Black Lives Matter, Pro-Life and Reality-Based Community, all of whose names provide attention, focus and a rallying cry, and all of whose names understandably piss off some others).

The leaders have been known to rub some people the wrong way, for sure. Yudkowsky, Hanson and Alexander are not the most conflict-free bunch. What he calls the fans are often not that friendly, and can come off not so well in online interactions. None of that is unusual.

If I had to add a fourth reason, it is simply because we are, as Noah put it, a ‘seemingly innocuous online community mostly populated by shoe-gazing nerds’ and people generally do not like nerds, especially nerds who are making claims. We do not need super-complex explanations!

Noah’s suggestions are definitely good areas for us to work on, if we seek broader acceptance and a better image – work on the image of our leaders, work on our general interactions, consider describing ourselves in softer language somehow. To some extent, these things are worth seeking, and I certainly strive to improve in this area. What I am even more interested in is making us better.

Posted in Rationality | Tagged | 4 Comments

Avoiding Emotional Dominance Spirals

Follow Up to: Dominance, care, and social touch

One thing Ben said in his latest post especially resonated with me, and I wanted to offer some expanded thoughts on it:

Sometimes, when I feel let down because someone close to me dropped the ball on something important, they try to make amends by submitting to me. This would be a good appeasement strategy if I mainly felt bad because I wanted them to assign me a higher social rank. But, the thing I want is actually the existence of another agent in the world who is independently looking out for my interests. So when they respond by submitting, trying to look small and incompetent, I perceive them as shirking. My natural response to this kind of shirking is anger – but people who are already trying to appease me by submitting tend to double down on submission if they notice I’m upset at them – which just compounds the problem!

My main strategy for fixing this has been to avoid leaning on this sort of person for anything important. I’ve been experimenting with instead explicitly telling them I don’t want submission and asking them to take more responsibility, and this occasionally works a bit, but it’s slow and frustrating and I’m not sure it’s worth the effort.

This resonated on multiple levels.

There is the basic problem of someone dropping the ball, and offering submission rather than fixing the problem on some level. As someone who tried to run a company, I find this especially maddening. I do not want you to show your submission; I want you to tell me how you are going to fix what went wrong, and how you will avoid making the same mistake again! I want you to tell me how you have learned from this experience. That makes everyone perform better. I also want to see you take responsibility. These are all highly useful, whereas submission usually is not. However, you have to hammer this home, over and over again, for not just some but most people – too many people to simply never rely on them.

Different people have different reactions they want to see when someone lets them down or makes a mistake. I have one set of reactions I use at work, one set I use at home, another I use with other rationalists, and so on, and for people I know well, I customize further.

The bigger problem, also described here, is the anger feedback loop, which is the main thing I want to talk about. Ben gives an example of it:

A: Sorry I let you down, I suck. And other submissive things.

Ben (gets angry): Why are you doing that? I don’t want that reaction!

A (seeing Ben is mad): Oh, I made you mad! So sorry I let you down, I suck. And other even more submissive things than before.

Ben (gets angrier): Aaaarrgggh!

…and so on, usually until A also gets angry at Ben (in my experience), and a real fight ensues that often eclipses by far the original problem. This is Ben’s particular form of this, but more common to my experience is this, the most basic case:

A: You screwed up!

B: You’re angry at me! How dare you get angry at me? I’m angry!

A: How dare you get angry at me for being angry? I’m even angrier!

B: How dare you get angry at me for being angry at your being angry? Oh boy am I angry!

When things go down this path, something very minor can turn into a huge fight. Whether or not you signed up for it, you’re in a dominance contest. One or both participants has to choose not to be angry, or at least to act as if they are not angry. Sometimes this will be after a large number of iterations, which makes the task very difficult, and it plays like a game of chicken: one person credibly commits to being completely incapable of defusing the situation before it results in destruction of property, so the other now has no choice but to appear to put themselves in the required emotional state, at a time when they feel beyond justified, which usually involves saying things like “I’m not angry” a lot, when that claim is not exactly credible. Having to do all this really sucks.

The only real alternative I know about is to physically leave, and wait for things to calm down.

Then there are the even worse variations, where the original sin that you are fighting over is failure to be in the proper emotional state. In these cases, not only is submission demanded, but voluntary, happy, you-are-right style submission. You can end up with this a lot:

A: I demand X!

B: OK, fine, X.

A: How dare you not be happy about this? 

B: I’m happy about it.

A: No you’re not! You’re pretending to be happy about it! How dare you!

B: No, really, I am! I am blameworthy for many things, but for this I am not blameworthy, I have the emotional response you demand oh mighty demander!

A: I don’t believe you.

And so on – and it can go on quite a while. With begging and pleading. B was my father. A lot. It is painful even to listen to. It was painful to even write this.

So essentially, and I have been in situations like this, including at various jobs, you end up on constant emotional notice. You must, at all times, display the right response to everything that is happening. So you try hard to do this at all times, and perhaps often this is helpful, because people acting cheerful can make things better. But what happens the moment this facade starts to break down? When too many things push your buttons in a row? This happens at exactly the moment when it has become too expensive to keep the facade up. Then they detect it.

They tell you this is bad. You must be happy about this; you have no right to be upset! And of course, now you’re also mad about them telling you what you have no right to be mad about… and the cycle begins. Because your job just got a lot harder, and if you slip again, it’s going to get really ugly.

Even when reasonably big things are at stake and there is actual disagreement, this is where most of the real ugliness seems to come from – one party decides the emotional response of the other party is illegitimate, and their reaction to that judgment reinforces the very response they object to.

This is something we need to be super vigilant about not doing.

Within reason, and somewhat beyond it, people who want to be upset need to be allowed to be upset. As long as they can do it quietly they need to be allowed to be angry. If the person is being disruptive and actively wrecking things, that is something else, but if someone decides to let the wookie win, and you are the wookie, you need to let them let the wookie win. The argument really is over. If you’ve got what you want on the substance, that has to be good enough.

They also need to be allowed to be submissive. People instinctively go into this mode in order to avoid these fights and dominance contests. Yes, it’s not the most productive thing they could be doing right now. You can explain to them later, in a different conversation, that this isn’t necessary with you. Eventually they might even believe it. For now, let them have this. If you do not, what is likely to happen is, as Ben observes, that they interpret your being upset with them as them not being submissive enough. That is a reasonable guess, and more often than not they will be right.

Rising above this is, of course, even better. Here’s something along those lines that happened to me recently.

For a while I had been busy, and therefore mostly out of rationalist circles. I had been spending a lot of time in other good (if not quite as good) epistemic circles, and I’d learned the habit, when someone calls you out on having screwed up, of acknowledging I had screwed up, apologizing, fixing it to the extent that was still relevant, and assuring that I knew how to not have it happen again. If everyone in the world started doing that, I would take that reaction in a second, and life would be a lot better.

It’s not as good as understanding on a deep level exactly why you made the mistake in the first place. So the other person got frustrated, expecting better and holding me to a higher standard, and I was then called out on my reaction to being called out, because the other person respected me enough to do that: I don’t want your apology, I want you to figure out why you did that, and I think you can do it. I then caught myself doing the same submission thing a second time, which resulted in me realizing what was wrong in a much more important sense than the original error. As a result, instead of simply putting a band-aid over the local issue, I got a moment that stuck with me.

We should all strive for such a standard – from both sides.

Cross-posted to Less Wrong Discussion. Comments are always encouraged at either location.

Posted in Good Advice, Rationality | Tagged | 1 Comment