Leads to: Against Facebook: Comparison to Alternatives and Call to Action, Help Us Find Your Blog (and others)
Epistemic Status: Eliezer Yudkowsky writing the sequences. They sentenced me to twenty years of boredom. Galileo. This army. Chris Christie to Marco Rubio at the debate. OF COURSE! A woman scorned. For great justice. The Fire of a Thousand Suns. Expelling the moneylenders from the Temple. My Name is Susan Ivanova and/or Inigo Montoya. You killed my father. Prepare to die. Indeed. It’s a trap. Tomfidence. I swear on my honor. End this. I know Kung Fu. Buckle up, Rupert. May the Gods strike me down to Bayes Hell. Compass Rose. A Lannister paying his debts. The line must be drawn here. This far, no farther. They may take our lives, but they will never take our freedom. Those who oppose me likely belong to the other political party. Ball don’t lie. Because someone has to, and no one else will. I’ve always wanted to slay a dragon. Persona!
This post is divided into sections:
1. A model breaking down how Facebook actually works.
2. An experiment with my News Feed.
3. Living with the Algorithm.
4. See First, Facebook’s most friendly feature.
5. Facebook is an evil monopolistic pariah Moloch.
6. Facebook is bad for you and Facebook is ruining your life.
7. Facebook is destroying discourse and the public record.
8. Facebook is out to get you.
A second shorter post will then lay out what I believe is the right allocation of online communication. Some readers will want to skip ahead to that one, and I will understand.
I felt I had to document my explorations and lay out my case, but I trust that most of you already know Facebook is terrible and don’t need to read 7000 words explaining why. If that is you, skip to the comparison to alternatives or the call to action at the end. I won’t blame you.
A Model Breaking Down How Facebook Actually Works
Facebook can be divided into its component features. Some of these features add value to the world. I will start with those, because they form the foundation of the trap. These are the friendly parts of the system; they are your friends. They are not out to get you. If the rest of the system were also not out to get us, or we had it under control, I would use the good and mixed parts more.
Facebook’s best reason to exist is as a repository of contact information. If you know someone’s name, you have a simple way to request access to their email and their phone number. If you are already friends with them, that information is already waiting for you without having to ask. Effectively we have a phone book that only works at the right times. This is a very good thing.
The Event Planner is quite handy. Note the structure that it uses, because it will contrast with other sections. If you are invited to an event, it is easy to find under events or your notifications, as it should be. If you go to the event page, it prominently contains the key things you need most, allowing you to easily see name, time, location, who is going and details in that order (I would swap details and who is going, but this is a quibble). There is a quick button to show a map. If you want to search for events with similar descriptions, or at the same location, that’s a click away, but it is not forced upon you. Related events are quietly and politely listed on the right side.
The only downside is that there are people who feel it is appropriate to invite hundreds or thousands of people to their event without checking whether those people even live within a few hundred miles or might plausibly be interested. Facebook seems to lower the psychological and logistical barriers to doing this, but it also makes it easier to turn an invitation down, without asking people to use an additional similar planning system.
Overall, good stuff, and I wish I felt comfortable using it more.
Facebook’s messenger service is perfectly serviceable in a pinch. I strongly prefer to use other services, because they are not associated with the evil machine, but that is the only real reason (other than Signal’s encryption, or wanting to move to video) that this is effectively any different from chatting over text, Skype, Google, WhatsApp, Signal or anything else. On my phone, I use Trillian to unify a whole bunch of such services, which I used to use a lot, but I no longer find this worth bothering with on desktop.
Groups are a good idea. Who doesn’t like groups?
The first problem is that literally anyone on your friends list can add you to any group at any time unless you explicitly block them group by group. This is our first (mild) hint that Facebook might be out to get us. A system that was not out to get us would simply ask us, do you want to join? There would be a button marked “Yes” and a button marked “No.” Instead, the system presumes you want in, so there will be more content to throw at you.
The second problem digs deeper, and is a less-bad version of the problems of the News Feed: the groups are horribly disorganized. All you have is a series of posts you can try to endlessly scroll through. If people want to comment on something, there are unthreaded comments on the posts, where it is not obvious what is and isn’t new.
If your goal is something like hosting the discussions of a Magic team, you’re screwed. You have to constantly go check for new things. Even if you do, you have little confidence that any new thing will be noticed. If there are types of things you care about, you have to pick them out of a scroll of fully expanded items as if this were the Library of Alexandria, scanning for new comments.
Except wait. Even then, you are still screwed. See this thread.
You cannot count on the posts being in chronological order.
You cannot count on the posts being in the same order as last time.
There is no depth of search that assures you that you have seen all the posts.
There is no depth of search that assures you that you have seen all the new comments.
Each post you see needs to be carefully scanned for new comments, since the order does not tell you if any comments are new. If you don’t remember every comment on every post, good luck not wasting tons of time.
There is no way to know that your friends have seen a post or comment you make, no matter what procedure your friends commit to doing.
Because Facebook is willing to silently change such rules, other than maybe carefully scanning the entire archive of the group, you cannot count on anything at all, EVER. Even if you did find a solution, you could not assume the solution still worked.
This may sound like a quibble. It is not. When my Magic team Mad Apple teamed up with some youngsters, we agreed to try their method of using a Facebook group for discussion instead of using e-mail. This was a complete and utter disaster. I spent a stupid amount of time checking for new comments, trying to read the comments, trying to see answers to my comments. When I posted things, often I would refer to them and it was obvious others did not know what I was talking about. Eventually I gave up and went back to using email, effectively cutting discussion off with half of my team, because at least I could talk to the other half at all. I did not do well in that tournament.
I even heard the following anecdote this week: “When browsing a group looking for a post, I have even seen the same post multiple times because there was enough time while scrolling for Facebook to change its algorithm.”
How did things get so bad? I have a theory. It goes something like this:
Facebook uses machine learning in an attempt to maximize the number of posts people will view, because they think that ‘number of posts viewed’ is the best way to measure engagement, and it determines the number of advertisements they can display. At first glance, this seems reasonable.
They then run an experiment where they compare groups that are in a logical order that stays the same and is predictable, to groups that are not in a logical order and are constantly changing.
Some people respond to this second group by silently missing posts, or by only viewing a subset of posts anyway; those people barely notice any difference. Other people are using groups to actually communicate with other people, and notice. They then feel the need to scroll a lot more, to make sure the chance of missing anything is minimized. They might want to change group platforms, but groups are large and coordination is hard, so by the time some of them actually leave, the algorithm doesn’t think to link it back to the changes that randomized the order of the posts – by now it’s changed things ten more times.
The more the algorithm makes it hard to find things, the more posts people look at. Thus, the algorithm makes posts harder and harder to find, intentionally (in a sense) scrambling its system periodically to prevent people from knowing what is going on. If people knew what was going on, they would be able to do something reasonable, and that would be terrible.
To be fair to Facebook, this is not automatically a problem. It is only a problem if you want to reliably communicate with other people. If you do not care to do that, it does not really matter. Thus, if your selected group is “Dank EA Memes” then you could argue that this particular problem does not apply.
The high ad ratio applies.
The problem of ‘you have to look at entire posts and can never look at summaries’ applies.
The problem of ‘your discussions have no threading’ applies.
The problem of ‘tons of optimization pressure towards distorted metrics that destroy value’ applies.
The problem of ‘Facebook is evil’ still, of course, applies.
The problem of ‘They have made efficient navigation impossible’ though, is one that this type of group can tolerate. I will give them that.
We’ll talk about those other problems in other sections, since they all apply to the News Feed.
Games and Other Side Apps
Technically Facebook still offers games and other side apps, but my understanding is that people have learned not to use them, because they are the actual worst, and for the most part Facebook has learned that everyone has learned this, and quit bothering people on this front. I will at least give the site credit for learning in this case.
The News Feed
The News Feed is the heart of Facebook. When we talk about Facebook, mostly we are talking about the News Feed, because the News Feed is where everything goes. You post something, and then Facebook uses a machine learning based black box algorithm to determine when to show the post and who to show the post to. When composing, you think about the box. When deciding whether to respond, you think about the box. You click boxes all over the place trying to train the algorithm to give you the information and engagement you want, but the box does not care what you think. The box has decided what is best for you, and while it is willing to let you set a few ground rules it has to live by, it is going to do what it thinks is going to addict you to the site and keep you scrolling.
There is one feature that actually kind of works, which is the “See First” option you can select for some people. Facebook will respect that and put their content first, allowing you to (I think) be reasonably confident that if they post, you will have seen it the first time, and see it before other things. That does not give you any reasonable way to keep tabs on ongoing discussions, but it does at least mean you won’t miss anything terribly important right off the bat.
Beyond that, the system does not respond well to training, or at least to my attempts to train it, as this will illustrate.
This is a random sample of my news feed. Before I write the rest of this I pre-commit to cataloging the next 30 things that appear after the ad I just saw (to start at the beginning of a cycle). I will censor anything that seems plausibly sensitive.
1. Nathanial Mark Price was tagged in a photo.
Facebook thinks that when Ben Baker, who I have never heard of, posts a photo containing one of my thousand friends, that I should see this. My attempts to teach Facebook that I could not possibly care less (e.g. actively clicking to hide the last X of these where X is large) do not seem to work. It thinks the problem is Ben Baker, or the problem is Nathanial Mark Price. Neither of them are the problem. Is this pattern really that hard?
Seriously, if anyone understands why a machine learning algorithm can’t figure out that some people generally don’t like to see photos of their friends that are posted by people who are not their friends, when those people are explicitly labeling examples for it, then I can only conclude that the algorithm does not want to figure this out. If there is an actual reason why this might be hard, please comment.
2. Diablo: In case you missed it: Patch 2.5.0 is live!
Useless, since I finished playing a long time ago, but I did follow them at one point when I was trying to use the site. Or at least, I’m assuming that this is true. All right, my bad, I’ll unfollow. Oh wait, there is no unfollow button? So either I wasn’t following them and they put this here anyway, in which case this was either an ad pretending not to be one or Facebook actively thinks I would want to know about a patch to a game I bought several years ago (I’ll give it credit for knowing I own Diablo III), or I was following them and they didn’t give me an unfollow option. Instead I chose to hide all posts from Diablo, so if they announce Diablo IV, I’ll just have to figure that out one of twenty other ways I’d learn about it. I can live with that.
3. John Stolzmann was tagged in this. (This is a photo and video by Beryl Cahapay, who I have never heard of, called ‘Day at the races’).
Facebook seems to believe that being tagged in a photo is an example of a post being overqualified to be shown to me. All you really need is that one of my friends was tagged. That friend being John Stolzmann. Who? Since I did not actually remember who he was before I Googled, I unfollowed John Stolzmann, although normally I prefer to wait until the person actually posts something before doing that.
4. Nicole Patrice was tagged in a photo.
Note that the photo does not, in fact, contain Nicole Patrice. The photo was posted by Nora Maccoby Hathaway, who is not even listed as having mutual friends with me when I hover over her name. Great filtering, guys.
5. An actual post by a friend! Giego Calerio says: “Given cost 3.5 G-happy-mo’s…what’s the Exp life gain of freezing bone marrow now?”
I decline to click on the link because if I do that, Facebook’s algorithm might get the wrong idea, but I’m not sure how it could get much worse, so maybe I worry too much. Giego is at least asking a valid question. He does seem to be making some bad calculations (e.g. he is treating all hours of life as equal, when youthful hours should be treated as more valuable than later hours, from a fun perspective) and is considering a surgical procedure where his expected ROI is 3.6 months of life in exchange for 3.5 months lost, which to him says “obvious yes” and to me says “obvious no” because you don’t do things like let someone do a costly surgical procedure unless you think you are getting massive, massive gains due to model error, risk/reward of being right/wrong, precautionary principle and other similar concerns. It is certainly not a ‘no brainer.’ But I don’t want to signal boost when someone is being Wrong On The Internet, and also I don’t comment on Facebook, so I say nothing. Except here.
6. Hearthstone Ad
All right, I basically never play anymore, but good choice. Points.
7. Tomoharu Saito says in Japanese, according to the translation: “There’s an American GP in the next camp.”
I think something was lost in translation.
8. Tomoharu Saito says in Japanese, according to the translation: “Rishi, I’m too tired, w. I’m tired, w. I got a barista from RI.”
Either the man is a poet and doesn’t even know it, or more likely Facebook needs to make a deal with Google Translate. Either way, looks like I can’t follow people posting in Japanese.
9. Adrian Sullivan posts he “is now contemplating a new Busta song featuring a zen-like feel, ‘Haiko couplets’, with Russiagate and Michael Flynn as its subject!”
Go for it?
10. Ferrett Steinmetz notes that “It is now officially impossible to preorder Mass Effect Andromeda”
Which makes sense since it was released last Tuesday.
11. Arthur Brietman posts something about shipping apps I saw on Twitter and I don’t know enough technical details to grok.
I’m sure it is thought out, though.
12. Kamikoto Ad for a stainless steel knife at about 85% off!
Swing and a miss.
13. Michael Blume asks: “I think I’m starting to be out of touch – can anyone tell me why people keep photoshopping the same crying person onto Paul Ryan?”
Can’t help you, sorry.
14. Tudor Boloni links to a Twitter post that links to a paper, saying “it’s hard to interpret.”
Oh yeah, that guy. I really should unfollow him. Done. Paper could in theory have been interesting I guess.
15. Robin Hanson posts: Ancient Hebrews didn’t believe in immortal soul, nor do most Christian theologians/philosophers today.
Saw that earlier on Twitter, which makes sense, he likely cross-posts everything. I put him in the See First category anyway just in case since his Twitter posts on average are very good and perhaps the discussions are good here, or some posts are not cross-posted. I guess Points.
16. Mandy Souza posted two updates. One is ‘lingerie model reveals truth about photoshoots by taking ‘real’ photos at home.’ The second is from Thug Life Videos.
I admit that the video was mildly amusing. The article is obvious clickbait. Hid them both.
17. Ferrett Steinmetz posts “Perfect for all your Vegan Chewbacca” needs and a picture.
OK then. Told it to show less from Twitter.
18. Nate Heiss shared The Verge’s video. It seems Elon Musk’s solar glass roofs can be ordered next month.
So, congrats, Elon?
19. Mack Weldon Ad for airflow enhanced underwear.
I’ll get right on that. One for three.
20. Brian-David Marshall thinks he has found the best ice cream scoop for hand to hand combat.
You can always count on Brian for news you can use.
21. Michael Blume retweeting Sam Bowman saying: My politics in a tweet: Use free markets to create as much wealth as possible and redistribute some of it afterwards to help unlucky people.
That idea sounds great. Glad he’s endorsing it, I suppose.
22. Phil Robinson wishes Happy 113th Birthday to Joseph Campbell.
And a very happy unbirthday to you, sir.
23. Teddy Morrow started a tournament on [some poker app].
How obnoxious. Hide all posts from the app, please.
24. Jelger Wiegersma is 6-3 at GP Orlando, shares his deck.
Points. I’m guessing David Williams gave him a B+ on the photo?
25. Teddy Morrow spun the Mega Bonus wheel on [same poker app as #23]
That’s even more obnoxious.
26. “Remarkable” Ad for a tablet you can write on like paper.
I guess if you gotta give me ads that’s not obnoxious.
27. Rob Zahra posts a link to “People Are Really Conflicted About This Nude Claymation Video” and says “it’s not sketchy…”
I choose to remain unconflicted.
28. Mike Turian posts: At the Father Daughter dance! Stopping for a quick arts and crafts break!
This one made me smile. Points.
29. Adrian Sullivan is suddenly craving grilled cheese…
He’s in Wisconsin, so I think this will work itself out.
30. Ron Foster posts photo and says “Sculpture seen in downtown Kirkland. Look familiar, Brian David-Marshall?”
The good news is I do remember who Ron Foster is. That’s all of the good news.
So let’s add that up:
Number of posts that got ‘points’: 3, or 10%. I could argue that this should be as high as 4 or 13.3%.
Number of posts I would have regretted missing or provided meaningful news about someone: 0
Number of posts that attempted to provide intellectual value: 4 if you want to be really generous.
Number of posts that provided intellectual value: 0 or 1 depending on if you count duplication
Number of ads: 3 or 4, hard to tell. Not too bad?
Number of posts that I 100% should never see but can’t figure out how to stop: 7 out of 27 non-ads (so 1/3 of posts are this or ads).
That went… better than I would have expected given my other experiences, but I am attempting to be a good and objective scientist, and will accept the sample.
Now think about whether you see that list and think “I want to take something like that, and hide our community discourse inside a list like that, and leave what to display up to a black box algorithm that is maximizing ‘interactions’!”
Great idea, everyone.
Living with the Algorithm
Now that we have seen the algorithm in detail a bit, it is time to ask how the algorithm actually works and what it does. Since it is constantly changing this is not an easy problem. One can do this by observing the results, by theorizing, or by reading up on the problem. My strategy here will be a mix of all three. I’ve already done some theorizing with respect to groups. Similar logic will apply here. I have also taken a sample of the feed and analyzed it, and generally looked through a large number of posts looking for other patterns. This is also where I stopped writing in order to Google up some articles on how the algorithm works, in the hopes of getting a more complete picture that way.
First principles say, and both reading and casual observation confirm, that Facebook’s primary tool will be to use interactions. If you interact with a post, that is good. That means engagement. If you do not interact with a post, that is bad, it means you did not engage. Thus, posts are rewarded if they create interaction, punished if they do not.
Time for another experiment! Let’s see how big this effect is. For the next 20 posts, excluding advertisements since those are paid for, let’s record the number of interactions (likes/reactions plus comments) and then compare those 20 posts to the 6th-10th posts in the same person’s timeline (excluding the original post, and counting only posts by the person in question; I added that second requirement after I realized other people’s stuff appears in timelines a bunch); the delay is so that people have time to react and new posts are not overly punished by comparison. Note that in the first experiment, the feed was close to ‘looping around’ to the start of another session, which is why it turned out to ‘improve’ somewhat in the later half, and this is unlikely to be the case here.
While running the experiment, let’s also rate posts by how happy I am to have seen them (on an arbitrary scale where 0 means I would not have missed it at all but I am not actively unhappy to have seen it, -5 means OMG my eyes or fake news, +10 means big win, and +20 means they got married or something; System 1 has final say).
Our prediction is that the interaction numbers will be higher, but with large uncertainty as to how much higher, and that a similar thing will happen for ratings. Note that whose posts are shown is also not random, and we are intentionally taking that out of the equation for now, so sorting is much stronger than this would suggest on its own.
Since I will be evaluating entire timelines, I will not include names. If two posts come from the same person, the second will be skipped.
Also excluding what I consider ‘Facebook spam’ stuff like ‘reacted to a post.’ Note that the average post in the timeline (even without ads) is lower than the average rating this system will generate, but it is not hugely lower.
Post 1: 15 interactions. Rating 0. Mildly amusing tweet. Was #8 in timeline.
Timeline posts 6-10: 6 (-3), 1 (-3), 24 (+2), 9 (0), 5 (-3). Negatives here come from person’s need to do constant political commentary.
Post 2: 2 interactions. Rating +1. Mildly amusing video. Was #9 in timeline.
Timeline posts 6-10: 5 (-1), 1 (1), 4 (0), 22 (+2), 2 (-1). Person mostly posts little things intended to mildly amuse.
Post 3: 161 Interactions. Rating +3. Personal message related to actual life event. Was after #10 in timeline.
Timeline posts 6-10: 50 (-2), 34 (+1), 40 (0), 15 (0), 15 (0). Someone figured out how to get people engaged!
Post 4: 9 Interactions. Rating -5. Fake Magic spoiler.
Edit: Well, it is April 1 as I write this. But still. Not cool.
Timeline posts 6-10: 11 (+2), 78 (0), 34 (+1), 39 (0), 9 (-1). Mostly Magic content.
Post 5: 0 interactions. Rating 0. Wikipedia link. Was beyond #10.
Timeline posts 6-10: 0 (0), 5 (+5 for actual intellectual interest), 3 (+1), 1 (+4 again!), 18 (+2).
He posts links to science and philosophy stuff I would otherwise miss and seem worth investigating! No way I would have known if I hadn’t looked at the timeline. Promoted him to See First.
Post 6: 101 interactions. Rating +2. Important life PSA (for others who need it, I did not need it). Was beyond #10.
Timeline posts 6-10: 25 (0), 110 (+1), 28 (0), 70 (+3), 85 (0).
Person lives in The Bay, uses Facebook largely to coordinate events. If I was local and looking to hang out, this would be very good, but I am more of a thousands-of-miles-away person who has met her once.
Post 7: 95 Interactions. Rating +1. Magic preview card. Was before #6.
Posts 6-10. 14 (0), 61 (0), 66 (0), 24 (+3), 42 (+1).
Posts links to his Magic articles and activities.
Post 8: 48 Interactions. Rating -1. Was beyond #10.
3 (-1), 24 (0), 215 (+2), 159 (+1), 30 (+2).
Has interests that do not overlap with mine, also some that do.
Post 9: 14 Interactions. Rating -1. Was #10.
Posts 6-10: 4 (1), 3 (1), 13 (0), 4 (0), 2 (0).
Shares AI-related articles. They do not seem like they are worth reading.
Post 10: 12 Interactions. Rating +1. Was beyond #10.
Posts 6-10: 21 (0), 10 (+3 because F*** California), 21 (+1), 17 (+2), 39 (+3).
Post 11: 51 Interactions. Rating +1. Was #5.
Posts 6-10: 6 (+1), 7 (0), 9 (+1), 19 (+3), 12 (-1).
Post 12: 15 Interactions. Rating +1. Was beyond #10.
Posts 6-10: 38 (+3), 51 (0), 30 (+1), 12 (0), 10 (0).
Always the jokester.
Post 13: 38 Interactions. Rating -1. Was beyond #10.
Posts 6-10: 9 (0), 44 (-3), 8 (-1), 15 (0), 21 (+1).
Confident opinions, confidently held. Negative is for political echo chambering.
Post 14: 4 Interactions. 0 Rating. Was after #10.
0 (-1), 5 (0), 11 (0), 2 (0), 2 (0).
No interest overlap. Got an unfollow.
Post 15: 6 Interactions. 0 Rating. Was after #10.
14 (-1), 10 (0), 2 (-3), 2 (-3), 10 (-1).
Post 16: 3 Interactions. 0 Rating. Was #7.
1 (-5), 6 (0), 1 (-3), 2 (-3), 2 (-1).
Post 17: 7 Interactions. 0 Rating. Was #4.
9 (-1), 12 (0), 18 (0), 8 (0), 1 (-2).
A friend who is a lot smarter in person than they appear online, including about politics. Sometimes in these situations I wonder which one is real…
Post 18: 15 Interactions. 0 Rating. Was beyond #10.
4 (0), 8 (+2), 23 (0), 26 (0), 20 (0).
Post 19: 1 Interaction. -1 Rating. Was beyond #10.
2 (-1), 1 (-1), 0 (0), 0 (-1), 0 (-5).
I’ll just say this one is basically on me.
Post 20: 16 Interactions. +1 Rating. Was #6.
10 (+1), 0 (0), 2 (0), 4 (0), 2 (0).
Before examining the data statistically, it seems like the algorithm is not adding much value. It certainly was not adding as much value as some simple heuristics would have, depending on how easy it would be to determine post types. If you wanted to predict interactions, that too seems pretty easy, although I wasn’t studying this so it didn’t show up in the data: The big numbers all revolve around a few types of posts.
If nothing else, the algorithm of “choose all the posts of the top X people” seems like it would crush the algorithm if combined with the right amount of exploration, even if you did nothing else to improve it.
The obvious counter-argument is that my refusal to interact with Facebook, other than to tell it what I do not want to see, is preventing the algorithm from getting the data it needs to operate correctly. This seems like a reasonable objection to why the system isn’t better in my case, but it should still be better than random or better than blindly obvious heuristic rules. It certainly does not take away my curiosity as to what the system does in this situation. In addition, Facebook is known to gather information like how long one takes to read a post, so the data available should still be rather rich.
Some Basic Statistics
The average rating of a post was 0.07 if it was not selected by the algorithm, or 0.1 if it was. That’s not a zero effect, but it is a damn small one. The standard deviation of all scores was 1.67 and the difference in average rating here was 0.03, also known as 3% of the difference between “my life is identical to not seeing this post except for the loss of time” (score of 0) and “I found this slightly amusing/annoying” (score of 1 or -1).
The number of interactions was different: 30.65 for selected stories versus 20.33 for non-selected, versus a standard deviation of 33.65. If we use a log scale, we find 2.66 vs. 2.34, with a standard deviation of 1.25, so this effect is not concentrated too much in very large or very small numbers.
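The bookkeeping behind these comparisons is simple enough to spell out. Here is a minimal Python sketch of it, using a few hypothetical stand-in rows rather than the full dataset recorded above; the field layout and function name are my own invention, not part of the experiment itself.

```python
from statistics import mean, pstdev
from math import log

# Each row is (interactions, my rating, shown-by-the-algorithm?).
# These rows are illustrative stand-ins, not the full recorded data.
posts = [
    (15, 0, True), (2, 1, True), (161, 3, True),
    (6, -3, False), (1, -3, False), (24, 2, False), (9, 0, False),
]

def compare(posts):
    """Average rating and interactions, selected vs. unselected."""
    sel = [p for p in posts if p[2]]
    uns = [p for p in posts if not p[2]]
    return {
        "rating_selected": mean(r for _, r, _ in sel),
        "rating_unselected": mean(r for _, r, _ in uns),
        "rating_sd_all": pstdev(r for _, r, _ in posts),
        "interactions_selected": mean(i for i, _, _ in sel),
        "interactions_unselected": mean(i for i, _, _ in uns),
        # Log scale, as in the text; +1 guards against zero-interaction posts.
        "log_interactions_selected": mean(log(i + 1) for i, _, _ in sel),
    }
```

Run on the full dataset above, `compare` produces the figures quoted in this section; on the stand-in rows it just demonstrates the shape of the calculation.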
What happens if we use the algorithm “show the 20 posts with the most interactions, from anyone”? We see 20 posts with a mean of 80 interactions versus 10 for unselected, and we see a much more dramatic rating differential: 0.6 average rating for selected posts, -0.03 for unselected! At first glance, it looks like not only is the algorithm not doing much work, if you control for number of interactions, it is doing negative work! Even if you need to take half your posts from the non-interaction section in order to figure out what posts people interact with, that’s still a much better plan.
What about if we use “show the top interaction-count post from each of the 20 people”? Now the posts shown will average 51 interactions (vs. 16 for other posts), and still have a 0.6 average rating. That is an even stronger result, and it makes sense, because different people have different friend groups and different tendencies for people to interact with their posts.
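Both baselines are trivial to state in code, which is part of the point. This is a sketch under my own assumptions (posts reduced to a person and an interaction count, content ignored entirely, hypothetical names); it is not anything Facebook runs.

```python
def top_n_overall(posts, n):
    """Baseline 1: the n most-interacted-with posts from anyone."""
    return sorted(posts, key=lambda p: p[1], reverse=True)[:n]

def top_post_per_person(posts):
    """Baseline 2: each person's single most-interacted-with post."""
    best = {}
    for person, interactions in posts:
        if person not in best or interactions > best[person]:
            best[person] = interactions
    return best

# Hypothetical (person, interactions) pairs for illustration.
posts = [
    ("alice", 15), ("alice", 6), ("bob", 161),
    ("bob", 50), ("carol", 0), ("carol", 5),
]
```

Either baseline fits in a dozen lines and uses no machine learning at all, which is what makes the comparison above so damning.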
It is also worth noting that within-person ratings were highly correlated, which implies that some combination of the system and my own filters on top of the system needs to get rid of more people who do not provide value, and put more focus on the ones that do. This is a slow process; like many of us, I have a lot of Facebook friends, and they need to be tuned one by one.
Whenever you have a complex multi-factor algorithm, the first step should be to test it against simple baselines and see if it can at least beat those. Here, the system has failed to do that.
I started my reading with this story. It confirms the basic elements of the system, and includes such gems as:
The news feed algorithm had blind spots that Facebook’s data scientists couldn’t have identified on their own. It took a different kind of data—qualitative human feedback—to begin to fill them in.
Really. You don’t say! What is worth noting is not that the algorithm had blind spots in the absence of qualitative human feedback. What is worth noting is that this is something that had to be realized by Facebook as some sort of insight. How could one have presumed this to be false?
This may prove to be part of the problem:
Facebook’s data scientists were aware that a small proportion of users—5 percent—were doing 85 percent of the hiding. When Facebook dug deeper, it found that a small subset of those 5 percent were hiding almost every story they saw—even ones they had liked and commented on. For these “superhiders,” it turned out, hiding a story didn’t mean they disliked it; it was simply their way of marking the post “read,” like archiving a message in Gmail.
Thus, even though hiding is usually a strong negative signal, if you cross a certain threshold, the system now thinks you are no longer expressing an opinion. Or maybe it is this gem that follows soon thereafter:
Intricate as it is, the news feed algorithm does not attempt to individually model each user’s behavior. It treats your likes as identical in value to mine, and the same is true of our hides.
Dude. You. Had. One. Job.
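The superhider correction in the quoted passage amounts to down-weighting a user's hide signal once their hide rate crosses some threshold. A minimal sketch, with the threshold and function names my own invention rather than Facebook's:

```python
# Sketch of the "superhider" correction: once a user hides nearly
# everything, a hide stops being a negative signal and becomes a
# "mark as read" gesture. Thresholds here are invented for illustration.

def hide_signal_weight(hides, stories_seen, superhider_rate=0.9):
    """Weight to give one hide from this user when training the ranker."""
    if stories_seen == 0:
        return 1.0
    hide_rate = hides / stories_seen
    if hide_rate >= superhider_rate:
        return 0.0  # hiding is just "mark as read" for this user
    return 1.0      # otherwise, treat the hide as a real negative signal

print(hide_signal_weight(hides=3, stories_seen=100))   # normal user: 1.0
print(hide_signal_weight(hides=98, stories_seen=100))  # superhider: 0.0
```

Note that even this tiny fix is a per-user adjustment, which sits oddly next to the admission that the algorithm otherwise treats everyone's likes and hides as identical in value.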
They also do not understand how impact works:
Even then, Facebook can’t be sure that the change won’t have some subtle, longer-term effect that it had failed to anticipate. To guard against this, it maintains a “holdout group”—a small proportion of users who don’t see the change for weeks or months after the rest of us.
Facebook is an integrated system. Keeping a small number of people on the old system isn’t quite worthless, but if the changes you make lead to long term effects that destroy the Facebook ecosystem, or damage the world at large, a reserve will not prevent this.
Thus we get ‘insights’ like this:
The algorithm is still the driving force behind the ranking of posts in your feed. But Facebook is increasingly giving users the ability to fine-tune their own feeds—a level of control it had long resisted as onerous and unnecessary. Facebook has spent seven years working on improving its ranking algorithm, Mosseri says. It has machine-learning wizards developing logistic regressions to interpret how users’ past behavior predicts what posts they’re likely to engage with in the future. “We could spend 10 more years—and we will—trying to improve those [machine-learning techniques],” Mosseri says. “But you can get a lot of value right now just by simply asking someone: ‘What do you want to see? What do you not want to see? Which friends do you always want to see at the top of your feed?’ ”
Yes, it turns out that people actually want to see posts by some friends more than other friends, and it only took years for them to figure out that this might be a good idea. People have strong, simple preferences if you let them express those preferences. The stupidity here is mind boggling enough that it seems hard for it to be unintentional. The reason why they do not let you fine-tune the news feed is not because doing so would not make the feed better. The reason why is because it would make the feed better for you, and they are invested in making it worse for you instead. Everyone knows that a proper Skinner Box needs to avoid giving away too many rewards if you want to keep people pressing the buttons and viewing the advertisements.
Facebook’s case is that this is not what they are up to, because they understand that in the long term people realize they are wasting their lives if they don’t have good experiences doing so:
There’s a potential downside, however, to giving users this sort of control: What if they’re mistaken, as humans often are, about what they really want to see? What if Facebook’s database of our online behaviors really did know us better, at least in some ways, than we knew ourselves? Could giving people the news feed they say they want actually make it less addictive than it was before?
Mosseri tells me he’s not particularly worried about that. The data so far, he explains, suggest that placing more weight on surveys and giving users more options have led to an increase in overall engagement and time spent on the site. While the two goals may seem to be in tension in the short term, “We find that qualitative improvements to the news feed look like they correlate with long-term engagement.”
The author notes that “That may be a happy coincidence if it continues to hold true” which I think is not nearly cynical enough. There is the issue of whether the long-term goals are indeed aligned, but there is the bigger problem that even if Facebook wants in some sense to focus on the long term, the tools it has been given push all parties away from doing so.
What the Algorithm Effectively Does
The algorithm attempts to find those things that promote interaction. It then rewards them with a signal boost, allowing the best to go viral. In response, people got to work optimizing their posts so that Facebook would predict people would want to interact with them, and so that people would in fact interact with them, so that others would see their posts. Professional and amateur alike started caring about approximations of metrics and got to work creating de facto clickbait and invoking Goodhart’s Law.
There is some attempt by Facebook to define interaction in good ways, such as measuring how long you spend off site on articles you click on, and there is some attempt to crack down on the worst offenders. Links to spam sites filled with advertising are being kept down as best they can. Obvious fake news gets struck down some of the time, and so on.
However, there is still a double amplification effect going on here. I choose who I want to follow based on what I think I will like, and then Facebook subfilters that based on what it thinks I will like. No matter how much Facebook wants to stay in control of things, at a minimum I can choose who my friends/follows are on the site. I will attempt to create a mix that balances short term payoff with long term payoff, safe with risky, light with dark. Facebook will then take that mix, and do its best to return the most addictive stuff it can find. I can observe this and ideally adjust, creating a pool of potential posts that is full of deep stuff with only a small number of cute videos, and perhaps that will work, but no one is going to make it easy for me.
Everything anyone writes gets warped by worrying about this. Those who rely on Facebook then get triply filtered. They choose who to follow, those people choose what to share based on what is likely to get traction (as Josh says on Crazy Ex-Girlfriend, got to keep up the LPPs, or likes per post), and then Facebook filters with the algorithm.
See First, Facebook’s Most Friendly Feature
If you must use Facebook to follow certain close friends and family, and chances are that you feel that you do need to do this, there is a solution: See First. See First is a recently introduced feature that turns the news feed from something that is out to get you into something that is not out to get you. This is because See First shows you every post from the people you select, at the top of your feed, before the algorithm gets to filter anything.
Facebook is an Evil Monopolistic Pariah Moloch
When I think about posting anything, anywhere on the internet, such as here on this blog, I have to worry about what the algorithm will say. If someone shares my post on Facebook, will anyone see it? Will they comment about it?
Then, people comment on Facebook instead of commenting on your post, in order to help ‘signal boost’ the share, which then leads to more comments being on the share. The majority of all discussion of this blog takes place on Facebook right now. The conversation becomes fractured, impossible to find and hard to follow, and often in a place the author does not even know about. We are forced into this ecosystem of constantly checking Facebook in order to have a normal conversation even if we never post anything to Facebook in any way at all.
In the long term, this means that Facebook ends up effectively hosting all the content, controlling what we post, how we discuss it, who sees what information, what memes spread and which ones die. It does this in the service of Moloch rather than trying to make life better for anyone, slowly warping us to value only what it values. Meanwhile, we are then forced to endure endless piles of junk in order to have any hopes of seeing what is going on or what any of our friends are doing or talking about.
Well played Facebook, I guess? Very bad for the rest of us. We cannot permit this to continue.
Facebook is Bad for You and Is Ruining Your Life
I could rattle off a bunch of links, but there is no need. I was going to say that this is the most recent study I have seen and it in turn links back to previous research. Then today I saw this one. I have not examined any of them for rigor, but would welcome others to share their findings if they do examine them. Either way, my opinion here is not due to research. My opinion is due to witnessing myself and others interact with Facebook, and also the opinion all of those people have about those interactions.
Without exception, everyone who uses Facebook regularly, who I have asked, admits that they spend too much time on Facebook. They admit that time is unproductive and they really should be doing something else, but Facebook is addictive and a way to kill time. They agree that it is making their friendships lower quality, their social interactions and discourse worse, but they feel trapped by the equilibrium that everyone else uses Facebook, and that it is there and always available. If anything is on Facebook and they do not see it, they are blameworthy. People still assume I have seen things that were on Facebook until I remind them that I don’t use it. Facebook then hides those morsels of usefulness inside a giant shell of wastes-of-time that you are forced to wade through, creating a Skinner Box. Fundamentally, Facebook is out to get you.
Facebook warps our social lives around its parameters rather than what we actually care about, and wastes time better spent on other things. That is not to discount its value as a way to organize events, share contact information, as a messenger service, or the advantages of being able to stay in touch. That is to point out that the cost of using that last one is that it does a bad job of it and will incidentally ruin your life.
Facebook is Destroying Discourse and the Public Record
Most things I read on the internet are public. When something is public, others can repost it, extend off it, comment upon it and refer back to it. The post becomes part of our collective knowledge and wisdom, and we can make progress. The best thing about many blogs is that they have laid the foundations of the author’s world view, so Scott Alexander can pepper his work with links back to old works without having to repeat himself, and if someone wants to soak up his writing there is an archive to read. When something is especially interesting, I can link or respond to that interesting thing, and see the responses and links from others.
I can’t deny that most words posted to the internet are not great discourse, but some of them are, and those are a worldwide treasure that grows by the day. When we take our conversations to the semi-private realm of Facebook, we deny the world and even our friends that privilege. I have seen a number of high quality posts to Facebook that I would like to link to or build upon, but I cannot, because that is not how Facebook works, and their implementation of comments is rather bad for extensive discussions.
When we look back a few years from now, we will not remember what was posted to Facebook. It will be as if such things never existed. That is fine for posting what you ate for lunch or coordinating a weekend trip to the ballgame, but we need to keep important things where they can be shared and preserved. It is the internet version of The Gift We Give Tomorrow.
Facebook is Out To Get You
Some things in the world are fundamentally out to get you. They are defecting, seeking to extract resources at your expense. Fees are hidden. Extra options you do not want are foisted upon you unless you fight back. The service is made intentionally worse, forcing you to pay to make it less worse. Often you must search carefully to get the least bad deals. The product is not what they claim it is, or is only the same in a technical sense. The things you want are buried underneath lots of stuff you don’t want. Everything you do is used as data and an opportunity to sell you something, rather than an opportunity to help you.
When you deal with something that is out to get you, you know it in your gut. Your brain cannot relax, for you must constantly be on the look out for tricks and traps both obvious and subtle. You can’t help but notice that everything is part of some sort of scheme. You wish you could simply walk away, but either you are already bought in or there is something here that you can’t get elsewhere, and you are stuck.
Their goal is for you not to notice they are out to get you, to blind you from the truth. You can feel it when you go to work. When you go to church. When you pay your taxes. It is the face of both bad government and bad capitalism. When you listen to a political speech, you feel it. When you deal with your wireless or cable company, you feel it. When you go to the car dealership, you feel it. It’s a trap.
Most things that are out to get you are only out to get you for a limited amount. If you are all right with being got for that amount, you can lower your defenses and relax, and you will be in a cooperative world, because they have what they came for. The restaurant wants you to overpay for wine and dessert but it is not trying to take your house. Sometimes that is the right choice, as the price can be small and one must enjoy life.
The art of deciding when to act as if someone or something is out to get you, and when to sit back and relax, is both more complex and much more important than people realize. Most people are too reluctant to enter this mode, but others are too eager, and everyone makes mistakes. I intend to address this in more depth in a future post, and ideally that one would go first, but I want to get this one out there without further delay.
If you remember one thing from this post, remember this: Facebook is out to get you. Big time.
Facebook wants your entire life. It wants you to spend every spare moment scrolling through your feed and your groups, liking posts and checking for comments, until it controls the entire internet. This is the future Facebook wants.
Pingback: Against Facebook: Comparison to Alternatives and Call to Action | Don't Worry About the Vase
Did not know See First existed! Even before I read your followup I consider this a good value.
Yeah, it’s a lifesaver and I learned about it during the investigation.
You can set notifications to a group to notify you for every post (and email you, if you turn that on also).
You can turn on notifications for new comments on any post.
I wish there was a way to subscribe to replies to comments, as I’ve missed some replies to threads I was in. That is annoying. I think any replies to your own comments, or comments on your posts, or comments you are tagged in generate notifications, but people who also reply to the same comment as you only generate notifications if you’re the only other person who’s replied.
I’ve been thinking recently of building an alternate FB UI to fill in some of these gaps, using the API.
Think people like you would pay for something like that?
Agreed that there are solutions that are better than nothing but still not great.
An alternate Facebook UI would be freaking fantastic if it was done right.
To answer your question: Would I be willing to pay? I personally would be willing to pay if I was aware of it and was confident it would do what I needed.
To answer the general question: Most people likely would NOT pay for such a desktop/laptop UI, unfortunately, because they have been conditioned that such things are free, the same way they think all phone things should cost at most $0.99. This is a shame. You could perhaps sell it for a few dollars as a mobile app, but you’d have to be able to get the word out.
I do think it is a great idea, and if you got widespread adoption, I think it would earn you enough status, reputation and ability to monetize through other means such as advertisement or sale, that it would still be worth it. So I encourage the product, but would not have a lot of faith in people paying for it as a business model.
I have heard tales that such a thing has been attempted before and that Facebook blocks API access to any such alternative UIs once it finds out about them. Perhaps these tales are unfounded, but I'd confirm that before putting too much effort into building such a thing.
For me, Notifications solves a lot of these problems. I get Notifications when someone comments on a post I make or a post I’ve commented on. You can also get Notifications when something new is posted to a Group, too, and you can “follow” comments to a post without actually commenting yourself. It doesn’t solve the problem of stuff disappearing and being hard to find, but it does let you keep on top of things as they happen.
I view these as better than nothing but far from a good solution. Facebook could certainly be much worse!
Pingback: Help Us Find Your Blog (and others) | Don't Worry About the Vase
So looks like there’s a way to change the news feed to display “most recent” instead of top stories, at least in my account. Click the three dots on the upper left next to “news feed”.
Good catch, I should have found that. Thanks! That does help, although using it well creates all new issues (everyone now needs to be completely in, or completely out), so it's not clear that it on its own creates a higher maximum.
I’m commenting mostly on the one feature of Facebook that I think you don’t use properly, Groups: specifically connecting them to events, to avoid the following problem.
“The only downside is that there are people who feel it is appropriate to invite hundreds or thousands of people to their event without checking to see if they even live within a few hundred miles or might plausibly be interested”
Facebook lets us share events IN a group. This is the primary (and possibly only?) use of groups: to share events to the group. This gets around the problem of filtering, since you know that everyone in the group is interested in events in the area; if they weren’t, they would not be in the group.
The second thing groups let us do is share efficiently. In the exact same way that inviting all friends is probably wasteful, you can invite the friends who happen to be in a group that would imply they were interested.
Example: you are hosting the event “discussion on how to fight lying in Effective Altruism”; then you invite everyone who is in the group “new york effective altruists” to your event.
Other than that I agree with the low utility of groups. But the fact that you can use groups in this one highly productive way makes them worthwhile. I don’t use Facebook for anything other than events and groups, and only use groups to find events. Glad I’m not missing out on much.
I think I was talking about the groups where people chat, and you are talking about the groups you can create within your account that allow you to select all the people at once. I agree that this second thing is a useful thing if you are going to be selecting that group multiple times, but that’s more of a ‘without this basic feature things get really bad’ than anything else.
No, I’m talking about the groups you join, like this one: https://www.facebook.com/groups/csuebsupersmashbros/?ref=bookmarks. Notice how the first 4 posts are just “here’s an event you probably are interested in.” I think that use of groups is productive. If I were running a Facebook group I would have a “nothing but event posts allowed” rule.
Chatting in Facebook groups is probably worthless. But joining groups like the above one is useful if you are interested in the subject.
Ah. I understand now. That makes sense, thank you. Agreed that this is useful (although chatting in the event itself also seems fine).
All this is really great, especially the part at the end.
It’s a real insight that one should consider, for a given thing, the question “is this thing out to get me, and by how much?”.
It’s worth asking this question of almost everything. Asking the question doesn’t in itself cause you to enter the mode of acting as if things are out to get you, so long as you calibrate correctly — the answer should be “this thing isn’t out to get me that much” most of the time. And the answer only needs to be “this thing is out to get the rest of my life” once, in order for Asking the Question Constantly to be a good habit.
Perhaps in order to avoid falling into the mindset of “everything is out to get me”, the question is best phrased as, “in a world where everything goes optimally for the thing in question, how much would it take from me?”
And if you want to decide whether or not it’s worth trading with the opposing agent, it’s worth considering the opposing question, “in a world where everything goes optimally for me, how much can I gain?”
It’s clear that quitting Facebook is overdetermined.
I like the dual-question approach – very clean way of abstracting the problem that will almost always get the right answer.
Have you seen Social Fixer (http://socialfixer.com/)?
I have not! Tell me more!
So it’s a program you can download that tweaks a few things in your FB web experience. You can have it auto-default you to Most Recent, highlight comments more recent than a certain time, give exact timestamps, and even mark posts as “read” so they don’t show up again. There are options to block advertisements, remove parts of the FB page, and even block posts containing a given word. While it doesn’t address the algorithm-related issues you mentioned, it does fix a lot of small problems I think we both have with FB. Definitely worth trying out!
Pingback: Book Review: Weapons of Math Destruction | Don't Worry About the Vase
Pingback: Write Down Your Process | Don't Worry About the Vase
> If there is an actual reason why this might be hard, please comment.
I recently talked to someone who works on ranking at FB (for a different part of FB) about something very similar, and his response was that the signal from people’s individual actions gets swamped by the signal from you tagging the content as something you don’t want to see. This isn’t something inherent, and better individualized models could figure this out, but, by definition, things that require these kinds of fine-grained signals are things that the majority of people don’t mind.
Something that isn’t inherent hardness and is “just” a practical problem is that ranking for things like this is often done with a multi-stage ranking process, where the last (and most expensive) ranker only runs on a relatively small subset of items. If an earlier, dumber (and cheaper) ranker knocks something out of the top N, it’s not going to rank well regardless of how relevant it is. Conversely, if a dumber ranker puts a bunch of junk at the top, unless a later ranker can ask for more results from a previous ranker, the results aren’t going to be very good, and AFAIK no one does that because it would increase latency.
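The multi-stage setup described above can be sketched as follows. The stages, scores, and the false-negative failure mode are illustrative assumptions on my part, not Facebook's actual pipeline:

```python
# Sketch of multi-stage ranking: a cheap ranker prunes to the top N, then
# an expensive personalized ranker reorders only those survivors. If the
# cheap stage knocks a relevant item out, no later stage can save it.
# Scores and stage logic here are invented for illustration.

def cheap_score(item):
    # Fast proxy, e.g. a couple of global engagement features.
    return item["global_engagement"]

def expensive_score(item, user):
    # Slow, personalized model: penalize things this user keeps hiding.
    penalty = 0.01 if item["kind"] in user["hidden_kinds"] else 1.0
    return item["global_engagement"] * item["personal_fit"] * penalty

def rank_feed(items, user, n=3):
    # Stage 1: cheap ranker keeps only the top-n candidates.
    shortlist = sorted(items, key=cheap_score, reverse=True)[:n]
    # Stage 2: expensive ranker reorders the shortlist.
    return sorted(shortlist, key=lambda i: expensive_score(i, user),
                  reverse=True)

user = {"hidden_kinds": {"clickbait"}}
items = [
    {"id": "bait1", "kind": "clickbait", "global_engagement": 9.0, "personal_fit": 0.1},
    {"id": "bait2", "kind": "clickbait", "global_engagement": 8.0, "personal_fit": 0.1},
    {"id": "meme",  "kind": "meme",      "global_engagement": 7.0, "personal_fit": 0.5},
    {"id": "essay", "kind": "essay",     "global_engagement": 2.0, "personal_fit": 0.9},
]

# The essay this user would value most never reaches stage 2: the cheap
# ranker already cut it, so the expensive ranker cannot rescue it.
print([i["id"] for i in rank_feed(items, user)])
```

This is the false-negative problem in miniature: the item the user most wants is invisible to the only stage smart enough to know that.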
Anyway, the reason for this sort of practical problem is that each impression is worth little to FB. FB grosses something like $60 per user per year in the U.S. and Canada. That’s excellent for an advertising company, but if you look at the margin for a single pageview, it’s low, low enough that they can’t run a super expensive but smart ranker on every item you might want to see every day you visit the site. Of course they’d bring in more money if they had better ranking for their feed, but if we’re talking about your personal experience, how many ads do you click in a year anyway? If you spend twice as much time on FB, how many more ad clicks is that?
Thanks! Lot of food for thought here, and I would be interested in reading more about the details. The multi-stage ranking system makes sense, and serves as an explanation for why the system might have false negatives (assuming you didn’t apply hard rules to prevent this, such as the ‘see first’ feature), but you would still run the expensive system before you had a false positive.
So it then comes down to why the more expensive system seems to be making some basic mistakes – e.g. doing things with multiple degrees of separation that are unlikely to be relevant, seemingly on the basis of a tag or small number of likes or something? Maybe I have a really odd preference that is hard to pick out, here, but I have a hard time believing that. I have a lot of really weird preferences and like to think I have some idea where I’m being super odd and where I’m not. If nothing else, the system has to have a variable for distance, in some form.
I’m a little confused by the statement that things get overwhelmed by the preference not to see a particular thing. That’s clearly good in the case of that thing, but the question is how this then extends to other things that are similar, and how it acts (or doesn’t) as a de facto veto. The algorithm is ‘having all the fun’ creating this weird worry that you’re not giving it the right inputs, and it’s unclear how to send it the message you want to send.
BTW, one thing that might be unclear from my original comment is that I think things will improve over time as more engineering effort is applied (and also as computation continues to get cheaper despite the slowdown in chip performance increases). I don’t think this is hopeless, even when only looking at it practically and not theoretically.
Anyway, the other thing that seems like it might be unclear is that I talked about two distinct things while making it maybe sound like they’re the same thing?
1. Multi-stage ranking
2. “Large” preferences vs. small custom preferences
On (2), it’s perhaps more relevant to what I was asking this person about (why do I get so much “clickbait” in my feed when I usually select that I don’t want to see it, or pretty much any article that’s not written by someone I know). There, the response was that if some clickbait has a 100x or 1000x better “engagement” than some mundane thing, then it basically doesn’t matter what my preference is and I’ll see the clickbait outrank the mundane thing.
Maybe you could have a model where you multiplied probabilities together to get the probability that I’ll click, but even though I’ve hidden a lot of these stories, I probably still haven’t hidden 10k of these or even 1k of these. Maybe their model should understand if I hide every one that I bother to and only have accidental clicks (maybe 1 in 100?), then I really don’t want to see this and should multiply by 0, but if they did that, they’d run into the problem you specifically mention in your post. If there was a “clickbait” feature in their model and that feature was highly correlated with what I think of as “clickbait”, that would work, but they probably don’t have that feature or if they do it’s not close enough to what it looks like in my head for that to work.
To more directly answer the question in the post, a specific problem is that the range of possible features is huge. It’s entirely plausible that they don’t have “photo of friend taken by stranger” as a feature. They may have “item was posted by a stranger” as a feature, but if you’re only trying to avoid photos, maybe the wrong thing gets learned.
Another problem here is that if the ranking is made up of an ensemble of, say, 300 models, what happens if any individual model can veto something and set the probability to 0? Maybe it’s fine and maybe something terrible happens, depending on how often models decide to veto things. But if we can’t have an absolute veto, then we’re back in the territory where one model scores this thing 1000x and another says .01x. Maybe the bug here is that the individual model should really say .00001x, but it’s hard to be that confident about anything from individual user data.
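The floor-versus-veto tension above can be made concrete with a toy score combiner. All numbers are invented, not anything Facebook actually uses:

```python
# If per-model scores are multiplied together, one model flooring its
# multiplier at 0.01 barely dents a 1000x engagement score, while
# allowing a hard 0 gives any single model an absolute veto.

from math import prod

def combined_score(model_scores, floor=None):
    """Multiply per-model multipliers, optionally clamping each at a floor."""
    if floor is not None:
        model_scores = [max(s, floor) for s in model_scores]
    return prod(model_scores)

engagement = 1000.0   # one model loves this clickbait
my_dislike = 0.01     # my hide history says I don't want it
mundane = 1.0         # an ordinary post with neutral scores

# With a floor, the clickbait still beats the mundane post by 10x:
print(combined_score([engagement, my_dislike], floor=0.01))
print(combined_score([mundane]))

# With a hard veto allowed (a score of 0), one model kills it outright:
print(combined_score([engagement, 0.0]))
```

This is why "maybe the individual model should really say .00001x" matters: the exponent on that small multiplier is doing all the work in the fight against the 1000x model.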
One question is, how weird is it not to want to see that stuff? For clickbait, by definition, it’s kinda weird (although this problem isn’t insoluble and I’m told that a youtube ranking team recently spent a lot of effort on a closely related problem and got good results). For photos, it’s less obvious, but it’s not totally implausible that it’s weird. I certainly like seeing those photos and click on a lot of them, but that might just be subculture thing (they’re usually photos of friends of mine at dance events, taken by people who I don’t know). I have very low confidence in my ability to predict how weird a preference like that is.
I’m not sure that my comments are actually clearing things up since I’m not 100% sure how we’re talking past each other, if we are. Let me know if this doesn’t make sense.
P.S. One disclaimer I should add is that I’ve never directly worked on ranking, I’ve only worked in nearby areas, so I don’t have any particular expertise here and you probably know as much as I do.
Pingback: Choices Are Really Bad | Don't Worry About the Vase
Pingback: The Joy of Links | The Rationalist Conspiracy
Pingback: Best of Don’t Worry About the Vase | Don't Worry About the Vase
Pingback: Against Facebook: The Stalking | Don't Worry About the Vase
Pingback: Out to Get You | Don't Worry About the Vase
Pingback: Seeding a productive culture: a working hypothesis | Compass Rose
Pingback: 46 – Social Media and Outrage Culture | The Bayesian Conspiracy
Pingback: Book Review: The Complacent Class | Don't Worry About the Vase
Pingback: Reinforcement and Convenience in Technology – mindlevelup
I just sent you a Facebook message; posting here to alert you since you’re not active on Facebook. It’s slightly private so I didn’t want to post the message here.
(It seems appropriate to tell you that I sent a Facebook message on your post about why Facebook is bad.)
Pingback: Don’t Believe Wrong Things – Put A Number On It!
Pingback: An Antifragile Church for Pragmatists, Questions and Answers – Open-Sourcing All Over Space
Pingback: 10 Disasters When Testing Exclusively Through Magic Online
Pingback: After Facebook – Operating Space
Pingback: After Facebook – overflow.space
Pingback: Information and money – Thicket Forte
Pingback: Out to Get You | Egg Syntax
my solution is this: we rally our digital avatars and stage a populist revolution, so that facebook is owned and operated, by the people, for the people. as the people, we demand internet sovereignty!
Pingback: Blackmail | Don't Worry About the Vase
Pingback: Privacy | Don't Worry About the Vase
Pingback: Counterfactuals about Social Media | Don't Worry About the Vase
Pingback: Some Ways Coordination is Hard | Don't Worry About the Vase
Pingback: Dual Wielding | Don't Worry About the Vase
Pingback: Covid 7/2: It Could Be Worse | Don't Worry About the Vase
Pingback: Covid 2/11: As Expected | Don't Worry About the Vase
Pingback: Covid 5/27: The Final Countdown | Don't Worry About the Vase
> Either the man is a poet and doesn’t even know it, or more likely Facebook needs to make a deal with Google Translate. Either way, looks like I can’t follow people posting in Japanese.
Kinda offtopic, but yeah, their translation soft has hilarious outcomes sometimes.
Like in this pic: imgur -> yBKL8iu
It tried to translate “The fast way [to warm mars] is to drop thermonuclear weapons over the poles. – Elon Musk” into Polish. The output, translated back to English: “The fast way [to warm Mars] is to drop a thermonuclear weapon over the Poles – Elon [Piżmo]”.
…when translated back, it’s actually a really subtle-looking difference: it capitalized Poles, which changes the meaning to an impressive degree. Also it’s hilarious that it translated the surname to
Musk is a class of aromatic substances commonly used as base notes in perfumery. They include glandular secretions from animals such as the musk deer
Pingback: Covid 8/12: The Worst Is Over | Don't Worry About the Vase
You mention “I was going to say that this is the most recent study I have seen and it in turn links back to previous research. Then today I saw this one.” which links to
https://hbr.org/2017/04/a-new-more-rigorous-study-confirms-the-more-you-use-facebook-the-worse-you-feel and https://academic.oup.com/aje/article-abstract/185/3/203/2915143/Association-of-Facebook-Use-With-Compromised-Well#.WPsEIK87jKc.facebook respectively, but both appear to refer to the same study: “Association of Facebook Use With Compromised Well-Being: A Longitudinal Study” by Holly B. Shakya and Nicholas A. Christakis (the latter is the paper itself, while the former is an article by the authors that mentions prior research and links to the paper in question).
Pingback: How to Best Use Twitter | Don't Worry About the Vase
Genuinely out of interest, is the purpose of the “epistemic status” para at the top a gatekeeping one? I have literally no idea what is meant by that. I get a couple of the references, obviously (“I know Kung Fu”) but I have no clue how they fit together or what it’s all meant to mean. Is it to screen out certain types of readers who aren’t in on it? Not trolling, just wondering.
It was mainly because it was fun (e.g. try to figure out what all of them are), and to communicate essentially ‘I am on a holy crusade here.’ Wasn’t intended as a gatekeep.
Pingback: Covid 7/28/22: Ruining It For Everyone | Don't Worry About the Vase