What Is Rationalist Berkeley’s Community Culture?

Rationalist Status: Wonky (if you are not interested in the rationalist community or mission, this one is not for you)

Epistemic Status: Continuing the discussion

Response to (read this first): The Craft Is Not The Community (Otium)

Sarah’s post, The Craft Is Not The Community, was explicitly intended to start a discussion. Her central thesis is that the community has proven itself bad at doing outward-facing projects, so members should do their community-focused projects together, and form a community that can fulfill its members’ emotional needs, but do their outward-facing and world-saving projects in the outside world and play by the outside world’s rules.

I want to focus elsewhere, at least for this post, and accuse Sarah of burying the lead. She does not actually bury it – it’s right up at the top of the post – but she views it as background rather than the thing to be focused on.

We need to focus on it.

Her leading paragraph is super important, and I strongly, violently agree with it:

“Company culture” is not, as I’ve learned, a list of slogans on a poster.  Culture consists of the empirical patterns of what’s rewarded and punished within the company. Do people win promotions and praise by hitting sales targets? By coming up with ideas? By playing nice?  These patterns reveal what the company actually values.

And, so, with community cultures.

Those within a culture quickly learn what that culture truly values, and adapt their behaviors to that culture. Culture is super important. If we are building a physical community, and it seems that we are, we should think carefully about what culture we want that community to have, and work to ensure that we get what we want. Keeping what makes us unique requires constant vigilance in the best of times. We are building the community in 2017, and doing so in Berkeley, California. The pressures are and will be unending. Constant vigilance doesn’t begin to cover it.

Paul Graham once wrote a (recommended) piece called Cities and Ambition. Cities, he observes, have a culture. The great ones push you to work harder, achieve great things. They have a message about what you should be and what is good in life. The city constantly whispers that message in your ear.

He claims that New York, where I live, has the message that you should be richer. Fair enough. He says Boston wants you to be smarter. Again, fair enough. He thinks Silicon Valley’s message is to have impact on the world, and loves it; I see the real message as more like ‘you should excite people (with your ability to excite people about your ability to excite people)’ or ‘you should spend every hour of your life following our script for how to do a great start-up’ or something like that, and can’t watch the obviously excellent show Silicon Valley because it causes emotional flashbacks. I can see both arguments. Things were likely more as he describes back in the day. We have different perspectives.

What does he say is the message of Berkeley, in a piece from back in May 2008?

I’d always imagined Berkeley would be the ideal place—that it would basically be Cambridge with good weather. But when I finally tried living there a couple years ago, it turned out not to be. The message Berkeley sends is: you should live better. Life in Berkeley is very civilized. It’s probably the place in America where someone from Northern Europe would feel most at home. But it’s not humming with ambition.

Not heeding this warning, or perhaps because of it, the Rationalists have concentrated in Berkeley, California.

Sarah then reports, back in the present day:

It seems to me that the increasingly ill-named “Rationalist Community” in Berkeley has, in practice, a core value of “unconditional tolerance of weirdos.”  It is a haven for outcasts and a paradise for bohemians. It is a social community based on warm connections of mutual support and fun between people who don’t fit in with the broader society.

I think it’s good that such a haven exists. More than that, I want to live in one.

I think institutions like sharehouses and alloparenting and homeschooling are more practical and humane than typical American living arrangements; I want to raise children with far more freedom than traditional parenting allows; I believe in community support for the disabled and mentally ill and mutual aid for the destitute.  I think runaways and sexual minorities deserve a safe and welcoming place to go.  And the Berkeley community stands a reasonable chance of achieving those goals!  We’re far from perfect, and we obviously can’t extend to include everyone (esp. since the cost of living in the Bay is nontrivial), but I like our chances. I think we may actually, in the next ten years, succeed at building an accepting and nurturing community for our members.

We’ve built, over the years, a number of sharehouses, a serious plan for a baugruppe, preliminary plans for an unschooling center, and the beginnings of mutual aid organizations and dispute resolution mechanisms.  We’re actually doing this.  It takes time, but there’s visible progress on the ground.

I live on a street with my friends as neighbors. Hardly anybody in my generation gets to say that.

What we’re not doing well at, as a community, is external-facing projects.

And I think it’s time to take a hard look at that, without blame or judgment.

Sarah reports that a bunch of people settled in Berkeley, decided they should live better, and have done a bunch of things to help themselves live better, but failed at attempts to do more ambitious outward-facing things.

In the comments, blacktrance responds:

There are different kinds of community-building, and some of them have more aspects of external projects. At one end of the spectrum, people who happen to live in the same town or neighborhood form a natural community that serves their needs and desires. At the other end, there are clubs formed around specific activities. The “local community”-type doesn’t have many standards beyond living in the area, but the club has them even if they’re not explicit (“If you don’t like reading, don’t join our book club”), and they may even be standards for excellence (“Our chess club is for people with an Elo above X”). A community between the two has to reconcile fulfilling the needs of its existing members with continuing to maintain the goal of being whatever kind of community it is.

I see the rationalist community as somewhere in the middle of that spectrum. It’s not formally about anything like a book or chess club, and there aren’t any necessary or sufficient criteria for belonging, but central nodes in the thingspace cluster include at least general agreement with the Sequences (and other important LW posts), as well as possessing and using a certain conceptual toolbox. It’s also a community that’s tolerant of weird people and contains many of them, but I think to a large degree that’s a consequence of the above, not something independent. If you think about what you really want, take ideas seriously, know and viscerally understand common failure modes, etc, being weird is a likely consequence. And tolerance for weird people isn’t far behind, both because you want to be tolerated and to avoid unknown blind spots (several people willing to be unconventional and compare notes will find more opportunities than one would find alone).
Being that kind of community is a goal – it’s not world-saving, but nor is it purely internally-facing, because it’s not just about serving the needs and desires of community members qua community members.

Sarah then responds, in a sentence that should totally freak you out:

I think that may have been true in 2012 or so, but I don’t think it’s true now.

No one could have predicted this. No one had any idea, as it was happening, that the choice of Berkeley might have been a mistake and not only because of the stratospheric rents.

Many of our best and brightest leave, hollowing out and devastating their local communities, to move to Berkeley, to join what they think of as The Rationalist Community. They feel comfortable ripping apart those other communities because they think the point of those communities was to feed their best people to the ‘real’ community in Berkeley; when not being careful they use the term ‘rationalist community’ interchangeably with ‘rationalists living in Berkeley’. Once there, they have an increasingly good time and develop new ways to have an increasingly good time, forming a real community. But that ‘rationalist community’ is ‘increasingly ill-named.’ Its central cultural theme is not rationality, or becoming stronger, or saving the world; it is, Sarah reports, an unconditional tolerance for weirdos, a paradise for Bohemians, a place built on warm connections of mutual support for those who don’t fit into broader society.

The rationalists took on Berkeley, and Berkeley won.

Berkeley doesn’t work for us. We work for Berkeley.

Huge, if true.

…And That’s Terrible?

Yes.

This is bad.

This is really bad.

This is unbelievably, world-doomingly bad. It means we’ve lost the mission.

This is taking many of the people most capable of saving the world, and putting them in a culture focused instead on better living. A culture that, rather than enforcing the mission, is encouraging them to lose the mission. A culture that is such a failure in its outward-facing goals that one of its best and brightest is now suggesting that anyone who wants to impact the outside world should do so without the community’s help.

Raemon777 (Raymond Arnold) puts it this way in the comments:

As Quixote said, the reason I’m excited about *this* community is that it feels like it can in fact play a role in having a strong impact in the world. I’m driving across the country to be there right now because despite all the drama I hear about, the fact remains that whenever I visit the Bay there’s a palpable buzz in the air that “of *course* saving the world is the sort of thing you might do, or which you might try to optimize your socially-rewarding-stuff such that it ends up benefiting the world to some degree.” This Buzz was what changed my life, and I think having it matters a lot.

I do think this requires more self-awareness, and requires people to be able to reasonably say “I’m working on this thing that is totally not world saving but will make our home nicer” and not receive subtle vibes that they’re doing the wrong thing.

This is super important. It is arguable whether we truly need a social life and a warm loving community in order to save the world, but most of us are not going to give up such things to go off on a mission. Unhappiness and burning out are not the way, and the fact that a community of like-minded people is being constructed that will give us sane ways to save on housing costs and educate our children, while putting lots of great minds where they can collaborate with each other, is an amazing and wonderful thing.

If we keep the mission.

If we keep the buzz that says, of *course* saving the world is not only the sort of thing you might do, it is the best thing to do. Another comment mentioned a great old post about Mythic Values versus Folk Values. We don’t need everyone to have the Mythic Rationalist Values, but we do need everyone to at least have the Folk Rationalist Values. We do need to require a basic affinity for The Sequences, and at least a good and expanding chunk of our toolbox of thinking skills, where an active quest to acquire said affinity and toolbox fully counts in your first year or two.

A community needs to have standards. A rationalist community needs to have rationalist standards. Otherwise we are something else, and our well-kept gardens die by pacifism and hopefully great parties.

Not everyone in our community can or should be out saving the world, but they should aspire to that, to helping with the mission, even as they focus on themselves and their own lives. That should be the thing of highest honor. We need to keep The Buzz. We need to remember what we are about. We need the mission.

That does not mean we should not be welcoming of weirdos; weirdos are awesome. Warm connections and mutual support are underrated and undersupplied everywhere. We absolutely should continue to welcome outcasts, sexual minorities and runaways, if they are otherwise part of our culture.

The problem is that if you are not careful, your culture becomes about welcoming such people. Ideally, the culture isn’t about welcoming such people; you just do it. It’s simple. You don’t have to organize your life around being a kind and decent human being; you can just be one. Then get on with everything else.

Otherwise, you lose the mission, and all is lost.

The point of bringing all these amazing people together is so those people can make each other stronger. It’s so those people can work together, think together. It’s so those people can form a culture that makes them their best selves.

It’s so that the projects we start can use the rationalist culture we have created, as their company cultures. It’s so that when the outside world wants to capture our mission, and tells us that if we just stopped worrying so much about being technically accurate, we could do so much more, and start us down the standard incentive gradients that are a lot of why (to a first approximation) no one is able to accomplish anything, we can stand up and say no.

That does not mean that every project that is outward-facing, or that tries to impact the world, needs to be staffed purely from within. We are not legion, nor are we the only ones who can do anything. The skills and talents and connections and funds often lie elsewhere. There are even places that play by rules similar to our own; I happen to currently work at one of them.

In Conclusion

There is a ton more to unpack, many more important claims I disagree with, and a lot more detail to explore, but I want to get at least this out there now. I hope to lay out more posts that go into other details.

I want to reiterate that community and happiness are good things. I applaud the construction of a real community, even as I wince at the chosen location and the implementation details. I am glad people have found happiness.

Our short-term and even medium-term happiness is not enough. We have our values. We have our culture. We have a mission.

If Sarah is to be believed (others who live in the area can speak better than I can to whether her observations are correct) then the community’s basic rationalist standards have degraded, and its priorities and cultural heart are starting to lie elsewhere. The community being built is rapidly ceasing to be all that rationalist, and is no longer conducive (and may be subtly but actively hostile) to the missions of saving and improving the world.

Its members might save or improve the world anyway, and I would still have high hopes for that, including for MIRI and CFAR, but that would be in spite of the (local physical) community rather than because of it, if the community is discouraging them from doing so and they need to do all their work elsewhere with other people. Those who keep the mission would then depart, leaving those that remain all the more adrift.

I hope she is wrong, but I fear she is right. If she is right, sounding the alarm now has been a hugely valuable service, giving us a chance to save what is valuable.

If this is indeed what is happening, and I see reasonably strong evidence that it is, then we need to course correct before it is too late. That starts with a commitment to honoring our rationalist values and virtues, and to holding each other to high standards.

If it is already too late, the mission, and those who have kept it, need to go on elsewhere.

65 Responses to What Is Rationalist Berkeley’s Community Culture?

  1. srconstantin says:

    What do you see as the mission? I think there’s actually a lot of disagreement and there was always a lack of clarity.

    • TheZvi says:

      I’ve been thinking about this; it’s not an easy question to answer.

      I hope to refine this into a post at some point that’s better thought out, and we should talk about it. But in the interest of not failing to answer the most important email because you’re trying to craft the perfect response, here goes.

      I even think it’s good and right for us to disagree about what exactly the mission is. A large part of the mission is to figure out the contents of the mission, because the mission is all about figuring things out. Hard things. Impossible (in the shut up and do the impossible sense) things. One could even say that the mission is “figure out what the most important mission should be, given it’s the most important mission, and then do that,” declare that The Way that can be specified is not The Way, and it’s not obvious that’s even a cop out.

      But it’s still totally a cop out.

      In response to another comment, I answer “why care about the mission?” which incidentally reveals some of what I think the mission is, but the question deserves an answer, probably much larger than this space allows.

      There’s different levels of mission. To use the metaphors I know best, think about a game. The victory condition is something like, score X victory points, or reduce your opponents’ life total to 0, or take the enemy capital, or launch a spaceship to Alpha Centauri, or rescue the princess, or whatever. In an important sense, that is the mission.

      And at some point, you really do drop everything else, use all your strength, and do everything you can to accomplish that mission (interesting note: I wrote ‘try’ here but shouldn’t have, and deleted it).

      On this level, The Mission is the most important mission. Save The World. We are here to Fight The Good Fight, to ensure there is something of value rather than nothing (or a very small finite amount) of value in the universe. We fight for life against death, for utility against its many failure modes, on the scale of all of existence. We read Beyond The Reach Of God, we sing Uplift and then we sing Five Thousand Years AND WE MEAN IT. All of it.

      On another level, the mission is about putting us in position to accomplish the mission. The goal is to cut the enemy, but it is not the main activity. One spends far more time crafting the blade, learning the blade, training with the blade, becoming one with the blade, than one spends cutting the enemy. There is paperwork that must be done. There are resources to be gathered. You spend most of your time building city improvements, not rocket ships. You fight for the battlefield, then later claim the spoils of war.

      The most difficult part of putting yourself in that position is to preserve your utility function, to keep your values. Sometimes I think that all the problems of AI are bigger problems of humans. It’s HARD to keep your values in the face of life; everything and everyone around you whispers to conform (RA RA RA!), to want what will feel comfortable and natural in the short term.

      The mission, in the most relevant/important sense given the context, is: To value high-level ruthless pursuit of truth, and the skills and habits that lead to it on all meta-levels, as the most important virtue, and to create and spread a culture that will spread and preserve that mission, on all meta-levels. To choose values, both folk and mythic, that will lead to this and to their own self-perpetuation and strengthening over time.

      Attempted short version: “The pursuit of the art of the pursuit of truth, as the highest virtue, in the service of the future, and the spread and preservation thereof.”

      • srconstantin says:

        So, “let’s make there be a future” is a really good mission and I’m pretty much behind it! It’s so broad (compared to, say, the aim of a research group or a company) that I hadn’t really been thinking in those terms for a while, but yeah, I can endorse this as important.

  2. [note on the comments in the post about people leaving their local community to move to Berkeley]

    the tone here worries me. the descriptions of what happens are accurate, but the phrasing assigns blame and social punishment to people for leaving, which you should basically just not do. it is really important to allow people to leave.

    • TheZvi says:

      I think that is fair. The implication is there, certainly, and I don’t want to say that everyone who moved is blameworthy or deserves social punishment. In most cases I want to say the exact opposite, and keep things fully constructive.

      However…

      I’m only human, so my emotional truth is going to leak out sometimes.

      And they totally, TOTALLY started it.

      The Berkeley/SF community has engaged in a systematic recruitment war to convince as many people as possible to leave their communities and move. They have done this claiming not only that it would be more fun but that it was the *right thing to do*, they have dangled promises and missions in front of them, then they have used the people who already moved to recruit their friends, and so on.

      For years I have watched my best friends, one by one, leave and then be the reason my other friends are considering leaving, as the rest of us struggled to hold together our lives and rebuild, worried that anything good we did create would just be ripped apart again. I have had pressure put upon me to move, as well, pressure that has made my life substantially worse.

      So they can offer social pressure and social rewards for coming, but others can’t even counterbalance that. They put their entire hand on the scale and people look at me funny for trying to put my finger on the other side.

      At a minimum, those who have engaged in recruiting on behalf of the area, saying things like “everyone should move to the Bay” need to take responsibility for that – for good and ill.

      But let me be clear, if any of my dear and formerly dear friends comes back, we will slaughter or spare the fatted calf, depending on their ethical principles, and there will be cake.

      • raemon777 says:

        As the person Zvi is basically talking about, I mostly endorse this response. (I think Zvi is justifiably angry and frustrated). People should have exit rights (and they do). They also should have the right to complain when a pattern is devastating their community.

        I think community leaders who depart for Berkeley should put a lot of effort into making sure to replace themselves. (I spent several months doing knowledge transfer and helping new organizers step up, and generally shifting the meetup structure to something more sustainable, which I actually do think is going to leave the community stronger than it has been — the last couple months had little input from me and appeared as vibrant as it’s been in the past couple years — but this is very much not the default outcome.)

        This doesn’t help with friendships / relationships getting harmed, which does suck a lot, and which I don’t have a good answer for.

      • Laura says:

        I have lost motivation to put any effort into preserving the local community – my friends have moved away and left me behind – new members are about a decade younger than myself, and I have no desire to be a ‘den mother’ to newbies who will just move to Berkeley if they actually develop agency… I worry that I have wasted the last decade of my life putting emotional effort into relationships that I have been unable to keep, and that I would have been better off finding other communities that are not so prone to having their members disappear.

      • arundelo says:

        Laura, your comment is heartbreaking.

      • Viliam says:

        In my experience, the recruitment to Berkeley was very aggressive. Sometimes it felt like: “if you don’t want to move to Berkeley as soon as possible, you are not *really* rational, and then it is a waste of our time to even talk to you.” I totally understand why having more rationalists around you is awesome, but trying to move everyone into one city feels like overkill.

        First, putting all your eggs into one basket is a bad strategy from the safety point of view. Imagine that one crazy person brings one bomb (and I am not talking about nukes here, just ordinary bombs anyone can build at home) and — POOF! — the whole rationalist community of Earth is gone. This seems exaggerated now, because honestly almost no one cares about the rationalist community that much, because the community does not do anything and therefore does not threaten anyone. But imagine a parallel reality where the rationalist movement somehow has an impact on the world (“raising the sanity waterline” etc.); sooner or later someone would get angry.

        There are also slower and less visible risks. Having all rationalists at the same place increases their exposure to locally dominant memes. And because all of them will be influenced the same way, they cannot easily provide sanity checks to each other. I am not saying that in 10 years all rationalists in Berkeley will turn full SJW. I am just saying that I wouldn’t be completely surprised if all of them moved halfway in that direction; and then congratulated each other for being in perfect Aumann agreement with each other.

        Second, if your goal is to spread rationality, having multiple bases across the world allows you to cover larger territory. Blogs are not everything (even ignoring the fact that Less Wrong is currently dead); the offline world matters. You have greater impact on people if you are physically there.

        Third, there is a section somewhere in the Sequences about how groups keep their art alive by competing with each other. You don’t have multiple groups if everyone lives in the same city, maybe even on the same street. You get one big group. And given human nature, that group will likely focus on itself (status within the group, sex with other members of the group) and forget about the world outside. Maybe the productivity of the rationalist group will be laughably small compared with the norms in the rest of the society; no one will notice, and those who notice will not mention it to avoid losing status points within the group.

  3. benquo says:

    This is a good description of why I feel like I need to leave Berkeley whether or not there’s a community somewhere else to participate in. This thing is scary and I don’t want to be part of it.

    I think this is some evidence that the Rationalist project was never or only very briefly real and almost immediately overrun by MOPs, and largely functions as a way for people to find mates. Maybe that’s OK in a lot of cases, but when your branding is centered around “no really, we are actually trying to do the thing, literally all we are about is not lying to ourselves and instead openly talking about the thing we’re trying to do, if you take things literally, saving the world really literally is the most important thing and so of course you do it,” it’s pretty disappointing to find it’s just another flavor.

    • Alyssa Vance says:

      “never or only very briefly real and almost immediately overrun by MOPs, and largely functions as a way for people to find mates.”

      Having been around for over a decade, I can safely say that things were really very different back in the day. In 2009, a bunch of us spent the summer together in Santa Clara, as part of a MIRI (then SIAI) fellowship program. I eventually learned that Anna and Carl, who were in the program together, had started dating. Emotionally, it was pretty surprising to me. Not because of anything about them, but because the idea that program participants might be dating each other just hadn’t really occurred to me. It would be basically impossible for anything in Berkeley to be like that now. Even if the project itself was totally de novo, the existing culture would quickly filter into it.

      I remember when Andrew and Sarah first got together (early 2011). IIRC, everyone assumed by default that they would be monogamous. Poly people had to explain that being poly meant dating more than one person at a time, because many people literally had not heard that before. Kevin started the first permanent rationalist house; nobody just assumed that an average adult, out of school, would live somewhere like that. Nobody assumed that Berkeley would be the permanent center of everything.

      To the extent we worried about things being real, it was because of something like “oh, we might be ivory tower academics who are too nerdy to do things in the Real World”. There were people you might call MOPs, but they were widely agreed to have low social status; an MIT postdoc was considered a special high-status VIP. The idea that someone might hang around specifically to hook up with people would have been, given the culture and gender ratio, seen as obviously a joke, the kind of thing that was funny since of course it would never happen. :)

      • raemon777 says:

        My question is not ‘is there a higher concentration of MOPs’, but ‘if people want to hang out with non-MOPs and/or do real things, are there obstacles to that that are due to culture, instead of actual limitations on how many close friends or colleagues people have the time to interact with?’

    • Evan says:

      I’d say the rationality community started when Eliezer forked LessWrong off of Overcoming Bias, which was around 2008 or 2009. That’s certainly not when it peaked. CFAR started out as a project built by the rationality community, in a way MIRI never was. That was happening in 2012 or 2013. Above, Sarah is also quoted as saying she thinks the Berkeley rationality community hit the right balance of focusing on being a welcoming community qua community, and aspiring to whatever the core mission(s) of the aspiring rationalist project are.

      Unless you’re arguing there was a latency effect where the MOPs overran the community in 2009, but the consequences of such were buried for several years, the period between 2008/09 and 2012/13 doesn’t constitute being “immediately overrun”.

      I get you’re pessimistic, but I think you’re overshooting. Matching the map to the territory of what went wrong in the Berkeley rationality community is key to undoing it, or making sure similar failures don’t occur in the future.

      FWIW, I’m sorry you’ve had to experience so directly what feels like a decline in an aspect of your local rationality community. As someone who connects with rationalists primarily online, I can tell you they’re everywhere, and even if there isn’t a meatspace community as developed as the one in Berkeley, there are rationalists who won’t let the Craft disappear, and they want meatspace communities of their own built up outside of Berkeley as much as anyone.

      • TheZvi says:

        I think we need to realize the extent to which Berkeley is actively preventing the formation of, and destroying, these other communities. The majority of high-level rationalists who started in the New York community are in the Berkeley community, which caused New York to outright collapse for years before recovering, and they just now once again caused a crisis by taking away a pair of vital community members and almost wiping out the only rationalist group space in the process. From meeting other community leaders in other cities, I hear similar stories A LOT.

        I do agree that Plan A for most members can and should be Fix It, not walking away, and that pointing out it needs fixing is the requirement for perhaps fixing it.

      • My sense of the history agrees with the above. If memory serves, we only had a few MOPs before CFAR was formed, but we also didn’t have enough sociopaths, and we had discovered that only a small subset of (particularly spectrumy or psychotic) Geeks is productive without sociopaths to organize them, and when you get too many unproductive geeks together they become depressed and become even more unproductive. When CFAR opened the doors to MOPs, everyone was greatly relieved, and happiness levels went way up. However, while SJW and Rationalist Geeks don’t mix, SJW and Rationalist MOPs mix thoroughly, and around 2014 the MOP ratio exceeded the 8:1 safe upper bound, the MOP herd was predominantly SJW, and the term ‘Rationalist’ was no longer really applicable. I returned from the Bay Area to NYC at this point.

        Since then, my impression is that things have gotten worse WRT CFAR and the community as a whole, but MIRI has improved considerably, and has grown to the point where by itself it can constitute a very appealing community. Leverage seems to have done this as well. Both seem much less integrated with the larger rationalist world than they once were, though. It seems particularly sad that they don’t move somewhere cheaper now that they are effectively socially autonomous.

        Despite everything, and with all regards for Zvi, I don’t think that the non-Bay rationalist communities are actually the solution. The Rationalist Community was founded almost a decade ago with the intention of accelerating a historic trend towards raising the sanity waterline before a diffuse historic trend towards accelerating technological progress killed everyone. In retrospect, we were clearly misreading the situation. Since it was founded, technological progress has ceased to be a diffuse historic trend to such a degree that it’s now fair to attribute most of ongoing technological progress to one guy and his increasingly numerous companies, while the sanity waterline has fallen faster than we ever hoped to raise it, leading to contemplations such as http://www.scottaaronson.com/blog/?p=3376 .

        In particular, we hoped to raise the number of people reasonable enough to seriously consider AI risk. Since then, AI risk has become fashionable and is now being endorsed because it is fashionable, by unreasonable people.

        Most of all, having learned so much of the awfulness of Silicon Valley and the overt malevolence of SJWs, we have failed to come to terms with the fact that empirically, as the world measures progress, Silicon Valley remains the benchmark. Somehow, the rest of the world is EVEN MORE AWFUL for technological progress! We haven’t become curious about that, yet it is the chief riddle with which we ought to be occupied. Likewise, some fact, obvious to smart teenagers, makes the overt malevolence of SJWs a MORE appealing option than joining with the ‘out to get you’ forces of Facebook and the contemporary world. It’s fair to say that there is a GREAT DEAL more covert malevolence going around than we are inclined to notice, and if we pretend it’s not there even as it undermines our project we are like farmers who refuse to use pesticides or to acknowledge the need for some sort of organic method of pest control.

    • raemon777 says:

      I think Zvi’s post points at a plausible and interesting thing worth considering, but I’m not sure what you mean by “this is some evidence”. It sounds like you are responding to this essay, but this essay isn’t additional evidence about the state of Berkeley – it’s attempting to explain the state of Berkeley as described by Sarah.

      Or did you mean that you’ve independently seen the thing Zvi is describing?

  4. Pingback: Rational Feed – deluks917

  5. blacktrance says:

    What’s the point of the mission if it doesn’t cause one to live a better life?
    When I lament the possible deterioration of rationality standards in the community, it’s because I think they contribute to a good life. If they didn’t, they’d be more like work-related skills – useful in their context, but being surrounded by people who have them wouldn’t be as important.

    • TheZvi says:

      It is a good question and should be asked. There are a lot of levels to it; it boils down to the question Why Rationality? See that post for my long answer (in a slightly different context), but basically a quick sketch would be:

      1. I value the mission for its own sake. It is one of my Intrinsic Goods.
      2. The mission is the (1st to 4th meta-level depending on how you count) source of the things that are leading to a better life. If you lose the mission, the source of the better life things goes away.
      3. The mission is necessary to protect the good things you already have. If you don’t keep the mission, you lose your selection optimization; members of the public enter, and your best, most motivated people leave. Your thing gets captured by regular people and turned into a regular thing, and what is special about it is lost. Even if all you really wanted was to live with a bunch of other smart, honest people that share your memes, that will be diluted, discouraged and destroyed if you don’t defend it.
      4. If you don’t provide the mission it will become something else; e.g. tolerance for weirdos. If we are lucky. That’s a better mission than most, but these days most paths end with culture war politics. Step one in a better life is keeping that stuff away, even on a minute-to-minute basis.
      5. Not to bury the lead, but the mission is FREAKING IMPORTANT. The mission is the difference, at least with some probability, between a universe with substantial positive utility, and one without it. It is, quite literally, life and death. It is the difference between the preservation/creation, and the destruction, of all the things you care about. We need to be careful about not putting people under too much pressure, because that would not help matters, but we cannot afford to all Refuse The Call.

  6. Evan says:

    While rationalists are internally trying to figure out how their community has changed, and they’re lamenting how it’s not as focused on world-saving, there’s a giant factor nobody has talked about yet. The only community which is more focused on the rationality community’s way of world-saving than the rationality community is effective altruism. To what extent is the rationalist community less world-save-y than it used to be because the rationalists whose primary rationalist role was “world saver” just switched to EA as their primary world-saving identity? I think as things have gotten less focused since LessWrong 1.0 died, and the rationalist diaspora made entryism much easier as standards fell, what you’re saying is all true. You might be overestimating the impact of entryism, though, and underestimating people who exited not because they had no voice, but for sensible reasons. If at any point a rationalist felt they could better save the world within EA rather than through the rationality community, it’d internally make sense to dedicate one’s time and energy to that community instead.

    The EA community doesn’t seem able to build bonds as well as the rationality community. However, the EA community seems better at making progress on outward-facing goals. In that case, I for one wouldn’t blame anyone who found more of a home as a world-saver in EA than they did in the rationalist community.

    • TheZvi says:

      Definitely an elephant in the room and a reasonable suspect! Certainly partially responsible. I haven’t mentioned it yet, but that doesn’t mean I’ve missed that it is in the picture. I wanted to get this much out there now, and avoid trying to cover as many bases as possible all at once.

      There have been many (Sarah and Benquo among them) who have been trying to talk for a long time, with many many words, about the problems with EA. I will consider that question beyond scope here, but rest assured I Have Thoughts.

  7. The Verbiage Ecstatic says:

    I think your invocation of the folk values vs mythic values post might be missing its point.

    You say that the community *at least* needs the folk values.

    But I think its point, which I agree with, is that folk values are a cargo cult built around heroic values that actually get in the way of them.

    When you see someone do something heroic, you feel the urge to rise to that level. That urge is counteracted by the fact that true exercise of heroic values (no matter which community we are talking about) is generally terrifying and paid for in blood, sweat, and tears.

    So, some people procrastinate on living those values by finding a bunch of others who also admire the heroism from a distance. They then sit around talking about how great the heroes are and how much they would like to emulate them, and what their big plans are for doing that, and then they start sharing “I fucking love science” memes, and then they start writing blog posts about how members of the community need to *at least* fucking love science!

    Meanwhile, outside the community, someone else saw Nikola Tesla’s work and was inspired to take action on her own, toiling in loneliness on a project. Her project looks nothing like “I fucking love science” memes but shares Tesla’s deep need to understand how the universe works.

    It would be great for her if she could find collaborators who share her mission and inspiration. But would a community of Tesla fans throwing their meme parties be a good place for her to look?

    That’s my read of the point that heroic values v folk values post was making. See also Steven Pressfield’s book “The War of Art”, which is great and brings to life the dynamic of how someone who admires Tesla ends up making memes. If you haven’t read that book, and you have “doing something important for the world” on your life goals list, you should probably read it, it is very short.

  8. tk17studios says:

    re: the community needing to have standards …

    It is my experience that it emphatically does not. Granted, I only see like a 25% slice of the community at all, and spend most of my time in like a 10% slice, but the *vast* majority of conflicts I have had with people in the community have boiled down to disagreements over how important it is to conform to norms of rationalist virtue (taking outside views seriously, making falsifiable predictions, checking one’s emotional responses to see whether they’re justified and appropriate or not, deliberately setting aside one’s tendency to strawman, etc.).

    Most of the time, when these conflicts play out, I feel that the community response I get is somewhere between overt hostility à la people doing monkey politics (a setting aside of the actual point in favor of defending social and subtribal bonds that I’m threatening) and a rueful shaking of the head à la “yeah, you have a point, but let’s be real, it’s not worth fighting for” or “maybe it’s true that this person isn’t really a rationalist, but they’ve been around for years and they’re a part of the social fabric and we’d rather avoid schism.”

    This makes me sad. I wish that there were e.g. a corresponding loss of social status for not acting in accordance with a) rationality and b) worldsaving ethos, and in my experience, there just plain isn’t.

    (I note that under such norms I certainly would’ve lost points here and there, sometimes in large chunks, but those are the points I would have *wanted* to lose, to incentivize from the outside what I reflectively feel is proper behavior.)

    This seems to be a community first, with rationalism and worldsaving a distant second, something that everyone professes but nobody actually *has* to uphold (like professions of Christian values in small towns in the South or the Midwest). There are those who are hardcore committed and will never waver, but Moloch is chipping away at everyone else, and overall the thing is not structured to turn nascent rationalists and worldsavers into *actual* rationalists and worldsavers. Quality and growth seem from my perspective to happen occasionally in spite of the community, rather than because of it, and a number of high-status people seem to me to be objectively, demonstrably non-rationalist in their behavior, and to be venerated as role models anyway.

    (This is in addition to, say, the Rob Bensinger and Julia Galef types, who *do* consistently embody the virtues that the community lays claim to on the surface, and receive status for it. It’s less about the community never rewarding the good thing, and more about it never disincentivizing the bad thing.)

    If anyone reads all of the above and thinks it’s just patently false, by the way, note that I’d *love* to be proven wrong on this point, and again I acknowledge that I see but a slice and that slice may be non-representative (or I myself might be pessimistically biased).

    • raemon777 says:

      > It is my experience that it emphatically does not.

      Minor point: I first read this as “the community does not need to have standards”, as opposed to “the community does not have standards”

    • tk17studios says:

      Crosspost from FB, where people responded to the above by saying that they agreed about lack of standards but disagreed about what the standards should be (and that therefore the community might productively split), and also asked whether in my experience high-standards and low-standards subgroups could in practice coexist:

      I’m okay with the community splitting into multiple communities, but the one that calls itself “the rationalist community” had better embody rationalist virtue, and if there’s disagreement on *that* meta-standard, I’m willing to openly fight the disagreers.

      I’ve found (in symmetric experience in past communities) that there can be a loose, casual alliance between the high-standards subgroups and the low-standards subgroups—they can certainly exist shoulder-to-shoulder and meaningfully hang out. But yeah, the former causes too much pressure against the latter for them to actually overlap/coexist. In a very real sense, what defines a community is its fences and its ejection policies (hence a lot of people’s sudden and terrifying introspection about What Even Is America If It Doesn’t Repudiate Donald Trump). If a community is more concerned with being welcoming and not hurting feelings than it is with its nominal goal, then its nominal goal is fake.

      Another way to think of this is, you don’t get to play football with the team if you don’t attend practices. In the current state of our community, many people are “playing football” without “attending practices.” They’re enjoying the social benefits of the network and the pat-yourself-on-the-back benefits of the identity without actually contributing to the team.

      (Note here that I am aware of, and do validate, specialization—one need not literally be e.g. a rationality researcher to be meaningfully participating. One could possibly be just the Town Wafflemaker. But in that case one would still need to be on board with the culture. I can think, for instance, of a woman who was more deeply integrated with the high-standards parkour scene than many of the athletes, despite not actually doing parkour herself—she took photographs, chauffeured groups around, helped host and sponsor events, regularly passed on tips and advice to beginners, encouraged safety and good practice, etc. etc. etc.)

  9. Nick T says:

    I don’t have anything substantial to add at the moment, but do want to say for the sake of building common knowledge about perceptions that the OP and Benquo’s and TK17Studio’s comments resonate strongly with me.

  10. gbear605 says:

    My take is that “what is the mission” really isn’t the most helpful question to ask right now. Instead we should ask **what is the next step to take as part of the mission?** I feel like people in the rationalist community have at least a subconscious “save the world” goal, but a lot of them aren’t doing anything toward it because it’s not a useful goal. The mission, as Zvi vaguely defines it, doesn’t reveal any steps to take.

    This is similar to the idea of SMART goals, where if you want to accomplish a task, it needs to be Specific, Measurable, Achievable, Realistic, and Timely, otherwise you will be much less likely to actually achieve it. Zvi’s mission isn’t specific, measurable, or timely [whether it is realistic or achievable is a completely separate discussion].

    • tk17studios says:

      Some degree of increase in this would be useful, but also I posit that you don’t get culture-level cohesion around SMART goals. You get culture-level cohesion around “shining crystal city on a hill” or “fairness and equality for all” or “the best gosh-darn marching band in the world,” and SMART goals are one tier below that, if not more.

    • TheZvi says:

      Yes. The question we want to be asking is, what is the next step to take as part of the mission?

      The problem is that you need to answer “what is the mission” first, or you auto-lose.

      If you don’t know what the mission is, it’s rather difficult to then figure out what next step would help accomplish that mission. If you see someone who got confused and lost the mission, correcting that is a necessary first step.

      No plan, no matter how SMART (I worry about things like SMART that they are making important observations but in a highly dangerous way, but that’s beyond scope), will accomplish the mission if it was chosen with completely different goals in mind.

      As for what we should actually do once we affirm the mission, I certainly have thoughts on that (and to the extent I can, I am acting on them), but that too is beyond scope. I hope to be able to well-specify such things soon.

      • gbear605 says:

        Then I posit that the next step as part of the mission is to determine what the mission is. I guess my point is that rationalist-Berkeley is distracted from the mission because, as a whole, there are currently no steps that they can take/feel they can take. So instead they get involved with EA or are part of MIRI or CFAR, or they just don’t know what to do, so they take the easy answer and do nothing. If you want a group to do something, that something needs to be defined.

        I’d suggest that the next step is to create a Google Doc or something like that, share it with other rationalists, and start concretely defining the mission and steps to accomplish it.

  11. Andrew Rettek says:

    The tone of the Berkeley community has changed, but I’m far from convinced that efforts have declined in absolute terms. As a proposed measure, what concrete actions were being taken in 2012 that aren’t now being taken with greater effort (in terms of numbers of people and dollars moved), or replaced by other projects?

    • raemon777 says:

      As far as I can tell the answer to this question is ‘no’, and I’m frustrated that this isn’t a bigger part of this discourse.

    • TheZvi says:

      Remind me, what point in its development was Less Wrong (the website) in at that point?

      As an outsider who misses his friends and lives thousands of miles away, it’s hard for me to know what object-level stuff is and isn’t happening on the other coast – I’m relying entirely on secondhand reports. Raymond just arrived in the last month. So I’m curious to see thoughts from others who have been at the center for 5+ years.

      I do think that a greater quantity of concrete efforts is entirely compatible with the problems described, especially if the size of the underlying group is much larger, but it would be good to know if that was the case to be made, or not. I’d also add that if something is ‘replaced by another project’ but the new project is off-mission (in some important sense) then that’s still a big problem.

      I’m also struggling a lot with the question of how to either share the intuition that particular non-concrete stuff matters and why, and/or be able to turn these non-concrete things concrete.

  12. Orangeneck says:

    Just a point of feedback from one salty rationalist, as I’m guessing you rarely get feedback from those who’ve selected themselves out of your community. I bailed when it became very apparent that the core value was not “rationalist ethos” nor “unconditionally tolerate weirdos” but rather “conditionally tolerate weirdos that did not offend the not-very-atypical “blue-tribe” normative sensibilities of influential community leaders”. This article comes to mind as very poignant on the matter: http://slatestarcodex.com/2014/09/30/i-can-tolerate-anything-except-the-outgroup/

    When I offended others, I was censored, told I belong on an “evil” containment board (and supposed to believe this was not an insult), and later threatened with expulsion. When I myself was offended and sought remedy, I was told I urgently needed to apologize to those who offended me or “face consequences”. This even occurred when I made painstaking efforts to edit my objections into legal formality- which was firstly derided as “tldr” and secondly remanded as still objectionable and inappropriate content. It was abundantly clear to me then that this community had no real interest in facing points of view it found challenging. And I was disgusted that intelligent people would so readily and *explicitly* abandon due process in favor of “in-group, out-group” arbitration. For “necessary” expediency of course. Never mind whose feelings might be trampled by this expediency. Unless of course it was the feelings of someone who actually matters. Many standard format excuses and justifications for the “necessity” of this sort of behavior soon followed.

    In my experience, you are not so idyllic or even interestingly different than you all seem to think of yourselves. Even your arrogance in this regard is rather typical. But this is not rationalism’s failing as an ideology, nor is failure of the community inevitable in my opinion. Rather, some rationalists leave your community (I’ve observed five, myself included) when interpersonal politics are put before ideals. And with each loss, your community is less enriched.

  13. Quixote says:

    For the record, I’m going to note that my experience is almost the exact opposite of Orangeneck’s.
    While Orangeneck found the community not tolerant of [political direction], I found it to be excessively tolerant. This wasn’t a problem in [physical location] meet up groups that I attended (if it had been I wouldn’t have attended), but was on some online groups. Even [physical location] seemed more tolerant than seemed prudent to me.

    I at least once heard someone claim not to acknowledge the evidence for anthropogenic climate change and people just awkwardly moved on rather than calling the individual out. If people aren’t called out for being wrong on a controversial issue that is so easy to fact check, think about what might be happening on other issues.

    The community didn’t seem serious about managing PR risk to me. But if you have a goal and you want it to succeed, then you manage risks to the goal. Even 5+ years ago, the community never felt serious about risk management to me. Even back then, the values drift to tolerance of weird views rather than accomplishment of goals had already happened. I work in [sector] and do significant work to prevent [type of crisis]. That work is important. Accomplishment of that wasn’t furthered by attending a meet up of a community whose public faces put tolerating unpopular views over getting things done. So now I don’t.

    The work to prevent [type of crisis] continues quite well.

    • JDP says:

      >The community didn’t seem serious about managing PR risk to me.

      Bingo. One of the major problems I have with the LessWrong sphere is that it’s full of people who spend weirdness points like they’re water, and if you try to censure them for it you get told you’re being unreasonable. Some examples that come to mind:

      – The number one search result people use to find my personal website is apparently “lesswrong cult” (admittedly with a single digit sample size but…wow)

      – When I was soliciting questions for the 2016 survey, I remember somebody suggesting I put a question about a “squatty potty” on the survey. Someone else asked me to put one in about how often people bathe. O_o

      – Dragon Army.

      Part of why it’s important to define your mission well is that it lets you get away from questions like “Okay but are Neoreactionaries *so wrong they need to be kicked out*?” and more towards “Is hosting Neoreactionary discourse serving our organization/community/etc mission?”

    • Orangeneck says:

      I would like to clarify, that my quarrel is not with “[political direction]”, in as much as American politics are concerned; petty personal politicking yes, but not governmental. I mean “blue tribe” as something very distinct from “Democrat” or even “Neoliberal”.

      For example, someone made a “factual” claim about [people where I’m from]. I objected, and demanded citation and evidence to support this claim. Not only was it not provided, but I was told I was *being offensive and in the wrong for demanding it*, and that I both needed to cease inquiring and apologize. This act of protecting someone’s emotions at the expense of investigating a hypothesis is the very definition of irrationality. Not only that, but the fact the claim was personal to me mattered less than the fact that my objection applied to someone more connected to the group leader than myself. So not only was the leadership of the community irrational, but far from impartial as well. Meanwhile ignorance and prejudice about [people where I’m from] is reinforced. This is all very typical and ordinary. But should I not have expected more from the ideology that penned the Sequences?

      This was not an isolated incident, nor was it isolated to myself. I redid my count – seven individuals that I’m personally aware of and have maintained contact with have reported dropping out after similar experiences, the additional two from different meetup locations. The issue is cultural, not political.

      And when I said “standard format excuses”, it speaks to your hubris that you talk of winning PR battles. PR was certainly a “concern of the community” that was trotted out frequently enough when I left several years ago. As I understand it, things have only declined since then. Probably because principled people like myself won’t stand for it when such “necessity” in practice is only used to remove “undesirables” from your friends circle.

    • PDV says:

      Not managing PR risk isn’t a bug, it’s a feature. Focus on PR risk brings you further from the truth, because what is good PR is not truth-tracking.

      I also don’t see what’s necessarily wrong about allowing someone to express disbelief in AGW. Do you really think it’s better for truth-tracking norms to challenge them on it and start a significant, and substantially political, fight, rather than downgrading your assessment of their quality as a rationalist and moving on to something that might be a productive topic of discussion? If you have otherwise experienced the person as a valuable contributor to discussion, then the appropriate action may in fact be “slightly update away from AGW”.

  14. deluks917 says:

    I am reposting (with permission) something Zvi sent me in a private exchange. I think it really clarifies his position on excluding people:

    “I think that currently reading the sequences, over the course of a year or two is worth 100% full credit. Students welcome. Wanting to learn via practical examples and discussions, also good enough. In fact, if you just like hanging out and talking about how things work in a careful fashion using our cultural discussion norms, that’s basically good enough, because if you do that, you’ll learn.

    As Scott Alexander puts it, we have truth on our side (and we’re pretty smart), so we’re happy to engage in debate. We’ll win.”

  15. JDP says:

    You know it’s funny, I’ve actually been thinking about mission for quite a while and had an entire mini-section dedicated to it on the 2017 survey. So to have discourse swing around to it as a focus at the same time is pretty convenient. Some questions for people that will let you get lots of data on this subject:

    – What are the reasonable options for a mission for ‘the rationalist community’? Don’t bother with whether they’re the correct mission or not, just the ones you think various people *think* are it.

    – Is there a better way to phrase Sarah C’s “unconditional tolerance of weirdos” mission statement? I don’t think most people would ever think of it that way, so on a survey it would get artificially low results but I’m not quite sure how to rephrase it. Suggestions welcome.

  16. I may be wrong about this, but I see as the key feature of the Berkeley Rationality Community (BRC) the fact that it’s both possible and implicitly encouraged to live your entire social life within the BRC. Your roommates are mostly BRC, your friends are mostly BRC, your coworkers, your neighbors, your lovers…

    Here in New York, the community probably isn’t large enough to be anyone’s entire social world, so most of us have other circles and communities that we are part of, circles that are very different from rationalists. Most of my professional network, my circling group, and my soccer team would think the rationality community is crazy if I told them about it in detail.

    But what if you’re the type of rationalist-aligned weirdo who only feels comfortable in weirdo-tolerant rationalist communities? My guess is that you would move to the BRC as soon as you can afford the bus ticket, and that once there you will make sure that “unconditional tolerance of weirdos” is the core mission of the community because your entire social life depends on it. And once enough people like that move in, the community grows ever more all-inclusive and internally focused.

    P.S.
    Betting time! Will Scott’s writing improve or deteriorate with his move to Berkeley? You can probably guess which way I’m leaning.

    • TheZvi says:

      This does seem like a plausible thing.

      On the Scott question, I was wondering/worrying about that too, and I think this will be a real test of how things really are right now. If Scott manages to maintain the quality of his writing and thinking, and to avoid getting hijacked into debates and topics that would be expected given his new location, then that would be a very good sign, while his failing to do this would be a bad sign. I am hopeful but skeptical, and of course wish him the best of luck. I think his writing has still been good, with a small decline in quality; my bigger worry is that it seems like he feels like he needs to engage (or feels able to engage) in places where it’s not clear engaging (or engaging too often) is wise.

      • raemon777 says:

        What are concrete examples of topics or styles of quality-shift that would be evidence that something bad has happened?

      • Zvi Mowshowitz says:

        Quality is a hard thing to give a concrete example about, for obvious reasons. One thing to look for would be a lack of ‘grounding’ in the regular world, which he is quite good at historically. In terms of topic selection, I can see a few failure modes. One is him feeling the need to write a lot of, or feeling unable to write any, #ThingsIWillRegretWriting posts; I think if we see a big increase, or a basically complete lack, of such posts, that will be bad, for different reasons. I also think if we see a large increase in the amount of “look at thing X that is bad, here is a long and detailed explanation of why it’s bad” where X is something people in the outside world weren’t that aware of, that would also be bad. Or if we start to see a lot more ‘motivated’ stuff in other ways, such as him pushing EA harder and less objectively; so far he’s advocated for EA in a very good and balanced fashion, including yesterday, which I took as a positive sign for him – I’m still processing what it means for the rest of EA, but watch this space for that.

      • William Eden says:

        With 8 months behind us since this comment, do you think Scott’s writing has changed appreciably in one direction or another?

      • TheZvi says:

        On the negative side, definitely a shift towards ‘taking the bait’ in various forms, and feeling the need to defend things that don’t need defending slash refute things better left alone. And a general decline in the rate of awesome, especially considering he was previously also writing Unsong.

        This would be the scariest piece by far, along with a few around that time, which would be exactly what I’d feared: http://slatestarcodex.com/2017/11/07/does-age-bring-wisdom/

        On the plus side, it’s still basically the best blog in town in terms of quantity*quality, despite all that, and it could have gotten a lot worse. And you could argue, reversion to the mean. And you could argue, 2017 and 2018, man. Fair points.

        Curious what others’ thoughts are.

  17. Pingback: Paths Forward on Berkeley Culture Discussion | Don't Worry About the Vase

  18. Pingback: Altruism is Incomplete | Don't Worry About the Vase

  19. Pingback: At a Party | An Algorithmic Lucidity

  20. Pingback: Best of Don’t Worry About the Vase | Don't Worry About the Vase

  21. Raemon says:

    FYI, after one month here, my concerns are entirely on the “I hope the pressure to devote my entire life to fighting X-Risk doesn’t accidentally damage my soul” side of things, as opposed to the “I hope I don’t get absorbed into the vague feel-good parts of the community” side.

    • Raemon says:

      (this is still a generally positive experience, just, if it fucks up, that is the direction in which it seems likely to do so)

    • raemon777 says:

      (9 months later)

      I definitely don’t feel pressure to abandon the mission. I do feel some… not really pressure but _drive_ to focus on efforts to build up the community elements _within_ the mission paradigm, and my only worry is that this is a distraction from other mission-focused areas to which I could be fully devoted.

      I think it remains the case that the Village is larger than the Mission, but this is, like, to be expected. Most of the mission-work goes on in organizations, basically as it should be.

  22. Pingback: “Balance to Win”: Sometimes You Need Friends, Sometimes You Need Haters – Remake The Map

  23. Pingback: Nice Things | Don't Worry About the Vase

  24. Pingback: The Case Against Education: Foundations | Don't Worry About the Vase

  25. Pingback: Smorgasbord (Aug 2018) – Junk Heap Homotopy

  26. Pingback: Additional arguments for NIMBY | Don't Worry About the Vase

  27. Pingback: deluks917 on Online Communities – Put A Number On It!

  28. Pingback: Who To Root For: 2019 College Football Edition | Don't Worry About the Vase

  29. Pingback: Interview with Aella, Part I – Put A Number On It!

  30. Pingback: Covid-19 6/18: The Virus Goes South | Don't Worry About the Vase
