Paths Forward on Berkeley Culture Discussion

Epistemic Status: Not canon, not claiming anything, citations not provided, I can’t prove anything and you should definitely not be convinced unless you get there on your own.

Style Note: Phrases with Capitalized Words other than that one, that are not links, indicate potential future posts slash ideas for future rationalist concepts, or at least a concept that is (in a self-referential example) Definitely A Thing.

I got a lot of great discussion, at this blog and elsewhere, about my post What Is Rationalist Berkeley’s Community Culture? There are a lot of threads worth full responses. I hope to get to at least some of them in detail.

This post came out of a discussion with Ben Hoffman. I mentioned that I had more places to go next and more threads to respond to than I could possibly have time to write, and he suggested I provide a sketch of all the places. As an experiment, I’m going to try that, at least for the subset that comes to mind. Even as I finish up writing this, I think of new possible paths.

None of this is me making claims. This is me sketching out claims I would like to make, if I was able to carefully build up to and make those claims, and if after doing that I still agreed with those claims; trying to robustly defend your own positions is sometimes a great way to end up changing your mind.

In particular, I am definitely not claiming the right to make these claims. An important problem is that often people know something, but do not have the social right to claim that it is true, leaving them unable to share, defend or act upon that knowledge. As we as a culture become unable to state more and more true things explicitly, this problem gets worse. I have left out some claims that I am not socially safe even saying that I believe, let alone claiming that they are true or claiming the right to claim that they are true.

One thread is to expand on the question of folk values versus mythic values, as I think the original post was pointing to a hugely important thing but got important things about it wrong – I think that people with only folk values are helping, if you choose your folk values wisely, and should be honored. So we might call this (1A) Yay Folk Values.

A second thread is to address the question Sarah asked: “What Is The Mission?” This deserves not only one but at least two careful and full posts, one (2A) about the mission to preserve our mission and values, The Mission Must Preserve Itself. Preserving your utility function (for both groups and individuals) is hard! If your utility function and culture do not explicitly and expensively protect themselves, they will become something else and eventually end up orthogonal to your original goals. If people decide they don’t want to go on (2B) The Mission To Save The World or support that mission, I respect that decision, because that’s not the part that needs to preserve itself – it follows from thinking carefully about things. What can’t be compromised are the epistemic standards and values. That’s what we must fight for.

Thread (2C) would then talk about How the Mission was Lost, as the group allowed its culture and utility function to shift over time, partly through entry of non-core people, but also by choosing goals that were too much about attracting those people, which is doubly toxic to preserving your utility function and culture – you shift your own priorities in ways that are hard to compartmentalize or contain, and you bring in people with different culture.

The follow-up from some people was, why preserve the mission?

One I do intend to get to, because it seems super important, is that you sometimes need to (3A) Play On Hard Mode. A lot of paths make your life easy now, but make it hard later, because you develop habits and tools, skills and relationships and assets, that won’t get you what you want, and that incentivize you further towards developing the wrong things. The wrong hills are climbed. You need to avoid optimizing for things, or taking actions, that would subject you to Goodhart’s Law – you need to (3B) Nuke Goodhart’s Law From Orbit. Repeatedly. It won’t stay down.

Getting what you want is also about avoiding what you don’t want. This partially answers, if done right, the question of (3C) How To Keep The Wrong People Out. If you keep your thing unattractive to the wrong people, the wrong people who you wouldn’t feel comfortable kicking out will self-select out. The ones who still refuse to leave will make it easier on you; you still have to grow something of a spine, but not that much. New York has inadvertently done a strong job of this. The cost, of course, is the false negatives, and I’d explore how to minimize that without losing what is valuable. This is also related to (3D) Become Worthy, Avoid Power, which is still on my eventual maybe-to-do list.

(3E) Goodhart’s Law Is The Enemy. It is behind far more of our problems than we realize. Almost all of today’s biggest and most dangerous problems are related. That is in fact the place I am going, slowly, in the background. There’s also another track about letting people see how my brain works and (3F) Teaching The Way By Example but let’s cut the thread off there before I get even more carried away.

Then there’s (4A) which would be Against Polyamory. You can be busy solving the nerd mating problem, but if you’re busy solving that you’re going to have a hard time being busy solving other things, especially if those other things aren’t helping your nerd mating success. Setting aside all issues of whether it makes people happy, whether it is moral, and whether it has good long term effects or is good for families, I think it has even more important problems: It is a huge time sink, it hijacks people’s attention constantly, vastly increasing complexity (Complexity Is Bad), and it makes anything and everything about sexual bargaining power and status because everyone always has tons of choices and Choices Are Really Bad. It makes utility function preservation even harder, and it is an example of (4B) Bad Money Drives Out Good. I have some stories to tell about other people in other communities, as well, to the extent I can find a way to tell them here.

At some point I do need to respond to the intended central point Sarah made, and that Ozy reminded us of elsewhere, that the rationalists by themselves don’t have an especially strong record of actually doing outward facing things, especially for things that are not charitable organizations or causes. There are essentially three ways I’ve thought of to respond to that.

Response one (5A) is that this is partly because of fixable problems. We should Fix It! Response (5B) is to explain why keeping the projects Within the System is important, and worth having a reduced chance of commercial success. If you can’t retain control of the company, and the company’s culture, it will inevitably drift away and be captured, and not only will others get most of the money, they’ll also steal the credit and the ability to leverage the creation to change the world. That doesn’t mean never go out on your own, but there are reasons to aim high even when everyone tells you not to; each project has the same problem that the overall culture does. Of course, you’ll need some people from outside; we aren’t big enough to cover all the skill sets, even with the non-core people. Response (5C) is that it’s fine to go out and do things on your own and with outsiders, of course it is, but you need A Culture Of Extraordinary Effort or some such, to make those things actually happen more often, and even in that case (5D) Hire and Be Hired By Your Friends And Network because that is how most people get hired.

Then finally there’s a question that both points to a huge problem and deserves an in-depth answer: Effective Altruism (EA) seems reasonably on-brand and on-similar-mission, with an explicit ‘save the world’ message, so why shouldn’t the people who want that migrate over there? Then I have to finally face writing out my thoughts on EA, and doing it very carefully (but not as carefully as Compass Rose, see basically the entire archives and I do not have that kind of time), without having ever been to more than local meetup level events for EA – I will at some point, but I have this job thing and this family and all that.

Given Scott’s recent post, Fear and Loathing at Effective Altruism Global 2017, combined with this being perhaps the best challenge and the one most in need of a response, the timing seems right to do that.

I have a lot of thoughts about EA. Most of them involve philosophy, and many in EA are pretty deep into that, so it’s tough to write things that are new and/or couldn’t be torn to shreds by people who have been over all the arguments for years and years, but at some point we all know I’m going to do it anyway. So to those people, when you do tear it all to shreds, or point out none of it is new, or what not, I won’t be offended or anything. It’s going to take me a bunch of iterating and exploring and mind changing to get it right, if I ever get it right. Just be nice about it!

I’ve said before that I am a big fan of Effective but skeptical of Altruism. I mean, sure, I’m a fan, it does a lot of good, but (6A) The Greatest Charity In The World is not Open Phil or the Gates Foundation, it’s Amazon.com, and if Jeff Bezos wants to help the world more, he needs to stop worrying about charity (although if he’s still looking, I do have (6B) Some Potential New Effective Altruist Ideas) and get back to work. Even when you discount the profit motive, there are plenty of other good reasons to do positive sum things that help others. At a minimum, (6C) Altruism Is Incomplete; it also gets more credit for the good it does than its competitors do, and when it hogs credit for too many good things, this distorts people’s thinking. This ended up expanding beyond the scope indicated here.

It’s all well and good to score high in Compassion and Sacrifice, but if you do that by neglecting Honor and Honesty, or even Justice, Humility, Valor or plain old Curiosity, you’re gonna have a bad time.

To be filed under posts I do not feel qualified to write properly and that would be torn to shreds if I tried, there’s (6D) Can I Interest You In Some Virtue Ethics? or even (6E) Against Utilitarianism. At some point I want to try anyway.

There’s some pretty bizarre thinking going on about what is good or worth thinking about. Some of it even involves people in EA.

There’s the level of weird where one doubts the mechanism, like worrying about unfriendly AI, nuclear war, asteroid strikes or pandemic plagues. This is weird in the sense that most people don’t think the probabilities are high enough to worry about, but it’s totally normal in the sense that if a random person believed one of those was imminent they would quite rightfully freak the hell out and (one hopes) try to stop it or control the damage. They’re weird maps of reality, but the morality isn’t weird at all. As a group they get massive under-investment, and if we funded all of them ten times more, even if we also did that for a bunch of deeply stupid similar things, the world would be a safer place at a discount price. In this sense, we can all strongly agree we want to ‘keep EA weird.’ (6F) Encourage Worrying About Weird Stuff.

Then there’s the morally weird. Scott pointed to a few examples, and there was some very good analysis in various comments and posts about what the heck is going on there. I think the right model is that these people started by taking the idea that (6G) Suffering Is Bad, and (6H) Happiness Is Good. I don’t disagree, but they then get more than a little carried away. That gets turned into the only thing that matters, and these principles get treated as axioms on the same level as I think therefore I am or the existence of rice pudding and income tax.

The people involved then saw where that led, drawing increasingly weird conclusions that made less and less sense. After lots of thinking, they then decided to endorse their conclusions anyway. I’m sorry, but when you are worried that protons are suffering when they repel each other, you screwed up. (7A) Wrong Conclusions Are Wrong. (7B) Life Is Good. I still am a big believer in (7C) Yes, You Can Draw And Use Long Chains of Logical Inference but they’re pretty easy to mess up. If your chain ends up somewhere false, it’s time to find at least one mistake. In my mind such people have made several mistakes, some more obvious than others. I’d start with how you are defining suffering and why exactly you have the intuition that it is so importantly bad.

In general, (8A) The Past Is Not A Dystopian Nightmare, and (8B) Nature Is Not A Dystopian Nightmare. Again, Life Is Good. No one is saying (I hope) that awful things didn’t happen all the time, or that things could have been better, but I consider saying the past had negative utility to be one of those signposts that you messed up.

A big issue in EA, which many have written about and so I have some research I should likely do before writing about it, is (9A) How To Cooperate With Human Paperclip Minimizers. I think that not only are some of the absurd, hopelessly weird EA causes orthogonal to value, but so are some of the less weird ones. In general, I follow the principle of (9B) Help People Do Things, especially help them figure out how to do things more efficiently, even when I think the things are pointless, so this doesn’t get in the way too much. I could also just write (9C) Stop Paperclip Minimizing and try to argue that those people are doing something useless, but I do not need that kind of trouble, and I doubt my arguments would be convincing at this stage anyway. Still, I am tempted, I think largely but far from entirely for the right reasons. At some point, one must try to figure out (9D) What Is Good In Life?

A last important thread is the problem of motivation. (10A) Yay Motivation! It’s hard to get motivated, and it’s only getting harder. Just about everything we use to motivate ourselves and others is now considered unhealthy in one form or another, and/or a cause of unhappiness. It’s not that this is wrong, exactly, but we need to get our motivation from somewhere. If we’re going to withdraw things from the pool of officially acceptable motivations, we need to be adding to it as fast or faster, or we’ll get a lot of people who don’t have motivation to do things. Which we have.

A particular focus of this problem is ambition. This problem is hard. Empty ambition is clearly harmful to happiness and motivation, but it can also lead to real ambition, and the things that harm empty ambition seem to also harm real ambition, or prevent it in the future. (10B) On Ambition would address this hard problem; I ended up folding some of it into 6B, but it deserves a deeper dive. Another way to look at the issue is, we want people to feel a little sad that they don’t do more or aim higher, or somehow make them prefer doing more to doing less, because we want them to strive to do more, but we also want them to not feel too sad about it, especially if they’re doing plenty already. Scott Alexander feeling bad about not doing enough is kind of insane.

We particularly want to make sure that people can work on themselves first, put their own house in order, and keep themselves rewarded for all their hard work. We need a robust way to defeat the argument that everything you do is killing someone by way of failing to fund saving their life. Thinking in this way is not cool, it’s not healthy, and it does not lead to lives being net saved. We must deal with (10C) The Altruist Basilisk. It is (10D) Out To Get You for everything you’ve got. We must take a day of rest, and (10E) Bring Back The Sabbath.


30 Responses to Paths Forward on Berkeley Culture Discussion

  2. rossry says:

    Oh boy am I excited to see this play out over the coming months^H^H^H^H^H^H years. Thank you for sketching it out; it’s useful to have an explicit motivation to get back up to attack^H^H^H^H^H^H blogging speed myself, in order to have the footing to respond down the road.

  3. benquo says:

    On (1A) Yay Folk Values, one thing we need to think through here is the extraordinary impact of the Abrahamic religions, and New-Testament-centric reformed Christianity in particular. When Scott says that it looks like the Quakers overwhelmingly won, he’s saying something very important – Jesus and his followers laid out a set of protocols for human cooperation that somehow got buried in the Roman Empire, but got enough people to make copies of the relevant texts that George Fox was able to reconstruct the project millennia later and actually do it. (Probably this owes a debt to Judaism’s social technology of text-preservation.)

    Virtues: values freedom and well-being of all human beings, truthfulness. Flaws: doesn’t work well as a base substrate for society because it doesn’t have the will to live, doesn’t know about history, doesn’t have adequate steering mechanisms to avoid tearing itself apart in the endgame.

    Ideological liberalism – both its virtues and its flaws – seems like the result of victorious Christianity. The Cathedral is atheistic because speaking the truth is an important Christian virtue, and God doesn’t exist. Christianity was designed to be robust to hostile or hypocritical rulers both temporal and spiritual (as was the case in Rome and much of later European history), but once the leaders started sincerely believing the thing it seems like it’s begun to eat its own tail, e.g. taking tolerance as a primary value to the point of tolerating actively hostile substructures in its own society. Radical egalitarianism appears to have destroyed our society’s ability to teach intellectual history in an analogous phenomenon.

    Fortunately, this very tolerance and egalitarianism mean that building alternatives is unlikely to be very effectively persecuted if we are smart at all.

  4. tcheasdfjkl says:

    (this got longer than I meant and pretty rambling, sorry)

    I’m a relatively new member of the community who’s been following along with this discussion with more than a little bewilderment at its premises, which I guess is evidence that the community has in fact changed in the way you describe and had already changed by the time I found it. (My reaction to Sarah’s original post was “duh? communities and projects are different things and should be optimized separately? I wasn’t aware that anyone disagreed with this?”.)

    My reaction to your posts and to some of the discussion around them has been to feel threatened (which I guess is just how I react to any sort of argument that any community I’m part of should be more exclusive (other than by keeping out people who abuse others)). “How to keep the wrong people out”, “MOPs” in the previous thread, some suggestions that people who are insufficiently rational or too generally unhelpful should be kicked out, etc.

    So like, who are the wrong people and what exactly must they (we??) be kept out of?

    In a sense it doesn’t make sense for me to be worried that a few people I don’t really know might not want me in their high-standards world-saving community given that I don’t really want to be in a high-standards world-saving community. I think what I’m reacting to, then, is (a) at some level just the word “rationalist”, which I guess is silly, but not sillier than the people wanting to claim it exclusively for the high-standards community, (b) what sounds like attempts to split the community.

    What would be lost if instead of trying to more narrowly define “rationalist” and carefully prune the set of people who get to count, the people who care about ambition and high standards created subgroups of rationalists, specific houses and projects and meetups and stuff for advancing their mission, without trying to re-steer the community as a whole? Call yourselves The Ambitious Ones or something rather than The Rationalists?

    (What would be gained: not splitting the broader community, not having stupid arguments over who gets to count as a rationalist, not potentially cutting off less high-standards people from significant life improvement)

    To be clear, I’m not like, a random hanger-on who doesn’t care about rationality hanging around because this community happens to have interesting people. I mean, I do like the social benefits, but like, a main reason that I appreciate the people in this community is because I have been reading and to an extent participating in rationalist discourse and my thinking has been increasingly influenced by the ideas I have encountered. (I guess another reason is that this is just otherwise a set of people I largely like and also I mostly like the community’s norms.) But like, I’m not just getting social benefits out of the community, I’m getting knowledge and intellectual stimulation and tools for thinking about tradeoffs in my life in saner ways – which are things I think rationality is also supposed to be for? I don’t want to do super ambitious shit and I don’t want to save the world but I want the ideas and the tools and the discussions and the friendship and ponies or whatever.

    I guess this is the same sort of question as the question of whether e.g. EA should grow and develop a broader base or lean towards excluding “casuals” and consist just of people who will themselves do rigorous work of evaluating possible interventions.

    If you think that the world is likely-enough-to-be-scary to end very soon unless people seriously focus on preventing that, it makes sense to trade inclusiveness for laser focus and avoiding distraction and all those things. But
    (a) I feel like there should at least be a sense that this is a tradeoff. (Maybe this feels like a “missing mood” to me?)
    (b) from your exchange with Sarah and your reluctance to define a specific mission and your statement that it’s okay if people don’t want to save the world, it doesn’t look like this is exactly the tradeoff you’re making?
    (c) if this doesn’t feel like a tradeoff to you, and the more ambitious and exclusive community is actually the community you would prefer, then probably you should still recognize that it’s a tradeoff among different people’s preferences?
    (d) the way people join a community isn’t usually to find e.g. the Sequences and be converted into perfect rationalist zealots overnight. People join gradually through social and intellectual exposure and gradually come to adopt these ideas and methods over time. If what you want isn’t specifically to save the world but is instead to preserve the methods, I’m not convinced that you need ambition and focus per se rather than like, reach and continued discussion?

    I think the focus on the necessity of the Mission, coupled with the reluctance to define a Mission, is something I find objectionable. It sounds like people want a generic Ambition and Effectiveness and maybe the aesthetics of Being on a Mission, rather than wanting to accomplish something in particular? (The aesthetics part is probably uncharitable but I feel like I need to be better convinced of this)

    • benquo says:

      I regret my phrasing a little bit in my prior MOPs comment, and have a somewhat more friendly hypothesis to offer.

      On reflection, Geeks and MOPs might be symbiotes in the absence of predatory strategies. MOPs are looking for something actually alive and generative in social reality, and the Geeks provide that. Geeks can get a lot from MOPs too; the Geeks are often lonely before they show up, and they usually make a sincere effort to participate in or support the thing the Geeks are making.

      The problem is that undefended communities with lots of MOPs attract predators that pose as Geeks but optimize the culture for attracting and extracting resources from MOPs. This has the side effect of destroying the value the Geeks were trying to create. Blaming the MOPs for this is subtly but importantly wrong; the actual problem is letting people who are *optimizing for attracting and exploiting MOPs* pass as Geeks. Most Geeks either don’t know about these predatory strategies, or don’t understand that defenses against them are possible.

      • benquo says:

        Replying to my own comment to tag its relation to something in the OP:

        Thread (2C) […] How the Mission was Lost, as the group allowed its culture and utility function to shift over time, partly through entry of non-core people, but also by choosing goals that were too much about attracting those people, which is doubly toxic to preserving your utility function and culture – you shift your own priorities in ways that are hard to compartmentalize or contain, and you bring in people with different culture.

    • benquo says:

      On the object level your subgroup thing seems promising.

      Giving up on the stricter definition of the word Rationalist seems a bit sad for me since the community formed around Eliezer’s writings, which very clearly used “Rationalist” to describe people aspiring to a very high standard. I’d be less sad about giving up the term if we had a plan for how *not* to let the *next* term get similarly stretched out, otherwise we’re still on a semantic treadmill, which seems like an unfortunate tacit acceptance of a substantial baseline level of dishonesty and corruption.

    • I don’t think that was especially rambly, and we here at DWATV welcome length. Length is good here (not free, but generally to be preferred), and I think the instinct that it is bad is a symptom of the problem. A lot of this is how we think about things rather than the conclusions.

      I hope that when/if I get to fill in the vision above, these concerns will be addressed, but I can’t promise that will ever happen, and the short versions are hard (and hard on the reader).

      I don’t want to kick people out – or at least, I don’t want to have to do so very often. What I want instead is to send a message and provide an atmosphere that attracts the right people while not attracting the wrong people, rather than trying to maximize for attracting more people. I do want to attract more people, but this is a tradeoff (and hell yes it is one) that must be made. Once you start distorting your culture to attract more MOPs rather than attracting the MOPs if and only if they are likely to have Geek aspirations, you create a vicious cycle that makes you mostly MOPs and destroys the value whether or not you get explicitly taken over and steered.

      I certainly don’t want to kick out ‘casuals’ who are working on improving their discourse, simply for being too casual. At least not from the Outer Party.

      The name Rationalist has a lot of brand equity at this point, and as a practical matter I think it’s too late to abandon it without a huge cost. We might have to do that, but I would be rather sad about it. Will say more at some future point. If we need an Inner Party with a different name, that would not be too surprising. I have always wanted to found The Order of the Illuminati, at least a little.

      As for The Mission thing, I think that “preserve a culture that has pursuit of truth and taking ideas seriously as its primary virtues, and honors doing and aiming to do big things as an implication because it is scope sensitive” is a much more important mission than people realize, especially now, and we can disagree on what the endgame mission itself looks like, but there is definitely some possibility that I am wrong about that last part. Perhaps we DO need to be explicit about the world saving mission’s endgame or all is lost, in a “any movement without an explicit object level end goal becomes about friendship and ponies as its end goal” kind of way, and letting people not care about AI safety is a mistake. Not sure. Certainly it’s something I think about a lot.

  5. blacktrance says:

    If people decide they don’t want to go on (2B) The Mission To Save The World or support that mission, I respect that decision, because that’s not the part that needs to preserve itself – it follows from thinking carefully about things. What can’t be compromised are the epistemic standards and values.

    If many rationalists would choose this as part of their goal, it seems that some of the other concerns would be significantly ameliorated. If we’re not trying to save the world, then ambition isn’t that important, keeping out the wrong people becomes easier, and huge time sinks (if they are indeed huge) become less of a problem because maintaining those values can mostly be done passively.

  6. Julia says:

    The part of this that felt least intuitive to me was ” (8A) The Past Is Not A Dystopian Nightmare, and (8B) Nature Is Not A Dystopian Nightmare” – I’ve been thinking about this a lot and trying to figure out what I believe. I’d be particularly interested to hear more about why you believe those two.

    I liked your method of flagging things that deserve their own posts as a way of getting content written down without getting stuck by needing to write all the content!

    • TheZvi says:

      Thanks. I was really nervous about using that technique and I’m glad to see it getting positive feedback and even inspiring others in at least one place.

      I hope I get to those two at some point. I have a hard time wrapping my head properly around the view that you could do the math and have the answer come out negative even with very different assumptions than mine, without projecting modern deeply-historically-weird developed-world human emotions/experiences onto different cultures and wild animals (and even then, still somewhat confused how you get a negative number but I guess maybe?). I also think the utilitarian answer is a strict lower bound, since the things it misses are even harder to get to come out negative.

      However, in all seriousness, my plan is to study my Meta-ethics before I proceed there, even though it did not take me long into my first text to figure out why everyone hates moral philosophy professors.

      • Julia Wise says:

        Some things that point me toward thinking the past was worse than now, though maybe not worse than negative:
        – Happiness data indicates people in developing countries are the least happy (not clear what level on this metric makes sense to call negative.) In the past basically everyone lived in what we’d now consider poverty or extreme poverty, so maybe much of the past was similar to present-day Ghana. But maybe developing countries currently have more disrupted social cohesion or something that make them worse than being a medieval serf.
        – Pinkeresque thinking on how things have gotten better
        – Even leaving out stuff like starvation or being eaten, the everyday experience of being an animal seems like one I would not enjoy (being rained on, usually being the wrong temperature, constantly being alert to something that might want to eat me). I just don’t know how to think about what a squirrel’s experience of that (if any) is and how it differs from mine.

      • TheZvi says:

        I do think things are better now than they were then, but I think the last comment points out how much we not only anthropomorphize animals when trying to see if they are ‘happy’ or ‘satisfied’ or what not, we actively WEIRD them (as in treat them like Westerners). The squirrel doesn’t know he’s not supposed to be alert all the time or that being rained on is supposed to ruin his day. Everything is going about as expected, and I don’t see any point to making that experience miserable for him because he doesn’t have Cable TV or indoor plumbing or what not. Unhappy beings tend not to be terribly efficient. By Squirrel Values, it’s hard to believe that ‘be a squirrel in the wild’, fulfilling its telos, isn’t doing pretty well.

        Even people are often quite happy and satisfied to pursue goals they have little chance of success in, in materially terrible conditions, when they think that is the just cause.

        We are doing a similar thing with developing / past nations too. I do think in general we have the right idea, but they enjoy their lives far more than present you would enjoy those lives, given your experiences. I also think that those places that show up in red on the map represent places that have become actually much worse, filled with violence and social breakdown. I also think that knowing about the developed/advanced world is not great for the life satisfaction or happiness of those that have to do without (the catch-up growth is good though).

        The serf is what it comes down to for the past. I’m very glad I’m not that serf, but the idea that this person’s life is not worth living and that they’d be better off never having been born? That strikes me as crazy.

  7. nostalgebraist says:

    I keep feeling like there is an important step missing in your posts on this topic. You have some beliefs in the following categories:

    (A) beliefs about what the rationalist community ought to do or become, going forward
    (B) beliefs about what the rationalist community is currently like
    (C) beliefs about what others in the community think on these issues (i.e. the issues in categories A and B)

    It is possible for someone to write about category A while ignoring category C — to say, “here’s the community I’d want, if I could snap my fingers and create it without having to take anyone else into account (which I can’t).” That sort of writing is useful. But it seems like you are doing something more than that — not just saying “here’s the community I wish for,” but “here is the community we should build together.”

    I think this latter kind of writing requires more explicit discussion of category C than you’ve been providing. I don’t just mean that you should bring up alternative views (about category A) for the sake of arguing against them, in favor of your own — although that is good too. But more importantly, an argument about “what the rationalist community should become” has to specify the connection between the proposal and existing conceptions of rationalism. You’re free to dismiss some existing conceptions as bad, but there needs to be a clear reason why the proposal is deeply aligned with core values actually held by rationalists, and not just a neat proposal for some new community.

    An analogy: if you’re a Christian schismatic (say, Luther / Calvin / et al.), your task is not just to argue that the dominant church is doing things wrong. It’s also to convince the general public that the thing you are hawking is still Christianity — indeed, even a “truer” Christianity. You’re going to cite Biblical precepts, not just your own opinions (even if there are good non-Biblical arguments for those opinions), because you’re trying to sell your audience on a new version of Christianity, not on your specific opinions or your good judgment. The goal is to create a package that existing Christians can look at and say “wow, I should be doing this because I’m a Christian” — where the (core of the) existing stuff motivates the new stuff.

    In short, this whole roadmap is heavily slanted toward arguing for Zvi’s New Wonderful Community, and ought to have more focus on why Zvi’s New Wonderful Community is in fact the rationalist community (in some “truer” form), and that it is an intuitive thing to build given what rationalists already believe (purged of some dross, but specific dross, clearly delineated from the good “core”).

    • nostalgebraist says:

      (Whoops, should have checked my italics tags more closely, was expecting to be able to edit my comment after posting)

    • TheZvi says:

      (Yeah I should definitely look into giving people the power to edit comments for some period, which would clearly be good. There’s probably a setting somewhere.)

      Interesting. I like that this comment gives me permission to think in those terms for a bit. I go back and forth on whether invoking religious analogies is a good idea but mostly think strongly not to go there – my beliefs seem to cash out as something like “that would be quite useful but if not used very carefully would freak people out too much, and perhaps even if used super carefully.” I do think it is illustrative. I know I’m sad that Scott gets to repeatedly say “they enslave their children’s children who make compromise with sin” and I can’t even say it once in the post that’s basically making that exact point even though Lowell is way more poetic than I will likely ever be, because, well, yeah. Sad.

      It’s funny, I went to write the sentence that started “Christians believe that the core of being a Christian is…” to set up the one about Rationalists, and realized I don’t know enough to finish the sentence. I can generate a few hypotheses, but none of them seem like great fits, so I notice I’m confused about what that core actually is.

      My view of Rationalist core belief is that it is something like (not my final formulation by any means), we must face and seek truth and its consequences. Speak the truth, even if your voice trembles, and all that. Eliezer once said that the best informal definition of Rationalism he’d heard was “that which can be destroyed by the truth should be” and I don’t think that is quite right (nor do I fully endorse the principle, since I can counterexample it pretty hardcore), because I think it’s more the idea that this truth thing, and learning how to think and do discourse and such, is really freaking important and you need to fight for it like it is your Something To Protect (whether that’s because it is that thing itself, or it is the only way to in turn protect the other thing).

      Inside that metaphor, I’m arguing less for schism and more for revival. I’m saying that the values of 2007 Less Wrong are the true spirit of Rationalism, have been sublimated to other things, and need to be restored to their rightful place. The people worship (metaphorical) false idols, have sinned and must repent. I’m also saying, a lot of you know it, too, even if some of the people think it was and is a good idea.

      So I guess it comes down to me not seeing this vision as all that schismatic or revolutionary, nor did I feel an instinctive need to justify why it lines up with what Rationalist ideas/goals are. If you or others feel like this vision feels importantly ‘not Rationalist’ then I very much want to know that, and what gives people that feeling. That seems like important feedback to have. I also don’t really want to be claiming what others believe, that seems arrogant/bad, although there are ways to do the thing carefully if it needs to be de facto done.

      Certainly I could cite and/or link to the sacred texts far more often, if I was so inclined. I’m of mixed feelings on how often I should do that, as well.

      • nostalgebraist says:

        Thanks for the reply. My perspective here comes from my background in a part of the “rationalist community” that’s effectively somewhat distinct from (I hope this makes sense) the Berkeley-Wordpress-Facebook axis. I hang out mostly on tumblr; I’m mostly here for interesting conversations and am not strongly committed to a big mission (there are many rationalists like this); I’ve always been critical of a lot of core stuff like MIRI and Bayesianism. People sometimes think of me as a noteworthy person in rationalism (I’ve written two novels that got somewhat popular in the community, I’m referenced several times in Unsong, strangers recognize me at meetups), but I live in this sub-bubble that doesn’t take any of the ~saving the world~ stuff very seriously.

        I mention all this for two reasons. First, to say there’s a lot more to the actually-existing “rationalist community” than what’s going down in Berkeley, and from the periphery it does not look like Berkeley has any special concentration of people who are doing important or interesting things. Second, to say that I’ve experienced if anything a negative correlation between professed interest in the big dramatic stuff (saving the world, agency/protagonism, making a point of “caring about truth”) and actual productivity on world-saving or truth-seeking. To put it crudely, the more someone writes about rationality per se, the less they practice it in a way I find valuable. If I wake up and find myself caring about truth, I will not seek the bloggers who write about Truth, I will seek the ones who just write things that are true. “Those who can’t do, teach.”

        What explains this observed correlation? One hypothesis is that the status quo(s) in truth-seeking are actually quite efficient (if not legibly so) — probably not global optima, but local optima on a complicated and twisty fitness surface, such that small perturbations will hurt you and large perturbations are shots in the dark. Thus the people who mostly use status quo methods on object-level problems get results, while the people who try to improve upon the status quo will either invent incorrect ideas about how to do so — corresponding to local moves away from the local optimum — or ideas that claim more than is known about the fitness surface, and which amount to cover stories for sampling (as good as) random points.

        This seems like the sort of thing that truth-seeking rationalism ought to be seriously thinking about. Instead, I mostly see stuff that looks like trying to “brute force” its way through — fight harder, believe more strongly in capitalized Values, move in the same direction more efficiently (via military discipline in Dragon Army, say). What if there were simply fewer low-hanging fruit in these directions than expected? What if the great world-saving relevance of these things (ideals, practices, habits of thought) should be destroyed by the truth, because it can be?

    • TheZvi says:

      I have not properly explored Tumblr, other than including Scott in my RSS, so I may well be missing value there due to my dislike of the platform and instincts that I shouldn’t be exploring in that direction. In particular, I’m worried that I’m missing out on the people.

      I actually agree with you that if you think you need to be saying explicitly “I care about truth” or “save the world” in a post, it probably means something went wrong somewhere. Truth seeking should be a common knowledge background assumption, obvious from the behaviors it leads to, not something we need to be asserting all the time. Saving the world either follows logically from other beliefs combined with scope sensitivity, or it doesn’t. Or at least, we have founding documents that should mostly cover those bases, most of the time.

      In this case, I’m biting that bullet explicitly, and saying ‘it seems that what should be a common knowledge background assumption isn’t true anymore, so we need to fix that.’

      My plan with this blog was to not say those things explicitly/centrally, at least not often, but rather get on with the show and let the show speak for itself – to show how I think about things, rather than going meta; the concept posts come out of having a concrete thing I want to talk about, realizing that I need a concept to do that, and then refocusing on the concept (and often never getting to the original end point, but I think that’s fine, its purpose was to provide evidence that I needed to explain the concept carefully).

      The anti-correlation point is real, as well. My full model of why, and how I think about it, is at least post-long, but it goes something like:

      Most things are at a local maximum. When we enter the community, we reject that maximum for some combination of reasons: we can’t get to the maximum via normal methods (e.g. certain types of people simply fail to absorb the standard model properly because it’s not how we think/tick, myself included, and need to at least brute force our way back there if we’re not going to go somewhere else, or be lost in no-man’s land); we are bothered by the standard model (it’s not legible, it feels arbitrary, it fails a lot of places, it’s trapped in a local maximum in many ways, we find alternatives more satisfying or interesting, etc); we feel that the standard model is going to fail hard on the problems we care about (it’s heuristic based, so once we get to a certain level of abstraction or long chains of reasoning, the standard model stops working); or we simply see one of the knobs in an obviously wrong place (in isolation), turn it to its correct setting (e.g. correct for a bias), and we’re off to the races.

      The standard model is the ultimate easy mode. If you want what most people want, including a truth that’s good enough to get you through most days, it’s where it’s at.

      Playing your world model in hard mode kind of sucks, especially at first. In the sufficiently short term, turning almost any knob even to its ‘correct’ position will cause more problems than it solves, even before taking the penalty for having a different maximum than everyone else, and it certainly won’t pay off right away. If you were previously at the standard model, your effectiveness will go down. CFAR is going for the art of knowing which knobs you can turn first and where, and actually getting into a better spot with each step. Each of these also makes other moves towards the new target more likely to be positive or at least less negative, as well, at least in theory.

      The advantage of tuning the bigger knobs in the ways we do is that it gets us to the part of the space where one can actually start maximizing efficiently and creatively rather than doing ML-style hill climbing. This is why the new points aren’t random.

      Once you have tuned the bigger knobs to where you want them to be, even at short-term cost, you can now start tuning everything else and create a new maximum, and/or use your new improved maxima-seeking skills to move to a point even better at maxima-seeking, and so on. The further into this process you are, the less you talk explicitly about truth or your big mission, because you’re done with that step (at least for now) and are down in the weeds making the damn thing work.

      The twin temptations are to do this on too many meta levels, or too few. Too many and you never make your actual life or the world better; too few and you stop doing the thing and are stuck wherever you land, with a strong pull towards returning to the standard model that you will not be well equipped to deal with. To deal with this you need a level that protects itself first, without going permanently higher, while allowing you to then focus lower, which is the central/hard problem.

      Thus, if someone is talking about higher level things too explicitly, chances are they are at the start of their journey, or they took a wrong turn, or are addressing someone or some group they think is off (or not yet on) the path. The Sequences are explicitly the third one.

      One community problem is that you want people who are well along on their path to be part of groups/meetups that are primarily in the early stages, for many good reasons, but dealing with such things has become tedious to them. NYC right now has solved this with a group that is kind of The Next Generation, combined with older members who are willing to make the investment even if it’s not what they would most prefer for themselves.

      I believe that I have done sufficient optimization in Rationalist-style places, far away from the Standard Model, that I want to share that point in space as a target, and am hoping I can make it legible, but it does seem like as models get more useful they need to incorporate things that are not as legible, or at least require a lot more explanation, so it’s a slow process.

      At some point I need to say that top-level and more carefully.

  9. srconstantin says:

    I think altruism is Actually Bad, and this is basically where we fell down a few years ago. I didn’t like the push to “grow the movement” at the time, and I didn’t like EA when it was new, and I didn’t like that period for a few years where on the CFAR mailing list you weren’t allowed to refer to a group of people called “rationalists” but you were allowed to refer to “EAs.”

    Most rationalists at the time didn’t notice a problem because they literally had no idea what the bad kind of altruism *is* (it’s not desire to help people, it’s a deeper form of crazy). If you’re a basically reasonable person, you see people working on trying to end infectious disease in the developing world, and you say “that sounds like a good project!” I, with my Crazyvision, saw that there was something creepier going on, on the psychological/zeitgeist/motivation level, but most people chalked that up to my idiosyncrasies. And, indeed, there are a lot of reasonable EAs who just want to do useful things like preventing disease. It’s hard to talk about the other stuff without sounding like an asshole — I’ve tried, and failed.

    Basically, I don’t think people like Zvi and me *knew* anything in 2012 that we’ve forgotten now. I certainly feel like I’ve grown somewhat. What was better back in the day was that we had to self-censor less. It was more okay to be a little bit of a jerk — making strong claims without hedging, talking to a presumed audience of people who had a lot in common with ourselves. There were more finance people in the old OBNYC community, which might have had something to do with it. That came with more antifeminism and more elitism, which I found a bit stressful, but there was a vibe of “obviously it’s better to be frank than to be politic”, which is completely lost today.

    • srconstantin says:

      My true reaction though, is ‘no no no no don’t try to engineer communities’. I don’t like calls to ‘be’ someone else than you are. I also don’t like trying to reify “how awesome The Rationalists are” when…some of us are, some of us ain’t. It’s trying to fit reality into a box that’s the wrong shape.

      A guy like Nostalgebraist is obviously nice, intelligent, and generally value-creating in his life. He’s also not my *comrade* on a mission, because he doesn’t want to be. Instead of going “ow ow how do I reconcile these things?? I like him but he isn’t totally aligned with me?!” I’ve found it’s better to just send my brain into a loner-space, where people are free to do what they want, they don’t *have* to come along with me, we are all just motes in space, and I’ll be fine without the moral support or ideological alignment of any one person in particular.

      • TheZvi says:

        I certainly can sympathize with the instinct that trying to force things doesn’t work, or won’t get us the types of things we want.

        The problem is, there are lots of people out there, and lots of systematic forces, trying to engineer things in ways that will remove the things we value from the world. If we all go down the incentive gradients, that’s what happens, as much as I love me some loner-space, and I love me some loner-space.

        The idea that everyone needs to be value-aligned in all ways and for all things, especially social and political things, is if anything part of the problem we face rather than the solution. The last thing I want is for everyone to need to parrot the same goals and viewpoints across the board, or to freak out that this hasn’t happened.

        The Great Conversation can and should be a Great Debate. However, if we lose the ability to talk, to debate, and with it the ability to think, then all is lost, and I think we all see that this is what is happening.

    • TheZvi says:

      Thank you so much for the courage to speak your truth here, even if your voice trembles and you can’t quite find the right words.

      Trust your crazyvision – you are always on to something important, even if you don’t have it quite right or can’t put it into words. Talking about this stuff without sounding like an asshole, or someone pretty evil, is stupidly hard, and we don’t get that many shots at it.

      The right words are damn hard, and getting harder. The constraints we operate under these days are many. We self-censor. I censor even the claims I explicitly say I’m not making (so there’s the claims I make and defend, the claims I make but don’t claim to defend, the claims I say I’m not making let alone defending, and then the claims I want to make but don’t even want to say I’m not making!) Esotericism is used. We make our claims only carefully with caveats, only where we can defend them with hard examples and short term pointed-to clear effects.

      What we had back then was the freedom to talk. To make mistakes of all kinds along the way. To speak the truth even when it wasn’t nice. We need it back, or else. But yes, having a bunch of New York finance helped. The same way that a bunch of Berkeley is decidedly not going to help. If it’s better to be politic than to be frank, we are lost, no?

      I also think there’s an equally important second half, that it’s better to be accessible and inclusive than to say the things worth saying. Back in 2008-2012, I often felt like things on Less Wrong were over my head, or discussing them was above my pay grade. It was tough to keep up and find useful things to say. Pressing the comment button was SCARY. Obviously that has huge costs, and if I was scared I can only imagine how many others felt (although it can take a lot of skill or wisdom to realize that, compared to the master, you don’t have any).

      In particular, recently I’ve run into links to a few things Anna wrote, and in each case I felt shame that we are no longer willing to do the thing she was doing with those posts. Not really. Do we still have the ability? Does she even still have it? I don’t know. I hope so.

    • Quixote says:

      Interested in hearing why you think Altruism is Actually Bad. I am (and have for several years been) outside the community, so my view of EA is that each year they publish a note telling me which tropical disease can be prevented at the lowest cost. Usually it’s malaria, sometimes it’s schistosomiasis, maybe someday it will be yellow fever, or whatever. They publish and say what it is and then I send the “alleviate third world poverty” portion of my charity budget to the right charity for that disease that year. This seems pretty innocuous. I read SSC, and the conference described there seems a little more nuts, but it didn’t strike me as too likely to be really problematic at any large scale; it also didn’t strike me as difficult to just ignore. Maybe I’m underestimating risks / dangers…

      • srconstantin says:

        Giving to charity is not bad. Insofar as it actually works at things like saving lives, it’s very good. The people who are the most nitty-gritty involved with figuring out how to help the global poor (some GiveWell workers as well as non-EAs such as Chris Blattman, the GiveDirectly guys, and the Gates Foundation) are doing cool and beneficial work.

        The philosophy/ethics side is the creepy side. Altruism is more like the idea that “being good” through serving others is the rent you pay to the universe in order to be allowed to exist, and the measure of how good you are is not how much you help but how much you play the martyr, how much you sacrifice. It’s not about creating value, it’s a totally different framework.

        When people who aren’t martyrs turn their minds to working on poverty, they can do cool stuff. When people who were interested in other stuff get nerdsniped by the meme “maybe I should be more of a martyr”, they may donate more, but they also quit doing independent thinking (because that’s usually motivated by curiosity not piety), and they alienate people who wanted to do their thinking in a prickly way, or who were interested in working on things for profit or science rather than altruism. A lot of good stuff gets done in the world from people who aren’t in a church-lady mindset.

  10. Quixote says:

    Not a response to the most recent comment, but to the original post

    The OP sketches out a bunch of ideas that could possibly be future posts. I don’t know how representative a reader I am (first guess not very), but here are all the future posts clustered by how interested I am in reading them. It should go without saying that you have zero obligation to produce any of these or even consider my preferences in any way (it’s not like I pay for this content), but if the preferences of one reader would be interesting to you, they are below. Again, I am thankful that you produce content at all, I appreciate it, and this should not in any way be interpreted as a demand on your time or interpreted in any way that causes stress.

    Very much want to read
    (1A) Yay Folk Values
    (10E) Bring Back The Sabbath
    (10A) Yay Motivation!
    (10B) On Ambition

    Interested
    (2C) would then talk about How the Mission was Lost
    (9A) How To Cooperate With Human Paperclip Minimizers
    (3C) How To Keep The Wrong People Out
    (4A) which would be Against Polyamory
    (3D) Become Worthy, Avoid Power
    (3F) Teaching The Way By Example
    (4B) Bad Money Drives Out Good
    (6B) Some Potential New Effective Altruist Ideas
    (5A) is that this is partly because of fixable problems. We should Fix It
    (5B) is to explain why keeping the projects Within the System is important

    Things I might disagree with and would be interested in hearing why we differ, but which would not be interesting to me if we were in agreement
    (6D) Can I Interest You In Some Virtue Ethics?
    (6E) Against Utilitarianism
    (8A) The Past Is Not A Dystopian Nightmare
    (8B) Nature Is Not A Dystopian Nightmare

    Probably important to the world but from my perspective might be preaching to choir
    (2A) about the mission to preserve our mission and values
    (2B) The Mission To Save The World
    (3B) Nuke Goodhart’s Law From Orbit.
    (3E) Goodhart’s Law Is The Enemy
    (6F) Encourage Worrying About Weird Stuff.
    (7A) Wrong Conclusions Are Wrong
    (7C) Yes, You Can Draw And Use Long Chains of Logical Inference but they’re pretty easy to mess up

    Other Content
    (5C) you need A Culture Of Extraordinary Effort
    (5D) Hire and Be Hired By Your Friends And Network
    (6A) The Greatest Charity In The World is not Open Phil or the Bill Gates Foundation, it’s Amazon.com
    (7B) Life Is Good
    (9B) Help People Do Things
    (9C) Stop Paperclip Minimizing
    (9D) What Is Good In Life?
    (10C) The Altruist Basilisk
    (10D) Out To Get You for everything you’ve got

    Things that sound obvious but probably more is meant
    (6G) Suffering Is Bad
    (6H) Happiness Is Good

