The Story CFAR

In addition to my donation to MIRI, I am giving $4000 to CFAR, the Center for Applied Rationality, as part of their annual fundraiser. I believe that CFAR does excellent and important work, and that this fundraiser comes at a key point, where an investment now can pay large returns in increased capacity.

I am splitting my donation and giving to both organizations for three reasons. I want to meaningfully share my private information and endorse both causes. I want to highlight this time as especially high leverage, due to the opportunity to purchase a permanent home. And importantly, CFAR and its principals have provided, and will continue to provide, direct personal benefits to me, so it is good and right that I give my share of support to the enterprise.

As with MIRI, you should do your own work and make your own decision on whether a donation is a good idea. You need to decide if the cause of teaching rationality is worthy, either in the name of AI safety or for its own sake, and whether CFAR is an effective way to advance that goal. I will share my private information and experiences, to better aid others in deciding whether to donate and whether to consider attending a workshop, which I also encourage.

Here are links to CFAR’s 2017 retrospective, impact estimate, and plans for 2018.


My experience with CFAR starts with its founding. I was part of the discussions on whether it would be worthwhile to create an organization dedicated to teaching rationality, how such an organization would be structured and what strategies it would use. We decided that the project was valuable enough to move forward, despite the large opportunity costs of doing so and high uncertainty about whether the project would succeed.

I attended an early CFAR workshop, partly to teach a class but mostly as a student. Things were still rough around the edges and in need of iterative improvement, but it was clear that the product was already valuable. There were many concepts I hadn’t encountered, or hadn’t previously understood or appreciated. In addition, spending a few days in an atmosphere dedicated to thinking about rationality skills and techniques, and socializing with others who had been selected to attend for that same purpose, was wonderful and valuable as well. Such benefits should not be underestimated.

In the years since then, many of my friends in the community have attended workshops, reporting that things have improved steadily over time. A large number of rationality concepts have emerged directly from CFAR’s work, the most central being double crux. They have also taken proven outside concepts and helped adapt them to the context of rationalist outlooks, an example being trigger action plans. I had the opportunity recently to look at the current CFAR workbook, and I was impressed.

In February, CFAR president and co-founder Anna Salamon organized an unconference I attended. It was an intense three days that left me and many other participants better informed and also invigorated and excited. As a direct result of that unconference, I restarted this blog and stepped back into the fray and the discourse. I have her to thank for that. She was also a force behind the launch of the new Less Wrong, as were multiple other top CFAR people, including but far from limited to Less Wrong’s benevolent dictator for life Matthew Graves, Michael “Valentine” Smith and CFAR instructor Oliver Habryka.

I wanted to attend a new workshop this year at Anna’s suggestion, as I think this would be valuable on many levels, but my schedule and available vacation days did not permit it. I hope to fix this in the coming year, perhaps as early as mid-January.

As with MIRI, I have known many of the principals at CFAR for many years, including Anna Salamon, Michael Smith and Lauren Lee, along with several alumni and several instructors. They are all smart, trustworthy and dedicated people who believe in doing their best to help their students and to help those students have an impact in AI Safety and other places that matter.

In my endorsement of MIRI, I mentioned that the link between AI and rationality cuts both ways. Thinking about AI has helped teach me how to think. That effect does not get the respect it deserves. But there’s no substitute for studying the art of thinking directly. That’s where CFAR comes in.


CFAR is at a unique stage of its development. If the fundraiser goes well, CFAR will be able to purchase a permanent home. Last year CFAR spent about $500,000 renting space, as renting the kind of spaces CFAR needs is expensive. Almost all of these needs would be covered by CFAR’s new home, with a mortgage plus maintenance that they estimate will cost at most $10,000 a month, saving roughly 75% on space costs and a whopping 25% of CFAR’s annual budget. The marginal cost of running additional workshops would fall even more than that.
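For the curious, the arithmetic behind those percentages checks out. Here is a quick back-of-the-envelope sketch; note that the annual budget figure is my own assumption, back-solved from the "25% of annual budget" claim rather than taken from any published number:

```python
# Back-of-the-envelope check on the space-cost claims above. The rent and
# mortgage figures come from the post; the annual budget is an assumption,
# inferred from the "25% of annual budget" claim, not stated anywhere.
rent_per_year = 500_000                      # approximate annual spending on rented space
owned_cost_per_year = 10_000 * 12            # mortgage + maintenance, upper estimate

savings = rent_per_year - owned_cost_per_year
space_cost_reduction = savings / rent_per_year

implied_budget = 1_500_000                   # assumption, back-solved from the 25% figure
budget_fraction = savings / implied_budget

print(f"annual savings: ${savings:,}")                    # $380,000
print(f"space costs cut by {space_cost_reduction:.0%}")   # 76%
print(f"share of implied budget: {budget_fraction:.0%}")  # 25%
```

(The exact space-cost reduction comes out to 76%, which the post rounds down to 75%.)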

In addition, the ability to keep and optimize a permanent home, set up for their purposes, will make things run a lot more smoothly. I expect substantial gains from this.

Whether CFAR gets to do that depends on the results of their current fundraiser, and on what they can raise before the end of the year. The leverage available here is quite high – we can move to a world in which the default is that a workshop is likely being run in any given week.


As with MIRI, it is important that I also state my concerns and my biases. The dangers of bias are obvious. I am highly invested in exactly the types of thinking CFAR promotes. That means I can verify that they are offering ‘the real thing’ in an important sense, and that they have advanced not only the teaching of the art but also the art itself. It also means that I am especially inclined to think such things are valuable. Again as with MIRI, I know many of the principals, which means good information, but also might be clouding my judgment.

In addition, I have concerns about the philosophy behind CFAR’s impact report.

In the report, impact is measured in terms of students who had an ‘increase in expected impact (IEI)’ as a result of CFAR. Impact is defined as doing effective altruist (EA) type things: donating to EA-style organizations, working with such organizations (including MIRI and CFAR), pursuing a career path towards EA-aligned work such as AI safety, or leading rationalist/EA events. 151 of the 159 alumni with such impact fall into one of those categories, with only 8 contributing in other ways.

I sympathize with this framework. Not measuring at all is far worse than measuring. Measurement requires objective endpoints one can measure.

I don’t have a great alternative. But the framework remains inherently dangerous. Since CFAR is all about learning how to think about the most important things, knowing how CFAR is handling such concerns becomes an important test case.


The good news is that CFAR is thinking hard and well about these problems, both in my private conversations with them and in their listed public concerns. I’m going to copy over the ‘limitations’ section of the impact statement here:

  • The profiles contain detailed information about particular people’s lives, and our method of looking at them involved sensitive considerations of the sort that are typically discussed in places like hiring committees rather than in public. As a result, our analysis can’t be as transparent as we’d like and it is more difficult for people outside of CFAR to evaluate it or provide feedback.
  • We might overestimate or underestimate the impact that a particular alum is having on the world. Risk of overestimation seems especially high if we expect the person’s impact to occur in the future. Risk of underestimation seems especially high if the person’s worldview is different from ours, in a way that is relevant to how they are attempting to have an impact.
  • We might overestimate or underestimate the size of CFAR’s role in the alum’s impact. We found it relatively easier to estimate the size of CFAR’s role when people reported career changes, and harder when they reported increased effectiveness or skill development. For example, the September 2016 CFAR for Machine Learning researchers (CML) program was primarily intended to help machine learning researchers develop skills that would lead them to be more thoughtful and epistemically careful when thinking about the effects of AI, but we have found it difficult to assess how well it achieved this aim.
  • We only talked with a small fraction of alumni. Focusing only on these 22 alumni would presumably undercount CFAR’s positive effects. It could also cause us to miss potential negative effects: there may be some alums who counterfactually would have been doing high-impact work, but instead are doing something less impactful because of CFAR’s influence, and this methodology would tend to leave them out of the sample.
  • This methodology is not designed to capture broad, community-wide effects which could influence people who are not CFAR alums. For example, one alum that we interviewed mentioned that, before attending CFAR, they benefited from people in the EA/rationality community encouraging them to think more strategically about their problems. If CFAR is contributing to the broader community’s culture in a way that is helpful even to people who haven’t attended a workshop, then that wouldn’t show up in these analyses or the IEI count.
  • When attempting to shape the future of CFAR in response to these data, we risk overfitting to a small number of data points, or failing to adjust for changes in the world over the past few years which could affect what is most impactful for us to do.

These are very good concerns to have. Many of the most important effects of CFAR are essentially impossible to objectively measure, and certainly can’t be quantified in an impact report of this type.

My concern is that measuring in this way will be distortionary. If success is measured and reported, to EAs and rationalists, as alumni who orient towards and work on EA and rationalist groups and causes, the Goodhart’s Law dangers are obvious. Workshops could become increasingly devoted to selling students on such causes, rather than improving student effectiveness in general and counting on effectiveness to lead them to the right conclusions.

Avoiding this means keeping instructors focused on helping the students, and far away from the impact measurements. I have been assured this is the case. Since our community is unusually scrupulous about such dangers, I believe we would be quick to notice and highlight the behaviors I am concerned about, if they started happening. This will always be an ongoing struggle.

As I said earlier, I have no great alternative. The initial plan was to use capitalism to keep such things in check, but selling to the public is if anything more distortionary. Other groups that offer vaguely ‘self-help’ style workshops end up devoting large percentages of their time to propaganda and to giving the impression of effectiveness rather than actual effectiveness. They also cut off many would-be students from the workshops due to lack of available funds. So one has to pick one’s poison. After seeing how big a distortion market concerns were to MetaMed, I am ready to believe that the market route is mostly not worth it.


I believe that both MIRI and CFAR are worthy places to donate, based on both public information and my private information and analysis. Again I want to emphasize that you should do your own work and draw your own conclusions. In particular, the case for CFAR relies on believing in the case for rationality, the same way that the case for MIRI relies on believing in the need for work in AI Safety. There might be other causes and other organizations that are more worthy; statistically speaking, there probably are. These are the ones I know about.

Merry Christmas to all.



10 Responses to The Story CFAR

  1. MrBubu says:

    Thanks for writing this up. I am now considering donating some money to cfar as well, mainly because of the possible permanent “home”. I did (and do) not consider CFAR to be effective on the same level as MIRI, AMF, etc. Hence I paid no attention to their fundraiser.

  2. Stephan T. Lavavej says:

    Mistake repeated three times: “principle” should be “principal”.

  3. Pingback: Rational Feed – deluks917

  4. anonymous says:

    I unfortunately can not endorse CFAR. It would be trivial for CFAR to conduct surveys that would be considered evidence in a broader academic context (big 5, CRT, various other metrics). That they have not done so indicates to me that either this has not occurred to them (red flag), or they tried it and stopped due to poor results (red flag).

    • TheZvi says:

      I think it’s very reasonable to take the attitude of not endorsing due to lack of good evidence; you don’t have the data so you don’t endorse. I do that all the time!

      But I don’t think this particular lack is the red flag you think it is. I’m also confused why big 5 would be something CFAR would be expected to improve if CFAR worked. Maybe I’m thinking of a different thing? The CRT is something most people who go to CFAR have already seen, plus they’d likely already get perfect scores on it before coming, so I doubt you could really give it to such people and get anything useful (they might have tried it, gotten 90%+ on the pre-test, and given up). If there is a particular test you think they should be giving that they aren’t, I am curious what it is.

      Odd that you don’t think it is likely that they decided such evidence didn’t actually measure anything, and they didn’t want to waste what data collection points they were spending on bad metrics. Or that these aren’t the things that they are trying to accomplish, and measuring them would therefore not be useful. Or that they think that while some say now that this would ‘count as evidence in an academic context’ but that this is not a true objection/statement (or would only be true for massive effect sizes) and such people would simply move on to the next excuse. Often what happens is that different people have different things they think constitute evidence, and if you satisfy some of them others say you didn’t do the obvious right thing, and even dismiss the evidence you did gather as bad.

      I also wouldn’t want CFAR to be maximizing for student scores on traditional tests, which seems like it would risk corrupting things, and I think CFAR thinks about such things carefully. “Not occurring to them” seems highly unlikely as a hypothesis.

      In general, my experience is that ‘this would be considered evidence, you didn’t provide it, so that’s terrible’ tends to be a general demand for convincing evidence (broadly defined, including things like reputation and endorsement, various credentials and status markers, etc, in addition to studies or tests or what not). Not always, but often. But I am curious what exactly your model thinks would be ‘convincing to academia that CFAR worked’ that is plausible to administer and actually measures the thing CFAR is attempting.

  5. Quixote says:

    In the past I have been a donor to CFAR, in fact, I think I’ve donated to it most years of its existence (all but the first I think). I did not donate this year.

    When CFAR was started it had a mission to improve rationality broadly; to make practical what the sequences aimed to do theoretically; to develop exercises and techniques to raise the sanity waterline. After the initial few years, it published one checklist but basically didn’t put out any materials. It held some high priced seminars but didn’t ship anything. I had a call with the director (Anna) and expressed my concerns and was told the workshops served as a source of interim funding but eventually CFAR would go wide. A few more years passed and there was still no production or wide distribution of materials. Recently, CFAR at least issued some clarification of what it thought its mission was and what it intended to do; and this amounted to a significant pivot away from its original mission.

    • TheZvi says:

      I agree with a lot of this. The mission definitely changed (Anna said so explicitly on LW), in ways that I’m also not intuitively thrilled about. I would definitely like to see more materials distribution and attempt to aid general rationality training/learning/education be more the thing going on here, and I’m sad about that shift. I’m not sure what I think about the AI Safety focus – I do think that’s the right cause to focus on, but focus on a cause at all makes me worry and takes away from what we both would like to be the main focus. I’m also worried that this means that to be a place people give they have to both endorse the new mission and the original mission of rationality teaching, and that makes it much harder. Cross-disciplinary fundraising can’t be easy, and I see this not fitting into anyone’s buckets.

      I hope to learn more during my trip to SF in two weeks, where I very much want to talk to various principals.

      • Quixote says:

        Did you get the opportunity to talk to CFAR principals about this while you were in SF? If so is there anything you feel comfortable publicly reporting from your conversation? Thanks.

      • TheZvi says:

        Alas I mostly was not; Anna’s schedule didn’t work and I was super duper busy the whole time. I did speak with one employee, and I can report that this updated me in the direction of things being healthier, but I didn’t get a chance to explore these issues in depth. Hopefully next trip I will have more time.

      • Quixote says:

        Thanks for the update.
