The Story of CFAR

In addition to my donation to MIRI, I am giving $4000 to CFAR, the Center for Applied Rationality, as part of their annual fundraiser. I believe that CFAR does excellent and important work, and that this fundraiser comes at a key point where an investment now can pay large returns in increased capacity.

I am splitting my donation and giving to both organizations for three reasons. I want to meaningfully share my private information and endorse both causes. I want to highlight this time as especially high leverage due to the opportunity to purchase a permanent home. And importantly, CFAR and its principals have provided and in the future will provide direct personal benefits, so it’s good and right to give my share of support to the enterprise.

As with MIRI, you should do your own work and make your own decision on whether a donation is a good idea. You need to decide if the cause of teaching rationality is worthy, either in the name of AI safety or for its own sake, and whether CFAR is an effective way to advance that goal. I will share my private information and experiences, to better aid others in deciding whether to donate and whether to consider attending a workshop, which I also encourage.

Here are links to CFAR’s 2017 retrospective, impact estimate, and plans for 2018.


My experience with CFAR starts with its founding. I was part of the discussions on whether it would be worthwhile to create an organization dedicated to teaching rationality, how such an organization would be structured and what strategies it would use. We decided that the project was valuable enough to move forward, despite the large opportunity costs of doing so and high uncertainty about whether the project would succeed.

I attended an early CFAR workshop, partly to teach a class but mostly as a student. Things were still rough around the edges and in need of iterative improvement, but it was clear that the product was already valuable. There were many concepts I hadn’t encountered, or hadn’t previously understood or appreciated. In addition, spending a few days in an atmosphere dedicated to thinking about rationality skills and techniques, and socializing with others who had been selected to attend for that same purpose, was wonderful and valuable as well. Such benefits should not be underestimated.

In the years since then, many of my friends in the community have attended workshops, reporting that things have improved steadily over time. A large number of rationality concepts have emerged directly from CFAR’s work, the most central being double crux. They’ve also taken known outside concepts that work and adapted them to the context of rationalist outlooks, an example being trigger action plans. I had the opportunity recently to look at the current CFAR workbook, and I was impressed.

In February, CFAR president and co-founder Anna Salamon organized an unconference I attended. It was an intense three days that left me and many other participants better informed and also invigorated and excited. As a direct result of that unconference, I restarted this blog and stepped back into the fray and the discourse. I have her to thank for that. She was also a force behind the launch of the new Less Wrong, as were multiple other top CFAR people, including but far from limited to Less Wrong’s benevolent dictator for life Matthew Graves, Michael “Valentine” Smith and CFAR instructor Oliver Habryka.

I wanted to attend a new workshop this year at Anna’s suggestion, as I think this would be valuable on many levels, but my schedule and available vacation days did not permit it. I hope to fix this in the coming year, perhaps as early as mid-January.

As with MIRI, I have known many of the principals at CFAR for many years, including Anna Salamon, Michael Smith and Lauren Lee, along with several alumni and several instructors. They are all smart, trustworthy and dedicated people who believe in doing their best to help their students and to help those students have an impact in AI Safety and other places that matter.

In my endorsement of MIRI, I mentioned that the link between AI and rationality cuts both ways. Thinking about AI has helped teach me how to think. That effect does not get the respect it deserves. But there’s no substitute for studying the art of thinking directly. That’s where CFAR comes in.


CFAR is at a unique stage of its development. If the fundraiser goes well, CFAR will be able to purchase a permanent home. Last year CFAR spent about $500,000 on renting space. Renting the kind of spaces CFAR needs is expensive. Almost all of these needs would be covered by CFAR’s new home, with a mortgage plus maintenance that they estimate costing at most $10,000 a month, saving 75% on space costs and a whopping 25% of CFAR’s annual budget. The marginal cost of running additional workshops would fall even more than that.
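The arithmetic above can be checked directly. A quick sketch using the figures from the post (the annual budget total is inferred from the stated percentages, not given directly):

```python
# Figures from the post; the budget total is inferred, not stated.
rent_per_year = 500_000                      # what CFAR spent renting space
own_home_per_year = 10_000 * 12              # mortgage plus maintenance estimate
savings = rent_per_year - own_home_per_year  # 380,000 per year

space_cost_saving = savings / rent_per_year  # 0.76, the "75% on space costs"
implied_budget = savings / 0.25              # ~1.52 million, if that is 25% of budget
```

The two stated percentages are consistent with each other given an annual budget of roughly $1.5 million.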

In addition to that, the ability to keep and optimize a permanent home, set up for their purposes, will make things run a lot smoother. I expect a lot of gains from this.

Whether or not CFAR will get to do that depends on the results of their current fundraiser, and on what they can raise before the end of the year. The leverage available here is quite high – we can move to a world in which the default is that each week there is likely a workshop being run.


As with MIRI, it is important that I also state my concerns and my biases. The dangers of bias are obvious. I am highly invested in exactly the types of thinking CFAR promotes. That means I can verify that they are offering ‘the real thing’ in an important sense, and that they have advanced not only the teaching of the art but also the art itself. But it also means that I am especially inclined to think such things are valuable. Again as with MIRI, I know many of the principals, which means good information but also might be clouding my judgment.

In addition, I have concerns about the philosophy behind CFAR’s impact report.

In the report, impact is measured in terms of students who had an ‘increase in expected impact (IEI)’ as a result of CFAR. Impact is defined as doing effective altruist (EA) type things: donating to EA-style organizations, working with such organizations (including MIRI and CFAR), following a career path towards EA-aligned work (including AI safety), or leading rationalist/EA events. 151 of the 159 alumni with such impact fall into one of those categories, with only 8 contributing in other ways.

I sympathize with this framework. Not measuring at all is far worse than measuring. Measurement requires objective endpoints one can measure.

I don’t have a great alternative. But the framework remains inherently dangerous. Since CFAR is all about learning how to think about the most important things, knowing how CFAR is handling such concerns becomes an important test case.


The good news is that CFAR is thinking hard and well about these problems, both in my private conversations with them and in their listed public concerns. I’m going to copy over the ‘limitations’ section of the impact statement here:

  • The profiles contain detailed information about particular people’s lives, and our method of looking at them involved sensitive considerations of the sort that are typically discussed in places like hiring committees rather than in public. As a result, our analysis can’t be as transparent as we’d like and it is more difficult for people outside of CFAR to evaluate it or provide feedback.
  • We might overestimate or underestimate the impact that a particular alum is having on the world. Risk of overestimation seems especially high if we expect the person’s impact to occur in the future. Risk of underestimation seems especially high if the person’s worldview is different from ours, in a way that is relevant to how they are attempting to have an impact.
  • We might overestimate or underestimate the size of CFAR’s role in the alum’s impact. We found it relatively easier to estimate the size of CFAR’s role when people reported career changes, and harder when they reported increased effectiveness or skill development. For example, the September 2016 CFAR for Machine Learning researchers (CML) program was primarily intended to help machine learning researchers develop skills that would lead them to be more thoughtful and epistemically careful when thinking about the effects of AI, but we have found it difficult to assess how well it achieved this aim.
  • We only talked with a small fraction of alumni. Focusing only on these 22 alumni would presumably undercount CFAR’s positive effects. It could also cause us to miss potential negative effects: there may be some alums who counterfactually would have been doing high-impact work, but instead are doing something less impactful because of CFAR’s influence, and this methodology would tend to leave them out of the sample.
  • This methodology is not designed to capture broad, community-wide effects which could influence people who are not CFAR alums. For example, one alum that we interviewed mentioned that, before attending CFAR, they benefited from people in the EA/rationality community encouraging them to think more strategically about their problems. If CFAR is contributing to the broader community’s culture in a way that is helpful even to people who haven’t attended a workshop, then that wouldn’t show up in these analyses or the IEI count.
  • When attempting to shape the future of CFAR in response to these data, we risk overfitting to a small number of data points, or failing to adjust for changes in the world over the past few years which could affect what is most impactful for us to do.

These are very good concerns to have. Many of the most important effects of CFAR are essentially impossible to objectively measure, and certainly can’t be quantified in an impact report of this type.

My concern is that measuring in this way will be distortionary. If success is measured and reported, to EAs and rationalists, as alumni who orient towards and work on EA and rationalist groups and causes, the Goodhart’s Law dangers are obvious. Workshops could become increasingly devoted to selling students on such causes, rather than improving student effectiveness in general and counting on effectiveness to lead them to the right conclusions.

Avoiding this means keeping instructors focused on helping the students, and far away from the impact measurements. I have been assured this is the case. Since our community is unusually scrupulous about such dangers, I believe we would be quick to notice and highlight the behaviors I am concerned about, if they started happening. This will always be an ongoing struggle.

As I said earlier, I have no great alternative. The initial plan was to use capitalism to keep such things in check, but selling to the public is if anything more distortionary. Other groups that offer vaguely ‘self-help’ style workshops end up devoting large percentages of their time to propaganda and to giving the impression of effectiveness rather than actual effectiveness. They also cut off many would-be students from the workshops due to lack of available funds. So one has to pick one’s poison. After seeing how big a distortion market concerns were to MetaMed, I am ready to believe that the market route is mostly not worth it.


I believe that both MIRI and CFAR are worthy places to donate, based on both public information and my private information and analysis. Again I want to emphasize that you should do your own work and draw your own conclusions. In particular, the case for CFAR relies on believing in the case for rationality, the same way that the case for MIRI relies on believing in the need for work in AI Safety. There might be other causes and other organizations that are more worthy; statistically speaking, there probably are. These are the ones I know about.

Merry Christmas to all.


Posted in Uncategorized | 10 Comments

I Vouch For MIRI

Another take with more links: AI: A Reason to Worry, A Reason to Donate

I have made a $10,000 donation to the Machine Intelligence Research Institute (MIRI) as part of their winter fundraiser. This is the best organization I know of to donate money to, by a wide margin, and I encourage others to also donate. This belief comes from a combination of public information, private information and my own analysis. This post will share some of my private information and analysis to help others make the best decisions.

I consider AI Safety the most important, urgent and under-funded cause. If your private information and analysis says another AI Safety organization is a better place to give, give there. I believe many AI Safety organizations do good work. If you have the talent and skills, and can get involved directly, or get others who have the talent and skills involved directly, that’s even better than donating money.

If you do not know about AI Safety and unfriendly artificial general intelligence, I encourage you to read about them. If you’re up for a book, read this one.

If you decide you care about other causes more, donate to those causes instead, in the way your analysis says is most effective. Think for yourself, do and share your own analysis, and contribute as directly as possible.


I am very confident in the following facts about artificial general intelligence. None of my conclusions in this section require my private information.

Humanity is likely to develop artificial general intelligence (AGI) vastly smarter and more powerful than humans. We are unlikely to know that far in advance when this is about to happen. There is wide disagreement and uncertainty on how long this will take, but certainly there is substantial chance this happens within our lifetimes.

Whatever your previous beliefs, the events of the last year, including AlphaGo Zero, should convince you that AGI is more likely to happen, and more likely to happen soon.

If we do build an AGI, its actions will determine what is done with the universe.

If the first such AGI we build turns out to be an unfriendly AI that is optimizing for something other than humans and human values, all value in the universe will be destroyed. We are made of atoms that could be used for something else.

If the first such AGI we build turns out to care about humans and human values, the universe will be a place of value many orders of magnitude greater than it is now.

Almost all AGIs that could be constructed care about something other than humans and human values, and would create a universe with zero value. Mindspace is deep and wide, and almost all of it does not care about us.

The default outcome, if we do not work hard and carefully now on AGI safety, is for AGI to wipe out all value in the universe.

AI Safety is a hard problem on many levels. Solving it is much harder than it looks even with the best of intentions, and circumstances are likely to conspire to give those involved very bad personal incentives. Without security mindset, value alignment and tons of advance work, chances of success are very low.

We are currently spending ludicrously little time, attention and money on this problem.

For space reasons I am not further justifying these claims here. Jacob’s post has more links.


In these next two sections I will share what I can of my own private information and analysis.

I know many principals at MIRI, including senior research fellow Eliezer Yudkowsky and executive director Nate Soares. They are brilliant, and are as dedicated as one can be to the cause of AI Safety and ensuring a good future for the universe. I trust them, based on personal experience with them, to do what they believe is best to achieve these goals.

I believe they have already done much exceptional and valuable work. I have also read many of their recent papers and found them excellent.

MIRI has been invaluable in laying the groundwork for this field. This is true both on the level of the field existing at all, and also on the level of thinking in ways that might actually work.

Even today, most who talk about AI Safety suggest strategies that have essentially no chance of success, but at least they are talking about it at all. MIRI is a large part of why they’re talking at all. I believe that something as simple as these DeepMind AI Safety test environments is good, helping researchers understand there is a problem much more deadly than algorithmic discrimination. The risk is that researchers will realize a problem exists, then think ‘I’ve solved these problems, so I’ve done the AI Safety thing’ when we need the actual thing the most.

From the beginning, MIRI understood the AI Safety problem is hard, requiring difficult high-precision thinking, and long term development of new ideas and tools. MIRI continues to fight to turn concern about ‘AI Safety’ into concern about AI Safety.

AI Safety is so hard to understand that Eliezer Yudkowsky decided he needed to teach the world the art of rationality so we could then understand AI Safety. He did exactly that, which is why this blog exists.

MIRI is developing techniques to make AGIs we can understand and predict and prove things about. MIRI seeks to understand how agents can and should think. If AGI comes from such models, this is a huge boost to our chances of success. MIRI is also working on techniques to make machine learning based agents safer, in case that path leads to AGI first. Both tasks are valuable, but I am especially excited by MIRI’s work on logic.


Eliezer’s model was that if we teach people to think, then they can think about AI.

What I’ve come to realize is that when we try to think about AI, we also learn how to think in general.

The paper that convinced OpenPhil to increase its grant to MIRI was about Logical Induction. That paper was impressive and worth understanding, but even more impressive and valuable in my eyes is MIRI’s work on Functional Decision Theory. This is vital to creating an AGI that makes decisions, and has been invaluable to me as a human making decisions. It gave me a much better way to understand, work with and explain how to think about making decisions.

Our society believes in and praises Causal Decision Theory, dismissing other considerations as irrational. This has been a disaster on a level hard to comprehend. It destroys the foundations of civilization. If we could spread practical, human use of Functional Decision Theory, and debate on that basis, we could get out of much of our current mess. Thanks to MIRI, we have a strong formal statement of Functional Decision Theory.

Whenever I think about AI or AI Safety, read AI papers or try to design AI systems, I learn how to think as a human. As a side effect of MIRI’s work, my thinking, and especially my ability to formalize, explain and share my thinking, has been greatly advanced. Their work even this year has been a great help.

MIRI does basic research into how to think. We should expect such research to continue to pay large and unexpected dividends, even ignoring its impact on AI Safety.


I believe it is always important to use strategies that are cooperative and information creating, rather than defecting and information destroying, and that preserve good incentives for all involved. If we’re not using a decision algorithm that cares more about such considerations than maximizing revenue raised, even when raising for a cause as good as ‘not destroying all value in the universe,’ it will not end well.

This means that I need to do three things. I need to share my information, as best I can. I need to disclose my own biases, so others can decide whether and how much to adjust for them. And I need to avoid using strategies that would distort or mislead.

I have not been able to share all my information above, due to a combination of space, complexity and confidentiality considerations. I have done what I can. Beyond that, I will simply say that what remaining private information I have on net points in the direction of MIRI being a better place to donate money.

My own biases here are clear. The majority of my friends come from the rationality community, which would not exist except for Eliezer Yudkowsky. I met my wife Laura at a community meetup. I know several MIRI members personally, consider them friends, and even ran a strategy meeting for them several years back at their request. It would not be surprising if such considerations influenced my judgment somewhat. Such concerns go hand in hand with being in a position to do extensive analysis and acquire private information. This is all the more reason to do your own thinking and analysis of these issues.

To avoid distortions, I am giving the money directly, without qualifications or gimmicks or matching funds. My hope is that this will be a costly signal that I have thought long and hard about such questions, and reached the conclusion that MIRI is an excellent place to donate money. OpenPhil has a principle that they will not fund more than half of any organization’s budget. I think this is an excellent principle. There is more than enough money in the effective altruist community to fully fund MIRI and other such worthy causes, but these funds represent a great temptation. They risk causing great distortions, and tying up action with political considerations, despite everyone’s best intentions.

As small givers (at least, relative to some) our biggest value lies not in the use of the money itself, but in the information value of the costly signal our donations give and in the virtues we cultivate in ourselves by giving. I believe MIRI can efficiently utilize far more money than it currently has, but more than that this is me saying that I know them, I know their work, and I believe in and trust them. I vouch for MIRI.







Pascal’s Muggle Pays

Reply To (Eliezer Yudkowsky): Pascal’s Muggle Infinitesimal Priors and Strong Evidence

Inspired to Finally Write This By (AlexMennen at Lesser Wrong): Against the Linear Utility Hypothesis and the Leverage Penalty.

The problem of Pascal’s Muggle begins:

Suppose a poorly-dressed street person asks you for five dollars in exchange for doing a googolplex’s worth of good using his Matrix Lord powers.

“Well,” you reply, “I think it very improbable that I would be able to affect so many people through my own, personal actions – who am I to have such a great impact upon events?  Indeed, I think the probability is somewhere around one over googolplex, maybe a bit less.  So no, I won’t pay five dollars – it is unthinkably improbable that I could do so much good!”

“I see,” says the Mugger.

At this point, I note two things. I am not paying. And my probability that the mugger is a Matrix Lord is much higher than five in a googolplex.

That looks like a contradiction. It’s positive expectation to pay, by a lot, and I’m not paying.
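The naive calculation behind "positive expectation" can be sketched with toy numbers. A googol (10^100) stands in for a googolplex, which has far too many digits to represent directly, and every other figure below is invented purely to illustrate the shape of the argument:

```python
from fractions import Fraction

# Toy numbers: a googol stands in for a googolplex, and every other
# figure is made up purely to illustrate the naive calculation.
lives_saved = 10**100
p_matrix_lord = Fraction(1, 10**90)   # far above one-in-a-googol
value_per_life = Fraction(1)          # $1 per life, an absurdly low bound
cost = 5                              # dollars demanded by the mugger

ev_of_paying = p_matrix_lord * lives_saved * value_per_life - cost
assert ev_of_paying > 0   # "shut up and multiply" says pay, by a huge margin
```

Even with a deliberately tiny probability and a deliberately tiny value per life, the product dwarfs the five dollar cost. That is exactly the tension the rest of this section resolves.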

Let’s continue the original story.

A wind begins to blow about the alley, whipping the Mugger’s loose clothes about him as they shift from ill-fitting shirt and jeans into robes of infinite blackness, within whose depths tiny galaxies and stranger things seem to twinkle.  In the sky above, a gap edged by blue fire opens with a horrendous tearing sound – you can hear people on the nearby street yelling in sudden shock and terror, implying that they can see it too – and displays the image of the Mugger himself, wearing the same robes that now adorn his body, seated before a keyboard and a monitor.

“That’s not actually me,” the Mugger says, “just a conceptual representation, but I don’t want to drive you insane.  Now give me those five dollars, and I’ll save a googolplex lives, just as promised.  It’s easy enough for me, given the computing power my home universe offers.  As for why I’m doing this, there’s an ancient debate in philosophy among my people – something about how we ought to sum our expected utilities – and I mean to use the video of this event to make a point at the next decision theory conference I attend.   Now will you give me the five dollars, or not?”

“Mm… no,” you reply.

“No?” says the Mugger.  “I understood earlier when you didn’t want to give a random street person five dollars based on a wild story with no evidence behind it.  But now I’ve offered you evidence.”

“Unfortunately, you haven’t offered me enough evidence,” you explain.

I’m paying.

So are you.

What changed?


The probability of Matrix Lord went up, but the odds were already there, and he’s probably not a Matrix Lord (I’m probably dreaming or hypnotized or nuts or something).

At first the mugger could benefit by lying to you. More importantly, people other than the mugger could benefit by trying to mug you and others who reason like you, if you pay such muggers. They can exploit taking large claims seriously.

Now the mugger cannot benefit by lying to you. Matrix Lord or not, there’s a cost to doing what he just did and it’s higher than five bucks. He can extract as many dollars as he wants in any number of ways. A decision function that pays the mugger need not create opportunity for others.

I pay.

In theory Matrix Lord could derive some benefit like having data at the decision theory conference, or a bet with another Matrix Lord, and be lying. Sure. But if I’m even 99.999999999% confident this isn’t for real, that seems nuts.

(Also, he could have gone for way more than five bucks. I pay.)

(Also, this guy gave me way more than five dollars worth of entertainment. I pay.)

(Also, this guy gave me way more than five dollars worth of good story. I pay.)


The leverage penalty is a crude hack. Our utility function is given, so our probability function had to move or the Shut Up and Multiply would do crazy things like pay muggers.

The way out is our decision algorithm. As per Logical Decision Theory, our decision algorithm is correlated to lots of things, including the probability of muggers approaching you on the street and what benefits they offer. The reason real muggers use a gun rather than a banana is mostly that you’re far less likely to hand cash over to someone holding a banana. The fact that we pay muggers holding guns is why muggers hold guns. If we paid muggers holding bananas, muggers would happily point bananas.
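To make the policy point concrete, here is a toy comparison with entirely invented numbers. Conditional on already facing the gun, paying is the cheaper act; but your policy also determines how often you get mugged, and the refusing policy attracts so few muggers that it does better overall:

```python
# All numbers below are invented for illustration only.
wallet = 50                       # cost of handing over the cash
harm_if_refuse = 0.05 * 10_000    # small chance of serious harm at gunpoint

# Causal framing: a mugger is already here; which branch is cheaper?
loss_pay = wallet                 # 50
loss_refuse = harm_if_refuse      # 500
assert loss_pay < loss_refuse     # once mugged, paying looks right

# Functional framing: your known policy also sets the mugging rate.
rate_if_known_payer = 0.10
rate_if_known_refuser = 0.005     # unprofitable targets attract few muggers
policy_loss_pay = rate_if_known_payer * loss_pay          # 5.0 per period
policy_loss_refuse = rate_if_known_refuser * loss_refuse  # 2.5 per period
assert policy_loss_refuse < policy_loss_pay  # the refusing policy wins
```

The same structure explains the banana: if the refusal policy is credible, bananas (and to a lesser extent guns) stop showing up in alleys.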

There is a natural tendency to slip out of Functional Decision Theory into Causal Decision Theory. If I give this guy five dollars, how often will it save all these lives? If I give five dollars to this charity, what will that marginal dollar be spent on?

There’s a tendency for some, often economists or philosophers, to go all lawful stupid about expected utility and berate us for not making this slip. They yell at us for voting, and/or ask us to justify not living in a van down by the river on microwaved ramen noodles, in terms of our expected additional future earnings from our resulting increased motivation and the networking effects of increased social status.

To them, we must reply: We are choosing the logical output of our decision function, which changes the probability that we’re voting on reasonable candidates, changes the probability there will be mysterious funding shortfalls with concrete actions that won’t otherwise get taken, changes the probability of attempted armed robbery by banana, and changes the probability of random people in the street claiming to be Matrix Lords. It also changes lots of other things that may or may not seem related to the current decision.

Eliezer points out humans have bounded computing power, which does weird things to one’s probabilities, especially for things that can’t happen. Agreed, but you can defend yourself without making sure you never consider benefits multiplied by 3↑↑↑3 without also dividing by 3↑↑↑3. You can have a logical algorithm that says not to treat differently claims of 3↑↑↑3 and 3↑↑↑↑3 if the justification for that number is someone telling you about it. Not because the first claim is so much less improbable, but because you don’t want to get hacked in this way. That’s way more important than the chance of meeting a Matrix Lord.

Betting on your beliefs is a great way to improve and clarify your beliefs, but you must think like a trader. There’s a reason logical induction relies on markets. If you book bets on your beliefs at your fair odds without updating, you will get dutch booked. Your decision algorithm should not accept all such bets!
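A minimal Dutch book sketch, with made-up numbers: if your probabilities for an event and its complement sum to more than one, and you accept any bet that looks favorable at your own stated odds, you lose money no matter what happens:

```python
payout = 10.0
belief = {"rain": 0.6, "no_rain": 0.6}   # incoherent: probabilities sum to 1.2

# A ticket paying `payout` on event e looks worth belief[e] * payout to you,
# so a bookie sells you each ticket for slightly less than that "fair" price.
price = {e: belief[e] * payout - 0.01 for e in belief}
total_paid = sum(price.values())          # 11.98 for both tickets

# Exactly one of the two events occurs, so exactly one ticket pays out.
for winner in belief:
    net = payout - total_paid
    assert net < 0                        # guaranteed loss either way
```

The trader's defense is refusing to quote both sides of every market at static prices: update on the fact that someone wants the other side.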

People are hard to dutch book.

Status quo bias can be thought of as evolution’s solution to not getting dutch booked.


Split the leverage penalty into two parts.

The first is ‘don’t reward saying larger numbers’. Where are these numbers coming from? If the numbers come from math we can check, and we’re offered the chance to save 20,000 birds, we can care much more than we would about 2,000 birds. A guy designing pamphlets picking arbitrary numbers, not so much.

Scope insensitivity can be thought of as evolution’s solution to not getting Pascal’s mugged. The one child is real. Ten thousand might not be. Both scope insensitivity and probabilistic scope sensitivity get you dutch booked.

Scope insensitivity and status quo bias cause big mistakes. We must fight them, but by doing so we make ourselves vulnerable.

You also have to worry about fooling yourself. You don’t want to give your own brain reason to cook the books. There’s an elephant in there. If you give it reason to, it can write down larger exponents.

The second part is applying Bayes’ Rule properly. Likelihood ratios for seeming high leverage are usually large. Discount accordingly. How much is a hard problem. I won’t go into detail here, except to say that if calculating a bigger impact doesn’t increase how excited you are about an opportunity, you are doing it wrong.
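One way to picture that discount is Bayes' rule in odds form, with invented numbers: evidence that merely seems high-leverage is generated far more often by bogus opportunities than by real ones, so it carries a likelihood ratio well below one:

```python
def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    # Bayes' rule in odds form: posterior odds = prior odds * LR,
    # where LR = P(evidence | real) / P(evidence | bogus).
    return prior_odds * likelihood_ratio

prior_odds = 1 / 100    # invented: 100:1 against impact this large
lr = 1 / 50             # seeming huge leverage is 50x likelier when bogus
posterior = update_odds(prior_odds, lr)   # 1/5000: heavily discounted
assert posterior < prior_odds
```

The exact numbers are the hard part; the structure is not.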




Book Review: The Complacent Class

Epistemic Status: My read. The mind of another is always tricky. At a minimum, Tyler would not agree with many of my framings and word choices.

The Book: The Complacent Class

Slide Deck: The Complacent Class and the Philosophy of Tyler Cowen, by my friend Anthony. This provides a good summary of his main facts and points on the surface level.

Tyler Cowen explicitly believes, and has said many times, that for society to succeed we need to believe untrue things. In his model, a society’s beliefs need to be chosen for their impact on society first. A lot of that impact is in raising the status of behaviors we want to see more, and lowering the status of behaviors we want to see less. He is often explicit about this.

Accuracy is a secondary consideration.

He might (not his example) prefer we think french fries are not delicious. We would eat less of them and be better off, even if french fries are delicious.

This is distinct from the type of self-deception Robin Hanson talks about in his book The Elephant in the Brain.

This approach is tricky. How do you avoid believing arbitrary false facts? How do we choose the actions with the best impact without an accurate model of the world? How do we engage in chains of reasoning?

This is a complex problem.

His solution is the Straussian reading.

The surface text has the first level message we want people to believe and act on, to make the world a better place. We must look deeper for the real, often opposite, second level message. We can then have a third level well-hidden discourse about what the proper first level message ought to be.

Tyler explicitly endorses looking for Straussian readings in a wide variety of texts. His philosophy implies the need for us to believe that which is not, while simultaneously being able to think objectively about the consequences of actions.

Tyler even links enthusiastically to many reviews of his book that claim his book is saying things Tyler explicitly disavows! In particular, he highlighted this truly epic masterpiece of extrapolation.

Tyler intending a Straussian reading of his own books is not only not a strange hypothesis. It should be our prior.

His first level message is that you (and America) are too complacent. You should not be complacent. To survive we must become less complacent.

His second level message is that while it is bad for society if its people are complacent, it is in your interest to be complacent. 

His third level message is that we must solve this collective action problem. We must enforce and reward the norms that we need everyone to have, rather than enforce and reward those who do what causes that particular person to enjoy the best outcome.

Tyler Cowen, intentionally or not, is making the case against Causal Decision Theory.

(Important Note: Statements that I believe are labeled as such, otherwise any views expressed are my model of Tyler’s views.)

What is Complacency?

Exploitation, as opposed to exploration. Also exploitation in the sense of enjoying the benefits of our civilization without providing for its upkeep, or the upkeep of its norms, or the honoring of those who pay these upkeep costs.

The opposite of complacency is striving. A striver values exploration. Even if a striver is not getting their hands dirty on a particular issue, they honor those who do, and those who give such honor. They honor those who fight in the arena, even (especially!) when they make mistakes or take necessary but unsavory actions.

From a selfish perspective, almost all individuals in America should mostly exploit rather than explore or strive. Exploitation has it pretty great. Exploitation captures most personal benefits of exploration at a fraction of the cost.

Society benefits when we explore, but not when we exploit. We need people to explore more and exploit less, despite this making the explorer worse off. 

Tyler proposes to do this partly by making exploration easier and better, but mostly by raising the status of exploration relative to exploitation.

A book called The Exploiting Class would be misunderstood, so Tyler wisely found a less loaded word.

Complacent reads as negative but not evil. This helps us accept we are too complacent, and perhaps change.

The complacent are free riders. They want the benefits of our civilization and culture, without helping maintain them. A system with too many free riders will collapse.

Payment here does not mean money. There is plenty of money.

Reinforcing the cohesion of society, of civic life, exploring the area around you, traveling, enriching those around you: All are payment. Interacting with the physical world and the people around you is payment. Sharing ideas is payment. Creating opportunities for the unexpected is payment.

Real productivity is payment. Keeping the system running. Driving the trucks, running the trains, policing the streets, staffing the army? Payment.

Creating opportunity to do real things? Creating good jobs? Excellent payment.

Research and experimentation is even better payment. Don’t forget to publish, especially negative results.

Innovation is better still.

Honor is a key payment. Raising the status of making payments is payment! As is lowering the status of those who refuse to make payments.

As is reinforcing the beliefs and norms that create payments. We need our ethos, our founding myths and our functional solutions.

We need our functional solutions even when they involve bad things. We need to do this without endorsing those bad things. 

Remember Orwell’s statement that “People sleep peaceably in their beds at night only because rough men stand ready to do violence on their behalf.” Extend this to honoring such men and actions, and other less violent ones we also prefer not to think about too carefully. We get to espouse high ideals that we could never fully uphold, because we take necessary actions that violate those ideals.

Tyler believes that fully accurate historical accounts, and fully accurate accounts of the actions our government and authorities take, are destroying our national myths and cohesion. This could be catastrophically bad. We need people to believe in liberal democratic values. We also require actions and systems that directly conflict with those values to prevent civilizational collapse.

The first-best solution, for people to understand both halves, won't work. Most people are not capable of holding both halves in their head at once. We must find a way to do both at once anyway, or history ends up being cyclical.

The good things in life undermine our willingness to pay for their upkeep.

P. C. Hodgell said: “That which can be destroyed by the truth should be.”

Tyler Cowen could not disagree more. He sees the truth destroying things and he is terrified. 

You Can’t Say That Out Loud!

Our society has made talking about some of its central issues and problems quite difficult.

A loosely defined and generally expanding class of facts, thoughts and ideas are classified as Bad Things. Many of these Bad Things were consensus views as recently as a few years ago. Many of the Bad Things are believed by many or even a majority of the people. Some are obviously true, important to people’s well being and believed by basically everyone.

If you defend even one of these Bad Things even once, for a single sentence, that makes you a horrible person. Some people will literally try to destroy your entire life for it. Even if your statement is factually accurate. Even if your true statement was not a Bad Thing at the time, but retroactively becomes a Bad Thing. Even if you do so while explicitly fighting that same Bad Thing. Or if you condemn the Bad Thing insufficiently strongly.

There are also Good Things, for which the situation is reversed.

Tyler describes problems where people value Bad Things. In some cases he wants to point out that this is in their self-interest. In others, he wants to point out that those Bad Things are necessary for society.

Thus, he needs even more obfuscation than usual.

His book contains facts. It condemns Bad Things and praises Good Things.

If all works as planned, this constructs a model that causes the man on the street to take proper actions.

The text also contains enough information to construct a second model. This second model is deniable. It is explicit about what must be raised and lowered in status. It knows some Good Things are bad, some Bad Things good. Sometimes for the individual, sometimes for society. Most importantly, the second model explains what is going on and why, but then encourages us to preach for the first model, because the second model predicts that widespread belief in the first model will have good results.

A Trip Through the Book

When read at first level, The Complacent Class is not a great book. When read at second level, it is a much better one.

Chapter 1: The Complacent Classes

Tyler identifies three classes of complacent people.

There are those who have it made. When the exploiting is great, might as well exploit. The good life is not that expensive.

There are those who dig in. Hanging on is tough, what they have is pretty good. They feel the squeeze from society and lack the spare resources to explore. They also don’t see much upside to doing so. Better to focus on holding on to what they have.

There are those who got stuck. Things are bad, without good opportunities. Striving risks losing what little they have. Why try to escape? Effective marginal tax rates for workers trying to rise out of poverty can approach 100%. For those on disability, trying to work can mean permanent loss of benefits.
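To see how a rate like that arises, here is a toy calculation. The function name and every number in it are hypothetical, chosen only to illustrate how ordinary taxes stack on top of benefit phase-outs; they are not taken from any actual program:

```python
def effective_marginal_tax_rate(extra_earnings, tax_rate, benefit_phaseout_rate):
    """Fraction of extra earnings lost to taxes plus reduced benefits."""
    taxes = extra_earnings * tax_rate
    lost_benefits = extra_earnings * benefit_phaseout_rate
    return (taxes + lost_benefits) / extra_earnings

# Hypothetical: 15% payroll/income taxes, plus housing, food and health
# subsidies that together phase out at 80 cents per extra dollar earned.
rate = effective_marginal_tax_rate(1000, 0.15, 0.80)
print(f"{rate:.0%}")  # → 95%: striving nets this worker $50 of the extra $1,000
```

The point of the sketch is that no single program needs to be confiscatory; several modest phase-outs overlapping is enough.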

Personal marginal returns to such people striving are very low. Benefits to exploitation and complacency have gone up, benefits of exploration and striving have gone down.

Everyone exploits, then complains that no one explored.

Chapter 2: Not Moving

Moving is hardcore exploration. A lot of people are stuck in dead end places with no economic future. To allocate people properly, many must move.

Americans stopped moving. That’s really bad.

Why did they stop? Moving sucks.

Part of this is unavoidable. Moving across the street eats thousands of dollars and weeks of your life. Moving across the country eats a lot more. You need new pals and a new career. You leave everyone behind. All local knowledge must be rebuilt.

Other parts are avoidable. Moving can cost you vital government benefits and occupational licenses, endangering your livelihood. This kept a close family member of mine in New Jersey. If you have a child, moving outside the local region is often effectively illegal.

Cities like New York City and San Francisco, where people want to move, are prohibitively expensive due to building restrictions.

We stay in our jobs longer than ever, no matter the media picture. Only 21% of employees have been in their current job less than a year.  Searching for a job is even less fun than it used to be, and often means a step down.

Tyler describes an experiment. Poor families were offered the chance to move to richer neighborhoods for no extra rent, on the theory it would help children. For those who moved, future income went up by a discounted value of $99,000 per child. But half the parents turned the (free, subsidized) move down!

Chapter 3: Segregation

Tyler here is referring not only to segregation by race but also by income, culture or other preferences. He notes that all are increasing, because that is what people want.

We want to live near people similar to us, go to school with them, attend religious services with them, work with them, hang out with them and be friends with them. We want them to want the same products, stores, restaurants and other such things that we do, so there will be demand for them. We value those who share our values and have things in common with us.

Rich people certainly prefer to live around and interact with other rich people. This is what rich people are mostly spending their money on. The rich person version of a thing is often better, but mostly the expense pays to exclude the non-rich, especially for housing and schooling.

Revealed preference says many of us will spend essentially all of our money to be sorted into the group spending that amount of money on the same types of things.

He condemns this sorting with many sentences like:

Most forms of segregation ultimately corrode the basis of prosperity and innovation and eat into the trust and seed capital of society.

What are these destructive side effects? Tyler mostly does not say, attempting to pass it off as obvious. The word segregation, Bad Thing par excellence, does all the work to construct the first level.

The second level is that segregation is what everyone is spending their money on. Positional goods and signaling are crowding out other spending. The case against education is in here along with the case against zoning. Even if you’re not rich, you still spend all your money to position yourself as above those with even less money to throw into a giant pit. Saying this outright would imply that the real problem is not distributional, and undermine our democratic and egalitarian and definitely not racist values, and Tyler’s perceived support of them. So he introduces this evidence for other reasons, and allows us to figure out on our own that this sorting acts as a giant tax.

Chapter 4: Innovation

Have you heard about that whole modesty debate rationalists have been having lately? The central argument is whether one can benefit from trying to find new ideas and think for one's self, versus trusting in experts or consensus. The fight has mostly been at the first level, where Eliezer and others (including myself) have argued that civilization is so inadequate at solving its problems that one can benefit directly from innovation on a personal level, and can improve one's epistemic accuracy on a broader level. This is playing in hard mode, both in the sense that thinking for one's self is hard mode, and in the sense that claiming one reaps net direct benefits from doing so is hard mode.

We fight on the first level because it is increasingly fashionable to follow Causal Decision Theory and argue that actions that help one’s self directly are smart and rational and wise, and those that do not are stupid and irrational and unwise. Most would agree that there are side benefits to the group when one engages in innovation, thinking for one’s self and questioning conventional wisdom. This is how the group gets smart in the first place. But we are forced to argue on the first level, with one hand tied behind our backs, because to argue for the second level is seen as conceding the first level, and thus losing the argument, because these hard to measure benefits to others clearly should not count.

Tyler implicitly concedes the first level, in order to argue the second one. In this context, the strategy seems reasonable, since his audience is quite different than Eliezer’s, and thus requires far more modesty.

Tyler points out that lack of innovation is very bad. It hurts productivity growth and hence economic growth, leading to stagnation. He points to monopoly power on the rise, a decreasing number of Americans engaged in innovation, and declining productivity growth. More than anything, he points to the lack of rising living standards.

This draws the link between innovation and prosperity, both by assumption and by math. If lack of prosperity proves lack of innovation, then prosperity depends on innovation. So by using such evidence of a lack of prosperity as proof, and by sketching this proof, he shows innovation is great and must be encouraged. Monopoly power is cited as a problem, but mostly the solution is a personal message that the reader needs to innovate.

Tyler also points out that direct statistics show we are innovating less, such as there being fewer start-ups, but does not dwell on this to avoid discouraging innovation more. What Tyler leaves unsaid is that striving to innovate is not sufficiently personally rewarded, which is why we need to encourage it by raising it in status. Tyler does not want to discourage would-be innovators by pointing out its current lack of sufficient rewards. Rather, he points out that all our futures depend on such innovators, and hopes this will inspire people to inspire others or even themselves.

I continue to think that we’ve been more innovative and productive than the statistics give us credit for. The smart phone has truly been transformational, as has the internet, as has social media, as has the rise in widely available great television, movies, music and games of all kinds. The statistics don’t properly measure these gains, and thus I am a great stagnation skeptic. However, I also view these technologies as having done a lot of damage, encouraging the rise of addictive behaviors and atomization in particular, so I do think things are rather bad. We’re very differently off with smart phones and social media, but are we better off?

Tyler says he is a happiness optimist but a revenue pessimist, so it’s not clear our perspectives here are that different. Perhaps we are mostly using different frames.

Either way, we both want more innovation, as innovation is hugely socially beneficial.

Chapter 5: Matching

Tyler believes that the gains here come primarily from matching:

I submit it is from matching, which is the supreme skill of the complacent class. We spend our money and invest our time a lot better than before because of matching. Matching is in fact the new grand project of our time, and exactly how grand still remains to be seen. Still, it is likely the largest potential source of unmeasured gains in American well-being, so let’s look at it more closely.

It’s hard to be more explicit than this: We spend our money and invest our time a lot better because of matching. That seems great!

He also notes, for those who did not notice:

“Better matching,” for all its pleasures and virtues, is also in some regards uncomfortably close to the concept of “more segregation.”

“Matching” and “segregation” are indeed two words that are very well… matched.

What does that tell us about segregation?

Tyler begins with the example of music, where less revenue is raised but for $10 a month one can search through and listen to most of the best music ever recorded, along with use of a recommendation engine. While I think those engines are terrible, the ability to freely sample potential music on demand is wonderful. We forget how much awful music we used to end up with in a world of fixed albums, where one could not try before buying.

Tyler’s next example is online dating. It is hard to deny that online dating has made dating a much better experience, enabling us to find much better-matched partners with much less investment, and with much less need to play destructive games along the way.

Tyler then warns about advertisers using this matching technology. I mostly find this helpful. Advertisements for random things are obnoxious. The advertisements that are worth paying to have me see are exactly the ones I want to see. Tyler points out that this is perhaps a selfish view, because as advertisers gather too much data they will become better able to hack our preferences.

We better match students to schools, residents to hospitals (using a really cool matching algorithm) and employees to jobs.
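The "really cool matching algorithm" behind the residency match descends from Gale and Shapley's deferred-acceptance algorithm. A minimal sketch of the core idea, with toy preference lists (all names here are invented for illustration):

```python
def deferred_acceptance(proposer_prefs, reviewer_prefs):
    """Gale-Shapley deferred acceptance. Proposers work down their ranked
    lists; each reviewer provisionally holds the best offer seen so far."""
    # rank[r][p]: how reviewer r ranks proposer p (lower is better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    next_idx = {p: 0 for p in proposer_prefs}  # next preference to try
    held = {}                                  # reviewer -> proposer held
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_idx[p]]     # propose to best remaining choice
        next_idx[p] += 1
        if r not in held:
            held[r] = p                        # reviewer holds first offer
        elif rank[r][p] < rank[r][held[r]]:
            free.append(held[r])               # reviewer trades up; old match freed
            held[r] = p
        else:
            free.append(p)                     # rejected; will try next choice
    return {p: r for r, p in held.items()}

residents = {"ann": ["city", "mercy"], "bob": ["city", "mercy"]}
hospitals = {"city": ["bob", "ann"], "mercy": ["ann", "bob"]}
print(deferred_acceptance(residents, hospitals))
```

The result is stable: no resident and hospital both prefer each other to their assigned match, which is what makes the mechanism resistant to people gaming it by matching outside the system.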

There are even matches for dogs.

Tyler claims that those who are bad at handling information suffer from such matching. I’m not so sure. Tyler gives the example of Yelp, where many look only at the misleading star ratings rather than the informative long reviews. There’s truth to that, but even Yelp’s somewhat paid-for star ratings are a huge improvement over no information, and over time better places get more business and overall quality improves. Everyone wins.

The more worrisome case is two-way matching. When employees are better matched to jobs, that is good for productivity, but it is bad for bad employees the same way Yelp reviews are bad for bad restaurants. Before, people without much to offer as employees, or dates or friends or what not, could still sometimes find good matches due to scarcity and difficulty of search. This gave them opportunity to improve, and a subsidy from the better-off to the worse-off. By improving matching, we’ve cut those down a lot, and now those who are low-quality can only pair with others who are low-quality.

Thus, the better our matching, the more effectively unequal and less progressive our society becomes, and we need to compensate for that.

The second level concern is that as two-way matching improves, we force people to invest more and more resources on getting better matches. Thus, the case against education and other positional goods from chapter three is being silently screamed here. The more important matching is, the more important it is to be viewed as high-quality rather than to be high-quality, and the less people will invest in being actually high-quality, especially in innovation. What's the point of doing something useful if no one knows it will be useful, and thus no one wants to fund you, or even to have anything to do with you?

Tyler makes this explicit at the end of the chapter:

Matchers gain, strivers lose.

The nominal context there is the last important point, which is that sufficiently good two-way matching systems create increasingly efficient markets for that which is matched.

If the dating pool is not well-matched, all good signs are good signs, and one should seek someone with as many good qualities as possible. If the medical residency market is not well-matched, hospitals should seek out the best medical students on every dimension, and medical students should seek the best hospitals.

If the pools are well-matched on two sides, that changes. Now the overall quality of your match has been Zeroed Out. Your date will be 'built on one hundred points' in some sense. If your date is unusually smart, you should like that only if you value smarts more than most; if you value them less, you'd want a less smart date. You're looking for a better match with things you value more than the market does, rather than a better person. You still have to watch out lest you accept a match with a worse overall score, so your cognitive burden is still there, but you despair of doing much better than you 'deserve'.

Thus Tyler’s example of dismissing a potential date for being a Red Sox fan, even if you have no opinion of the Red Sox or baseball; this is object-level harmless, but a signal of a non-ideal fit, which is death. If it was something harmless that no one else valued, it would be fully harmless, and the date would not be ruled out.

The broader problem is that matching promotes exploitation over exploration. Inefficient matching techniques, even simple ones like having to walk around and browse to find what you want, force one to encounter and evaluate new possibilities. Matching systems that are too efficient at finding a good short-term payoff act like dumb hill climbers. Everyone gets trapped at a local maximum, never exposed to risky but exciting possibilities they might like.
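The hill-climbing trap is easy to reproduce in a toy model. Below, two recommenders face an item of known modest appeal and an untried item of higher appeal (all names and numbers are invented, and rewards are noiseless to keep the sketch deterministic). The pure hill climber never leaves the first item it tastes:

```python
def run(true_appeal, rounds, explore_every=None):
    """Toy recommender. explore_every=None means pure greedy;
    otherwise every k-th round it tries the lowest-estimated item."""
    estimate = {item: 0.0 for item in true_appeal}
    total = 0.0
    for t in range(1, rounds + 1):
        if explore_every and t % explore_every == 0:
            item = min(estimate, key=estimate.get)  # explore the unproven
        else:
            item = max(estimate, key=estimate.get)  # exploit best known
        reward = true_appeal[item]  # noiseless: one taste reveals true appeal
        estimate[item] = reward
        total += reward
    return total / rounds

appeal = {"more_of_the_same": 0.5, "risky_new_thing": 0.9}
greedy = run(appeal, 100)
curious = run(appeal, 100, explore_every=10)
print(round(greedy, 2), round(curious, 2))  # → 0.5 0.83
```

The greedy recommender locks onto the first decent item forever; the one that spends a tenth of its recommendations exploring pays a small short-term cost and ends up far better off. Real engines face noisy rewards, which makes exploration even more valuable.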

I have noticed recommendation engines getting worse at this, becoming stricter hill climbers and less engines of discovery. Amazon, which Tyler calls out in the context of allowing us not to leave the house, seems especially terrible at this. Their music app's recommendations are often as simple as "Hey, I heard you listened to Ingrid Michaelson so I got you some Ingrid Michaelson to go with your Ingrid Michaelson." Indeed I do like Ingrid Michaelson, but this is not only completely unhelpful – I figured out on my own that this was an option – it's also the opposite of exploration.

Tyler also points out that Choices are Bad, further punishing exploration.

I have a lot more to say about matching systems, rating systems and recommendation engines, and how to build and utilize them, that would be beyond the scope of the book. I hope I get to such topics, as I find them both fascinating and useful.

Chapter 6: We Stopped Rioting and Legalized Marijuana

At this point the theme is clear. Smoking marijuana is exploitation and personal consumption. Marijuana and striving do not mix. One slows down and has a good time today, at the expense of productivity, some classes of art and creativity notwithstanding. Many report good experiences, although I do not.

Rioting and speaking out are the opposite, ideally a form of altruistic punishment. Except for letting one vent rage, rioting makes everything worse today. The threat of future rioting gives us reason to build a better tomorrow, and allows us to send messages. Taking risks to advance political causes is not in one's self-interest, and if things are pretty good for you, letting others do it also seems bad. Thus, our response of effectively de-escalating riots and shutting down protests and effective free speech. Instead of costly but meaningful actions like rioting, we now have social media likes and professionally orchestrated events, which are not remotely the same thing.

We also saw a large decrease in crime, a huge quality of life benefit. I am not sure why Tyler thinks these gains may not last, other than historical perspective or fear of the consequences of slow growth or loss of jobs.

To me this chapter was mostly about adding more examples of the relevant patterns.

Chapter 7: How a Dynamic Society Looks and Feels

Tyler here points out how different it feels to be in a dynamic society where everything is changing, like China, versus modern America and its huge investments in stability. He suggests we are outsourcing our dynamism to others, so we can take the good parts without paying the costs in upheaval and uncertainty. Again, this seems like the right move for a given person, or even in the short term for the whole society, but with bad long term consequences.

Chapter 8: Political Stagnation

Our government is terrible. It is unresponsive to the needs and opinions of its people. It can’t get things done. Everything it does do costs way too much. It has locked down many of its decisions and assigned them to automatic processes and technocrats. Gridlock rules, and getting important things done has become impossible, regardless of which important things one favors, because such things require us to strive and disrupt and everyone is more interested in short term stability and avoiding disruption.

But you knew that.

Chapter 9: The Return of Chaos

Tyler’s thesis is summed up simply as:

Ultimately peace and stability must be paid for. They must be paid for with real resources, with tax revenue, and they also require the support of people.

The rest of the chapter points to places where instability seems to be emerging, and ways in which it might further emerge. I found the evidence cited to be weak; it felt like grasping at straws and choosing the best available examples. Tyler leans on campus tensions as a sign of willingness to protest and strive, which I do not think is the underlying dynamic there. His examples of what instability might look like don’t feel that convincing, or even that unstable. Perhaps he did not want these scenarios to be what people took away from the book, or talked about. I can see that being wise.

The more convincing case is simply the quote above. We don’t need to know what form the instability will take to know that, if we can’t pay for stability, instability will come. If something can’t go on forever, it won’t.


I read The Complacent Class three months ago, but found this review hard to write for the same reasons I think Tyler wrote the book. Eliezer Yudkowsky's Inadequate Equilibria, and the discussions that resulted, attempted to tackle related issues from a different angle. Both ask why our civilization is stuck in a variety of bad equilibria. Eliezer gives an intuitive framework for understanding how it can be in everyone's individual interest to stay stuck. Tyler gives examples where everyone is doing what is best for them personally, to be complacent and exploit rather than to strive and explore. Tyler shows that our incentives and attitudes increasingly favor this.

We have turned this despair into norms. We say complacency and modesty are wise and rational, striving and thinking are foolish and irrational. When people stubbornly refuse to worship Moloch, or they defy Ra, we call them stupid. Or worse.

How do we get to a better equilibrium?

Eliezer speaks to the individual. He says, behold the failure modes and foolishness that trap us. Even if you cannot coordinate to get us out, if you think for yourself on the object-level, you can find ways to know things, and at least improve your own life through direct action. We have to face Moloch alone, but we need not listen to his whispers in our head. The greater scale benefits of such actions are left implicit.

Tyler speaks to the society. He says, behold the failure modes and foolishness that trap us. We must coordinate to get out, and return to thinking for ourselves and fostering exploration, experimentation and interaction. Facing down Moloch is bad for the individual living among his army, but together we can best him. The first step is to stop whispering his praises, and start cursing his name. We raise the status of those who strive, and lower that of the complacent. We make people believe that striving is the path to success. He hopes some read that striving is higher status and more promising, while others read that we must raise its status and make it appear more promising. Then the meta-norm takes over, incentives change and the equilibrium shifts.

Both answers hold promise.

Eliezer is right that more exploration and doing more object-level work is good for the individual on the margin. We are falling into addictive traps and engaging in hyperbolic discounting, resulting in habits reinforcing exploitation we don’t even enjoy over exploration that would prove valuable. Eliezer’s audience has even more to gain here than most.

This alone won’t get it done, though, because Tyler is right too. Even before social pressures and hyperbolic discounts make things worse, individuals benefit from doing more exploiting than is socially optimal. Getting people to do the personally optimal mix, while ignoring the bigger picture, isn’t good enough. We need Tyler’s approach as well. Tyler admits that complacency is the self-interested thing to do, and suggests we work to fix that through shifting incentives and norms, and that we consider the bigger picture. Doing what directly personally benefits us, without an eye to what causes and is caused by such behaviors and habits and norms, is neither wise nor praiseworthy.

Is Tyler right that, in order to do this, we as a society must believe untrue things? Or could we accomplish this with a better set of norms that understood and reinforced the value of striving and the links between different people's values and actions sufficiently to result in the right amount of striving? Can the American on the street learn to approximate functional decision theory?

I’m not sure. I suspect that we can do it. I hope that we can do it, that if those who can understand and who led us into this hole could agree to stop digging, the resulting norms would do the rest. People want to do big things, take chances, help others. We intuitively grasp the subtle causal links across time, the honor we should bestow upon the striver, the call to adventure. I hope it is only because we have lied to people, telling them these desires are irrational and bad, and overwritten those intuitions, that we then feel the need to lie to them again about what will directly improve their lives.


Book Review: The Captured Economy

Epistemic Status: The choir

On Tyler Cowen’s claim that it was an important book, I read The Captured Economy by Brink Lindsey and Steven Teles. Its thesis is that regressive regulation is strangling our economy and increasing inequality. They claim that the damage from such policies is larger than we realize, and suggest structural reforms to start fixing the problem.

They focus on four issues: Financial regulation, zoning and land use restrictions, patent and copyright law, and occupational licensing.

I already strongly agreed on all four, although not on all the details. No reasonable person could, at this point, claim the regulations in question have not been subject to regulatory capture, and extended far beyond any worthwhile public interest.

This review advocates for reform of those policies. This is as political as I hope this blog ever gets. Politics remains the mindkiller. Down that road lies madness.

The book updated me on the scope of the damage, and on how to improve policy.

While I liked the book, I had three problems.

The first was the presumption of the centrality of inequality, as opposed to deadweight loss and economic growth. I hate to reinforce the framing that inequality is the thing to be concerned about.

The second was that it played somewhat fast and loose with its arguments. It used the trick of comparing the ‘top X’ to the ‘bottom X’ things and then being shocked at how these two were not equal. It used the frame that calling intellectual property ‘property’ was a trick, all but declaring all intellectual property theft. Their analysis of financial regulation suffered from lack of insider knowledge, and their case for zoning was enriched by assumptions that seem too strong.

The third was not addressing the legitimate cases for the policies the book opposed, or what the transition away from them would look like. Occupational licensing has gone way too far, but if you’re going to target lawyers and doctors (as you should, better to go after the real culprits and Play in Hard Mode) then you’ll need to explain why the alternative isn’t madness. Similar objections apply in other sections.

On the margin, the cases are ironclad. I knew I would know most of it already, but there was some new material even for me.

I recommend this book if and only if you want a book-length case for its thesis. Otherwise, this review is sufficient. The arguments and facts are not new.

Would you rather read an important book, an impactful book or a good book?


Financial Regulation

Their basic premises are that financial institutions are permitted too much leverage, given discounted implicit and explicit government guarantees, and set up to skim large profits off subsidized retirement accounts and mortgages. They don’t go over how current regulations form barriers to entry, especially for banks.

The call for decreasing bad regulation rather than new rules to offset existing bad regulation is refreshing. Restricting leverage ratios and charging market prices for deposit and other insurance don’t add complexity. I do worry about their love for Dodd-Frank, but this mostly seems like a ‘take what we can get’ attitude.

They especially blame Basel I’s lax requirements for worsening the financial crisis.


Land Use and Zoning

Restrictive zoning is enormously destructive. Housing costs in New York and greater San Francisco are ludicrous, preventing productivity gains from migration. The surplus produced in cities is largely eaten by landlords. They estimate 0.2% GDP growth per year is being lost to this effect, which adds up fast and seems a reasonable estimate. They only hint at how this affects the culture. The case for building more housing is easy. The case for building it where it would be most valuable is easy. They handle both well.
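To make “adds up fast” concrete, here is a quick back-of-the-envelope compounding calculation (my illustration, not from the book): losing 0.2 percentage points of growth every year means the counterfactual economy pulls further and further ahead.

```python
def gdp_gap(annual_pp: float = 0.2, years: int = 50) -> float:
    """Ratio of counterfactual GDP (with the extra growth each year)
    to actual GDP, after `years` years of compounding."""
    return (1 + annual_pp / 100) ** years

# After 50 years, the economy would be roughly 10% larger.
print(round(gdp_gap(0.2, 50), 3))  # ≈ 1.105
```

A 0.2% drag looks trivial in any single year; the point of compounding is that over a generation it quietly costs a double-digit share of GDP.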

The principled case against building, as opposed to the rent-seeking case, is absent. They do not consider that housing may be an exclusionary or positional good, that cities lack sufficient infrastructure to support additional people, that such construction is a taking of others’ property, or other ‘good’ objections. These complications require a response, but not on the current margin. If you think Palo Alto has enough housing stock, I’m guessing you own some of that housing stock.

The book’s suggestion is to require zoning to allow increases in density, but let local authorities decide exactly where that density goes. Implementation details need to be worked out for such ideas to become practical, important work that it seems is not being done.

An additional reform they don’t mention is to allow existing buildings to be renovated or re-purposed without meeting new regulations and safety rules that make any use uneconomic, wiping out entire communities.

A fifth section of the book could have covered regulations surrounding infrastructure.


Intellectual Property

Among the people I know, I am on the extreme end of believing intellectual property is real property, and violations of intellectual property are not ethical or acceptable. I respect most intellectual property ‘in the dark’ on principle. Yes I am judging all of you for not doing this. I certainly respect IP far more than the book’s authors.

The current state of patent and copyright law, however, is far beyond what anyone I know, including me, thinks is reasonable. Things are out of hand and must be scaled back.

The book does a good job explaining how this came to be, and showing the current madness does not increase innovation or creation on the margin, in case anyone disagreed with that.

If it were up to me, I would scale back maximum time on copyrights to about 10 years (including retroactively), require active registration, and also require mandatory licensing as we do for music on the radio. For patents, I would make them harder to get, harder to enforce, require mandatory licensing, and require owners to set a buyout price that the government could pay to invalidate the patent. Then I’d tax that price above some threshold. That’s harsher than I would have been before reading the book.

The book only superficially discusses the benefits of IP protection. It brushes off their value, arguing that we didn’t have such protections in earlier eras and treating the label ‘property’ as a propaganda trick. I don’t think that is fair. Different eras face different problems and different technological landscapes. I wish the book had seriously engaged with such issues.

I have another idea about how to help with such issues, which I will share some time.


Occupational Licensing

I love that they go straight for lawyers and doctors. If you are going to go after occupational licensing, go after it. In the comments for Hero Licensing, I was challenged by ‘are you going to let people be doctors or lawyers without a license, too?’ I was tempted to reply “Yes.” The book pulls no punches, painting doctors and lawyers as guild members whose conversations long ago ended in a conspiracy against the public – and all would-be lawyers and would-be doctors.

With doctors and lawyers, things are out of hand, but there’s a real case to be made that the public needs protecting. For those giving manicures and selling burial caskets, and such, not so much. If you think interior decorators need an occupational license, I have a guess what your profession is. In many cases, the licensing requirements are mind-blowing to those who don’t understand how toxic such dynamics can be. These restrictions prevent people from working in order to allow insiders to form a cartel and steal from us.

Lawyers use rules to control supply and inflate costs (e.g. lawyers legally have to own and run their firms, which kind of does require someone to have been mailed their villain card), pushing up legal fees far beyond those in other countries. Doctors are so stingy in allocating residency slots that a quarter of all graduates from US medical schools don’t match to a residency. That’s completely insane.

At a minimum, we should ban occupational licenses for activities that don’t risk serious harm. If you want to decorate an interior, or sell a wooden box, go for it. Also at a minimum, we should ensure enough supply for professions we do license. There’s no excuse, for example, to not make funding available for enough residency slots to support every graduate of a US medical school and every worthy foreign applicant (as determined by the hospitals involved), and that change alone would greatly improve our health care system.

I know of two worthy objections to abolishing licensing entirely.

The first objection is that such work done badly is dangerous. The book doesn’t get into solutions for this, such as requiring insurance (that unqualified people would be unable to get), or reputation systems, or simply market forces.

The better objection is that such substandard care or representation could be forced upon the poor or unsuspecting, or upon those who show up to the wrong hospital at the wrong time. Public defenders and emergency room doctors are forced upon people, and we need to ensure quality.

But while the authors don’t make a strong case that we can completely eliminate licensing, they don’t have to. We don’t need impractical libertarian should-we-get-rid-of-drivers-licenses purity/bravery debates. We can get rid of most of the damage such rules do while retaining most of the benefits, especially if we fix our supply problems, by limiting licensing to narrow ranges of activity in a few professions, even if that means some amount of regulatory capture. I’d happily settle for that, as would the authors. There’s no need to even approach truly dangerous waters.

Again, I understand limiting book scope. Brevity is important.


Laying the Groundwork

I found the final section most informative. What’s a made-of-gears model of how these rules get into place? What would help?

I understood special interests caring a lot about an issue and applying concerted pressure in places where no one else cares enough to pay attention or fight back. Classic slow but inevitable regulatory capture. The book suggests structural reforms that make it harder to change the rules while no one is paying attention, but admits this won’t be enough.

The more interesting intervention they suggest is funding think tanks and academic departments to create position papers, people to call for information, impact analyses and, most importantly, full draft legislation ready to go when the moment arrives.

As they model the legislative process, professional cartels (er, organizations) and industries don’t bribe politicians with money so much as with know-how. Industry lobbyists and experts tell politicians what impact various laws would have, and provide templates and ideas. Without sufficient staff or research available elsewhere, lawmakers turn to lobbyists for information and to avoid mistakes.

This springs the modesty trap. Most industry information is mostly true, if slanted. If you can’t do your own work, the only option is to trust them. That becomes a habit. Years later, they own the entire space.

The authors propose increasing the quantity and quality of congressional staff (including state and local levels), by giving them higher salaries and bigger budgets, and doing the same for research teams, with allocation strategies to ensure they work on policy and not partisan politics.

An even more direct solution they suggest is that we the people do the research. When the crisis happens and the people cry out for reform it is far too late to start brainstorming ideas and writing impact papers. That work needs to have already been done. Otherwise, at best you’ll get half-baked ideas (hi, tax reform and health care reform 2017!) and at worst the new rules will be written directly by incumbents.

Even when everyone sees the moment coming, we don’t prepare properly. Again, see 2017 (or 2009). Garbled messes at best, no matter one’s politics. Creating good rules is hard, but it’s not seven years with the world’s economic engine on the line and we can’t even half-ass this properly level hard. Given that no one is doing a decent half-ass job on even the big things, it’s no shock the ball is dropped on stuff like reform of intellectual property or easing zoning restrictions. For all that we complain about these issues, there isn’t a carefully constructed bill waiting for its moment. There should be!

That seems like the cause. We can and should advocate for reforms, but more than that we should build trusted sources of information legislators can turn to, and draft actual legislation with actual legal language that could go into a bill at a moment’s notice. This seems like a neglected potential Effective Altruist cause, or at least neglected method.



Missing was a discussion of what regressive redistribution does to the culture. We’re not only talking about GDP growth or Gini coefficients. How would America feel without these thefts?

This wouldn’t be the full libertarian paradise of freedom and opportunity (whether or not such a thing is possible), but nor would life be nasty, brutish or short. Most would experience much better freedom and opportunity, even before the effects of greater economic growth.

The cost of living would decline. It would once again be (more) legal to get pretty good versions of life’s goods and services at pretty good prices, all of which kings of old would have killed for. People would move, explore and experiment without being excluded from work, and without all their productivity gains captured by the landlords. We’d learn by doing, and do what we could do, rather than competing for rents paid for in suffering and lost time even for those who collect them. In our free time, we’d enjoy the full fruits of civilization’s historic creativity, with most of the world’s great works available for free, or almost for free, on demand, and be free to create in turn. Inventions would still be rewarded and celebrated, but also pass to the people and be built upon.

When one did succeed on one’s merits, there would be less fear it would be taken away. Inequality by theft and connection hurts the legitimately successful most of all. They get hit by redistribution from themselves to the thieves, then get hit again by progressive redistribution to help the other victims of the thefts.

Sounds good to me.



More Dakka

Epistemic Status: Hopefully enough Dakka

Eliezer Yudkowsky’s book Inadequate Equilibria is excellent. I recommend reading it, if you haven’t done so. Three recent reviews are Scott Aaronson’s, Robin Hanson’s (which inspired You Have the Right to Think and a great discussion in its comments) and Scott Alexander’s. Alexander’s review was an excellent summary of key points, but like many he found the last part of the book, ascribing much modesty to status and prescribing how to learn when to trust yourself, less convincing.

My posts, including Zeroing Out and Leaders of Men have been attempts to extend the last part, offering additional tools. Daniel Speyer offers good concrete suggestions as well. My hope here is to offer both another concrete path to finding such opportunities, and additional justification of the central role of social control (as opposed to object-level concerns) in many modest actions and modesty arguments.

Eliezer uses several examples of civilizational inadequacy. Two central examples are the failure of the Bank of Japan and later the European Central Bank to print sufficient amounts of money, and the failure of anyone to try treating seasonal affective disorder with sufficiently intense artificial light.

In a MetaMed case, a patient suffered from a disease with a well-known reliable biomarker and a safe treatment. In studies, the treatment improved the biomarker linearly with dosage. Studies observed that sick patients whose biomarkers reached healthy levels experienced full remission. The treatment was fully safe. No one tried increasing the dose enough to reduce the biomarker to healthy levels. If they did, they never reported their results.

In his excellent post Sunset at Noon, Raymond points out Gratitude Journals:

“Rationalists obviously don’t *actually* take ideas seriously. Like, take the Gratitude Journal. This is the one peer-reviewed intervention that *actually increases your subjective well being*, and costs barely anything. And no one I know has even seriously tried it. Do literally *none* of these people care about their own happiness?”

“Huh. Do *you* keep a gratitude journal?”

“Lol. No, obviously.”

– Some Guy at the Effective Altruism Summit of 2012

Gratitude journals are awkward interventions, as Raymond found, and we need to find the details that make them our own, or they won’t work. But the active ingredient, gratitude, obviously works and is freely available. Remember the last time someone expressed gratitude to you and it made your day worse? Remember the last time you expressed gratitude to someone else, or felt gratitude about someone or something, and it made your day worse?

In my experience it happens approximately zero times. Gratitude just works, unmistakably. I once sent a single gratitude letter. It increased my baseline well-being. Then I didn’t write more. I do try to remember to feel gratitude, and express it. That helps. But I can’t think of a good reason not to do that more, or for anyone I know to not do it more.

In all four cases, our civilization has (it seems) correctly found the solution. We’ve tested it. It works. The more you do, the better it works. There’s probably a level where side effects would happen, but there’s no sign of them yet.

We know the solution. Our bullets work. We just need more. We need More (and better) (metaphorical) Dakka – rather than firing the standard number of metaphorical bullets, we need to fire more, absurdly more, whatever it takes until the enemy keels over dead.

And then we decide we’re out of bullets. We stop.

If it helps but doesn’t solve your problem, perhaps you’re not using enough.


We don’t use enough to find out how much enough would be, or what bad things it might cause. More Dakka might backfire. It also might solve your problem.

Japan’s economy didn’t have enough money. The Bank of Japan printed some. It helped a little. They could have kept printing more money until printing more money either solved their problem or started to cause other problems. They didn’t.

Yes, some countries printed too much money and very bad things happened, but no countries printed too much money because they wanted more inflation. That’s not a thing.

Doctors saw patients suffer for lack of light. They gave them light. It helped a little. They could have tried more light until it solved their problem or started causing other problems. They didn’t.

Yes, people suffer from too much sunlight, or from spending too long in tanning beds, but those are skin conditions (as far as I know), and we don’t have examples of harm from too much of this kind of artificial light, other than it being unpleasant.

Doctors saw patients suffer from a disease in direct proportion to a biomarker. They gave them a drug. It helped a little, with few if any side effects. They could have increased the dose until it either solved the problem or started causing other problems. They didn’t.

Yes, drug overdoses cause bad side effects, but we could find no record of this drug causing any bad side effects at any reasonable dosage, or any theory why it would.

People express gratitude. We are told it improves subjective well-being in studies. Our subjective well-being improves a little. We could express more gratitude, with no real downsides. Almost all of us don’t.

On that note, thanks for reading!

A decision was universally made that enough, despite obviously not being enough, was enough. ‘More’ was never tried.

This is important on two levels.


The first level is practical. If you think a problem could be solved or a situation improved by More Dakka, there’s a good chance you’re right.

Sometimes a little more is a little better. Sometimes a lot more is a lot better. Sometimes each attempt is unlikely to work, but improves your chances.

If something is a good idea, you need a reason to not try doing more of it.

No, seriously. You need a reason.

The second level is, ‘do more of what is already working and see if it works more’ is as basic as it gets. If we can’t reliably try that, we can’t reliably try anything. How could you ever say ‘If that worked someone would have tried it’?

You can’t. If no one says they tried it, probably no one tried it. There might be good reasons not to try it. There also might not. There’d still be a good chance no one tried it.

There’s also a chance someone did try it and isn’t reporting the results anywhere you can find. That doesn’t mean it didn’t work, let alone that it can never work.


Why would this be an overlooked strategy?

It sounds crazy that it could be overlooked. It’s overlooked.

Eliezer gives three tools to recognize places systems fail, using highly useful economic arguments I recommend using frequently:

1. Cases where the decision lies in the hands of people who would gain little personally, or lose out personally, if they did what was necessary to help someone else;

2. Cases where decision-makers can’t reliably learn the information they need to make decisions, even though someone else has that information; and

3. Systems that are broken in multiple places so that no one actor can make them better, even though, in principle, some magically coordinated action could move to a new stable state.

In these cases, I do not think such explanations are enough.

If the Bank of Japan didn’t print more money, that implies the Bank of Japan wasn’t sufficiently incentivized to hit their inflation target. They must have been maximizing primarily for prestige instead. I can buy that, but why didn’t they think the best way to do that was to hit the inflation target? Alexander’s suggested payoff matrix, where printing more money makes failure much worse, isn’t good enough. It can’t be central on its own. The answer was too clear, the payoff worth the odds, and they had the information, as I detail later.

Eliezer gives the model of researchers looking for citations plus grant givers looking for prestige, as the explanation for why his SAD treatment wasn’t tested. I don’t buy it. Story doesn’t make sense.

If more light worked, you’d get a lot of citations, for not much cost or effort. If you’re writing a grant, this costs little money and could help many people. It’s less prestigious to up the dosage than be original, but it’s still a big prestige win.

If you say they want to associate with high status research folk, then they won’t care about the grant contents, so it reduces to a one-factor market, where again researchers should try this.

Alexander noticed the same confusion on that one.

In the drug dosage case, Eliezer’s tools do better. No doctor takes the risk of being sued if something goes wrong, no company makes money by funding the study, it’s too expensive for a grant, and trying it on your own feels too risky. Maybe. It still does not feel like enough. The paths forward are too easy, too cheap, the payoff too large and obvious. Even one wealthy patient could break through, and it would be worth it. Yet even our patient, as far as we know, didn’t even try it and certainly didn’t report back.

The gratitude case doesn’t fit the three modes at all.


Here is my model. I hope it illuminates when to try such things yourself.

Two key insights here are The Thing and the Symbolic Representation of The Thing, and Scott Alexander’s Concept-Shaped Holes Can Be Impossible To Notice. Both are worth reading, in that order.

I’ll summarize the relevant points.

The standard amount of something, by definition, counts as the symbolic representation of the thing. The Bank of Japan ‘printed money.’ The standard SAD treatment ‘exposes people to light.’ Our patients’ doctors prescribed ‘standard drug.’ Today, various people ‘left with plenty of time,’ ‘came up with a plan,’ ‘were part of a community,’ ‘ate pizza,’ ‘listened to the other person,’ ‘focused on their breath,’ ‘bought enough nipple tops for the baby’s bottles,’ ‘did their job’ and ‘added salt and pepper.’

They got results. A little. Better than nothing. But much less than was desired.

The Bank of Australia printed enough money. Eliezer Yudkowsky exposed his wife to enough light. Our patient was told to take enough of the drug to actually work. Meanwhile, other people actually left with plenty of time, actually came up with a workable plan, actually were part of a community, ate real pizza, actually listened to another person, actually focused on their breath, bought enough nipple tops for the baby’s bottles, actually did their job, and added copious amounts of sea salt and freshly ground pepper.

Some of these are about quality rather than quantity. You could also think of that as a bigger quantity of effort, or willingness to pay more money or devote more time. Still, it’s worth noting that an important variant of ‘use more,’ ‘do more’ or ‘do more often’ is ‘do it better.’

Being part of that second group is harder than it looks:

You need to realize the thing might exist at all.

You need to realize the symbolic representation of the thing isn’t the thing.

You need to ignore the idea that you’ve done your job.

You need to actually care about solving the problem.

You need to think about the problem a little.

You need to ignore the idea that no one could blame you for not trying.

You need to not care that what you’re about to do is unusual or weird or socially awkward.

You need to not care that what you’re about to do might be high status.

You need to not care that what you’re about to do might be low status.

You need to not care that what you’re about to do might not work.

You need to not be concerned that what you’re about to do might work.

You need to not care that what you’re about to do might backfire.

You need to not care that what you’re about to do is immodest.

You need to not instinctively assume that this will backfire because attempting it would be immodest, so the world will find some way to strike you down.

You need to not care about the implicit accusation you’re making against everyone who didn’t try it.

You need to not care that what you’re about to do might be wasteful. Or inappropriate. Or weird. Or unfair. Or morally wrong. Or something.

Why is this list getting so long? What is that answer of ‘don’t do it’ doing on the bottom of the page?


Long list is long. A lot of items are related. Some will be obvious, some won’t be. Let’s go through the list.

You need to realize the thing might exist at all.

One cannot do better unless one realizes it might be possible to do better. Scott gives several examples of situations in which he doubted the existence of the thing.

You need to realize the symbolic representation of the thing isn’t the thing.

Scott gives several examples where he thought he knew what the thing was, only to find out he had no idea; what he thought was the thing was actually a symbolic representation, a pale shadow. If you think having a few friends is what a community is, it won’t occur to you to seek out a real one.

You need to ignore the idea that you’ve done your job.

There was a box marked ‘thing’. You’ve checked that box off by getting the symbolic version of the thing. It’s easy to then think you’ve done the job and are somehow done. Even if you’re doing this for yourself or someone you care about, there’s this urge to get to the end and think ‘job done’, ‘quest complete’, and not think about details. You need to realize you’re not doing the job so you can say you’ve done the job, or so you can tell yourself you’ve done the job. Even if you didn’t get what you wanted, your real job was to earn the right to tell yourself a story that you tried to get it, right?

You need to actually care about solving the problem.

You’re doing the job so the job gets done. That’s why doing the symbolic version doesn’t mean you’re done. Often people don’t care much about solving the problem. They care whether they’re responsible. They care whether socially appropriate steps have been taken.

You need to ignore the idea that no one could blame you for not trying.

Alexander notes how important this one is, and it’s really big.

People often care primarily about doing that which no one could blame them for. Being blamed or scapegoated is really bad. Even self-blame! We instinctively fear someone will discover and expose us, and make ourselves feel bad. We cover up the evidence and create justifications. Doing the normal thing means no one could blame you. If you don’t grasp that this is a thing, read as much of Atlas Shrugged as needed until you grasp that. It should only take a chapter or two, but this idea alone is worth a thousand-page book to get, if that’s what it takes. I’m not kidding.

Blame does happen. The real incentive here is big. The incentive people think they have to do this, even when the chance of being blamed is minimal, is much, much bigger.

You need to think about the problem a little.

People don’t like thinking.

You need to not care that what you’re about to do is unusual or weird or socially awkward.

There’s a primal fear of doing anything unusual or weird. More would be unusual and weird. It might be slightly socially awkward. You’d never know until it actually was awkward. That would be just awful. Can’t have that. No one is watching or cares, but some day someone might find you out and then expose you as no good. We go around being normal, only guessing which slightly weird things would get us in trouble, or that we’d need to get someone else in trouble for! So we try to do none of them. That’s what happens when not operating on object-level causal models full of gears about what will work.

You need to not care that what you’re about to do might be high status.

Doing or trying to do something high status is to claim high status. Claiming status you’re not entitled to is a good way to get into a lot of trouble. Claiming to usefully think, or to know something, is automatically high status. Are you sure you have that right?

You need to not care that what you’re about to do might be low status.

Your status would go down. That’s even worse. If it’s high status you lose, if it’s low status you also lose, and you don’t even know which one it is since no one does it! Might even be both. Better to leave the whole thing alone.

You need to not care that what you’re about to do might not work.

Failing is just awful. Even at things that are supposed to mostly fail. Even when getting ludicrous odds. Only explicitly carved-out narrow exceptions are permitted, which shrink each year. Otherwise we must, must succeed, or nothing we do will ever work and everyone will know that. I founded a company once*. It didn’t work. Now everyone knows rationalists can’t found companies. Shouldn’t have tried.

* – Well, three times.

You need to not be concerned that what you’re about to do might work.

Even worse, it might work. Then what? No idea. Does not compute. You’d have to keep doing weird thing, or advocate for weird thing. How weird would that be? What about the people you’d prove wrong? What would you even say?

You need to not care that what you’re about to do might backfire.

It might not only not work, it might have real consequences. That’s a thing. Can’t think of why that might happen. Every brainstormed risk seems highly improbable and not that big a deal. But why take that risk?

You need to not care that what you’re about to do is immodest. 

By modesty, anything you think of, that’s worth thinking, has already been thought of. Anything worth trying has been tried, anything worth doing done. Ignore that there’s a first time for everything. Who are you to claim there’s something worth trying? Who are you to claim you know better than everyone else? Did you not notice all the other people? Are you really high status enough to claim you know better than all of them? Let’s see that hero license of yours, buster. Object-level claims are status claims!

You need to not instinctively assume that this will backfire because attempting it would be immodest, so the world will find some way to strike you down. 

The world won’t let you get away with that. It will make this blow up in your face. And laugh. At you. People know this. They’ll instinctively join the conspiracy making it happen, coordinating seamlessly. Their alternative is thinking for themselves, or other people thinking for themselves rather than playing imitation games. Unthinkable. Let’s scapegoat someone and reinforce norms.

You need to not care about the implicit accusation you’re making against everyone who didn’t try it.

You’re not only calling them wrong. You’re saying the answer was in front of their face the whole time. They had an obvious solution and didn’t take it. You’re telling them they didn’t have a good reason for that. They gonna be pissed.

You need to not care that what you’re about to do might be wasteful. Or inappropriate. Or unfair. Or low status. Or lack prestige. Or be morally wrong. Or something. There’s gotta be something!

The answer is right there at the bottom of the page. This isn’t done, so don’t do it. Find a reason. If there isn’t a good one, go with what you got. Flail around as needed.

That’s what the Bank of Japan was actually afraid of. Nothing. A vague feeling they were supposed to be afraid of something, so they kept brainstorming until something sounded plausible.

Printing money might mean printing too much! The opposite is true. Not printing money now means having to print even more later, as the economy suffers.

Printing money would destroy their credibility! The opposite is true. Not printing money destroyed their credibility.

People don’t like it when we print too much money! The opposite is true. Everyone was yelling at them to print more money.

The markets don’t like it when we print too much money! The opposite is true. We have real-time data. The Nikkei goes up on talk of printing money, down on talk of not printing money, and goes wild on actual unexpected money printing. It’s almost as if the market thinks printing money is awesome and has a rational expectations model. The bond market? Rising interest rates? Not a peep.

Printing money wouldn’t be prestigious! It would hurt bank independence! The opposite is true. Not printing money forced Prime Minister Shinzo Abe to threaten them into printing more money. They were seen as failures. Everyone respects the Reserve Bank of Australia because they did print more money.

This same vague fear, combined with trivial inconveniences, is what stops the other solutions, too.

Not only are these trivial fears that shouldn’t stop us, they’re not even things that would happen. When you try the thing, almost nothing bad of this sort ever happens at all.

At all. These are low risks of shockingly mild social disapproval. Ignore them.

These worries aren’t real. They’re in your head.

They’re in my head, too. The voice of Pat Modesto is in your head. It is insidious. It says whatever it has to. It lies. It cheats. It is the opposite of useful.

If someone else has these concerns, the concerns are in their head, whispering in their ear. Don’t hold it against them. Help them.

Some such worries are real. They can point to real costs and benefits. Check! But they’re mostly trying to halt thinking about the object level, to keep you from being the nail that sticks up and gets hammered down. When someone else raises them, mostly they’re the hammer. The fears are mirages we’ve been trained and built to see.

You don’t have that problem, you say? Great! Other people do have that problem. Sympathize and try to help. Otherwise, keep doing what you’re doing, only more so. And congratulations.


My practical suggestion is that if you do, buy, or use a thing, and it seems like that was a reasonable thing to do, you should ask yourself:

Can I do more of this? Can I do this better? Put in more effort, more time and/or more money? Might that do the job better? Could that be a good idea? Could that be worth it? How much more? How much better?

Make a quick object level model of what would happen. See what it looks like. Discount your chances a little if no one does it, but only a little. Maybe half, tops. Less if those who succeeded wouldn’t say anything. In some cases, the thing you’re about to try is actually done all the time, but no one talks about it. If you suspect that, definitely try it.

You’ll hear the voice. This isn’t done. There must be a reason. When you hear that, get excited. You might be on to something.

If you’re getting odds to try, try. Use the try harder, Luke! You can do this. Pull out More Dakka.

It’s also worth looking back on things you’ve done in the past and asking the same question.

I’ve linked several times to the Challenging the Difficult sequence, but none of this need be difficult. Often all that’s needed, but never comes, is an ordinary effort.

The bigger picture point is also important. These are the most obvious things. Those bad reasons stop actual everyone from trying things that cost little, on any level, with little risk, on any level, and that carry huge benefits. For other things, they stop almost everyone. When someone does try them and reports back that it worked, they’re ignored.

Something possibly being slightly socially awkward, or causing a likely nominal failure, acts as a veto. Rationalizations for this are created as needed.

Adding that to the economic model of inadequate equilibria, and the fact that almost no one got as far as considering this idea at all, is it any wonder that you can beat ‘consensus’ by thinking of and trying object-level things?

Why wouldn’t that work?



You Have the Right to Think

Epistemic Status: Public service announcement. We will then return to regularly scheduled programming.

Written partly as a response to Robin Hanson’s Why Be Contrarian, itself responding to the book Inadequate Equilibria by Eliezer Yudkowsky.

Warning: Applause lights incoming. I’m aware. Sorry. Seemed necessary.

We the people, in order to accomplish something, do declare:

You have the right to think.

You have the right to disagree with people where your model of the world disagrees.

You have the right to disagree with the majority of humanity, or the majority of smart people who have seriously looked in the direction of the problem.

You have the right to disagree with ‘expert opinion’.

You have the right to decide which experts are probably right when they disagree.

You have the right to disagree with ‘experts’ even when they agree.

You have the right to disagree with real experts that all agree, given sufficient evidence.

You have the right to disagree with real honest, hardworking, doing-the-best-they-can experts that all agree, even if they wouldn’t listen to you, because it’s not about whether they’re messing up.

You have the right to have an opinion even if doing a lot of other work would likely change that opinion in an unknown direction.

You have the right to have an opinion even if the task ‘find the real experts and get their opinions’ would likely change that opinion.

You have the right to update your beliefs based on your observations.

You have the right to update your beliefs based on your analysis of the object level.

You have the right to update your beliefs based on your analysis of object-level arguments and analysis.

You have the right to update your beliefs based on non-object level reasoning, on any meta level.

You have the right to disagree with parts of systems smarter than you, that you could not duplicate.

You have the right to use and update on your own meta-rationality.

You have the right to believe that your meta-rationality is superior to most others’ meta-rationality.

You have the right to use as sufficient justification of that belief that you know what meta-rationality is and have asked whether yours is superior.

You have the right to believe the object level, or your analysis thereof, if you put in the work, without superior meta-rationality.

You have the right to believe that someone else has superior meta-rationality and all your facts and reasoning, and still disagree with them.

You have the right to believe you care about truth a lot more than most people.

You have the right to actually care about truth a lot more than most people.

You have the right to believe that most people do care about truth, but also many other things.

You have the right to believe that much important work is being and has been attempted by exactly zero people, and you can beat zero people.

You have the right to believe that many promising simple things never get tried, with no practical or legal barrier in the way.


You have the right to disagree despite the possible existence of a group to whom you would be wise to defer, or claims by others to have found such a group.

You have the right to update your beliefs about the world based on clues to others’ anything, including but not limited to meta-rationality, motives including financial and social incentives, intelligence, track record and how much attention they’re paying.

You have the right to realize the modesty arguments in your head are mostly not about truth and not useful arguments to have in your head.

You have the right to realize the modesty arguments others make are mostly not about truth.

You have the right to believe that the modesty arguments that do work in theory mostly either don’t hold in practice or involve specific other people.

You have the right to not assume the burden of proof when confronted with a modesty argument.

You have the right to not answer unjustified isolated demands for rigor, whether or not they take the form of a modesty argument.

You have the right, when someone challenges your beliefs via reference class tennis, to ignore them.

You have the right to disagree even when others would not, given your facts and reasoning, update their beliefs in your direction.

You have the right to share your disagreement with others even when you cannot reasonably provide evidence you expect them to find convincing.

You do not need a ‘disagreement license,’ of any kind, implied or actual, to do any of this disagreeing. To the extent that you think you need one, I hereby grant one to you. I also grant a ‘hero license’, and a license license to allow you to grant yourself additional such licenses if you are asked to produce one.

You do not need a license for anything except when required by enforceable law.

You have the right to be responsible with these rights and not overuse them, realizing that disagreements should be the exception and not the rule.

You have the right to encourage others to use these rights.

You have the right to defend these rights.

Your rights are not limited to small or personal matters, or areas of your expertise, or where you can point to specific institutional failures.

Congress has the right to enforce these articles by appropriate legislation.

Oh, and by right? I meant duty.
