Book Review: The Captured Economy

Epistemic Status: The choir

On Tyler Cowen’s claim that it was an important book, I read The Captured Economy by Brink Lindsey and Steven Teles. Its thesis is that regressive regulation is strangling our economy and increasing inequality. They claim that the damage from such policies is larger than we realize, and suggest structural reforms to start fixing the problem.

They focus on four issues: Financial regulation, zoning and land use restrictions, patent and copyright law, and occupational licensing.

I already strongly agreed on all four, although not on all the details. No reasonable person could, at this point, claim the regulations in question have not been subject to regulatory capture, and extended far beyond any worthwhile public interest.

This review advocates for reform of those policies. This is as political as I hope this blog ever gets. Politics remains the mindkiller. Down that road lies madness.

The book updated me on the scope of the damage, and on how to improve policy.

While I liked the book, I had three problems.

The first was the presumption of the centrality of inequality, as opposed to deadweight loss and economic growth. I hate to reinforce the framing that inequality is the thing to be concerned about.

The second was that it played somewhat fast and loose with its arguments. It used the trick of comparing the ‘top X’ to the ‘bottom X’ things and then being shocked that the two were not equal. It used the frame that calling intellectual property ‘property’ was a trick, all but calling all intellectual property theft. Their analysis of financial regulation suffered from a lack of insider knowledge, and their case for zoning relied on assumptions that seem too strong.

The third was not addressing the legitimate cases for the policies the book opposed, or what the transition away from them would look like. Occupational licensing has gone way too far, but if you’re going to target lawyers and doctors (as you should, better to go after the real culprits and Play in Hard Mode) then you’ll need to explain why the alternative isn’t madness. Similar objections apply in other sections.

On the margin, the cases are ironclad. I knew I would know most of it already, but there was some new material even for me.

I recommend this book if and only if you want a book-length case for its thesis. Otherwise, this review is sufficient. The arguments and facts are not new.

Would you rather read an important book, an impactful book or a good book?


Financial Regulation

Their basic premises are that financial institutions are permitted too much leverage, given discounted implicit and explicit government guarantees, and set up to skim large profits off subsidized retirement accounts and mortgages. They don’t go over how current regulations form barriers to entry, especially for banks.

The call for decreasing bad regulation rather than new rules to offset existing bad regulation is refreshing. Restricting leverage ratios and charging market prices for deposit and other insurance don’t add complexity. I do worry about their love for Dodd-Frank, but this mostly seems like a ‘take what we can get’ attitude.

They especially blame Basel I’s lax requirements for worsening the financial crisis.


Land Use and Zoning

Restrictive zoning is enormously destructive. Housing costs in New York and greater San Francisco are ludicrous, preventing productivity gains from migration. The surplus produced in cities is largely eaten by landlords. They estimate 0.2% GDP growth per year is being lost to this effect, which adds up fast and seems a reasonable estimate. They only hint at how this affects the culture. The case for building more housing is easy. The case for building it where it would be most valuable is easy. They handle both well.

The principled case against it, as opposed to the rent-seeking case, is absent. They do not consider that housing can be an exclusionary or positional good, that cities lack sufficient infrastructure to support additional people, that such construction is a taking of others’ property, or other ‘good’ objections. These complications require a response, but not on the current margin. If you think Palo Alto has enough housing stock, I’m guessing you own some of that housing stock.

The book’s suggestion is to require zoning to allow increases in density, but let local authorities decide exactly where that density goes. Implementation details need to be worked out for such ideas to become practical, important work that it seems is not being done.

An additional reform they don’t mention is to allow existing buildings to be renovated or re-purposed without meeting new regulations and safety rules that would make any use uneconomic, wiping out entire communities.

A fifth section of the book might talk about regulations surrounding infrastructure.


Intellectual Property

Among the people I know, I am on the extreme end of believing intellectual property is real property, and violations of intellectual property are not ethical or acceptable. I respect most intellectual property ‘in the dark’ on principle. Yes I am judging all of you for not doing this. I certainly respect IP far more than the book’s authors.

The current state of patent and copyright law, however, is far beyond what anyone I know, including me, thinks is reasonable. Things are out of hand and must be scaled back.

The book does a good job explaining how this came to be, and showing the current madness does not increase innovation or creation on the margin, in case anyone disagreed with that.

If it were up to me, I would scale back maximum time on copyrights to about 10 years (including retroactively), require active registration, and also require mandatory licensing as we do for music on the radio. For patents, I would make them harder to get, harder to enforce, require mandatory licensing, and require owners to set a buyout price that the government could pay to invalidate the patent. Then I’d tax that price above some threshold. That’s harsher than I would have been before reading the book.

The book only superficially discusses the benefits of IP protection. It brushes off their value, arguing we didn’t have such protections in earlier eras, and treats calling it ‘property’ as a propaganda trick. I don’t think that is fair. Different eras face different problems and different technological landscapes. I wish the book had seriously engaged with such issues.

I have another idea about how to help with such issues, which I will share some time.


Occupational Licensing

I love that they go straight for lawyers and doctors. If you are going to go after occupational licensing, go after it. In the comments for Hero Licensing, I was challenged by ‘are you going to let people be doctors or lawyers without a license, too?’ I was tempted to reply “Yes.” The book pulls no punches, painting doctors and lawyers as guild members whose conversations long ago ended in a conspiracy against the public – and all would-be lawyers and would-be doctors.

With doctors and lawyers, things are out of hand, but there’s a real case to be made that the public needs protecting. For those giving manicures and selling burial caskets, and such, not so much. If you think interior decorators need an occupational license, I have a guess what your profession is. In many cases, the licensing requirements are mind-blowing to those who don’t understand how toxic such dynamics can be. These restrictions prevent people from working in order to allow insiders to form a cartel and steal from us.

Lawyers use rules to control supply and inflate costs (e.g. lawyers legally have to own and run their firms, which kind of does require someone to have been mailed their villain card), pushing up legal fees far beyond those in other countries. Doctors are so stingy with legally allocating residency slots that a quarter of all graduates from US medical schools don’t match to a residency. That’s completely insane.

At a minimum, we should ban occupational licenses for activities that don’t risk serious harm. If you want to decorate an interior, or sell a wooden box, go for it. Also at a minimum, we should ensure enough supply for professions we do license. There’s no excuse, for example, to not make funding available for enough residency slots to support every graduate of a US medical school and every worthy foreign applicant (as determined by the hospitals involved), and that change alone would greatly improve our health care system.

I know of two worthy objections to abolishing licensing entirely.

The first objection is that such work done badly is dangerous. The book doesn’t get into solutions for this, such as requiring insurance (that unqualified people would be unable to get), or reputation systems, or simply market forces.

The better objection is that such substandard care or representation could be forced upon the poor or unsuspecting, or upon those who show up to the wrong hospital at the wrong time. Public defenders and emergency room doctors are forced upon people, and we need to ensure quality.

But while the authors don’t make a strong case that we can completely eliminate licensing, they don’t have to. We don’t need impractical libertarian should-we-get-rid-of-drivers-licenses purity/bravery debates. We can get rid of most of the damage such rules do while retaining most of the benefits, especially if we fix our supply problems, by limiting scope to narrow ranges of activity of a few professions, even if that means some amount of regulatory capture. I’d happily settle for that as would the authors. There’s no need to even approach truly dangerous waters.

Again, I understand limiting book scope. Brevity is important.


Laying the Groundwork

I found the final section most informative. What’s a made-of-gears model of how these rules get into place? What would help?

I already understood special interests caring a lot about an issue and applying concerted pressure in places where no one else cares enough to pay attention or fight back. Classic slow but inevitable regulatory capture. The book suggests structural reforms that make it harder to change the rules while no one is paying attention, but admits this won’t be enough.

The more interesting intervention they suggest is funding think tanks and academic departments to create position papers, people to call for information, impact analyses and, most importantly, full draft legislation ready to go when the moment arrives.

As they model the legislative process, professional cartels (er, organizations) and industries don’t bribe politicians with money so much as with know-how. Industry lobbyists and experts tell politicians what impact various laws would have, and provide templates and ideas. Without sufficient staff or research available elsewhere, lawmakers turn to lobbyists for information and to avoid mistakes.

This springs the modesty trap. Most industry information is mostly true, if slanted. If you can’t do your own work, the only option is to trust them. That becomes a habit. Years later, they own the entire space.

The authors propose increasing the quantity and quality of congressional staff (including state and local levels), by giving them higher salaries and bigger budgets, and doing the same for research teams, with allocation strategies to ensure they work on policy and not partisan politics.

An even more direct solution they suggest is that we the people do the research. When the crisis happens and the people cry out for reform it is far too late to start brainstorming ideas and writing impact papers. That work needs to have already been done. Otherwise, at best you’ll get half baked ideas (hi, tax reform and health care reform 2017!) and at worst the new rules will be written directly by incumbents.

Even when everyone sees the moment coming, we don’t prepare properly. Again, see 2017 (or 2009). Garbled messes at best, no matter one’s politics. Creating good rules is hard, but it’s not seven years with the world’s economic engine on the line and we can’t even half-ass this properly level hard. Given that no one is doing a decent half-ass job on even the big things, it’s no shock the ball is dropped on stuff like reform of intellectual property or easing zoning restrictions. For all that we complain about these issues, there isn’t a carefully constructed bill waiting for its moment. There should be!

That seems like the cause. We can and should advocate for reforms, but more than that we should build trusted sources of information legislators can turn to, and draft actual legislation with actual legal language that could go into a bill at a moment’s notice. This seems like a neglected potential Effective Altruist cause, or at least neglected method.



Missing was a discussion of what regressive redistribution does to the culture. We’re not only talking about GDP growth or Gini coefficients. How would America feel without these thefts?

This wouldn’t be the full libertarian paradise of freedom and opportunity (whether or not such a thing is possible), but nor would life be nasty, brutish or short. Most would experience much better freedom and opportunity, even before the effects of greater economic growth.

The cost of living would decline. It would once again be (more) legal to get pretty good versions of life’s goods and services at pretty good prices, all of which kings of old would have killed for. People would move, explore and experiment without being excluded from work, and without all their productivity gains captured by the landlords. We’d learn by doing, and do what we could do, rather than competing for rents paid for in suffering and lost time even for those who collect them. In our free time, we’d enjoy the full fruits of civilization’s historic creativity, with most of the world’s great works available for free, or almost for free, on demand, and be free to create in turn. Inventions would still be rewarded and celebrated, but also pass to the people and be built upon.

When one did succeed on one’s merits, there would be less fear it would be taken away. Inequality by theft and connection hurts the legitimately successful most of all. They get hit by redistribution from themselves to the thieves, then get hit again by progressive redistribution to help the other victims of the thefts.

Sounds good to me.



More Dakka

Epistemic Status: Hopefully enough Dakka

Eliezer Yudkowsky’s book Inadequate Equilibria is excellent. I recommend reading it, if you haven’t done so. Three recent reviews are Scott Aaronson’s, Robin Hanson’s (which inspired You Have the Right to Think and a great discussion in its comments) and Scott Alexander’s. Alexander’s review was an excellent summary of key points, but like many he found the last part of the book, ascribing much modesty to status and prescribing how to learn when to trust yourself, less convincing.

My posts, including Zeroing Out and Leaders of Men have been attempts to extend the last part, offering additional tools. Daniel Speyer offers good concrete suggestions as well. My hope here is to offer both another concrete path to finding such opportunities, and additional justification of the central role of social control (as opposed to object-level concerns) in many modest actions and modesty arguments.

Eliezer uses several examples of civilizational inadequacy. Two central examples are the failure of the Bank of Japan and later the European Central Bank to print sufficient amounts of money, and the failure of anyone to try treating seasonal affective disorder with sufficiently intense artificial light.

In a MetaMed case, a patient suffered from a disease with a well-known reliable biomarker and a safe treatment. In studies, the treatment improved the biomarker linearly with dosage. Studies observed that sick patients whose biomarkers reached healthy levels experienced full remission. The treatment was fully safe. No one tried increasing the dose enough to reduce the biomarker to healthy levels. If they did, they never reported their results.

In his excellent post Sunset at Noon, Raymond points out Gratitude Journals:

“Rationalists obviously don’t *actually* take ideas seriously. Like, take the Gratitude Journal. This is the one peer-reviewed intervention that *actually increases your subjective well being*, and costs barely anything. And no one I know has even seriously tried it. Do literally *none* of these people care about their own happiness?”

“Huh. Do *you* keep a gratitude journal?”

“Lol. No, obviously.”

– Some Guy at the Effective Altruism Summit of 2012

Gratitude journals are awkward interventions, as Raymond found, and we need to find details that make it our own, or it won’t work. But the active ingredient, gratitude, obviously works and is freely available. Remember the last time someone expressed gratitude to you and it made your day worse? Remember the last time you expressed gratitude to someone else, or felt gratitude about someone or something, and it made your day worse?

In my experience it happens approximately zero times. Gratitude just works, unmistakably. I once sent a single gratitude letter. It increased my baseline well-being. Then I didn’t write more. I do try to remember to feel gratitude, and express it. That helps. But I can’t think of a good reason not to do that more, or for anyone I know to not do it more.

In all four cases, our civilization has (it seems) correctly found the solution. We’ve tested it. It works. The more you do, the better it works. There’s probably a level where side effects would happen, but there’s no sign of them yet.

We know the solution. Our bullets work. We just need more. We need More (and better) (metaphorical) Dakka – rather than firing the standard number of metaphorical bullets, we need to fire more, absurdly more, whatever it takes until the enemy keels over dead.

And then we decide we’re out of bullets. We stop.

If it helps but doesn’t solve your problem, perhaps you’re not using enough.


We don’t use enough to find out how much enough would be, or what bad things it might cause. More Dakka might backfire. It also might solve your problem.

The Bank of Japan didn’t have enough money. They printed some. It helped a little. They could have kept printing more money until printing more money either solves their problem or starts to cause other problems. They didn’t.

Yes, some countries printed too much money and very bad things happened, but no country printed too much money because it wanted more inflation. That’s not a thing.

Doctors saw patients suffer for lack of light. They gave them light. It helped a little. They could have tried more light until it solved their problem or started causing other problems. They didn’t.

Yes, people suffer from too much sunlight, or spending too long in tanning beds, but those are skin conditions (as far as I know) and we don’t have examples of too much of this kind of artificial light, other than it being unpleasant.

Doctors saw patients suffer from a disease in direct proportion to a biomarker. They gave them a drug. It helped a little, with few if any side effects. They could have increased the dose until it either solved the problem or started causing other problems. They didn’t.

Yes, drug overdoses cause bad side effects, but we could find no record of this drug causing any bad side effects at any reasonable dosage, or any theory why it would.

People express gratitude. We are told it improves subjective well-being in studies. Our subjective well-being improves a little. We could express more gratitude, with no real downsides. Almost all of us don’t.

On that note, thanks for reading!

A decision was universally made that enough, despite obviously not being enough, was enough. ‘More’ was never tried.

This is important on two levels.


The first level is practical. If you think a problem could be solved or a situation improved by More Dakka, there’s a good chance you’re right.

Sometimes a little more is a little better. Sometimes a lot more is a lot better. Sometimes each attempt is unlikely to work, but improves your chances.

If something is a good idea, you need a reason to not try doing more of it.

No, seriously. You need a reason.

The second level is, ‘do more of what is already working and see if it works more’ is as basic as it gets. If we can’t reliably try that, we can’t reliably try anything. How could you ever say ‘If that worked someone would have tried it’?

You can’t. If no one says they tried it, probably no one tried it. There might be good reasons not to try it. There also might not. There’d still be a good chance no one tried it.

There’s also a chance someone did try it and isn’t reporting the results anywhere you can find. That doesn’t mean it didn’t work, let alone that it can never work.


Why would this be an overlooked strategy?

It sounds crazy that it could be overlooked. It’s overlooked.

Eliezer gives three tools to recognize places systems fail, using highly useful economic arguments I recommend using frequently:

1. Cases where the decision lies in the hands of people who would gain little personally, or lose out personally, if they did what was necessary to help someone else;

2. Cases where decision-makers can’t reliably learn the information they need to make decisions, even though someone else has that information;

3. Systems that are broken in multiple places so that no one actor can make them better, even though, in principle, some magically coordinated action could move to a new stable state.

In these cases, I do not think such explanations are enough.

If the Bank of Japan didn’t print more money, that implies the Bank of Japan wasn’t sufficiently incentivized to hit their inflation target. They must have been maximizing primarily for prestige instead. I can buy that, but why didn’t they think the best way to do that was to hit the inflation target? Alexander’s suggested payoff matrix, where printing more money makes failure much worse, isn’t good enough. It can’t be central on its own. The answer was too clear, the payoff worth the odds, and they had the information, as I detail later.

Eliezer gives the model of researchers looking for citations plus grant givers looking for prestige, as the explanation for why his SAD treatment wasn’t tested. I don’t buy it. Story doesn’t make sense.

If more light worked, you’d get a lot of citations, for not much cost or effort. If you’re writing a grant, this costs little money and could help many people. It’s less prestigious to up the dosage than be original, but it’s still a big prestige win.

If you say they want to associate with high status research folk, then they won’t care about the grant contents, so it reduces to a one-factor market, where again researchers should try this.

Alexander noticed the same confusion on that one.

In the drug dosage case, Eliezer’s tools do better. No doctor takes the risk of being sued if something goes wrong, and no company makes money by funding the study and it’s too expensive for a grant, and trying it on your own feels too risky. Maybe. It still does not feel like enough. The paths forward are too easy, too cheap, the payoff too large and obvious. Even one wealthy patient could break through, and it would be worth it. Yet even our patient, as far as we know, didn’t even try it and certainly didn’t report back.

The gratitude case doesn’t fit the three modes at all.


Here is my model. I hope it illuminates when to try such things yourself.

Two key insights here are The Thing and the Symbolic Representation of The Thing, and Scott Alexander’s Concept-Shaped Holes Can Be Impossible To Notice. Both are worth reading, in that order.

I’ll summarize the relevant points.

The standard amount of something, by definition, counts as the symbolic representation of the thing. The Bank of Japan ‘printed money.’ The standard SAD treatment ‘exposes people to light.’ Our patients’ doctors prescribed ‘standard drug.’ Today, various people ‘left with plenty of time,’ ‘came up with a plan,’ ‘were part of a community,’ ‘ate pizza,’ ‘listened to the other person,’ ‘focused on their breath,’ ‘bought enough nipple tops for the baby’s bottles,’ ‘did their job’ and ‘added salt and pepper.’

They got results. A little. Better than nothing. But much less than was desired.

The Reserve Bank of Australia printed enough money. Eliezer Yudkowsky exposed his wife to enough light. Our patient was told to take enough of the drug to actually work. Meanwhile, other people actually left with plenty of time, actually came up with a workable plan, actually were part of a community, ate real pizza, actually listened to another person, actually focused on their breath, bought enough nipple tops for the baby’s bottles, actually did their job, and added copious amounts of sea salt and freshly ground pepper.

Some of these are about quality rather than quantity. You could also think of that as a bigger quantity of effort, or willingness to pay more money or devote more time. Still, it’s worth noting that an important variant of ‘use more,’ ‘do more’ or ‘do more often’ is ‘do it better.’

Being part of that second group is harder than it looks:

You need to realize the thing might exist at all.

You need to realize the symbolic representation of the thing isn’t the thing.

You need to ignore the idea that you’ve done your job.

You need to actually care about solving the problem.

You need to think about the problem a little.

You need to ignore the idea that no one could blame you for not trying.

You need to not care that what you’re about to do is unusual or weird or socially awkward.

You need to not care that what you’re about to do might be high status.

You need to not care that what you’re about to do might be low status.

You need to not care that what you’re about to do might not work.

You need to not be concerned that what you’re about to do might work.

You need to not care that what you’re about to do might backfire.

You need to not care that what you’re about to do is immodest.

You need to not instinctively assume that this will backfire because attempting it would be immodest, so the world will find some way to strike you down.

You need to not care about the implicit accusation you’re making against everyone who didn’t try it.

You need to not care that what you’re about to do might be wasteful. Or inappropriate. Or weird. Or unfair. Or morally wrong. Or something.

Why is this list getting so long? What is that answer of ‘don’t do it’ doing on the bottom of the page?


Long list is long. A lot of items are related. Some will be obvious, some won’t be. Let’s go through the list.

You need to realize the thing might exist at all.

One cannot do better unless one realizes it might be possible to do better. Scott gives several examples of situations in which he doubted the existence of the thing.

You need to realize the symbolic representation of the thing isn’t the thing.

Scott gives several examples where he thought he knew what the thing was, only to find out he had no idea; what he thought was the thing was actually a symbolic representation, a pale shadow. If you think having a few friends is what a community is, it won’t occur to you to seek out a real one.

You need to ignore the idea that you’ve done your job.

There was a box marked ‘thing’. You’ve checked that box off by getting the symbolic version of the thing. It’s easy to then think you’ve done the job and are somehow done. Even if you’re doing this for yourself or someone you care about, there’s this urge to think ‘job done’, ‘quest complete’, and not think about the details. You need to realize you’re not doing the job so you can say you’ve done the job, or so you can tell yourself you’ve done the job. Even if you didn’t get what you wanted, your real job was to earn the right to tell yourself a story that you tried to get it, right?

You need to actually care about solving the problem.

You’re doing the job so the job gets done. That’s why doing the symbolic version doesn’t mean you’re done. Often people don’t care much about solving the problem. They care whether they’re responsible. They care whether socially appropriate steps have been taken.

You need to ignore the idea that no one could blame you for not trying.

Alexander notes how important this one is, and it’s really big.

People often care primarily about doing that which no one could blame them for. Being blamed or scapegoated is really bad. Even self-blame! We instinctively fear someone will discover and expose us, and make ourselves feel bad. We cover up the evidence and create justifications. Doing the normal means no one could blame you. If you don’t grasp that this is a thing, read as much of Atlas Shrugged as needed until you grasp that. It should only take a chapter or two, but this idea alone is worth a thousand-page book in order to get, if that’s what it takes. I’m not kidding.

Blame does happen. The real incentive here is big. The incentive people think they have to do this, even when the chance of being blamed is minimal, is much, much bigger.

You need to think about the problem a little.

People don’t like thinking.

You need to not care that what you’re about to do is unusual or weird or socially awkward.

There’s a primal fear of doing anything unusual or weird. More would be unusual and weird. It might be slightly socially awkward. You’d never know until it actually was awkward. That would be just awful. Can’t have that. No one is watching or cares, but some day someone might find you out and then expose you as no good. We go around being normal, only guessing which slightly weird things would get us in trouble, or that we’d need to get someone else in trouble for! So we try to do none of them. That’s what happens when not operating on object-level causal models full of gears about what will work.

You need to not care that what you’re about to do might be high status.

Doing or trying to do something high status is to claim high status. Claiming status you’re not entitled to is a good way to get into a lot of trouble. Claiming to usefully think, or to know something, is automatically high status. Are you sure you have that right?

You need to not care that what you’re about to do might be low status.

Your status would go down. That’s even worse. If it’s high status you lose, if it’s low status you also lose, and you don’t even know which one it is since no one does it! Might even be both. Better to leave the whole thing alone.

You need to not care that what you’re about to do might not work.

Failing is just awful. Even things that are supposed to mostly fail. Even getting ludicrous odds. Only explicitly permitted narrow exceptions are permitted, which shrink each year. Otherwise we must, must succeed, or nothing we do will ever work and everyone will know that. I founded a company once*. It didn’t work. Now everyone knows rationalists can’t found companies. Shouldn’t have tried.

* – Well, three times.

You need to not be concerned that what you’re about to do might work.

Even worse, it might work. Then what? No idea. Does not compute. You’d have to keep doing weird thing, or advocate for weird thing. How weird would that be? What about the people you’d prove wrong? What would you even say?

You need to not care that what you’re about to do might backfire.

It might not only not work, it might have real consequences. That’s a thing. Can’t think of why that might happen. Every brainstormed risk seems highly improbable and not that big a deal. But why take that risk?

You need to not care that what you’re about to do is immodest. 

By modesty, anything you think of, that’s worth thinking, has been thought of. Anything worth trying has been tried, anything worth doing done. Ignore that there’s a first time for everything. Who are you to claim there’s something worth trying? Who are you to claim you know better than everyone else? Did you not notice all the other people? Are you really high status enough to claim you know better than all of them? Let’s see that hero license of yours, buster. Object-level claims are status claims!

You need to not instinctively assume that this will backfire because attempting it would be immodest, so the world will find some way to strike you down. 

The world won’t let you get away with that. It will make this blow up in your face. And laugh. At you. People know this. They’ll instinctively join the conspiracy making it happen, coordinating seamlessly. The alternative is thinking for themselves, or letting other people think for themselves rather than playing imitation games. Unthinkable. Let’s scapegoat someone and reinforce norms.

You need to not care about the implicit accusation you’re making against everyone who didn’t try it.

You’re not only calling them wrong. You’re saying the answer was in front of their face the whole time. They had an obvious solution and didn’t take it. You’re telling them they didn’t have a good reason for that. They gonna be pissed.

You need to not care that what you’re about to do might be wasteful. Or inappropriate. Or unfair. Or low status. Or lack prestige. Or be morally wrong. Or something. There’s gotta be something!

The answer is right there at the bottom of the page. This isn’t done, so don’t do it. Find a reason. If there isn’t a good one, go with what you got. Flail around as needed.

That’s what the Bank of Japan was actually afraid of. Nothing. A vague feeling they were supposed to be afraid of something, so they kept brainstorming until something sounded plausible.

Printing money might mean printing too much! The opposite is true. Not printing money now means having to print even more later, as the economy suffers.

Printing money would destroy their credibility! The opposite is true. Not printing money destroyed their credibility.

People don’t like it when we print too much money! The opposite is true. Everyone was yelling at them to print more money.

The markets don’t like it when we print too much money! The opposite is true. We have real time data. The Nikkei goes up on talk of printing money, down on talk of not printing money, and goes wild on actual unexpected money printing. It’s almost as if the market thinks printing money is awesome and has a rational expectations model. The bond market? Rising interest rates? Not a peep.

Printing money wouldn’t be prestigious! It would hurt bank independence! The opposite is true. Not printing money forced Prime Minister Shinzo Abe to threaten them into printing more money. They were seen as failures. Everyone respects the Reserve Bank of Australia because they did print more money.

This same vague fear, combined with trivial inconveniences, is what stops the other solutions, too.

Not only are these trivial fears that shouldn’t stop us, they’re not even things that would happen. When you try the thing, almost nothing bad of this sort ever happens at all.

At all. These are low risks of shockingly mild social disapproval. Ignore.

These worries aren’t real. They’re in your head. 

They’re in my head, too. The voice of Pat Modesto is in your head. It is insidious. It says whatever it has to. It lies. It cheats. It is the opposite of useful.

If someone else has these concerns, the concerns are in their head, whispering in their ear. Don’t hold it against them. Help them.

Some such worries are real. They can point to real costs and benefits. Check! But they’re mostly trying to halt thinking about the object level, to keep you from being the nail that sticks up and gets hammered down. When someone else raises them, mostly they’re the hammer. The fears are mirages we’ve been trained and built to see.

You don’t have that problem, you say? Great! Other people do have that problem. Sympathize and try to help. Otherwise, keep doing what you’re doing, only more so. And congratulations.


My practical suggestion is that if you do, buy, or use a thing, and it seems like that was a reasonable thing to do, you should ask yourself:

Can I do more of this? Can I do this better? Put in more effort, more time and/or more money? Might that do the job better? Could that be a good idea? Could that be worth it? How much more? How much better?

Make a quick object level model of what would happen. See what it looks like. Discount your chances a little if no one does it, but only a little. Maybe half, tops. Less if those who succeeded wouldn’t say anything. In some cases, the thing you’re about to try is actually done all the time, but no one talks about it. If you suspect that, definitely try it.

You’ll hear the voice. This isn’t done. There must be a reason. When you hear that, get excited. You might be on to something.

If you’re getting odds to try, try. Use the try harder, Luke! You can do this. Pull out More Dakka.

It’s also worth looking back on things you’ve done in the past and asking the same question.

I’ve linked several times to the Challenging the Difficult sequence, but none of this need be difficult. Often all that’s needed, but never comes, is an ordinary effort.

The bigger picture point is also important. These are the most obvious things. Those bad reasons stop actual everyone from trying things that cost little, on any level, with little risk, on any level, and that carry huge benefits. For other things, they stop almost everyone. When someone does try them and reports back that it worked, they’re ignored.

Something possibly being slightly socially awkward, or causing a likely nominal failure, acts as a veto. Rationalizations for this are created as needed.

Adding that to the economic model of inadequate equilibria, and the fact that almost no one got as far as considering this idea at all, is it any wonder that you can beat ‘consensus’ by thinking of and trying object-level things?

Why wouldn’t that work?






Posted in Uncategorized | 23 Comments

You Have the Right to Think

Epistemic Status: Public service announcement. We will then return to regularly scheduled programming.

Written partly as a response to (Robin Hanson): Why be Contrarian, responding to the book Inadequate Equilibria by Eliezer Yudkowsky

Warning: Applause lights incoming. I’m aware. Sorry. Seemed necessary.

We the people, in order to accomplish something, do declare:

You have the right to think.

You have the right to disagree with people where your model of the world disagrees.

You have the right to disagree with the majority of humanity, or the majority of smart people who have seriously looked in the direction of the problem.

You have the right to disagree with ‘expert opinion’.

You have the right to decide which experts are probably right when they disagree.

You have the right to disagree with ‘experts’ even when they agree.

You have the right to disagree with real experts that all agree, given sufficient evidence.

You have the right to disagree with real honest, hardworking, doing-the-best-they-can experts that all agree, even if they wouldn’t listen to you, because it’s not about whether they’re messing up.

You have the right to have an opinion even if doing a lot of other work would likely change that opinion in an unknown direction.

You have the right to have an opinion even if the task ‘find the real experts and get their opinions’ would likely change that opinion.

You have the right to update your beliefs based on your observations.

You have the right to update your beliefs based on your analysis of the object level.

You have the right to update your beliefs based on your analysis of object-level arguments and analysis.

You have the right to update your beliefs based on non-object level reasoning, on any meta level.

You have the right to disagree with parts of systems smarter than you, that you could not duplicate.

You have the right to use and update on your own meta-rationality.

You have the right to believe that your meta-rationality is superior to most others’ meta-rationality.

You have the right to use as sufficient justification of that belief that you know what meta-rationality is and have asked whether yours is superior.

You have the right to believe the object level, or your analysis thereof, if you put in the work, without superior meta-rationality.

You have the right to believe that someone else has superior meta-rationality and all your facts and reasoning, and still disagree with them.

You have the right to believe you care about truth a lot more than most people.

You have the right to actually care about truth a lot more than most people.

You have the right to believe that most people do care about truth, but also many other things.

You have the right to believe that much important work is being, and has been, attempted by exactly zero people, and you can beat zero people.

You have the right to believe that many promising simple things never get tried, with no practical or legal barrier in the way.


You have the right to disagree despite the possible existence of a group to whom you would be wise to defer, or claims by others to have found such a group.

You have the right to update your beliefs about the world based on clues to others’ anything, including but not limited to meta-rationality, motives including financial and social incentives, intelligence, track record and how much attention they’re paying.

You have the right to realize the modesty arguments in your head are mostly not about truth and not useful arguments to have in your head.

You have the right to realize the modesty arguments others make are mostly not about truth.

You have the right to believe that the modesty arguments that do work in theory mostly either don’t hold in practice or involve specific other people.

You have the right to not assume the burden of proof when confronted with a modesty argument.

You have the right to not answer unjustified isolated demands for rigor, whether or not they take the form of a modesty argument.

You have the right, when someone challenges your beliefs via reference class tennis, to ignore them.

You have the right to disagree even when others would not, given your facts and reasoning, update their beliefs in your direction.

You have the right to share your disagreement with others even when you cannot reasonably provide evidence you expect them to find convincing.

You do not need a ‘disagreement license,’ of any kind, implied or actual, to do any of this disagreeing. To the extent that you think you need one, I hereby grant one to you. I also grant a ‘hero license‘, and a license license to allow you to grant yourself additional such licenses if you are asked to produce one.

You do not need a license for anything except when required by enforceable law.

You have the right to be responsible with these rights and not overuse them, realizing that disagreements should be the exception and not the rule.

You have the right to encourage others to use these rights.

You have the right to defend these rights.

Your rights are not limited to small or personal matters, or areas of your expertise, or where you can point to specific institutional failures.

Congress has the right to enforce these articles by appropriate legislation.

Oh, and by right? I meant duty. 






The Darwin Results

Epistemic Status: True story (numbers are best recollections)

This is post three in the sequence Zbybpu’f Nezl.

Previously (required): The Darwin Game, The Darwin Pregame.


It was Friday night and time to play The Darwin Game. Excited players gathered around their computers to view the scoreboard and message board.

In the first round, my score went up slightly, to something like 109 from the starting 100. One other player had a similar score. A large group scored around 98. Others did poorly to varying degrees, with one doing especially poorly. That one played all 3s.

Three, including David, shot up to around 130.

If it isn’t obvious what happened, take a minute to think about it before proceeding.


The CliqueBots had scores of 98 or so. They quickly figured out what happened.

David lied. He sent the 2-0-2 signal and cooperated with CliqueBots, but instead of playing all 3s against outsiders, he and two others cooperated with them too.


CliqueBots had been betrayed by MimicBots. The three defectors prospered, and the CliqueBots would lose.

Without those three members, the CliqueBots lacked critical mass. Members would die slowly, then increasingly quickly. If the three defectors had submitted CliqueBots, the CliqueBots would have grown in the first round, reaching critical mass. The rest of us would have been wiped out.

Instead, the three defectors would take a huge early lead, and the remaining members would constitute, as our professor put it, their ‘packed lunch.’

The opening consisted of CliqueBots being wiped out, along with G-type weirdos, A-type attackers and D-style cooperators that got zero points from the CliqueBots.

Meanwhile, on the message board, the coalition members were pissed. 


Everyone who survived into the middle game cooperated with everyone else. Victory would come down to efficiency, and size boosted efficiency. Four players soon owned the entire pool: Me and the three defectors.

I thought I had won. The coalition members wasted three turns on 2-0-2. Nothing could make up for that. My self-cooperation was far stronger, and I would outscore them over the first two rounds when we met due to the 0. It wouldn’t go fast, but I would grind them out.

It did not work out that way. David had the least efficient algorithm and finished fourth, but I was slowly dying off as the game ended after round 200. Maybe there was a bug or mistake somewhere. Maybe I was being exploited a tiny bit in the early turns, in ways that seem hard to reconcile with the other program being efficient. I never saw their exact programs, so I’m not sure. I’d taken this risk, being willing to be slightly outscored in early turns to better signal and get cooperation, so that’s probably what cost me in the end. Either way, I didn’t win The Darwin Game, but did survive long enough to win the Golden Shark. If I hadn’t done as well as I did in the opening I might not have, so I was pretty happy.


Many of us went to a class party at the professor’s apartment. I was presented with my prize, a wooden block with a stick glued on, at the top of which was a little plastic shark, with plaque on the front saying Golden Shark 2001.

All anyone wanted to talk about was how awful David was and how glad they were I had won while not being him. They loved that my core strategy was so simple and elegant.

I tried gently pointing out David’s actions were utterly predictable. I didn’t know about the CliqueBot agreement, but I was deeply confused how they didn’t see this ‘betrayal’ coming a mile away. Yes, the fact that they were only one or two CliqueBots short of critical mass had to sting, but was David really going to leave all that value on the table? Even if betraying them hadn’t been the plan all along?

They were having none of it. I didn’t press. Why spoil the party?


Several tipping points could have led to very different outcomes.

If there had been roughly two more loyal CliqueBots, the CliqueBots would have snowballed. Everyone not sending 2-0-2 would have been wiped out in order of how much they gave in to the coalition (which in turn would have accelerated the coalition’s victory). Betrayers would have had bigger pools, but from there all would cooperate with all, and victory would come down to whether anyone tweaked their cooperation algorithms to be slightly more efficient. David’s betrayal may have cost him the Golden Shark.

If someone had said out loud “I notice that anyone who cares about winning is unlikely to submit the CliqueBot program, but instead will start 2-0-2 and then cooperate with others anyway” perhaps the CliqueBots reconsider.

If enough other players had played more 2s against the CliqueBots, as each of us was individually rewarded for doing, the CliqueBots would have won. If the signal had been 2-5-2 instead of 2-0-2, preventing rivals from scoring points on turn two, that might have been enough.

If I had arrived in the late game with a slightly larger pool, I would have snowballed and won. If another player submits my program, we each end up with half the pool.

Playing more 2s against attackers might have won me the entire game. It also might have handed victory to the CliqueBots.

If I had played a better split of 2s and 3s at the start, the result would have still depended on the exact response of other programs to starting 2s and 3s, but that too might have been enough.

Thus these paths were all possible:

The game ended mostly in MimicBots winning from the momentum they got from the CliqueBots.

It could have ended in an EquityBot (or even a DefenseBot) riding its efficiency edge in the first few turns to victory after the CliqueBots died out. Scenarios with far fewer CliqueBots end this way; without the large initial size boost, those first three signaling turns are a killer handicap.

It could have ended in MimicBots and CliqueBots winning together and dividing the pool. This could happen even if their numbers declined slightly early on, if they survived long enough while creating sufficient growth of FoldBot.

CliqueBots could have died early but sufficiently rewarded FoldBots to create a world where a BullyBot could succeed, and any BullyBot that survived could turn around and win.

It could have had MimicBots and CliqueBots wipe out everyone else, then ended in victory for very subtle MimicBots, perhaps that in fact played 3s against outsiders, that exploited the early setup turns to get a tiny edge. Choosing an algorithm that can’t be gamed this way would mean choosing a less efficient one.

In various worlds with variously sized initial groups of CliqueBots and associated MimicBots, and various other programs, the correct program to submit might be a CliqueBot, a MimicBot that attacks everyone else but cheats on the coordination algorithm, a MimicBot that also cooperates with others, a BullyBot with various tactics, an EquityBot with various levels of folding, or a FoldBot. There are even scenarios where all marginal submissions lose, because the program that would win without you is poisoning the pool for its early advantage, so adding another similar program kills you both.

This is in addition to various tactical settings and methods of coordination that depend on exactly what else is out there.

Everyone’s short term interest in points directly conflicts with their long term goal of having a favorable pool. The more you poison the pool, the better you do now, but if bots like you poison the pool too much, you’ll all lose.

There is no ‘right’ answer, and no equilibrium.

What would have happened if the group had played again?

If we consider it only as a game, my guess is that this group would have been unable to trust each other enough to form a coalition, so cooperative bots in the second game would send no signal. Since cooperative bots won the first game, most entries would be cooperative bots. Victory would likely come down to who could get a slight edge during the coordination phase, and players would be tempted to enter true FoldBots and otherwise work with attackers, since they would expect attackers to die quickly. So there’s some chance a well-built BullyBot could survive long enough to win, and I’d have been tempted to try it.

If we include the broader picture, I would expect an attempt to use out-of-game incentives to enforce the rules of a coalition. The rise of a true CliqueBot.


I spent so long on the Darwin Game story and my thinking process about it for several reasons.

One, it’s a fun true story.

Two, it’s an interesting game for its own sake.

Three, it’s a framework we can extend and work with, one that has a lot of nice properties. There’s lots to maximize and balance at different levels, no ‘right’ answer and no equilibrium. It isn’t obvious what to reward and what to punish.

Four, it naturally ties your current decisions to your future and past decisions, and to what the world looks like and what situations you will find yourself in.

Five, it was encountered ‘in the wild’ and doesn’t involve superhuman-level predictors. A natural objection to models is ‘you engineered that to give the answer you want’. Another is ‘let’s figure out how to fool the predictor.’ Hopefully minimizing such issues will help people take these ideas seriously.

There are many worthwhile paths forward. I have begun work on several. I am curious which ones seem most valuable and interesting, or where people think I will go next, and encourage such discussion and speculation.


The Darwin Pregame

Epistemic Status: True story

This is intended as post two of the sequence Zbybpu’f Nezl.

Previously (required): The Darwin Game

Leads to: The Darwin Results


This is my reconstruction of my thoughts at the time.

The Darwin Game requires surviving the early, middle and late games.

In the opening, you need to maximize scoring against whatever randomness people submit. Survival probably isn’t enough. The more copies of yourself you bring to the middle game, the more you face yourself, which snowballs. Get as many points as you can.

In the middle game, you face whatever succeeded in the opening. Strategies that survived the opening in bad shape can make a comeback here, if they are better against this new pool. What strategies do well against you matters.

In the end game, you’ll need to beat the successful middle game strategies, all of which have substantial percentages of the pool. Eventually you’ll be heads up against one opponent. Not letting opponents outscore you in a pairing becomes vital.

How would the game play out? What types of strategies would thrive?

I divided the types as follows:

There were attackers, who would attempt to get the opponent to accept a 3/2 or 4/1 split. They might or might not give up on that if you refused, and presumably most would use a signal to self-cooperate, but not all. One person did submit “return 3.”

Then there were cooperators, who attempt to split the pot evenly. I assumed that meant alternating 3/2 splits. This then divided into those who would fold if attacked, allowing you to score above 2.5 per turn, those that would let themselves be outscored but would make sure you scored less than 2.5 per turn, and those that would not allow themselves to be outscored. The last group might or might not forgive an early attempt to attack them.

There would also be bad programs. People do dumb things. Someone might play all 2s, or pick numbers fully at random, or who knows what else.

As a list (attackers from here on means both AttackBot and BullyBot):

AttackBot. Attackers who don’t give up.

BullyBot. Attackers who give up.

CarefulBot. Cooperators who harshly punish attackers.

DefenseBot. Cooperators who don’t let you outscore them but don’t otherwise punish.

EquityBot. Cooperators who let you outscore them, but make sure you don’t benefit.

FoldBot. Cooperators who accept full unfavorable 3/2 splits.

GoofBot. Weird stuff.

My prior was we’d see all seven, with most looking to cooperate.

Was attacking a good strategy?

Attacking only works against FoldBots. When attacking fails, even DefenseBots might take a while to re-establish cooperation. CarefulBots could wipe you out. It was also impossible to know how long to keep attacking before concluding opponents weren’t going to fold.

With a pool of bots chosen by humans, attacking strategies (AttackBot or BullyBot) likely would fail hard in the opening.

The endgame was a different story. All GoofBots would be dead. Unless FoldBots fold too quickly to a BullyBot, in a given round they strictly outscore CarefulBots, DefenseBots and EquityBots. Each round, provided they exist, FoldBots would become a bigger portion of the cooperative pool. If you were an AttackBot or BullyBot, and survived long enough, you would kill off the CarefulBots, then the DefenseBots and finally the EquityBots as the FoldBots out-competed them, leaving a world of AttackBots, BullyBots and FoldBots. If all but one attacker was gone, the last attacker to survive would win if it cooperated efficiently against itself, since it would score above average each round. In theory a steady state could exist with multiple attackers keeping each other in check, but that isn’t stable since advantages in size snowball.

CarefulBots are strictly worse than DefenseBots, so those were out. GoofBots are terrible.

This meant there were five choices:

I could submit an AttackBot that cooperates with itself, and hope to survive into the endgame. I quickly dismissed this as unlikely to work.

I could submit a BullyBot that cooperates with itself, attacks but accepts an even split against stubborn opponents. But this rewards stubborn opponents while wiping out non-stubborn opponents in the mid-game, which means your endgame trump card stops working. I dismissed this as well.

DefenseBots don’t lose heads-up by non-tiny amounts, and punish anyone who tries to outscore them, wiping them out in the mid-game. But you score nothing against AttackBots in the opening, before you can shape the pool much. At best you take a smaller pool into the mid-game, where efficient cooperation with your own copies starts to snowball.

I saw the emotional appeal of DefenseBots, but using one didn’t make sense. Its defenses were too robust and expensive, and you still lose to a smart AttackBot heads-up if you’re outnumbered. I’d need to take more risk.

That was the problem with being a FoldBot. FoldBots feed attackers. You are free riding on the rest of the cooperative pool. You hope they kill attackers despite that. The problem is that if even one copy of an attacker survives, as you and other FoldBots grow strong, attacking becomes a better and better strategy. I decided this wasn’t worth that risk.

I would submit an EquityBot. I wouldn’t protect against them outscoring me. I would protect against them outscoring what cooperation would have gotten them. If at any point they wanted to split the remaining pie, I would accept. Even if they refused, I’d give them some points on a 3/2 split, so long as they were punished for it, and I wasn’t growing their portion of the pool.

This raised the threshold percentage of the pool I needed to win heads-up against an attacker, but with a size disadvantage I’d lose no matter what, and I’d still win if I had a sufficiently large size edge, which was more likely if I did better early on.

Too much folding and you strengthen someone who beats you. Too little and you fall behind letting others snowball.

I decided to alternate 3/2 even if my opponent was going 3/3. This said both ‘I’m not going to give up’ and ‘you are welcome to cooperate at any time,’ and still punished the opponent reasonably hard. After long enough I even risked throwing in a few more 2s.

I considered sending a signal to recognize myself, but realized there was no point. Better to start coordinating right away. I’d randomize my first turn to 2 or 3, and once my opponent didn’t match me I would alternate. I figured opponents would start 2 more often than 3, so I decided to do a 50/50 split to take advantage of that, coordinating faster and with a slight edge, at the expense of doing slightly worse against myself, but this was probably just a mistake and I should have done an uneven split (but not quite the fully maximizing-for-self-play ratio). However, in an endgame against a similar program, you can definitely get an edge by being slightly more willing to play 3s early than your opponent.

Opponents that wanted to cooperate would have a very easy time recognizing my offer and cooperating. That left special case logic.

If my opponent was alternating on the same schedule as me (somehow we started 2/3, but then we’d 2/2 then 3/3 then 2/2), then I’d play 2 twice in a row to break that up. Ideally, if the opponent was offering a different cycle that was fair, I’d match that (so if they went 1/4/1/4, I’d submit 4 next time, and if they did 1 I’d start alternating), but I didn’t expect such cases so I didn’t make that logic robust, as the professor had already thrown out part of a previous submission for being too complex, and I wanted to preserve the more important parts.

If my opponent was playing all 2s even after I started alternating, I put in logic to play all 3s. If they played even one 3, I’d back down permanently. I also put in logic against a few other bizarre simple bots (like all 1s, all 4s, seems to be completely random, etc) but didn’t worry about it too much since they’d be wiped out very quickly and complexity is bad.

If my opponent was playing all 3s without a starting signal, and kept it up long enough, that meant he’d defect against himself, which meant he couldn’t win an endgame, and also meant that he was highly unlikely to ever give up, so I’d eventually fold. If they were going to lose in the long run, better to get what I could. Letting them survive longer would only help me.
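Putting the core of this together: I never kept the exact submission, so this is a best-effort reconstruction in Python of the heart of my EquityBot, with all the special-case logic above (all 2s, all 3s, junk bots, cycle-matching) deliberately omitted.

```python
import random

def equitybot_move(my_history, opp_history):
    # Reconstruction of the core logic only. The special cases
    # described above, which read opp_history, are omitted here.
    if not my_history:
        # Randomize the opening move between 2 and 3.
        return random.choice([2, 3])
    # Then alternate: a 3 follows a 2 and vice versa (2 + 3 = 5),
    # offering an even split without rewarding an attacker.
    return 5 - my_history[-1]
```

Two copies of this that land on opposite opening numbers lock into 2.5 per turn against each other immediately; the real program also needed the tie-breaking logic above for when both sides opened on the same number.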


David took a different approach.

David knew about the class mailing list.

David assembled a large group. They agreed to submit 2-0-2 as their first three moves. If both sides sent the signal, they’d cooperate using a reasonable randomization system. If they didn’t get the signal back, they’d play all 3s. They’d be pure CliqueBots, cooperating with each other and defecting against everyone else. With a large enough group, they’d wipe out the other players and share the victory. David would win The Golden Shark and his guaranteed A+.
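As it was described to me, a CliqueBot’s logic amounted to something like this sketch. I never saw the coalition’s actual ‘reasonable randomization system,’ so simple alternation stands in for it here.

```python
def cliquebot_move(my_history, opp_history):
    # Open with the agreed 2-0-2 handshake.
    signal = [2, 0, 2]
    turn = len(my_history)
    if turn < 3:
        return signal[turn]
    if opp_history[:3] == signal:
        # Opponent echoed the signal: cooperate. Plain 2/3
        # alternation stands in for the coalition's actual
        # randomization scheme, which I never saw.
        return 5 - my_history[-1]
    # No signal: defect with all 3s forever.
    return 3
```

Note that without randomization, two copies that finish the handshake in lockstep would collide (3/3, then 2/2) instead of splitting 2.5 per turn, which is presumably exactly what the coalition’s randomization scheme was there to prevent.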

I would find out about the coalition after round one.


We were all set for game night. We had each chosen the logical output of our decision functions. The professor set up a website where we could see the game played out in real time over the course of several hours (due to a combination of that’s more fun and the game was slow to run), with a discussion board for him to offer observations and us to comment.

Next time I’ll reveal what happened on game night. Predictions are encouraged. Please do not comment here if you have read The Darwin Results.






The Darwin Game

Epistemic Status: True story

This post intends to begin the sequence Zbybpu’f Nezl.

In college I once took a class called Rational Choice. Because obviously.

Each week we got the rules for, played and discussed a game. It was awesome.

For the grand finale, and to determine the winner of the prestigious Golden Shark Award for best overall performance, we submitted computer program architectures (we’d tell the professor what our program did, within reason, and he’d code it for us) to play The Darwin Game.

The Darwin Game is a variation slash extension of the iterated prisoner’s dilemma. It works like this:

For the first round, each player gets 100 copies of their program in the pool, and the pool pairs those programs at random. You can and often will play against yourself.

Each pair now plays an iterated Nash bargaining game, as follows. Each turn, each player simultaneously submits an integer from 0 to 5. If the two numbers add up to 5 or less, each player earns points equal to their own number. If the two numbers add up to 6 or more, neither player gets points. This game then lasts for a large but unknown number of turns, so no one knows when the game is about to end; for us this turned out to be 102 turns.

Each pairing is independent of every other pairing. You do not know what round of the game it is, whether you are facing a copy of yourself, or any history of the game to this point. Your decision algorithm does the same thing each pairing. You do know the results of previous turns in the same pairing.

At the end of the round, all of the points scored by all of your copies are combined. Your percentage of all the points scored by all programs becomes the percentage of the pool your program gets in the next round. So if you score 10% more points, you get 10% more copies next round, and over time successful programs will displace less successful programs. Hence the name, The Darwin Game.
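For concreteness, here are the turn payoff and the between-rounds pool update as a minimal Python sketch (the function names are mine, not anything from the course):

```python
def turn_payoff(a, b):
    # One turn: each side names an integer from 0 to 5. If the
    # two numbers sum to 5 or less, each scores its own number;
    # otherwise neither scores anything.
    if a + b <= 5:
        return a, b
    return 0, 0

def next_pool_shares(points_by_program):
    # Between rounds: each program's share of all points scored
    # becomes its share of the next round's pool.
    total = sum(points_by_program.values())
    return {name: pts / total for name, pts in points_by_program.items()}
```

So turn_payoff(2, 3) gives (2, 3), turn_payoff(3, 3) gives (0, 0), and two copies alternating 2 and 3 average 2.5 per turn.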

Your goal is to have as many copies in the pool at the end of the 200th round as possible, or failing that, to survive as many rounds as possible with at least one copy.

If both players coordinate to split the pot, they will score 2.5 per turn.

To create some common terminology for discussions: to ‘attack’ means to submit 3 (or a higher number) more than half the time against an opponent willing to not do that, and to ‘fold’ or ‘surrender’ is to submit 2 (or a lower number) more than half the time, with ‘full surrender’ being to always submit 2. To ‘cooperate’ is to alternate 2 and 3 such that you each score 2.5 per turn.
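For concreteness, here is one toy way a program could implement ‘cooperate’ as just defined, including breaking symmetry against a copy of itself (my own sketch, not any actual entry from the class):

```python
import random

def cooperator(my_history: list[int], their_history: list[int]) -> int:
    """Aim for the 2/3 alternation that scores 2.5 per turn for each side.

    Until the pair desynchronizes, pick 2 or 3 at random (two identical
    deterministic programs would otherwise mirror each other forever).
    Once the previous turn summed to 5, copy the opponent's last move,
    which sustains the alternation indefinitely."""
    if their_history and my_history[-1] + their_history[-1] == 5:
        return their_history[-1]
    return random.choice([2, 3])
```

Against a copy of itself this locks into 2, 3, 2, 3… within a few random turns; against a program that always plays 3 it will keep flailing, which is exactly the kind of ambiguity point 9 below worries about.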

In this particular example we expected and got about 30 entries, and I was in second place in the points standings, so to win The Golden Shark, I had to beat David by a substantial amount and not lose horribly to the students in third or fourth.

What program do you submit?

(I recommend actually taking some time to think about this before you proceed.)

Some basic considerations I thought about:

1. The late game can come down to very small advantages that compound over time.

2. You need to survive the early game and win the late game. This means you need to succeed in a pool of mostly-not-smart programs, and then win in a pool of smart programs, and then in a pool of smart programs that outlasted other smart programs.

3. Scoring the maximum now regardless of what your opponent scores helps you early, but kills you late. In the late game, not letting your opponent score more points than you is very important, especially once you are down to two or three programs.

4. In the late game, how efficiently you cooperate with yourself is very important.

5. Your reaction to different programs in the mid game will help determine your opponents in the end game. If an opponent that outscores you in a pairing survives into the late game, and cooperates with itself, you lose.

6. It is all right to surrender, even fully surrender, to an opponent if and only if they will be wiped out by other programs before you become too big a portion of the pool, provided you can’t do better.

7. It is much more important to get to a good steady state than to get there quickly, although see point one. Getting people to surrender to you would be big game.

8. Some of the people entering care way more than others. Some programs will be complex and consider many cases and be trying hard to win, others will be very simple and not trying to be optimal.

9. It is hard to tell what others will interpret as cooperation and defection, and it might be easy to accidentally make them think you’re attacking them.

10. There will be some deeply silly programs out there at the start. One cannot assume early programs are doing remotely sensible things.

That leaves out many other considerations, including at least one central one. Next time, I’ll go over my and David’s preparations, and post three will reveal what happened on game night.

Note: Please do not comment here once you have read The Darwin Pregame.


Posted in Uncategorized | 24 Comments

Zeroing Out

Related to (Eliezer Yudkowsky at Less Wrong): An Equilibrium of No Free Energy

Related to (Satvik Beri at Less Wrong): Competitive Truth Seeking

Follow-up to: Leaders of Men



Suppose you have an insight about Google. The efficient market hypothesis says you can’t make a profit. Your insight is not a new insight. The market has already priced it in. You know no more about Google’s future price than you did before.

That’s the bad news. That’s also the good news: If you didn’t have that insight, you wouldn’t know any less about Google’s future price. Efficient market!

I call this zeroing out. Your ignorance is not punished.

This means any unique knowledge you do find will be rewarded. If you get good Google news first, you don’t need to know anything else. Buy, buy, buy!


Posted in Economic Analysis, Rationality | Tagged , , , | 8 Comments