Eternal, and Hearthstone Economy versus Magic Economy

The game Eternal (that is my referral link), created by Magic professionals including lead designer Patrick Chapin, is modern Magic: The Gathering, with some simplifications and tweaks, on a phone, with a Hearthstone interface and economy.

That is super high praise.

Magic: The Gathering is the best game of all time. Eternal gives you the core Magic game play, things like mana bases and Magic-style attacking and blocking and even a stack. It even gives you genuine drafts. And it does all of that on a phone, with a good free play experience. Like Hearthstone, it looks crisp and good, plays light and fun. And in contrast to Magic: The Gathering Online, it works. Which is nice.

Do I have quibbles? Of course I have quibbles! I hate the move to 75 card constructed decks instead of 60. The changes to the color pie don’t work for me. The Eternal community takes its names for the colors and color pairs seriously, as opposed to winking every time they say ‘time’ and ‘primal’ instead of white and blue. Legendary cards are a bit ludicrous. Organized play isn’t where I’d like it to be. I feel like there’s a better way to do social. Things seem copied from Hearthstone or Magic that feel like they don’t belong. But these are quibbles. I can’t argue much. The game is great, and there’s lots of little things I’m really happy about.

Two things rise above quibble.

I want an ‘old school’ Magic experience. I want players’ hands, lands, colors, decks, spells, creatures, right to attack and so forth all up for grabs. Most players, studies show, disagree. Players want to play their cards, cast their spells, fight with creatures. A little control and a little combo is good but mostly men need to be fighting, or players won’t be happy. Eternal certainly has control. But like modern Magic, Eternal can’t surprise and twist its premises. I’d love to see a modern take on old school, with Armageddon and Winter Orb, Mind Twist and Moat. I also miss true power: Ancestral Recall, Time Walk, Black Lotus. I have some ideas. But despite the name, that is not Eternal.

What I want to explore here is the economy.

Magic Economy

The original Magic economy is simple. Wizards sells you packs, you get cards you can keep, trade or sell. Cards are worth money. Most are worth very little since supply exceeds demand, a few have limited supply and lots of demand and are worth a lot (e.g. a Black Lotus once went for $20,000).

Players can trade, but that’s work. Work sucks. So the default nowadays is to buy and sell. Offline that’s dollars, online that’s tickets or credits for tickets, the dollars of Magic Online. Offline the bid/ask spread on cards is wide because traders have huge physical costs, so players can’t be turning over collections all the time, but online they can. Using trading robots, the prices for online cards have become standardized and tight, so you only pay a few percent to buy a card and then sell it back later. The trading interface’s terribleness is all that keeps this contained.
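To make the spread concrete, here is a small sketch with made-up prices (the bid, ask, and currency values are illustrative assumptions, not real market data): the cost of buying a card and selling it back is roughly the bid/ask spread as a fraction of the price.

```python
# Hypothetical card market: dealers quote a bid (what they pay you)
# and an ask (what you pay them). All numbers are illustrative.
def round_trip_cost(bid: float, ask: float) -> float:
    """Fraction of the ask price lost by buying at the ask, then selling at the bid."""
    return (ask - bid) / ask

# A tight online spread: buy at 2.00 tickets, sell back at 1.90.
online = round_trip_cost(bid=1.90, ask=2.00)   # a few percent lost

# A wide offline spread: a store buys at 1.20 and sells at 2.00.
offline = round_trip_cost(bid=1.20, ask=2.00)  # a large chunk lost

print(f"online: {online:.0%}, offline: {offline:.0%}")
```

With these assumed quotes the online round trip loses 5% and the offline one loses 40%, which is why online collections can be turned over constantly while physical ones cannot.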

Magic events online and offline have prizes, usually packs and sometimes cash or invitations to tournaments. Players can ‘grind’ such events to gain cards at a lower price than buying them, and sufficiently above average players can ‘go infinite’ and outright profit from playing, building collections for free. These prizes push cards into the economy, so the value of packs declines. Buying online packs with dollars, instead of trading for them, is a sucker’s game.

Hearthstone Economy

The Hearthstone economic model, which Eternal copies, binds your cards to your account. You can destroy unwanted and surplus cards to gain dust (sandstone in Eternal) which can be used to make new cards at a much worse rate. So everyone gets the cards they need most, and there’s no check on supply of top legendary (Magic calls them Mythic Rare) cards.
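For concreteness, here is a sketch using the commonly cited Hearthstone rates for non-golden cards (treat the exact numbers as an assumption; Eternal’s sandstone values differ, but the shape is the same): crafting costs several times what disenchanting returns at every rarity.

```python
# Commonly cited Hearthstone dust rates (non-golden cards).
# Treat these as illustrative; Eternal's sandstone uses different numbers.
CRAFT = {"common": 40, "rare": 100, "epic": 400, "legendary": 1600}
DISENCHANT = {"common": 5, "rare": 20, "epic": 100, "legendary": 400}

# The exchange rate is 4:1 or worse at every rarity...
for rarity in CRAFT:
    ratio = CRAFT[rarity] / DISENCHANT[rarity]
    print(f"{rarity}: craft {CRAFT[rarity]}, disenchant {DISENCHANT[rarity]} ({ratio:.0f}:1)")

# ...so funding one legendary purely from surplus commons takes a mountain of cards.
print(CRAFT["legendary"] // DISENCHANT["common"], "surplus commons per legendary")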

You can also buy packs of cards or other in-game activities in exchange for dollars. As in Magic, you can enter events that offer prizes for winning and give you a better deal than buying packs.

Unlike Magic, there are also rewards you earn from completing ‘quests’ that are available daily, which reward you for things like winning games, winning with different types of decks or in different types of games, or in-game effects like playing resources or dealing damage. Eternal adds to this a prize for your first win of the day against another human.

Studies show the daily quests and rewards are highly addictive and motivating. I am not fully convinced such tactics are ethical, or should be legal! While they remain legal, they will be industry standard.

Hearthstone attempts to calibrate such that if you play well and do the daily quests, you can mostly keep yourself in cards, barely, for free. It’s quite the grind. Eternal aims similarly. A key difference is that Hearthstone draft doesn’t let you keep your cards, so it’s practical to let players who do well ‘go infinite,’ whereas Eternal drafts do let you keep the cards, and therefore exhaust your currency supply no matter how good you are.

Note that Magic Arena, the future Magic digital game, plans as per public announcements to use the Hearthstone model. I’d comment further, but the game is in beta and they’ve asked for confidentiality.


Trade is Great!

If you allow cards to trade, the cards have monetary value.

This has huge advantages.

Markets are awesome. Players who want cards most get to buy them. Players who don’t value popular or powerful cards can sell them or trade them for money or other cards. Building a collection means you create something of value, potentially great value; getting into Magic early was quite profitable. We get the joys of trading and speculation, and the ability of Wizards to ‘print money.’

If strategies become powerful, playing them becomes more expensive, whereas other strategies become cheap. This creates a balancing effect and encourages diversity. A funky deck that uses a lot of rares no one wants is still cheap! If the deck doesn’t work out, little is lost.

Compare this to the Hearthstone model. If you want a card, you’ll need to create it. That’s not something players can do often unless they’re spending a lot of money. Dust supplies are highly limited! So when using dust, players aim to create the most powerful and versatile ‘good stuff’ cards they can, or the cards for tested tournament-level strategies. Now the best of the rarest cards are seen everywhere, and the other legendary cards are effectively even more expensive, and mostly unavailable. Creativity and variety are discouraged.

Even worse is the fear of wasting what little dust you have. If you create a card and later open it in a pack, and now have more copies than you can play, you’ve wasted most of that dust. That’s a huge feel-bad moment when you open the card you’ve created, replacing what would have been a feel-great moment when you get the exact card you want, and a fear every time you bust open a pack. Thus the temptation to hoard resources and not spend them, or feel bad about using them, even when you know what you want.
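To put a number on that feel-bad moment, here is a sketch assuming commonly cited Hearthstone rates (an assumption; Eternal’s numbers differ): crafting a legendary costs 1600 dust, while disenchanting a surplus copy returns only 400, so crafting-then-opening wastes three quarters of the dust.

```python
# Assumed rates: crafting a legendary costs 1600 dust; a surplus
# copy disenchants for only 400. Numbers are illustrative.
CRAFT, DISENCHANT = 1600, 400

def expected_waste(p_open_later: float) -> float:
    """Expected dust lost by crafting now, given some chance of opening the card later."""
    return p_open_later * (CRAFT - DISENCHANT)

# Even a 1-in-4 chance of opening the card later means an expected
# 300 dust down the drain -- hence the temptation to hoard.
print(expected_waste(0.25))
```

This is why crafting feels risky even when you know exactly what you want: every pack you have yet to open holds a chance of turning your crafted card into a duplicate.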

While building your collection, you’re gaining the ability to play, but you’re not building something of value. In theory, I could sell my account, but few will do that, it’s tied to other things, and I doubt it would fetch much cash anyway. When one is done with Hearthstone or Eternal, the cards sit around unused and unloved, forever, and no value is regained. Early adopters don’t get rewarded much.

A whole side of the game, and its rewards, has been cut off.

Trade is Terrible!

If you don’t allow cards to trade, the cards don’t have monetary value.

This also has huge advantages.

Markets are alien things people hate. It says something that I have taken multiple full-time jobs that are primarily about trading, and even I hate trading Magic cards. I hate it. It’s time consuming. It’s annoying. You’re constantly worried about being ripped off, and feel bad ripping others off. It makes everything a commodity, measurable in cash. If I only lose a few percent when I turn cards I randomly open in packs into cards I want to play with, I no longer opened possibility or awesomeness or phat loot. I opened dollar bills. Lotteries are exciting in their own way, but it’s not the same thing.

This then bears on the rewards one gets from playing. If a card game tried to reward me for playing games by paying me money, that would not work outside of professional competition. There’s no way they can offer enough. What’s my hourly rate? A few dollars an hour if I’m lucky? That’s far worse than zero.

Remember Diablo III? Originally it had an auction house where everything could be bought and sold for real money. This turned its phat loot into pennies, making the game no fun. When they took the auction house out, the game became fun. Markets force you to use them, and to think in their terms. See Polanyi, and beware the commodification of labor, land, money and in-game digital objects.

Those cool incremental rewards you were handing out? They’re no longer worthless; they’re worth money. So what happens? An army of bots comes out to collect that money. Now you’re playing whack-a-mole at best, or looking on helplessly at worst. Every reward must be robust to infinite accounts mindlessly clicking infinite buttons. That’s a hard and unsolved problem.

When playing free games like Hearthstone and Eternal, I know there are two modes I can be in. I can play for free and enjoy the fight to be competitive, conserving and growing my resources. Alternatively, I can buy in fully, and have everything I need, with the only goal being to be the best. There’s no in-between. Once the cards are worth money, I’m doomed, since my time is so much more valuable.

Trade Is Life!

We have nice things because of trade. We have Magic in all its glory because of trade. Although the no-trade policies solve a lot of problems, I still can’t get away from them being unhealthy and wrong. They remove market incentives in order to create Skinner boxes. A Skinner box built around an awesome game is still also a Skinner box.

But the problems are real. The no-trade model is winning for reasons.

The key will be figuring out how to evolve into a new mode that gets the advantages of free trade without imposing such a burden, or forcing us to give up so much that people find fun. I believe solutions exist, and I intend to find them.


Posted in Uncategorized | 15 Comments


Epistemic Status: Seems worth sharing

Assumes Knowledge Of: System 1 and System 2 from Thinking, Fast and Slow

While in San Francisco this past week, I found myself in many complex and rapid conversations, with new ideas and concepts flying by the second. I hadn’t experienced this since last February’s AI Safety Unconference. It is wonderful to be pushed to one’s limits, but it means one must often rely on System-1 rather than System-2, or one will be left behind and lost. I now realize this is what happened to me in foreign language classes – I was using System-2 to listen and talk, and that doesn’t work at all if others don’t wait for you.

This meant that often I’d grasp what was being said in an intuitive sense using System-1, or agree or disagree in that fashion, without the time to unpack it. This can lead to great understanding, but also lead to waking up the next day having no idea what happened, or thinking about it in detail and realizing you didn’t understand it after all. So when people asked if I understood things, I instinctively started saying things like “I understand-1 but not 2.” I also did this for beliefs and agreements.

I debated explaining it but decided not to, as a test to see if others would intuit the meaning. Most seemed to (or ignored the numbers, or came up with something equivalent in context); one person explicitly asked, and said he’d use it. I think this may be a valuable clarification tool, as it’s very different to understand-1 versus understand-2, or agree-1 versus agree-2, and it seems better than saying something like “I kind of agree” or “I think I agree but I’m not sure,” which are both longer and less precise ways people often say the same thing.

My system-1 also added understand-3. Understand-3 means “I understand this well enough to teach it.” By extension, believe-3 or agree-3 means “I believe this, know how to convince others, and think others should be convinced.” To truly understand something, one must be able to explain, teach or defend it, which also means you and others can build solidly upon it. Writing has helped me turn much understanding-2 (and understanding-1) into understanding-3.

Posted in Uncategorized | 3 Comments

Zvi Will Be In San Francisco January 15-21

Happy New Year, everyone!

I will be taking a week’s vacation (without the family) in San Francisco this month, flying in on January 15 and leaving on January 21. Some of that time will be used to get fresh air, relax and refresh, but the main goal is to connect with as many friends as possible, new and old, and catch up on all that is happening. I come seeking knowledge, friendship and advice, perhaps opportunity, and offering all of them in return.

Great things are afoot. Other great things should be.

I don’t know exactly where I will be staying yet, but I anticipate spending most nights in Berkeley.

If you would like to get together during my visit, let me know! Email (it’s the***, where *** is my first name) is best if you want to contact me privately. If you’d like to do a public thing, the comments here are open.





Posted in Uncategorized | 3 Comments

Book Review: The Elephant in the Brain

We don’t only constantly deceive others. In order to better deceive others, we also deceive ourselves. You’d pay to know what you really think.

Robin Hanson has worked tirelessly to fill this unmet need. Together with Kevin Simler, he now brings us The Elephant in the Brain.

I highly recommend the book, especially to those not familiar with Overcoming Bias and claims of the type “X is not about Y.” The book feels like a great way to create common knowledge around the claims in question, a sort of Hansonian sequence. For those already familiar with such concepts, it will be a fun and quick read, and still likely to contain some new insights for you.

Two meta notes. In some places, I refer to Robin, in others to ‘the book’. This is somewhat random but also somewhat about which claims I have previously seen on Overcoming Bias. I nowhere mean any disrespect to Kevin Simler. Also, this is a long review, so my apologies for not having the time to write a shorter one, lest this linger too long past the book’s publication.


The book divides into two halves. In the first half, it is revealed (I’m shocked, shocked to find gambling in this establishment) that we are political animals and constant schemers that are constantly looking out for number one, but we have the decency to pretend otherwise lest others discover we are constantly scheming political animals. The easiest way to pretend otherwise is often to fool yourself first. That’s exactly what we do.

What are our real motives? Exactly what you’d think they would be. We want the loyal and strong political allies, true friends, the best sex with the most fit mates, food on the table, enforcement of our preferred norms, high status, respect and other neat stuff like that. The whole reproductive fitness package. To get these things, we must send the right signals to others, and detect the right ones in others, and so forth.

This insight is then used to shine a light on some of our most important institutions:

This book attempts to shine light on just those dark, unexamined facets of public life: venerated social institutions in which almost all participants are strategically self-deceived, markets in which both buyers and sellers pretend to transact one thing while covertly transacting another. The art scene, for example, isn’t just about “appreciating beauty”; it also functions as an excuse to affiliate with impressive people and as a sexual display (a way to hobnob and get laid). Education isn’t just about learning; it’s largely about getting graded, ranked, and credentialed, stamped for the approval of employers. Religion isn’t just about private belief in God or the afterlife, but about conspicuous public professions of belief that help bind groups together. In each of these areas, our hidden agendas explain a surprising amount of our behavior—often a majority. When push comes to shove, we often make choices that prioritize our hidden agendas over the official ones.


I know it comes as a shock to you, but Robin doesn’t think X is about the Y it claims to be about. Instead it’s about signaling and status. I know, it’s a lot to take in. Stop. Catch your breath.

Robin and Kevin realize this is good news: 

This may sound like pessimism, but it’s actually great news. However flawed our institutions may be, we’re already living with them—and life, for most of us, is pretty good.

So if we can accurately diagnose what’s holding back our institutions, we may finally succeed in reforming them, thereby making our lives even better.

There are many examples throughout the book where particular things are held up as about status and signaling. In some cases, it is explained why these actions are valuable or socially beneficial, but in most cases the clear implication is that it is all pointless zero-sum games and a terrible waste.

The perfect encapsulation of this attitude is something I remember from Overcoming Bias, rather than from the book, in a post called Harnessing Polarization:

Human status competition can be wasteful. For example, often many athletes all work hard to win a contest, yet if they had all worked only half as hard, the best one could still have won.

As a sports fan and a former professional game player, I respond that yes, the same athlete wins. But it wouldn’t count. The win would be tainted, the spectacle ruined. We live to compete with all we have, and to watch others compete with all they have. We shake each other’s hands, say ‘good game’ and mean it. After we’re done, we analyze what happened to gain understanding. We celebrate striving, excellence and achievement.

Competition is not about winning.

What looks like the definition of a zero-sum game is quite the opposite.

Thus the battles between our elephants to gain advantage, in their inevitably increasing complexity, create the arms race of bigger and better brains, more and more complex behaviors, cultures and tools, and all the nice things. Each of us has an incentive to adjust the incentives of those around us back to that which is hard, that which is useful, that which makes us stronger, up and down the chains of meta levels.

Everything is Bayesian evidence of others’ true motives and loyalties and capabilities. We need elephant-sized brains to have any hope of processing all of it.

Bask in the wondrous deeds of unconscious Bayesian analysis and leaky self-deception. The Elephant in the Brain is a wonderful defense of hypocrisy. 

I, for one, take great joy in the merely real, the greatest show on Earth.


The next few sections will be a quote that stood out, and my associated thoughts.

Who would you rather team up with: someone who stands by while rules are flouted, or someone who stands up for what’s right?

Good question!

The book assumes the answer is the one who stands up for what’s right. But are you sure about that?

Are you sure you don’t want a winner? Someone too smart to waste resources reinforcing norms?

A central concept in the book is that norms are enforced because the meta-norm of enforcing norms is itself enforced by the meta-meta-norm of enforcing norm enforcement, and so on. The meta-norm automatically acts on itself, so the cycle never ends.

For that to work, the answer to the team up question must be the one who stands up for what is right. For the elephant that is what being right means. In turn, for that to work, everyone must have a common knowledge expectation that the meta-norm will be enforced.

The Darwin Game illustrates (among other things, also spoiler alert) what happens when there can be no enforcement of the enforcement of norms.

In a world… of venture capital talks about what happens when being a winner becomes the norm. When the meta-norm enforces ‘be a winner’ it also punishes all other considerations. You destroy all pro-social norms.

One could also point to certain political campaigns and attitudes. Normally one must pretend to be in favor of allies being in favor of norm enforcement, and tearing that mask off is really, really bad.

That generalizes. Any actions not reinforced by norms are seen by the meta-norm as rival actions to norm enforcement, and are punished.

Neutrality is not a thing. What is a thing is a war of norm against norm, reward against reward, enforcement against enforcement, level upon level of complexity.



What’s not acceptable is sycophancy: brown-nosing, bootlicking, groveling, toadying, and sucking up. Nor is it acceptable to “buy” high-status associates via cash, flattery, or sexual favors. These tactics are frowned on or otherwise considered illegitimate, in part because they ruin the association signal for everyone else. We prefer celebrities to endorse products because they actually like those products, not because they just want cash. We think bosses should promote workers who do a good job, not workers who just sleep with the boss.

The first rule of signaling is cheat.

The second rule of signaling is catch cheaters.

The third rule, therefore, is don’t get caught.

When too easy a strategy earns too big an associative payoff, that’s a great deal for you. But it ruins the signal for others. They must bring the signal back in line with the difficulty of the action, by making the strategy more difficult and/or assessing a penalty against the signal.

So the game moves up a level, to signaling of difficulty, where the rules still apply. The game gets more complex and the brains get bigger.

Like the best games, the signaling game is self balancing, because the players change the rules to make it so.

How fun!



This line stood out as quite odd:

You botched a big presentation at work. Feel the pang of shame? That’s your brain telling you not to dwell on that particular information.

Things that make me feel shame bother me for years. Decades, even. We punish and discipline and shape behavior by associating it with shame. These days most of us consider shame toxic.

If shame is the brain’s way of telling you not to dwell on that particular information, it’s doing quite the epic fail of a job.

It seems more like the opposite. Shame is a tool for making damn sure you do dwell on that information, and everyone else knows it. It restores balance. It is explicitly a tax.

Shame royally sucks. That’s the whole point. That’s how you pay the tax. Much better to do without, but we must then restore balance another way.


Of course, we realize that a few expert opinions don’t necessarily reflect a consensus among all experts—nor, it should be noted, is consensus opinion necessarily the truth.

Well said. Consider this your periodic reminder.

In part two, the book breaks down some of the motives behind major areas of life: Body language, laughter, conversation, consumption, art, charity, education, medicine, religion and politics.

The second half did an excellent job of pointing out the signaling, status and strategic motives behind these areas of life. In this regard, I bought most of the book’s claims. Such motives are all around us and central to almost everything involving multiple people. We see over and over the structures from the first half of the book. As the chapter titles put it: Norms, cheating, self-deception, counterfeit reasons.

The pattern is to present the standard explanations for what happens in such areas of life, and then point out that this explanation doesn’t make sense – much of our behavior remains unexplained. So instead, such areas must be about other reasons, and such other reasons are found and explored.

Thus Robin’s claim that, contrary to popular belief, X is not about Y:

Food isn’t about Nutrition
Clothes aren’t about Comfort
Bedrooms aren’t about Sleep
Marriage isn’t about Romance
Talk isn’t about Info
Laughter isn’t about Jokes
Charity isn’t about Helping
Church isn’t about God
Art isn’t about Insight
Medicine isn’t about Health
Consulting isn’t about Advice
School isn’t about Learning
Research isn’t about Progress
Politics isn’t about Policy

For most people, such statements provide a useful nudge in the correct direction. In an important sense, all the above statements are true. In another important sense, they’re all false.

Everything is, in an important sense, about these games of signaling and status and alliances and norms and cheating. If you don’t have that perspective, you need it.

But let’s not take that too far. That’s not all such things are about.  Y still matters: you need a McGuffin. From that McGuffin can arise all these complex behaviors. If the McGuffin wasn’t important, the fighters would leave the arena and play their games somewhere else. To play these games, one must make a plausible case one cares about the McGuffin, and is helping with the McGuffin.

Otherwise, the other players of the broad game notice that you’re not doing that. Which means you’ve been caught cheating.

Robin’s standard reasoning is to say, suppose X was about Y. But if all we cared about was Y, we’d simply do Z, which is way better at Y. Since we don’t do Z, we must care about something else instead. But there’s no instead; there’s only in addition to. 

A fine move in the broad game is to actually move towards accomplishing the McGuffin, or point out others not doing so. It’s far from the only fine move, but it’s usually enough to get some amount of McGuffin produced.

There’s also the question of what we do when we realize people don’t care much about the McGuffin (hereafter simply Y). There are three paths one can take.

Path one is to say we could accomplish Y better by doing Z, so let’s do Z. Here’s how to make medicine about health and politics about policy! That can be a very good result, but it would also kill the signaling component of the enterprise. That likely means paying for it in other ways, so Z doing Y better is not a sufficient case that it’s a better system.

Path two is to disparage and discard the McGuffin, and play the broad game more openly. Medicine isn’t about health, so here’s the treatment that shows you care. Politics isn’t about policy, so I’m not even going to pretend to care about that. This is very bad. This is a huge violation of the broader game’s rules. It’s cheating. It’s too easy, it doesn’t show you’re clever or have spare resources, and the norms it implies are disastrous.

The broader game is very clear that when you realize that X is mostly not about Y, you’re supposed to keep saying that X is about Y, because that’s both how we keep the game’s difficulty high enough to send worthwhile signals, and how we get a little bit of Y out of the whole thing. A little is often all you need; thanks to the civilization this game has built, there really is room for most of us to waste most of our time. The whole system is built on this hypocrisy.

I’ll now go through the chapters in part II, and give my take on the particular claims.

VI – Body Language

I think this chapter is mostly spot on. Body language is about sending leaky and instantaneous, and therefore more honest, communication, often about status, while also having rich bandwidth and being deniable if reported to others. We prefer natural body language because it is a signal we can trust, whereas unnatural body language is basically lying. By using such communication methods, we test each other on many levels, including our skill in interpreting and giving such communication, and get to see in a deniable fashion what others are thinking is going on. Every move we make is based on everything it implies; just saying what you mean is impossible.

Making more things more explicit, on the margin, is often a good idea. As are further explanations of unclear signals, especially to those who struggle with them. Ask Culture gives people the valuable tool of direct communication without the need to violate norms, but trying to screen off rich implicit communication channels, or treat them as something we aren’t responsible for parsing, is a mistake. Life is full of Bayesian evidence and we have a brilliant processor for rich data sets. We’d be fools not to use it.

Body language is about communication, especially communication about status, but then there isn’t really another thing for body language to not be about.

VII – Laughter

This chapter also seems mostly spot on, and seems much better than my previous models of humor. Alternate explanations of laughter don’t explain observed behavior, whereas laughter as a play signal fits the data quite well. Laughing at something means it is playful, which has far reaching implications. It explains why good-natured teasing and laughter strengthens relationships, while mean-spirited teasing and laughter weakens them: the statement that something is playful changes its implications. Calling the right thing play shines a positive light on your motives, whereas overreaching does the opposite. It means you don’t care. And of course, laughter lets you communicate things you can’t otherwise say, as in the chapter’s closing quote from Oscar Wilde: “If you want to tell people the truth, make them laugh; otherwise they’ll kill you.”

The book doesn’t pay much attention to the simple fact that laughing is fun, and thus incentivized. Laughter becomes a tool that identifies and positively reinforces play in the exploratory, clever and experimental senses, which is great because such play is highly useful. The better and more clever and unpredictable and original the play, the better the laugh, encouraging more useful tasks and bigger brains. And of course, the better one’s sense of humor – e.g. knowing which things are clever and unpredictable and original – the better one looks, and thus the virtuous arms race continues.

Laughter isn’t about jokes, but that was always clear. Jokes are an attempt to hack the humor function and get laughter. Useful, but not the point.


VIII – Consumption

Clearly this is intended to be a positive association for many viewers, but in Kevin’s case, the ad actually backfired. There’s nothing wrong with the product itself; it smells great and masks body odor effectively. But the cultural associations were enough to dissuade Kevin from using the product. This shows how arbitrary images can turn customers away, but by similar principles, other lifestyle ads must be having an opposite, positive effect.

Kevin (correctly) not wanting to consume [product] due to its associations is not a bug. It’s a feature. The point of advertising [product] in this way is to associate it with people who are very much Not Kevin. If Kevin were to use the product, it would be doing quite a bad job associating with and signaling Not Kevin! You wouldn’t want people smelling way too much [product] on people, and thinking it was Kevin.

I’m sure [product] likes extra sales, but there is a bigger plan in motion here.

You could also look at it as good Bayesian information for Kevin. If [product] is paying to create these associations, that is evidence [product] is a relatively better fit for people who would respond positively. By responding negatively, Kevin gets the information that the product is a poor fit for him. Since [product] exists in a competitive market, that probably means it’s a bad choice. Even if Kevin reacted neutrally to the message, that is a worse than expected reaction to advertising, so Kevin should update negatively on the product. Smart elephant.

At a minimum, he’d be paying for the signal, so it’s likely overpriced.
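The Bayesian reasoning above can be sketched with a toy calculation. The numbers here are made up purely for illustration: a prior that the product fits a random viewer, and assumed likelihoods that a negative reaction to the ad is rare when the product fits and common when it doesn’t.

```python
# Toy Bayesian update for Kevin's negative reaction to the ad.
# All numbers are illustrative assumptions, not from the book.

# Prior: the product is a good fit for a random viewer half the time.
p_fit = 0.5

# Assumed likelihoods of reacting negatively to the ad:
p_neg_given_fit = 0.1    # rare if the ad's associations suit you
p_neg_given_unfit = 0.6  # common if they don't

# Total probability of a negative reaction.
p_neg = p_neg_given_fit * p_fit + p_neg_given_unfit * (1 - p_fit)

# Bayes' rule: P(good fit | negative reaction).
posterior = p_neg_given_fit * p_fit / p_neg
print(round(posterior, 3))  # 0.143
```

Under these assumptions, Kevin’s negative reaction drops the chance the product is a good fit for him from 50% to about 14%, which is the sense in which his elephant is doing smart inference.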

This is also quite healthy from an incentive perspective. If creating an association with your product helped with some customers without hurting it with others, it would be an overly effective free action. It would be too easy, unearned status. In other words, it would be cheating. We catch such cheaters, at least those of low enough quality to be caught, and punish their actions with equal and opposite reactions.

In another example of good advertising:

The U.S. Marine Corps, for example, advertises itself as a place to build strength and character. In doing so, it’s not advertising only to potential recruits; it’s also reminding civilians that the people who serve in the Marines have strength and character. This helps to ensure that when soldiers come home, they’ll be respected.

And what would we do without advertising?

To get a better sense for just how much of our consumption is driven by signaling motives (i.e., conspicuous consumption), let’s try to imagine a world where consumption is entirely inconspicuous.

The book presents the thought experiment of the Obliviated world, where we are no longer able to form meaningful impressions of other people’s things. What happens then? How much of our consumption would then be pointless?

The book’s answer is quite a lot:

Today it’s considered inappropriate to wear sweatpants to a dinner party or around the office. But in an Obliviated world, where no one is even capable of noticing, why not?

Living rooms—which are often decorated lavishly with guests in mind, then used only sparingly—will eventually disappear or get repurposed.

Who would care?

I would. I would care.

I spend a large portion of my life in my living room. Even with no guests. I want it to be a nice place. For me. The idea that they’d disappear is downright bizarre to me. They’re for living! It’s a thing!

I buy most of my luxury goods where they are not easy to notice.

In the medicine chapter it is noted:

When people buy chocolates for their sweethearts on Valentine’s Day, for example, they usually buy special fancy chocolates in elaborate packaging, not the standard grocery-store Hershey’s bar.

Yes, part of that is because you want to signal that Valentine’s Day is special. But part of that is that you presumably like your sweetheart, and wouldn’t want him or her eating a Hershey’s bar.

I do not think, as I believe Robin does, that most product variety and customization is worthless aside from signaling. Instead, I thank the signaling market for encouraging such great variety, and curse it for often making the good stuff so much more expensive. 

Robin has always dismissed the value of inconspicuous consumption. I think it is vastly underrated. I often end up buying the package of conspicuous and inconspicuous consumption together, but there’s no other way to get the half I want.  Perhaps that also means I am the exception that proves the rule. I can believe most people instead throw away the other half.

If your restaurant consumption is, as the book claims, ‘more for showing off’ than for personal use, you are doing it wrong. Same with (from the same chart) mobile phones and living room furniture.

In the long run, if consumption were somehow totally inconspicuous, would I slowly lose the associations I have and devalue my inconspicuous consumption as well? Would the next generation get no joy from nice things? Perhaps the next generation. I think I’d stick to my guns. But the good stuff often is actually good. Consumption often really is about consuming, even if much of it (and much of the more expensive versions of it) mostly aren’t. The system works.

The system also works in the sense that getting capitalists to create more and better conspicuous consumption is both an engine of innovation and technological progress, and an efficient method of conspicuous signaling. It’s certainly not perfect, since we’d prefer we were all competing to donate to efficient charities (hold that thought for four sections), but moving the competition into innovation, design and production, and the bidding up of scarce resources, is quite the economic engine.

Consider the alternative. In an Obliviated world, where would all that signaling go? We’d have to turn to other sources. They would likely have to be behaviors, which seems quite nasty and negative sum without the beneficial economic side effects. The beauty of conspicuous consumption is that it eats your money, but that money ends up elsewhere. Mostly harmless. If it ate your time, that time would be gone. The alternative of using measurement and having a meritocracy is great up to a point and then turns into a dystopia, especially when it starts involving machine learning on more and more data. Then it eats your time and your money and your relationships and everything else.

There is speculation that this is happening in many jobs. It used to be that one could “suit up” and otherwise consume rote formality, and use that as a signal of seriousness and professionalism. That has increasingly stopped working, so we rely on other signals. This turns over even worse areas of our behavior to the signaling market, forcing people toward cultural and class signals and other Bayesian evidence, eating up much more of people’s resources and making it that much harder to move up in the world. Every little piece of body language and every tone of voice gets obsessed over, even more than by default.

Which is terrible, but then again, I really hate suits.

VII – Conversation

‘Conversation isn’t about information’ is a strange claim. Of course conversation is about information. That doesn’t mean there isn’t anything else going on; the same ‘anything else’ is always going on when people interact. In this case, I think ‘conversation is about info’ is more importantly true than ‘conversation is not (entirely) about info.’ Info is quite the important MacGuffin.

We can assume that everyone is calculating when it is to their benefit to share information, versus when it is better to hoard it. When they should talk versus when they should listen. So when I see statements like:

A full accounting will include two other, much larger costs:

  1. The opportunity cost of monopolizing information.
  2. The costs of acquiring the information in the first place.

In light of these costs, it seems a winning strategy would be to relax and play it safe, letting others do all the work to gather new information. If they’re just going to share it with you anyway, as an act of altruism, why bother?

But that’s not the instinct we find in the human animal. We aren’t lazy, greedy listeners. Instead we’re both intensely curious and happy to share the fruits of our curiosity with others. … If speakers are giving away little informational “gifts” in every conversation, what are they getting in return?

Hello, entire thesis of the book? Once again, we see the naive view of ‘I thought this was a narrow, isolated action, I’d be shocked, shocked to see humans trying to use it to maximize their reproductive fitness in this establishment.’ Then when a simple one-active-pathway model is suggested, in this case a quid pro quo arrangement of “I’ll share something with you if you return the favor,” we are ‘puzzled’ to find that it doesn’t fully explain the richness of human behavior.

Having read Robin for many years, it feels like he’s almost playing dumb. Or perhaps he’s continuously expecting readers to forget or dismiss the central thesis of both blog and book, and think humans do things in isolated ways that don’t interact with the wider world and wider hidden (and open) motives.

What excites me most about The Elephant in the Brain is that, having put the central point in book form, we can hope to take it as a given so we can get on with working out the implications and extensions, rather than constantly being shocked at what we find in this establishment.

So we have the following puzzles:

Puzzle 1: People Don’t Keep Track of Conversational Debts

Puzzle 2: People Are More Eager to Talk Than Listen

Puzzle 3: The Criterion of Relevance (in general, whatever we say needs to relate to the topic or task at hand)

Puzzle 4: Suboptimal Exchanges (when two people meet for the first time, they rarely talk about the most important topics they know).

The book’s resolution to this, which it draws from Geoffrey Miller’s The Mating Mind and Jean-Louis Dessalles’ Why We Talk is titled “Sex and Politics” where it is suggested we should “stop looking at conversation as an exchange of information, and instead try to see the benefits of speaking as something other than receiving more information later down the road.”

There’s that word again. Instead. I would suggest ‘in addition to’. If I share information with you, it might benefit me in ways other than you giving me back information of value in this conversation. Well, sure. Like any other time I do anything that benefits someone else, I’m going to accumulate goodwill and a sort of debt. It’s likely that benefiting you benefits me, or that how you will act given this information will benefit me, and there’s also all the signaling and other secondary implications.

Miller and Dessalles focus on the signaling aspect, in particular showing off. Miller speaks of impressing potential mates, Dessalles of potential allies. The book suggests the metaphor that we each carry around a backpack full of tools, and when we pull one out others get a duplicate for free if they don’t have one already. By pulling out tools that are valuable and relevant, I provide evidence of an extensive and valuable tool set:

You want to know whether the applicant is sharp or dull, plugged-in or out of the loop. You want to know the size and utility of the applicant’s backpack.

Every conversation is like a (mutual) job interview, where each of us is “applying” for the role of friend, lover or leader.

One of the most important tools is the respect and support of others, and our prestige.

Our obsession with news, it is noted, makes little sense based on news’ direct usefulness, but makes perfect sense if it is about maintaining the ability to converse impressively about hot topics that will count as relevant. We are interested because we expect everyone else to be interested, and need to be seen as being in the loop.

Motives that seem important to conversation, and that help explain the puzzles, but that don’t get much mention, include: Persuasion, Deception, Negotiation, Framing, Credit (first person to pull out the tool gets the credit for it), Value (relevant things are more likely to be valuable, especially in context, and seeking them is likely why we’re having the conversation), Understanding (wanting to be understood), Steering (where the conversation goes, including asking for what would be valuable to you), Attention, People plain old enjoy talking and sharing information, Reaction (how you react to information is valuable to me), Play and Practice, and generally all the motivations of the elephant even when they aren’t being explicitly named.

Also, are the puzzles even accurate?

People can and do keep track of conversational debt. If you tell me something valuable, I owe you, and vice versa. That debt is like any other debt. Little gifts that cost us nothing we don’t track carefully, big gifts that do cost us, especially ones that were requested, we track more carefully. And sometimes being allowed to talk is the gift. A given conversation need not balance, but the debts do accumulate, and they do matter. That there isn’t an exact ledger is only a puzzle for absurdly economic man.

In turn, if debt is being tracked, being eager to talk rather than listen makes sense. If you have a chance to gain credits, you’d be excited to do that, so you can spend those virtual approximate credits later, or pay back debt you’ve incurred.

The criterion of relevance is necessary for conversation to be a conversation rather than an exchange of facts. Relevant things build upon previously said things in valuable ways, and are likely to be of higher value. If I talk about what we should have for dinner and you tell me the capital of Brazil, chances are that’s both not something I especially care about right now and also not helping. 

Suboptimal exchange is similar, since info has relative value based on context and on what different people care about at different times. I don’t have a ‘most valuable piece of knowledge’ to go repeating to everyone. But yes, people would benefit greatly if we paid more attention to talking about more valuable topics, and exchanging more valuable information. I am reminded of the idea of going around asking everyone “what are the most important problems in your field?” and then “why aren’t you working on them?”

The chapter ends with some claims about research preferences often being about impressiveness and prestigious association, which few reading either this or the book would argue against, other than as a matter of degree.

VIII – Art

Art isn’t about insight. Art is about sacrificing resources to show that you can afford to, and a general-purpose fitness display (and courtship display). Thus we care deeply about who made the art, in what context, and how difficult and expensive it was to create. Artists intentionally choose difficult mediums and subject matter. For example, exact reproduction was prized as art when it was hard, then discarded when reproduction became easy. We like improv over theater, and theater over movies, holding quality constant, because of difficulty and also association with prestigious folk. Lobster was peasant food when plentiful, a luxury now that it is rare.

In turn, we care deeply about our ability to discern good art from bad, lest we be misled, and others see us as therefore unfit.

I buy all that. I wonder how deeply it is linked to the fact that Choices are Bad. Raising difficulty through restriction while holding results constant equals greater satisfaction. Plus restrictions breed creativity.

It also explains why so much of what we call ‘art’ is so terrible. When insiders compete to do things other insiders recognize as difficult, you get museums full of things I would pay to not possess. You get respect paid to ‘prestige’ pictures no one enjoys. There is lots of great art and fine work that requires taste to enjoy, but there’s also lots of ‘art’ where the enjoying is about how abstractly difficult it is, and I cannot stand the stuff. Or lobster. We were right the first time on that one.

I can appreciate when context makes things difficult, but there’s also difficulty in figuring out the right easy thing to do, and the difficult thing that’s ugly and terrible is still ugly and terrible in my eyes.

And yet, I like improv and appreciate theater and live music and live sports (even if I don’t consider it usually worth the time, money and trouble enough to actually go too often, although I think I’m making a mistake not going more). Part of that is the lack of distraction, and that real life is super high resolution, and the chance to interact with others, and even the sense of being there with the high status folk. A lot of it is the ability to see skilled people tackle restrictions and added difficulty, but that’s still far from all.

The important missing element is unpredictability and non-optimization. I love watching performers think on their feet, and strategizing with them. When things are edited and configured for the best performances and exactly what sells best, they become predictable and bland. When you know something is going to be exciting, that’s not exciting; part of what makes a great game great is not knowing if it’s going to turn into a rout. Watching great improv is about it sometimes falling on its face.

IX – Charity

Anyone reading this is almost certainly familiar with Effective Altruism, and the idea that people are charitable more for the warm glow of feeling helpful and showing off that they’re helping, rather than actually doing good. At this point all of that is uncontroversial around these parts.

As careful readers of mine have no doubt noted, I have concerns about Effective Altruism. One of those concerns is the tendency of such folk to become smug and act superior, treating those who give otherwise as ‘not really caring.’ This chapter comes off as quite smug in a way I worry is off-putting. Yes, our giving cares about visibility, and peer pressure, and proximity, and relatability and mating motives. We are not simply trying to do the greatest good for the greatest number slash score all the utilons. As opposed to the Effective Altruists, who are building a culture that attempts to better line up what serves those other motives with what actually helps people.

Which is great! But that does not mean others do not care. Nor does failure to optimize X mean one does not care about X; humans are not automatically strategic. They don’t go around optimizing things. If you’re going to accuse them of not caring, don’t accuse them of not caring about people, accuse them of not caring about optimization! Cause on that, they’re guilty guilty guilty.

Definitely don’t act like caring about proximity or relatability makes you a bad person, let alone effectively guilty of negligent homicide. You don’t get or keep civilization if you don’t care about proximity or relatability, or listen too carefully to Peter Singer, and even if that wasn’t a concern, humans flat out do care about those things. At this point I consider the drowning child argument a Basilisk, and wish it was treated accordingly: as something memetically hazardous that everyone needs to overcome and defeat as part of their coming-of-age rituals. It doesn’t even point out hypocrisy because people readily admit that life does not work like that. 

X – Education

Individual students can expect their incomes to rise roughly 8 to 12 percent for each additional year of school they complete. Nations, however, can expect their incomes to rise by only 1 to 3 percent for each additional year of school completed by their citizens on average.

School is about signaling, and learning to submit. We knew that already. Film at eleven.

For the full version, see Bryan Caplan’s upcoming book, The Case Against Education.

One note I would make is about the sheepskin effect, where the last year of a college degree is much more valuable than previous years. There’s been some debate about this online lately between Bryan Caplan and Noah Smith. I agree that this is largely a signaling effect, with ‘completed all eight terms’ much more impressive than ‘completed seven of eight terms’ since you don’t know how many more terms the first student could have finished if necessary.

What the discussion misses, it seems to me, is that only after graduation do you know the first three years were real. It is easy to become ‘a senior’ by completing a number of credits while saving the requirements one finds hardest for last, or while being in terrible shape relative to graduation requirements. I strongly suspect that a lot of people who drop out in year four are much farther from finished than they would have you believe.

XI – Medicine

Medicine is about conspicuous caring. The point is to show your care by doing all the things, on a personal and a societal level. We want to symbolically apply care, and know that such symbolic care has been applied. We actively prefer treatments with painful side effects, because they show how much we care. We want to construct a narrative where we did all that we could, and no one could blame us. There are lots of ways to improve patient outcomes that the medical system shrugs off and ignores for decades.

In this model, the fact that much of our medical care actually works is mostly a coincidence. When older medical systems used care that was actively harmful, the same behaviors were mostly observed. On the margin, our medical care does not work, and likely does harm, which is why health insurance doesn’t lead to more physical health (although not worrying about health care bills improves financial and mental health). We would do well to spend far less on health care, especially end of life care, while keeping core services that actually work like trauma care and vaccination.

Having run a medical start-up, I can report that medicine is even less about health than Robin makes it out to be. Even for themselves, with no one in their lives present to signal, people would rather consume the symbolic version of the thing. They care about how your research report looks, not about what it contains. They want to feel helped, not be helped. We foolishly had the hypothesis that people cared about their health enough to think for themselves. We rejected that hypothesis.

XII – Religion

Religion is a strange case where “X is not about Y” is often considered actively good news. God is not often considered a worthy cause (or even to exist) around these parts. Community and social systems, on the other hand, are recognized as vital by all right thinking folk. Community provides a commitment mechanism for community members and a way to create common knowledge of pro-social norms that make those communities work as places to live, support fellow members and raise families. It also serves as a badge, allowing others both in and out of your community to trust you will abide by your communities’ standards.

Instead of mocking religion as an especially false belief, the book suggests, we should recognize its strategic value in building and reinforcing social systems, norms and communities. The absurdity of religious belief is even an advantage, ensuring that the signals sent are genuine signs of commitment. Religious folk have more children, get married more and stay married longer, have more social connections, tend to be happier, and have many things atheist communities struggle at great cost to recreate. Is there really that big a difference other than degree, the book asks, between the Muslim Hajj and the pledge of allegiance, or even traveling to watch your favorite sports team?

I see the appeal. I have looked at and adopted many Jewish practices and rituals for their mundane utility, with great success. The book points out this is common, and many Jews continue to follow the community norms despite being Atheists.

This attitude is summed up in this direct appeal:

Finally, we’d like to make a plea for some charity and humility, especially from our atheist readers. It’s easy for nonbelievers to deride supernatural beliefs as “delusions” or “harmful superstitions”

Nevertheless, we think people can generally intuit what’s good for them, even if they don’t have an analytical understanding of why it’s good for them. In particular, they have a keen sense for their concrete self-interest, for when things are working out in their favor versus when they’re getting a raw deal. So whenever adherents feel trapped or oppressed by their religion, as many do, they’re probably right. But in most times and places, people feel powerfully attracted to religion. They continue to participate, week after week and year after year, not with reluctance but with tremendous zeal. And we’d like to give them the benefit of the doubt that they know what’s good for them.

If we are mostly run by our elephants, and our elephants are expert Bayesian analysts with a keen understanding of what will help our reproductive fitness, that is a strong argument in favor of people knowing what is good for them. Certainly people are taking much into account when thinking about such matters, in ways they can’t articulate and that people often aren’t given credit for. In the other chapters of the book, we see people doing things that don’t seem to make sense, but turn out to be rather strategic. People are really good at this in some contexts.

It’s even true that much of the time, if we ask people what is ‘good for them,’ they will intuit remarkably accurate answers, even identifying when their own behaviors are bad for them, or which behaviors they are not doing would be good. If you think people go around having no idea what is good for them, and that people do things for no good reason, this is an important message for you.

The counterargument is that we’ve all met humans, so we know that they often have no idea what is good for them, in any context. It is not a coincidence that religion is singled out as the place the book explicitly asks us to trust others to make such decisions. Religion is a place we’ve constructed a norm that we should mostly trust people to make their own choices, unless the religion is young or too weird, and we call it a cult. Then we recognize that people can be very wrong about what is good for them.

That’s not to say that religion (setting aside the truth value of its beliefs) is usually bad for people. It seems especially not bad for people in the context of its existing communities. Given everyone around you is doing this thing, not doing the thing might be quite the bad idea.

What I am pushing back against is the more general claim that people can intuit what is good for them, to the extent that people accurately sense when their religion is or isn’t working out. Religion is mostly an adaptation, and religious beliefs are often real whatever the original motivation for selecting those beliefs. People are Adaptation-Executers, not Fitness-Maximizers.

Usually people aren’t trying to intuit anything at all, they’re just doing what they are in the habit of doing, and what they see others in the habit of doing, following the ancient principles of fitting into a tribe, because that’s typically a recipe for local success. Often these strategies are much smarter than they look. But when what is good for people doesn’t match those instincts and habits, people don’t see it. Trying to out-think others, or intentionally coordinate with them, is a rough road.

XIII – Politics

Once again, we have puzzles: People vote without (much) regard for vote decisiveness, they vote when they know they are uninformed, they have entrenched opinions and strong emotions on political issues. If we are political ‘do-rights’ trying to enact the best policies, these actions do not make sense. Even more puzzling, we don’t seem to vote according to our narrow self-interests either.

But as we all know, choosing the best policies is not what most politics is about. Politics is mostly about being in coalitions and showing loyalty to that coalition. In many times and places, members of the political outgroup are not treated kindly, so one needs to show loyalty to the ingroup and its political viewpoints.

That doesn’t mean politics isn’t ultimately largely about policy. These coalitions involve many actors who do care deeply about certain policies, often out of narrow self-interest but also often as genuine do-rights. The policy wonks and idealists are real, and views on issues often do shift for the right reasons, not only the wrong ones. We all understand that a politics completely about alliances would result in the rapid collapse of the republic, with devastating consequences for almost everyone. Many fear that is well underway.

So we really do reward those who work for the common good and punish those who do not, in addition to worrying about alliances. Others reward those who reward the common good, and so forth. The relative power working towards policy ebbs and flows with the times, the culture, the norms. Occasionally this even results in good policy. It hasn’t been an especially good time for that, recently. But I am old enough to remember a time when this was far more of a thing.

XIV – Scorecard

Chapter          % Claims Accepted   Y =        Z =                   X about Y more than Z?

Body Language    100%                N/A        Info                  N/A

Laughter         100%                Jokes      Playfulness           No

Conversation     50%                 Info       Showing Off           Yes

Consumption      50%                 You        Conspicuousness       Yes

Art              100%                Insight    Fitness Display       No

Charity          90%                 Helping    Fuzzies/Signaling     No

Education        100%                Learning   Signaling             No

Medicine         110%                Health     Conspicuous Caring    No

Religion         90%                 God        Community             No

Politics         100%                Policy     Alliances             No

On average, this adds up to buying about 90% of claims. For a book making so many bold claims, that’s very good.
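The arithmetic behind that “about 90%” can be checked directly from the scorecard’s percentages:

```python
# Claim-acceptance percentages from the scorecard, in chapter order:
# Body Language, Laughter, Conversation, Consumption, Art,
# Charity, Education, Medicine, Religion, Politics.
accepted = [100, 100, 50, 50, 100, 90, 100, 110, 90, 100]

average = sum(accepted) / len(accepted)
print(average)  # 89.0, i.e. roughly 90% of claims
```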

Ranked from most about Y to least about Y:

Food isn’t about Nutrition

Bedrooms aren’t about Sleep

Talk isn’t about Info

Consumption isn’t about You / Clothes aren’t about Comfort

Research isn’t about Progress

Politics isn’t about Policy

Consulting isn’t about Advice

Marriage isn’t about Romance

School isn’t about Learning

Medicine isn’t about Health

Art isn’t about Insight

Church isn’t about God

XV – Conclusion

X is about a lot more than Y. Often X is about Z far more. Even if X is mostly about Y on some levels, it is mostly about Z on others. But we fear punishment and social collapse if we claim or even believe this.

This is a sort of argument for modesty in the ‘believe and act like if you know what’s good for you’ sense – you should profess the same beliefs you see around you because they are chosen strategically, even if they’re false. You should do the same actions you see, because they have hidden social motives and purposes, and people will punish you for acting differently even if they don’t know why acting differently might be bad here – holding out for that explanation is not a chance they are willing to take. Nice human you have there. It would be a shame if someone were to ostracize it or lower its status.

It’s also an argument that everyone is lying to you, all the time, and you know it. Disinformation is everywhere, requiring local modeling. Trusting the ‘expert consensus’ means trusting people who are lying their asses off. All the time. Being more accurate than people who are lying their asses off all the time sounds doable. Everyone is already doing it! Our elephants instinctively adjust for this lying. It also means everyone’s effort is mostly going to other things, so again you should have a relatively easy time doing better. The question is how.

All this hypocrisy and self-deception is how we got here and have all the big brains and nice things. Without it we’d be able to accomplish things more efficiently, but we’d also lose our pro-social norms and things would fall apart. So how do we explicitly explore these concepts and get the benefits, without bringing the whole tower down on our heads?

Everyone punishes those who are not hypocrites in such ways. They also punish anyone who is caught being a hypocrite. When people figure out that no one does the thing they are supposedly doing, and everyone who knows this is lying about it, they almost always conclude there’s a norm to keep lying about it, and start lying themselves lest they be torn apart by unknown enforcement mechanisms. How do we communicate about such things explicitly without bringing this wrath down on us?

You just do it. A little. Then you realize nothing happened. No one is coming for you.

So you do it some more. Nothing happens. No one comes.

Until you shout it from the rooftops, for years. And still nothing happens.

Our instincts say such talk is dangerous, but it isn’t. What is dangerous is speaking against key norms, or against the in-vogue symbols of group loyalty. There are certain things you absolutely cannot say without risk of disaster. They vary from place to place and time to time. Some are true, some are false. All of them are on the list because enough people actually say them, or they are evidence of things people actually believe.

So… don’t say them in the wrong company. That simple. You don’t need to. When they’re true, they’ll be implied. When they’re false, they won’t be.

No one expects you to go meta on the situation. Hansonian claims are fine. Vassarian claims are fine. Yudkowskian claims are fine.

You just can’t be a hypocrite about it.

Or rather, you absolutely must be a hypocrite about it, but be careful to not be caught.

There’s no need to be a hypocrite about being a hypocrite. People are watching you to see what will happen if they press a button and see what stock response you send. Will you pull out the slip of paper containing the appropriate answer? That’s what they are checking. They don’t care what your underlying logical algorithm is; that’s not very correlated with the slips of paper that come out when you press buttons, whereas your loyalty is highly correlated with those paper slips. If your logic says that what you write on those slips is about loyalty, but you pull out the right slips of paper, does that make you less loyal? Or more loyal? I’m not even sure.

The CliqueBots roam the land to strike down those who do not say the Shibboleth. They’re not looking for your logic, unless it bumps into one of the forbidden moves.

So now that you can talk about it, now what? How do we make things better?

On a personal level, we have a better world-model. We now know that medicine isn’t about health and politics usually isn’t about policy, and so on. So we can better understand what others will do, and how to motivate them – we need to appeal to motivations of conspicuous caring and alliance-building, without saying explicitly that we are doing so. We in turn can notice when our actions are conspicuous caring or alliance formation, and decide whether that’s a wise thing to do in context.

On a societal level, we now understand why we can’t ‘just’ do the medicine that improves health, or the politics that improves policy. We know more about why people are doing what they are doing, how the system ticks, what keeps it working, what might break it down. We can more usefully ask where to push on it, or how to compete against it. If we want to sell people on prediction markets, we can ask how we might frame them such that they might get buy-in. If we want to cut health care spending, we can ask how to frame such cuts as caring rather than not caring. Perhaps, for example, we should more share horror stories about over-treatment at end of life, and pressure to use it, and ask how we could care so little as to allow that. Use the system against itself.

We can also recognize that the Zs in question are not bad things. There’s nothing wrong with signaling, conspicuous caring, forming alliances, building communities and displaying one’s fitness. These are human needs. If we squeeze them out of one place, they’ll show up in another. The same goes for searching for hypocrites: it gives us motivation to stay consistent; rationalists might not need such motivation, but most others do. Fear of looking hypocritical keeps them on their toes. We shouldn’t be the hypocrites who condemn such motives, any more than we should be the explicit hypocrites who say we’re doing medical care to show caring and playing politics to build alliances, except instrumentally or additionally. Let us not disrespect the Y that X is not about, for it is about that too!

And this allows us to listen to the elephant, and cooperate with it. It knows more than you do, and faster. It brings wisdom. We can and must be aware of it, guide it, train it, but also trust it, and play the game with everyone else. After enlightenment, chop wood, carry water. But the good wood, and the right water.





The Story of CFAR

In addition to my donation to MIRI, I am giving $4000 to CFAR, the Center for Applied Rationality, as part of their annual fundraiser. I believe that CFAR does excellent and important work, and that this fundraiser comes at a key point where an investment now can pay large returns in increased capacity.

I am splitting my donation and giving to both organizations for three reasons. I want to meaningfully share my private information and endorse both causes. I want to highlight this time as especially high leverage due to the opportunity to purchase a permanent home. And importantly, CFAR and its principals have provided and in the future will provide direct personal benefits, so it’s good and right to give my share of support to the enterprise.

As with MIRI, you should do your own work and make your own decision on whether a donation is a good idea. You need to decide if the cause of teaching rationality is worthy, either in the name of AI safety or for its own sake, and whether CFAR is an effective way to advance that goal. I will share my private information and experiences, to better aid others in deciding whether to donate and whether to consider attending a workshop, which I also encourage.

Here are links to CFAR’s 2017 retrospective, impact estimate, and plans for 2018.


My experience with CFAR starts with its founding. I was part of the discussions on whether it would be worthwhile to create an organization dedicated to teaching rationality, how such an organization would be structured and what strategies it would use. We decided that the project was valuable enough to move forward, despite the large opportunity costs of doing so and high uncertainty about whether the project would succeed.

I attended an early CFAR workshop, partly to teach a class but mostly as a student. Things were still rough around the edges and in need of iterative improvement, but it was clear that the product was already valuable. There were many concepts I hadn’t encountered, or hadn’t previously understood or appreciated. In addition, spending a few days in an atmosphere dedicated to thinking about rationality skills and techniques, and socializing with others who had been selected to attend for that same purpose, was wonderful and valuable as well. Such benefits should not be underestimated.

In the years since then, many of my friends in the community attended workshops, reporting that things have improved steadily over time. A large number of rationality concepts have emerged directly from CFAR’s work, the most central being double crux. They’ve also taken outside concepts known to work and adapted them to the context of rationalist outlooks, an example being trigger action plans. I had the opportunity recently to look at the current CFAR workbook, and I was impressed.

In February, CFAR president and co-founder Anna Salamon organized an unconference I attended. It was an intense three days that left me and many other participants better informed and also invigorated and excited. As a direct result of that unconference, I restarted this blog and stepped back into the fray and the discourse. I have her to thank for that. She was also a force behind the launch of the new Less Wrong, as were multiple other top CFAR people, including but far from limited to Less Wrong’s benevolent dictator for life Matthew Graves, Michael “Valentine” Smith and CFAR instructor Oliver Habryka.

I wanted to attend a new workshop this year at Anna’s suggestion, as I think this would be valuable on many levels, but my schedule and available vacation days did not permit it. I hope to fix this in the coming year, perhaps as early as mid-January.

As with MIRI, I have known many of the principals at CFAR for many years, including Anna Salamon, Michael Smith and Lauren Lee, along with several alumni and several instructors. They are all smart, trustworthy and dedicated people who believe in doing their best to help their students and to help those students have an impact in AI Safety and other places that matter.

In my endorsement of MIRI, I mentioned that the link between AI and rationality cuts both ways. Thinking about AI has helped teach me how to think. That effect does not get the respect it deserves. But there’s no substitute for studying the art of thinking directly. That’s where CFAR comes in.


CFAR is at a unique stage of its development. If the fundraiser goes well, CFAR will be able to purchase a permanent home. Last year CFAR spent about $500,000 on renting space. Renting the kind of spaces CFAR needs is expensive. Almost all of these needs would be covered by CFAR’s new home, with a mortgage plus maintenance that they estimate costing at most $10,000 a month ($120,000 a year), saving roughly 75% on space costs and a whopping 25% of CFAR’s annual budget. The marginal cost of running additional workshops would fall even more than that.

In addition to that, the ability to keep and optimize a permanent home, set up for their purposes, will make things run a lot smoother. I expect a lot of gains from this.

Whether or not CFAR will get to do that depends on the results of their current fundraiser, and on what they can raise before the end of the year. The leverage available here is quite high – we can move to a world in which the default is that each week there is likely a workshop being run.


As with MIRI, it is important that I also state my concerns and my biases. The dangers of bias are obvious. I am highly invested in exactly the types of thinking CFAR promotes. That means I can verify that they are offering ‘the real thing’ in an important sense, and that they have advanced not only the teaching of the art but also the art itself, but it also means that I am especially inclined to think such things are valuable. Again as with MIRI, I know many of the principals, which means good information but also might be clouding my judgment.

In addition, I have concerns about the philosophy behind CFAR’s impact report.

In the report, impact is measured in terms of students who had an ‘increase in expected impact (IEI)’ as a result of CFAR. Impact is defined as doing effective altruist (EA) type things: donating to EA-style organizations, working with such organizations (including MIRI and CFAR), pursuing a career path toward EA-aligned work, including AI safety, or leading rationalist/EA events. 151 of the 159 alumni with such impact fall into one of those categories, with only 8 contributing in other ways.

I sympathize with this framework. Not measuring at all is far worse than measuring. Measurement requires objective endpoints one can measure.

I don’t have a great alternative. But the framework remains inherently dangerous. Since CFAR is all about learning how to think about the most important things, knowing how CFAR is handling such concerns becomes an important test case.


The good news is that CFAR is thinking hard and well about these problems, both in my private conversations with them and in their listed public concerns. I’m going to copy over the ‘limitations’ section of the impact statement here:

  • The profiles contain detailed information about particular people’s lives, and our method of looking at them involved sensitive considerations of the sort that are typically discussed in places like hiring committees rather than in public. As a result, our analysis can’t be as transparent as we’d like and it is more difficult for people outside of CFAR to evaluate it or provide feedback.
  • We might overestimate or underestimate the impact that a particular alum is having on the world. Risk of overestimation seems especially high if we expect the person’s impact to occur in the future. Risk of underestimation seems especially high if the person’s worldview is different from ours, in a way that is relevant to how they are attempting to have an impact.
  • We might overestimate or underestimate the size of CFAR’s role in the alum’s impact. We found it relatively easier to estimate the size of CFAR’s role when people reported career changes, and harder when they reported increased effectiveness or skill development. For example, the September 2016 CFAR for Machine Learning researchers (CML) program was primarily intended to help machine learning researchers develop skills that would lead them to be more thoughtful and epistemically careful when thinking about the effects of AI, but we have found it difficult to assess how well it achieved this aim.
  • We only talked with a small fraction of alumni. Focusing only on these 22 alumni would presumably undercount CFAR’s positive effects. It could also cause us to miss potential negative effects: there may be some alums who counterfactually would have been doing high-impact work, but instead are doing something less impactful because of CFAR’s influence, and this methodology would tend to leave them out of the sample.
  • This methodology is not designed to capture broad, community-wide effects which could influence people who are not CFAR alums. For example, one alum that we interviewed mentioned that, before attending CFAR, they benefited from people in the EA/rationality community encouraging them to think more strategically about their problems. If CFAR is contributing to the broader community’s culture in a way that is helpful even to people who haven’t attended a workshop, then that wouldn’t show up in these analyses or the IEI count.
  • When attempting to shape the future of CFAR in response to these data, we risk overfitting to a small number of data points, or failing to adjust for changes in the world over the past few years which could affect what is most impactful for us to do.

These are very good concerns to have. Many of the most important effects of CFAR are essentially impossible to objectively measure, and certainly can’t be quantified in an impact report of this type.

My concern is that measuring in this way will be distortionary. If success is measured and reported, to EAs and rationalists, as alumni who orient towards and work on EA and rationalist groups and causes, the Goodhart’s Law dangers are obvious. Workshops could become increasingly devoted to selling students on such causes, rather than improving student effectiveness in general and counting on effectiveness to lead them to the right conclusions.

Avoiding this means keeping instructors focused on helping the students, and far away from the impact measurements. I have been assured this is the case. Since our community is unusually scrupulous about such dangers, I believe we would be quick to notice and highlight the behaviors I am concerned about, if they started happening. This will always be an ongoing struggle.

As I said earlier, I have no great alternative. The initial plan was to use capitalism to keep such things in check, but selling to the public is if anything more distortionary. Other groups that offer vaguely ‘self-help’ style workshops end up devoting large percentages of their time to propaganda and to giving the impression of effectiveness rather than actual effectiveness. They also cut off many would-be students from the workshops due to lack of available funds. So one has to pick one’s poison. After seeing how big a distortion market concerns were to MetaMed, I am ready to believe that the market route is mostly not worth it.


I believe that both MIRI and CFAR are worthy places to donate, based on both public information and my private information and analysis. Again I want to emphasize that you should do your own work and draw your own conclusions. In particular, the case for CFAR relies on believing in the case for rationality, the same way that the case for MIRI relies on believing in the need for work in AI Safety. There might be other causes and other organizations that are more worthy; statistically speaking, there probably are. These are the ones I know about.

Merry Christmas to all.



I Vouch For MIRI

Another take with more links: AI: A Reason to Worry, A Reason to Donate

I have made a $10,000 donation to the Machine Intelligence Research Institute (MIRI) as part of their winter fundraiser. This is the best organization I know of to donate money to, by a wide margin, and I encourage others to also donate. This belief comes from a combination of public information, private information and my own analysis. This post will share some of my private information and analysis to help others make the best decisions.

I consider AI Safety the most important, urgent and under-funded cause. If your private information and analysis says another AI Safety organization is a better place to give, give there. I believe many AI Safety organizations do good work. If you have the talent and skills, and can get involved directly, or get others who have the talent and skills involved directly, that’s even better than donating money.

If you do not know about AI Safety and unfriendly artificial general intelligence, I encourage you to read about them. If you’re up for a book, read this one.

If you decide you care about other causes more, donate to those causes instead, in the way your analysis says is most effective. Think for yourself, do and share your own analysis, and contribute as directly as possible.


I am very confident in the following facts about artificial general intelligence. None of my conclusions in this section require my private information.

Humanity is likely to develop artificial general intelligence (AGI) vastly smarter and more powerful than humans. We are unlikely to know that far in advance when this is about to happen. There is wide disagreement and uncertainty on how long this will take, but certainly there is substantial chance this happens within our lifetimes.

Whatever your previous beliefs, the events of the last year, including AlphaGo Zero, should convince you that AGI is more likely to happen, and more likely to happen soon.

If we do build an AGI, its actions will determine what is done with the universe.

If the first such AGI we build turns out to be an unfriendly AI that is optimizing for something other than humans and human values, all value in the universe will be destroyed. We are made of atoms that could be used for something else.

If the first such AGI we build turns out to care about humans and human values, the universe will be a place of value many orders of magnitude greater than it is now.

Almost all AGIs that could be constructed care about something other than humans and human values, and would create a universe with zero value. Mindspace is deep and wide, and almost all of it does not care about us.

The default outcome, if we do not work hard and carefully now on AGI safety, is for AGI to wipe out all value in the universe.

AI Safety is a hard problem on many levels. Solving it is much harder than it looks even with the best of intentions, and circumstances are likely to conspire to give those involved very bad personal incentives. Without security mindset, value alignment and tons of advance work, chances of success are very low.

We are currently spending ludicrously little time, attention and money on this problem.

For space reasons I am not further justifying these claims here. Jacob’s post has more links.


In these next two sections I will share what I can of my own private information and analysis.

I know many principals at MIRI, including senior research fellow Eliezer Yudkowsky and executive director Nate Soares. They are brilliant, and are as dedicated as one can be to the cause of AI Safety and ensuring a good future for the universe. I trust them, based on personal experience with them, to do what they believe is best to achieve these goals.

I believe they have already done much exceptional and valuable work. I have also read many of their recent papers and found them excellent.

MIRI has been invaluable in laying the groundwork for this field. This is true both on the level of the field existing at all, and also on the level of thinking in ways that might actually work.

Even today, most who talk about AI Safety suggest strategies that have essentially no chance of success, but at least they are talking about it at all. MIRI is a large part of why they’re talking at all. I believe that something as simple as these DeepMind AI Safety test environments is good, helping researchers understand there is a problem much more deadly than algorithmic discrimination. The risk is that researchers will realize a problem exists, then think ‘I’ve solved these problems, so I’ve done the AI Safety thing’ when we need the actual thing the most.

From the beginning, MIRI understood the AI Safety problem is hard, requiring difficult high-precision thinking, and long term development of new ideas and tools. MIRI continues to fight to turn concern about ‘AI Safety’ into concern about AI Safety.

AI Safety is so hard to understand that Eliezer Yudkowsky decided he needed to teach the world the art of rationality so we could then understand AI Safety. He did exactly that, which is why this blog exists.

MIRI is developing techniques to make AGIs we can understand and predict and prove things about. MIRI seeks to understand how agents can and should think. If AGI comes from such models, this is a huge boost to our chances of success. MIRI is also working on techniques to make machine learning based agents safer, in case that path leads to AGI first. Both tasks are valuable, but I am especially excited by MIRI’s work on logic.


Eliezer’s model was that if we teach people to think, then they can think about AI.

What I’ve come to realize is that when we try to think about AI, we also learn how to think in general.

The paper that convinced OpenPhil to increase its grant to MIRI was about Logical Induction. That paper was impressive and worth understanding, but even more impressive and valuable in my eyes is MIRI’s work on Functional Decision Theory. This is vital to creating an AGI that makes decisions, and has been invaluable to me as a human making decisions. It gave me a much better way to understand, work with and explain how to think about making decisions.

Our society believes in and praises Causal Decision Theory, dismissing other considerations as irrational. This has been a disaster on a level hard to comprehend. It destroys the foundations of civilization. If we could spread practical, human use of Functional Decision Theory, and debate on that basis, we could get out of much of our current mess. Thanks to MIRI, we have a strong formal statement of Functional Decision Theory.
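The contrast can be made concrete with Newcomb’s problem, the standard test case for these theories. A sketch, with illustrative numbers of my own (nothing here is from MIRI’s paper): a highly accurate predictor puts $1,000,000 in an opaque box only if it predicts you will take that box alone, while a transparent box always holds $1,000. Causal Decision Theory reasons ‘the boxes are already filled’ and takes both; Functional Decision Theory chooses the output of its decision function, with which the prediction is correlated:

```python
# Newcomb's problem sketch. ACCURACY is an assumed predictor accuracy.
ACCURACY = 0.99

def expected_payoff(one_box: bool) -> float:
    # FDT evaluates the policy: the prediction correlates with the choice,
    # so the box contents depend (logically, not causally) on what you pick.
    if one_box:
        return ACCURACY * 1_000_000 + (1 - ACCURACY) * 0
    return ACCURACY * 1_000 + (1 - ACCURACY) * (1_000 + 1_000_000)

# One-boxing wins in expectation (~$990,000 vs ~$11,000).
assert expected_payoff(one_box=True) > expected_payoff(one_box=False)
```

Choosing the policy rather than the causal act is exactly the move that generalizes to voting, funding shortfalls, and muggers, as discussed below in Pascal’s Muggle Pays.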

Whenever I think about AI or AI Safety, read AI papers or try to design AI systems, I learn how to think as a human. As a side effect of MIRI’s work, my thinking, and especially my ability to formalize, explain and share my thinking, has been greatly advanced. Their work even this year has been a great help.

MIRI does basic research into how to think. We should expect such research to continue to pay large and unexpected dividends, even ignoring its impact on AI Safety.


I believe it is always important to use strategies that are cooperative and information creating, rather than defecting and information destroying, and that preserve good incentives for all involved. If we’re not using a decision algorithm that cares more about such considerations than maximizing revenue raised, even when raising for a cause as good as ‘not destroying all value in the universe,’ it will not end well.

This means that I need to do three things. I need to share my information, as best I can. I need to disclose my own biases, so others can decide whether and how much to adjust for them. And I need to avoid using strategies that would distort or mislead.

I have not been able to share all my information above, due to a combination of space, complexity and confidentiality considerations. I have done what I can. Beyond that, I will simply say that what remaining private information I have on net points in the direction of MIRI being a better place to donate money.

My own biases here are clear. The majority of my friends come from the rationality community, which would not exist except for Eliezer Yudkowsky. I met my wife Laura at a community meetup. I know several MIRI members personally, consider them friends, and even ran a strategy meeting for them several years back at their request. It would not be surprising if such considerations influenced my judgment somewhat. Such concerns go hand in hand with being in a position to do extensive analysis and acquire private information. This is all the more reason to do your own thinking and analysis of these issues.

To avoid distortions, I am giving the money directly, without qualifications or gimmicks or matching funds. My hope is that this will be a costly signal that I have thought long and hard about such questions, and reached the conclusion that MIRI is an excellent place to donate money. OpenPhil has a principle that they will not fund more than half of any organization’s budget. I think this is an excellent principle. There is more than enough money in the effective altruist community to fully fund MIRI and other such worthy causes, but these funds represent a great temptation. They risk causing great distortions, and tying up action with political considerations, despite everyone’s best intentions.

As small givers (at least, relative to some) our biggest value lies not in the use of the money itself, but in the information value of the costly signal our donations give and in the virtues we cultivate in ourselves by giving. I believe MIRI can efficiently utilize far more money than it currently has, but more than that this is me saying that I know them, I know their work, and I believe in and trust them. I vouch for MIRI.







Pascal’s Muggle Pays

Reply To (Eliezer Yudkowsky): Pascal’s Muggle: Infinitesimal Priors and Strong Evidence

Inspired to Finally Write This By (AlexMennen at Lesser Wrong): Against the Linear Utility Hypothesis and the Leverage Penalty.

The problem of Pascal’s Muggle begins:

Suppose a poorly-dressed street person asks you for five dollars in exchange for doing a googolplex’s worth of good using his Matrix Lord powers.

“Well,” you reply, “I think it very improbable that I would be able to affect so many people through my own, personal actions – who am I to have such a great impact upon events?  Indeed, I think the probability is somewhere around one over googolplex, maybe a bit less.  So no, I won’t pay five dollars – it is unthinkably improbable that I could do so much good!”

“I see,” says the Mugger.

At this point, I note two things. I am not paying. And my probability that the mugger is a Matrix Lord is much higher than five in a googolplex.

That looks like a contradiction. It’s positive expectation to pay, by a lot, and I’m not paying.
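To see the tension in numbers (the credence below is my own illustrative assumption, and I work in log space because a googolplex overflows any float):

```python
# Naive expected value of paying. A googolplex is 10**(10**100), far too
# large to represent directly, so work in log10 space with exact integers.
log10_p_matrix_lord = -50     # assumed credence; vastly above 1/googolplex
log10_lives_saved = 10**100   # googolplex lives: log10 is a googol

# log10 of expected lives saved if you hand over the five dollars:
log10_expected_lives = log10_p_matrix_lord + log10_lives_saved

# Even after the tiny probability, the expectation is astronomically positive.
assert log10_expected_lives > 10**99
```

So Shut Up and Multiply says pay, by an absurd margin, and yet I don’t. The resolution below is about the decision algorithm, not the arithmetic.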

Let’s continue the original story.

A wind begins to blow about the alley, whipping the Mugger’s loose clothes about him as they shift from ill-fitting shirt and jeans into robes of infinite blackness, within whose depths tiny galaxies and stranger things seem to twinkle.  In the sky above, a gap edged by blue fire opens with a horrendous tearing sound – you can hear people on the nearby street yelling in sudden shock and terror, implying that they can see it too – and displays the image of the Mugger himself, wearing the same robes that now adorn his body, seated before a keyboard and a monitor.

“That’s not actually me,” the Mugger says, “just a conceptual representation, but I don’t want to drive you insane.  Now give me those five dollars, and I’ll save a googolplex lives, just as promised.  It’s easy enough for me, given the computing power my home universe offers.  As for why I’m doing this, there’s an ancient debate in philosophy among my people – something about how we ought to sum our expected utilities – and I mean to use the video of this event to make a point at the next decision theory conference I attend.   Now will you give me the five dollars, or not?”

“Mm… no,” you reply.

No?” says the Mugger.  “I understood earlier when you didn’t want to give a random street person five dollars based on a wild story with no evidence behind it.  But now I’ve offered you evidence.”

“Unfortunately, you haven’t offered me enough evidence,” you explain.

I’m paying.

So are you.

What changed?


The probability of Matrix Lord went up, but the odds were already there, and he’s probably not a Matrix Lord (I’m probably dreaming or hypnotized or nuts or something).

At first the mugger could benefit by lying to you. More importantly, people other than the mugger could benefit by trying to mug you and others who reason like you, if you pay such muggers. They can exploit taking large claims seriously.

Now the mugger cannot benefit by lying to you. Matrix Lord or not, there’s a cost to doing what he just did and it’s higher than five bucks. He can extract as many dollars as he wants in any number of ways. A decision function that pays the mugger need not create opportunity for others.

I pay.

In theory Matrix Lord could derive some benefit like having data at the decision theory conference, or a bet with another Matrix Lord, and be lying. Sure. But if I’m even 99.999999999% confident this isn’t for real, that seems nuts.

(Also, he could have gone for way more than five bucks. I pay.)

(Also, this guy gave me way more than five dollars worth of entertainment. I pay.)

(Also, this guy gave me way more than five dollars worth of good story. I pay.)


The leverage penalty is a crude hack. Our utility function is given, so our probability function had to move, or else Shut Up and Multiply would do crazy things like pay muggers.

The way out is our decision algorithm. As per Logical Decision Theory, our decision algorithm is correlated to lots of things, including the probability of muggers approaching you on the street and what benefits they offer. The reason real muggers use a gun rather than a banana is mostly that you’re far less likely to hand cash over to someone holding a banana. The fact that we pay muggers holding guns is why muggers hold guns. If we paid muggers holding bananas, muggers would happily point bananas.

There is a natural tendency to slip out of Functional Decision Theory into Causal Decision Theory. If I give this guy five dollars, how often will it save all these lives? If I give five dollars to this charity, what will that marginal dollar be spent on?

There’s a tendency for some, often economists or philosophers, to go all lawful stupid about expected utility and berate us for not making this slip. They yell at us for voting, and/or ask us to justify not living in a van down by the river on microwaved ramen noodles in terms of our expected additional future earnings from our resulting increased motivation and the networking effects of increased social status.

To them, we must reply: We are choosing the logical output of our decision function, which changes the probability that we’re voting on reasonable candidates, changes the probability there will be mysterious funding shortfalls with concrete actions that won’t otherwise get taken, changes the probability of attempted armed robbery by banana, and changes the probability of random people in the street claiming to be Matrix Lords. It also changes lots of other things that may or may not seem related to the current decision.
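A toy version of that policy evaluation (every rate and payoff here is a made-up assumption): instead of asking ‘what does handing over $5 cause right now?’, score each policy over the stream of mugging attempts that the policy itself induces:

```python
# Evaluate policies over the world each policy induces, not over a fixed world.
def yearly_loss(pays_verbal_muggers: bool) -> float:
    # If your algorithm is known to pay on a mere story, stories multiply.
    attempts_per_year = 50.0 if pays_verbal_muggers else 0.1
    cost_per_attempt = 5.0 if pays_verbal_muggers else 0.0
    return attempts_per_year * cost_per_attempt

# Refusing verbal muggers loses nothing; paying them invites an industry.
assert yearly_loss(pays_verbal_muggers=False) < yearly_loss(pays_verbal_muggers=True)
```

The same structure is why paying the robed, sky-tearing Matrix Lord is safe: the cost of putting on that demonstration exceeds five dollars, so the policy of paying him does not create the incentive gradient that paying the poorly-dressed street person would.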

Eliezer points out humans have bounded computing power, which does weird things to one’s probabilities, especially for things that can’t happen. Agreed, but you can defend yourself without making sure you never consider benefits multiplied by 3↑↑↑3 without also dividing by 3↑↑↑3. You can have a logical algorithm that says not to treat differently claims of 3↑↑↑3 and 3↑↑↑↑3 if the justification for that number is someone telling you about it. Not because the first claim is so much less improbable, but because you don’t want to get hacked in this way. That’s way more important than the chance of meeting a Matrix Lord.

Betting on your beliefs is a great way to improve and clarify your beliefs, but you must think like a trader. There’s a reason logical induction relies on markets. If you book bets on your beliefs at your fair odds without updating, you will get dutch booked. Your decision algorithm should not accept all such bets!

People are hard to dutch book.

Status quo bias can be thought of as evolution’s solution to not getting dutch booked.
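For concreteness, here is the textbook dutch book (bet sizes are illustrative): anyone whose credences are incoherent, and who will take all comers at the matching odds, can be guaranteed a loss:

```python
# Incoherent credences: P(rain) + P(no rain) = 1.2 > 1.
# At these beliefs, a $1 ticket on each outcome looks worth $0.60.
price_rain, price_no_rain = 0.60, 0.60

# A bookie sells this person both tickets at their own quoted prices.
collected = price_rain + price_no_rain  # $1.20 comes in
paid_out = 1.00                         # exactly one ticket wins: $1.00 goes out

assert collected - paid_out > 0         # riskless profit, whatever the weather
```

Real people dodge this by refusing to book unlimited bets at their stated odds, which is the trader’s instinct the paragraph above recommends.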


Split the leverage penalty into two parts.

The first is ‘don’t reward saying larger numbers’. Where are these numbers coming from? If the numbers come from math we can check, and we’re offered the chance to save 20,000 birds, we can care much more than we would about 2,000 birds. A guy designing pamphlets picking arbitrary numbers, not so much.

Scope insensitivity can be thought of as evolution’s solution to not getting Pascal’s mugged. The one child is real. Ten thousand might not be. Both scope insensitivity and probabilistic scope sensitivity get you dutch booked.

Scope insensitivity and status quo bias cause big mistakes. We must fight them, but by doing so we make ourselves vulnerable.

You also have to worry about fooling yourself. You don’t want to give your own brain reason to cook the books. There’s an elephant in there. If you give it reason to, it can write down larger exponents.

The second part is applying Bayes’ Rule properly. Likelihood ratios for seeming high leverage are usually large. Discount accordingly. How much is a hard problem. I won’t go into detail here, except to say that if calculating a bigger impact doesn’t increase how excited you are about an opportunity, you are doing it wrong.
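A hedged worked example of that discount (every number below is an assumption for illustration): the base rate of genuinely enormous-leverage opportunities is tiny, while merely *seeming* enormous-leverage is common, so the claim moves you by the likelihood ratio rather than anywhere near the claimed figure:

```python
# Bayes' Rule applied to a claim of huge leverage. All numbers are assumed.
prior = 1e-6               # base rate of genuinely huge-leverage opportunities
p_claim_given_real = 0.9   # real ones usually do look high leverage
p_claim_given_fake = 0.05  # but so does a fair share of ordinary pitches

posterior = (p_claim_given_real * prior) / (
    p_claim_given_real * prior + p_claim_given_fake * (1 - prior)
)

# Likelihood ratio is 0.9 / 0.05 = 18, so posterior is roughly 18x the prior,
# still nowhere near taking the claimed impact at face value.
assert posterior < 1e-4
```

The discount is large, but it is a multiplier on the prior, not a flat veto: a genuinely bigger calculated impact should still leave you more excited, which is the test proposed above.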


