Prediction Markets: When Do They Work?

Epistemic Status: Resident Expert

I’m a little late on this, which was an old promise to Robin Hanson (not that he asked for it). I was motivated to deal with this again by the launch of Augur (REP), the crypto prediction market token. And by the crypto prediction market token, I mean the empty shell of a potential future prediction market token; what they have now is pretty terrible but in crypto world that is occasionally good for a $300 million market cap. This is, for now, one of those occasions.

The biggest market there, by far, is on whether Ether will trade above $500 at the end of the year. This is an interesting market because Augur bets are made in Ether. So even though Ether is currently at $480, the market (as of last time I checked) says it’s 74% to be trading above $500 (it’s currently Thursday, July 26, and I’m not going to go back and keep updating these numbers). When I first saw this the market was at 63%, which seemed to me like a complete steal. Now it’s at 74%, which seems more reasonable, which means the first ‘official DWATV trading tip’ will have to wait. A shame!

A better way to ask this question, given how close the price is to $500 now, is what the ratio of ‘given Ether is above $500 what does it cost’ to ‘given Ether is below $500 what does it cost’ should be. A three to one ratio seems plausible?

The weakness (or twist) this implies applies to prediction markets generally. If you bet on an event that is correlated with the currency you’re betting in, the fair price can be very different from the true probability. It doesn’t have to be price based – think about betting on an election between a hard money candidate and one who will print money, or a prediction on a nuclear war.

If I bet on a nuclear war, and win, how exactly am I getting paid?
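To see the size of the effect, here’s a quick sketch (the probability and conditional prices are my own illustrative assumptions, not the actual market’s): if the true probability of Ether finishing above $500 is p, and the expected Ether price is H conditional on yes and L conditional on no, then a contract that pays out in Ether is fair at q = p·H / (p·H + (1−p)·L), not at p.

```python
# Fair price of a binary "ETH > $500" contract that pays out in ETH.
# Illustrative numbers only: p, price_if_yes, price_if_no are assumptions.

def fair_price_in_eth(p, price_if_yes, price_if_no):
    """p is the true probability; the other two arguments are the
    expected ETH/USD price conditional on each outcome.

    A share costs q ETH and pays 1 ETH on yes. Setting the expected
    dollar profit to zero gives q = p*H / (p*H + (1-p)*L).
    """
    h, l = price_if_yes, price_if_no
    return p * h / (p * h + (1 - p) * l)

# Suppose the true chance of finishing above $500 is only 60%,
# but Ether is worth $650 on average if yes and $350 on average if no.
q = fair_price_in_eth(0.60, 650, 350)
print(round(q, 3))  # the Ether-denominated market clears near 0.736
```

With these assumed numbers, a 60% true probability shows up as roughly 74% in the Ether-denominated market, and the implied ratio of conditional prices is (0.6/0.4)·(650/350), or about 2.8 to 1 – close to the three-to-one guess above.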

Robin Hanson, Eliezer Yudkowsky and Scott Sumner are big advocates of prediction markets. In theory, so am I. Prediction markets are a wonderful thing. If you’re not familiar with them or want to read more, a good place to start is this wiki. By giving people a monetary incentive to solve problems and share information, we can learn probabilities (what will GDP be next year?) and conditional probabilities (what will GDP be next year if we pass this tax cut bill?) and use the answers to make the best decision. This method of making decisions is called futarchy.

Formally, a prediction market allows participants to buy and sell contracts. Those contracts then pay out a variable amount of money. Typically this is either binary (will Donald Trump be elected president?), paying out 100 if the event happens and 0 if it doesn’t, or they are continuous (how many electoral college votes will Donald Trump get?) and pay proportionally to the answer. Sometimes there are special cases where the market is void and all transactions are undone, at other times strange cases have special logic to determine the payout level.

There are three types of prediction markets that have gotten non-zero traction.

The first is politics. There are markets at PredictIt and BetFair and Pinnacle Sports, and there used to be relatively deep markets at InTrade. These markets matter enough to get talked about and attract some money when they involve major events like presidential elections, but tend to be quite pathetic for anything less than that.

The second is economics. There are lots of stocks and futures and options and other such products available for purchase. Futures markets in particular are prediction markets. They don’t call themselves prediction markets, but that is one of the things they are, and the information they reveal is invaluable. It’s even sometimes used to make decisions.

The third is sports. Most televised sporting events have bookmakers offering odds and taking bets. They use their own terminology for many things, but these are the closest thing to true prediction markets out there.

What makes a successful prediction market? What makes an unsuccessful prediction market? When are they efficient? What gets people involved?

To get a thriving market, you need (at least) these five things.

Well-Defined



If you can’t exactly define the outcome, you can’t have a prediction market. Even highly unlikely corner cases must be resolved. Thus, if you want a market on “Donald Trump is elected president of the United States in 2020” you need to know exactly what happens if he dies after the election but before inauguration, or if there is a revolt in the electoral college, or if the election is fraudulent or cancelled, or if he loses to a different person also named “Donald Trump.”  That’s not because Trump makes such issues more likely. If you were betting on Obama vs. Romney, you’d need to do all the same stuff.

In sports markets, this means writing up a multi-page document detailing what happens when a game is rained out, disputed, postponed, tied, you name it, along with all the other rules. If there’s an angle left ambiguous, you can bet (well you can’t, but if you could, you’d have good odds) that someone will try to take advantage of it eventually. That leaves everyone mad and ruins good business relationships. It’s important to have clear rules and stick to them.

One of the first markets on Augur asked, “Will England defeat Croatia in the World Cup?” Which I immediately recognized as a really bad wording, because it was ambiguous. If the game had gone to overtime, or even to penalty kicks, and England had advanced, what happens? In a real sportsbook, generally bets default to regulation time only, so the match would be ruled a draw, and the answer would be “No” even if England later won.

That’s not acceptable. All it takes is one corner case to get people yelling at each other, and drive them away. When they do happen, the sportsbook is wise to eat the loss, even if that means paying both sides, and then fix its procedures. That’s one benefit of having a central authority to blame.

I’m also including in this the requirement that you can be confident you can collect if you win. As one sportsbook’s slogan puts it, sweat the game, not the payout. Bets with people who might not pay you require huge edges. They’re about accepting the risk to land the whale. They don’t do much for price discovery, and are for professionals only.


Quick Resolution

The faster the market pays out, the more interest you’ll get. Markets that tie up money for weeks or months, let alone years, see decisive declines in participation. Before the season, there are markets for which teams will win the championship, or how many wins each team will have. This seems great, since if you’re right you can have a huge advantage, and it gives you an interest in games for much or all of the season.

Despite that, you will see less money wagered on any given season win total than for a random game played by that team. Usually by an order of magnitude. That’s how valuable quick resolution is. Even in March Madness, where everyone fills out office brackets, the bulk of the real wagering goes game by game. There are lots of propositions, and there’s value there, but only for small amounts. Betting your bracket before the tournament, despite brackets being something presidents do to show they’re in touch, isn’t a thing for serious money.

Thus, markets on small events like individual games are bigger, and more efficient, than markets on bigger and more interesting things like entire seasons or even a playoff series. The primary purpose of the long-term markets isn’t to make money; it’s to provide a service so people can see what odds have been assigned to various outcomes.

Major events like presidential elections have enough inherent interest to still see solid markets, but only barely. There’s a lot of interest in what the odds are, but the volumes traded are quite thin, so much so that it is in the interest of partisans to trade in order to move the price and thus change the political narrative.

Economic markets are the only place longer-term markets prosper.

Note that if the market is sufficiently liquid, it can act as if it is short term, provided the prices will move quickly enough, since participants can then exit their positions.


Probable Resolution

Trading in a prediction market ties up capital, creates risk and requires optimization pressure. I need to pay attention to the market, both to decide what fair value is and then to go about maximizing and making good trades.

If that market is conditional, and trades were only valid if those conditions were met, we have a problem: I’ve wasted my time, money and risk capacity, and gotten nothing in return.

One of the markets I liked a lot as a gambler was called the Home/Away line in MLB. The idea was, you added up all the runs scored by the home teams and compared them to all the runs scored by the away teams, and bet on which would be higher, or on the sum of all runs scored that day, which was called The Grand Salami. There was lots of value in these lines because people were using very simple heuristics, and if you did first-level statistics on runs scored in games and how distributions add up, you could get a big edge.

What was continuously frustrating was that often one game would get rained out, cancelling all your wagers. Often I’d have locked in large profits, and they’d be lost.

This wasn’t enough to keep me from betting, because I got my money back within a day and the edge was huge. But when funds were tight, it shifted those funds towards other things, and every time I thought about looking at the Home/Away line, my brain fired back ‘are you sure you want to bother?’ so I only cared when my edge was large.

Gamblers actively prefer betting on odds that can’t tie, e.g. betting on a football team -3.5 or -2.5 rather than -3.0, because the -3 line ties 10% of the time. The bookmaker agrees!

If you are instead tying up your money for weeks, months or even years, and instead of a 10% chance of rain somewhere there’s a 50% or even 90% chance the event doesn’t fire, that’s much worse. If you’re dealing with a hyper-complex Hansonian death trap of a conditional market where it’s 99%+ to not happen, even with good risk measurement tools that don’t tie up more money than necessary, no one is going to want to put in the work and tie up the funds.
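A rough way to see the math (all the numbers here are my own illustrative assumptions): what a trader cares about is edge per unit of capital per unit of time, discounted by the chance the market resolves at all.

```python
# Annualized expected return on tied-up capital, as a rough screen.
# All inputs are illustrative assumptions, not real market figures.

def annualized_edge(edge, p_resolve, days_tied_up):
    """Expected profit per unit staked, scaled to a year.

    edge: expected profit fraction if the market resolves.
    p_resolve: chance the market resolves (isn't voided by rain,
        unmet conditions, etc.).
    days_tied_up: how long the stake is locked either way.
    """
    return edge * p_resolve * (365 / days_tied_up)

# A baseball bet: 3% edge, 90% chance of no rainout, money back in a day.
sports = annualized_edge(0.03, 0.90, 1)    # roughly 9.9: nearly 10x a year

# A year-long conditional market: 10% edge, only a 10% chance it fires.
hanson = annualized_edge(0.10, 0.10, 365)  # about 0.01: one percent a year
```

Under these assumptions the quick-resolving bet with a tiny edge is almost a thousand times more attractive per dollar of capital than the improbable long-term market, which is the intuition behind everyone’s reluctance.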


Limited Hidden Information

Insider trading of securities is illegal. This seems at odds with price discovery. If I know something you don’t know, then my not trading on it makes the price less accurate. One might suggest that allowing insiders to trade would make the price more efficient.

The problem is that it drives people away.

If other people know something important I don’t, then my trades are giving them a way to pick my pocket. When I look at the price and see it is wrong, my prior is ‘there is something I don’t know’ rather than ‘there is something they don’t know or understand’. I’m the one making the mistake. I’m the sucker. So I walk away.

Thus, while the individual trades of insiders make the market more efficient, they punish others trying to share their information and analysis with the market. This is bad. Bad enough to outright kill markets with too much risk of insider information.

The first season of Survivor, there was a market on who would win. The production crew found out. Then there was no market.

Another important case: If a person with a large role in choosing the outcome can bet in the market, you might not want to risk betting against him. Or bet at all.

When there is a big injury risk in a game, the market dies until the issue is resolved. When the issue is resolved, trading picks back up no matter the outcome.

Even reduction of uncertainty as such can be important. Before important events like elections often money will ‘sit on the sideline’ until the outcome is known. This can even result in bad outcomes driving prices up. We may not like the new boss, but at least we now know who he is and can go about our business.

In my experience with prediction markets, important hidden information other traders could know acts as an outright veto on the market. It might not do that if the market had enough ‘natural’ trading volume, but that’s a high bar to clear.


Sources of Disagreement and Interest

Also known as, Suckers at the Table.

Any market, like a poker table, requires sources of disagreements and profits. Without a sucker at the table, why participate in the market? Remember, if you can’t spot the sucker in your first half hour at the table, then you are the sucker.

Ideally, you want either a direct subsidy to the market, or natural buyers and sellers.

If someone has a reason to trade even at a not so great price, for example airlines or countries hedging against moves in oil prices, then everyone can compete to make money off of that. The same would go if someone wanted to hedge against a political event, or to bet for or against their favorite team on principle – either to make it interesting, or to get what I used to half-jokingly call ‘compensation for our pain’ when the Mets inevitably lost again.

Another class of ‘natural’ traders are gamblers or noise traders, who demand liquidity for no particular reason. They too can be the sucker.

If people who want to learn fair probabilities subsidize the market, like donors subsidize the NGDP futures markets Scott Sumner helped create, that also works.

And of course there’s always the people who think they know something, and are sadly mistaken.

What traders need, more than anything else, is the ability to tell a story for why their trade is a good idea. To do that, they need to know why they have the opportunity to make this trade. What do they know that others don’t know? What mistake do they think people are making?

For sports, politics and economics, everyone has an opinion, so it is easy for them to get the idea that they have the advantage.

The genius of a binary bet on Ether prices that trades in Ether is that there are a lot of angles from which you can think you know something the market doesn’t fully know, and lots of mistakes you can think other people are making. It’s easy to think of many different angles and approaches one could take. One can trade short term or long term, do arbitrage, or use it for leverage. Another could be doing it as a form of speech, or an experiment, and the group that can reach the market is doubtless quite biased.

It’s easy to make that leap to ‘I know why he’s willing to give me this trade’ and even to ‘I know exactly what mistake he is making.’ It’s a great choice for a big initial market.


Summary and Conclusion

Prediction markets rely on attracting a variety of participants, including both ‘losers’ who have natural reasons to participate, and ‘winners’ who will be attracted by that value.

Any critical issue can kill a prediction market dead, or even an entire prediction market ecosystem.

If your market isn’t well-defined, arguments over price become arguments over the rules, which turn into very angry participants. If this happens even a small percentage of the time, it drives everyone away.

If your market doesn’t resolve quickly, and quickly is on the order of days or at most a few weeks, it needs to be massively liquid and refer to real world questions people have natural exposures to, to create participation. It ties up cash and doesn’t offer the rush of a good gamble. No one wants to bet on an obscure outcome years from now.

If your market is unlikely to resolve, participants will find other uses for their time and money. The chance here has to be small, well under 50%, and much lower if time to resolution isn’t quick. Years-long markets that are unlikely to trigger are going to have severe issues.

If your market has potential hidden information, that is a tax on everyone who participates, who are prey to adverse selection. Everyone must worry that the market knows what they don’t know, and that them liking one side of a trade means the person on the other side has a secret; there are even traders in such markets that follow a strategy of ‘find the naively correct bet, and bet the other way,’ which is known (or should be known) as The Costanza.

If your market doesn’t draw natural interest and offer sources of disagreement, to create a foundation of participation and liquidity, or at least bribe the participants with explicit subsidies, there’s nothing to build on and no interest.

In addition to these threats, such markets face regulatory and legal hurdles, as well as various ethical concerns. If you offer one market that seems to mimic a regulated trade, such as an option on a stock, or that sounds distasteful, such as the so-called ‘assassination markets,’ that can be all anyone will see when they look at your offerings. Even though such concerns, frankly, are mostly quite stupid, they’re real and people care about this a lot. They’ve gotten basically every past attempt at prediction markets (other than bookmakers and professional economic trading platforms) shut down.

Active curation is necessary to deal with many of these issues, and to provide simple ease of use and ease of finding what one is looking for and would be interested in.

Surgical use of prediction markets for key information points remains a great idea, and in many cases people love a good bet. But we shouldn’t get too ambitious, and keep an eye on the practical needs of participants.











Posted in Uncategorized | 10 Comments

Who Wants The Job?

Over this last month, my wife and I searched for and hired a new nanny, as ours had decided to learn programming and move to The Bay.

We ended up with a wonderful woman we found through a personal recommendation on a local mailing list. Her previous employer posted saying, hire this woman, she’s fantastic, and this was indeed the case.

Before we realized we’d found her, various people encouraged us to post a job listing on a website, so we did so on Sittercity.

That went… less well.

We got over one hundred applications.

The majority of them had profiles that did not match the requirements of the job. About half were only available for a few months or for part time work. Many others wouldn’t work with multiple children, or had higher salary requirements than we listed.

The vast majority of them had major spelling and/or grammar errors in their profiles. Also in their messages to us.

From the remaining profiles, we reached out to a few dozen applicants.

The majority of them did not respond to us at all. 

Of those who did respond, several did not answer the phone for the initial interview and provided no explanation.

Of those who did answer, several were actively rude on the phone. 

Of those who were not rude on the phone, several did not engage with the questions being asked or show any interest in the job.

Of those who passed that screening and were asked to come for an in person interview, fully half of them failed to show up, most with no warning or cancellation. In all such cases, they didn’t contact us again. In other cases, they cancelled, but failed to make any real effort to reschedule.

As a result of all this, we only ended up doing two in person interviews. Because it turns out that getting people to show up, at all, is super hard. One of them seemed acceptable in a pinch. The other we hired.

The vast majority of people who were on a job site, seeking a job, were not capable of tasks like: Write a profile page in English without major mistakes. Respond promptly when an employer contacts you to respond to your application. Talk politely on the phone and sound like you are listening and care about the job. Show up to your interview.

Yes. Standards are that low. 

You are much more employable than you think.

If you’re wondering why employers say it’s so hard to hire people despite getting a hundred applications for every position? That’s why.

If you’re wondering why many people can’t find work? I can’t help but wonder if it’s because they can’t do the very basic things even at high leverage points like the interview process. Things like showing up and responding to emails. Being on time. Being polite. Making sure the profile you show the world doesn’t have major errors and matches the jobs you’re applying to. Acting like you actually want the job.

Those are standards that 98% or so of applications we got failed to live up to. Presumably the same people, failing to live up to them, apply again and again, failing those standards again and again, wondering why they can’t get hired.




Simplicio and Sophisticus

Previously (Slate Star Codex): The Whole City is Center

Epistemic Status:

[Image: the two Spider-Men pointing at each other meme]

Note that after writing a lot of this, I checked and Sniffnoy anticipated a lot of this in the comments, but I think both takes are necessary.

There are many useful points to Scott’s philosophical dialogue, The Whole City is Center, between Simplicio and Sophisticus. I want to point out an extra one I think is important.

Here’s a short summary of some key points of disagreement they have.

Simplicio claims that there are words people use to describe concepts, and we should use those words to describe those concepts, even if those words have unfortunate implications. Say true things about the world. Larry is lazy.

Sophisticus says no, if those words have unfortunate implications we shouldn’t use them. And in many cases, where the unfortunate implications are inevitable because people have those implications about the concept being described, we shouldn’t use any word at all to describe the concept. Larry can be counted on not to do things. But we shouldn’t treat lazy as a thing, because people think being lazy is bad and there’s no utility in thinking Larry is bad.

Simplicio says we should use whatever techniques work, regardless of whether they are negative reinforcement, positive reinforcement, before the act, after the act, too big, too small, you name it, if that’s the system that works. And if people’s natural instincts are to do things that work best as a system, but are sometimes ‘overkill’ or have unfortunate side effects in a particular case, you should accept that.

Sophisticus says no. Studies show negative reinforcement doesn’t work, so don’t do it. Studies show harsher prisons don’t deter people so don’t use them. You should only use exactly what is needed to cause a direct effect in each situation. Or, if you need to use deterrence, what the evidence says will actually deter people.

Sophisticus says, we should look upon motivations like ‘I want this person to suffer’ with horror, and assume something has gone horribly wrong. (He makes no comment on feeling ‘I want this particular person to be happy’, which doesn’t come up.)

Simplicio says, if having seemingly unreasonable desires in some situations, including potential future situations, is the way persons and groups get better results, stop looking at it as some crazy or horrible thing. People’s motivations are messy, they have lots of weird side effects like loving kittens (I would note, so much so that I am punished for not loving them, basically because not having bad side effects of a thing is evidence of not having the thing itself). Going all ‘these instincts seem superficially nice so we’re going to approve, and these instincts seem superficially not nice so we’re going to disapprove’ seems wrong.

Sophisticus says, that by refusing to use concepts like lazy, he has a value disagreement with Simplicio and those who do use the lazy concept. Because those people embrace the implications.

Simplicio says no, this isn’t about value disagreement.

But then, near the end, Sophisticus catches Simplicio by saying he’s refusing in context to use the term ‘value difference’ because he doesn’t like its implications, and insisting only upon some Platonic ideal version of value difference. Which, Sophisticus says, makes him a hypocrite! Rather than point out either that no, it doesn’t, or maybe it does and you get non-zero points for noticing but asking for people not to ever be a hypocrite is not a valid move, he instead gets so embarrassed he flees town, and only redeems himself ten years later by pointing out what happens if you reject the unfortunate implications of the term ‘city center.’

The most important lesson is, as Sniffnoy observes, the characters have the wrong names. Simplicio should be Sophisticus. Sophisticus should be Simplicio.

(I will continue to refer to them by Scott’s names here.)

Sophisticus wants to solve the world by getting rid of all the things he doesn’t like, and all the things he can’t properly quantify. He only accepts actions that are based on fully described and measured reasons. He will accept second or third order causes and consequences, but only and exactly those with well-described and quantified causal pathways.

Then he says that such actions are intelligent, sophisticated and advanced. They reject the irrational, the non-scientific. So they denigrate people who think otherwise with labels like Simplicio, and pretend that word doesn’t have unfortunate implications. Because it’s never all right to label people, in ways that have unfortunate and false implications (e.g. that a person is simple or stupid) unless you catch someone labeling people.

Simplicio accepts that the world is complex, and that our systems for dealing with it are approximations and sets of rules and values that won’t always do the locally optimal thing, and that doesn’t mean they’re wrong. Simplicio is comfortable with the idea that correlations and associations exist even when we don’t like them.

Sophisticus is what Nassim Taleb calls the Intellectual, Yet Idiot (IYI). By doing things that are more abstract, and discarding most of the valid and useful information and relationships, they fool themselves and others into thinking that they are smarter and more sophisticated. Simplicio is advocating for Taleb’s typical grandmother, who has learned what actually works and survives, even if she doesn’t understand all the reasons or implications.

Sophisticus is vastly simplifying the world.

He simplifies the world by cutting out the parts he does not like, and the parts he does not understand.

This allows him to create a model of the world. That’s great! That’s super useful! I love me some models, and you can’t have models without throwing a lot of stuff out. Often the model gives much better answers despite this, and allows us to learn much and make better decisions.  What makes a model great is that when you get rid of all the fuzziness, you get rid of a lot of noise, and you can manipulate and do math to what is left. Over time, you can add more stuff back into the model, and make it more sophisticated.

When you start thinking in models, or like a rationalist, or an economist, either in general or about a particular thing, that kind of thinking starts out deeply, deeply stupid. You must count on your other ways of thinking to contain the damage and point out the mistakes, to avoid taking these stupid conclusions too seriously, rather than as additional perspectives, as points of departure and future development, and places to learn. It goes way beyond Knowing About Biases Can Hurt People.

Drop stuff from your model, and you fail to understand or optimize for those things. If you then optimize based on your model, the things you left out of the model will be left out, and sacrificed, because they’re using optimization pressure and atoms that can be used for something else. The results might or might not be an improvement. As the optimizations get more extreme, we should expect bigger disruptions and sacrifices of key excluded elements, so that had better be worth it.

One danger is that many people who develop the models either do so because they are really bad at navigating without models, or because they realized how bad everyone is at navigating without models. This provides motivation to work on the models even if they aren’t yet any good, but it also increases temptation to forget that the model is a map and not the territory.

I think this is related to how those who found a business are as a group completely delusional about their chances of success, but also that founding a business is a generally very good idea. Motivating the long term investment and endurance of high costs only works in such cases, even if many more people would be better off in the long run if they did it.

The struggle is: how does one combine these two approaches? Build up one’s models and toolboxes, to allow systematic thinking, while not losing the power of what you’re ignoring, and slowly incorporating that stuff into your systematic thinking. Otherwise, no matter how simplistic the average person might be, you risk being even more so.






Why Destructive Value Capture?

Previously: Front Row Center

I got a lot of push-back from suggesting that there was a way for theaters to improve their customer experience and value proposition at low cost (get rid of the seats that are so close to the screen they cause neck strain), and that theaters should do that.

The push-back didn’t argue that the method wouldn’t improve the customer experience at low cost. There were a few who suggested an alternate high-cost solution that improves the experience more (use high-quality and assigned seating at a substantially higher price point), and which some places have implemented. No one argued that, where the higher-cost solution didn’t make sense, my incremental suggestion wouldn’t improve the customer experience versus the status quo, at relatively low cost.

They also didn’t raise the reasonable argument that getting people to do things at all, especially slightly non-standard things that might look bad on superficial metrics during the pitch meeting, is hard. People don’t think about things, they don’t do things, they don’t optimize, and so on. One could reasonably argue this isn’t worth the effort.

Instead, everyone argued that, unless they were forced to do so, theaters shouldn’t implement the suggestion. Because it would cost them money – they couldn’t sell those few terrible seats, and forcing people to come early increases ad and concession revenue.

That’s interesting. And weird.

The proposition creates value. One comment from Quixote estimates $1.67 in customer time-value is saved in exchange for the loss of $0.10 in ad revenue.

The proposition improves the customer experience. It generates movie-going habits, loyalty and goodwill.

Not implementing the proposition is a destructive value capture. In order to get a little revenue, an order of magnitude more value is destroyed.
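Using Quixote’s numbers, the order-of-magnitude claim is a one-line check (the dollar figures are his estimates from the comments, not measurements):

```python
# Quixote's estimate: forcing customers to sit through the pre-show costs
# them about $1.67 in time-value while capturing about $0.10 in ad revenue.
value_destroyed = 1.67   # customer time-value lost, in dollars
value_captured = 0.10    # ad revenue gained, in dollars

ratio = value_destroyed / value_captured  # about 16.7 to 1
```

A ratio this lopsided is what makes the case suspicious rather than a normal cost of doing business.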

Destructive value capture is normal. In order to capture value, some value is typically destroyed. But when you’re destroying most of the value you withdraw from the system, you should be suspicious mistakes are being made. At a minimum, it’s worth asking on a deeper level why this is happening. What could justify it? What failure mode are we in? How does it come to be, why does it persist, is there a way we can solve it or minimize it? We shouldn’t shrug and mutter something about capitalism. We should treat this as a major failure, and brainstorm potential barriers even if they don’t apply in this case.

Can’t Raise the Price

If you’re charging $15 to see a movie, then destroying $1.50 in value to generate $0.15 in additional income, why aren’t you just not doing that, and instead charging $15.25 to see the movie?

What might stop this from being a good solution?

What if the movie were free? Moving from free to not free is a huge change, even if the additional cost is small. This could drive people away and be hugely value destructive.

What if this introduced an additional collection point? You’d need to ask someone for money an additional time to make up the additional cost, and that could be value destructive.

What if this disrupted a standardized price or crossed a key threshold? Suppose everyone knows that movies cost $15, and there would be a strong reaction against a price of $15.05, because it’s different, or because it makes it hard to give exact change.

What if the market encouraged sorting purely by price? Imagine a world like that of plane tickets, where you go to Kayak or Orbitz or what not, and there is strong default pressure to buy the cheapest ticket without noticing extra charges.

What if regulation prevented higher prices? That which is forbidden is not allowed. Price controls often cause perverse reactions.

Those would be good reasons. None of them apply here. Movies aren’t free (and with MoviePass, they would stay free). Movies have a collection point. Movies don’t have a standardized price or a strong price-sorting search mechanism, and prices are rarely at a key threshold.

Other reasons might apply somewhat, but still seem weak.

What if a price increase is itself bad? The bad event of ‘prices went up’ could matter even if the new price isn’t much different from the old price, and you can’t do that often. A tiny increase might be impractical.

That’s fair. But the increase could be folded into a later, larger increase, or if that’s too big a burden, one could wait on implementation until the next price increase.

What if higher prices decrease customer experience, so they’re more expensive than they look? 

I grant this is likely true for some, but the effect size should be small.

What if this is a pure bad when demand is low, such as at a matinee, and complexity cost prevents price discrimination? 

Again, this seems true but effect size is small. Some places price discriminate by time but the complexity cost stops the majority. So even though removing the seats costs nothing when demand is low, raising the price at those times is net bad.

Would a price increase send the wrong message? Would people then worry about the health of your company, or your industry? Would it thus push down stock prices or reduce your ability to raise money?

It might, indeed. It also might do the opposite. I don’t think this is what’s going on here.

All of that assumes the easy out is available: raising prices. Or, if prices are already higher than they should be, lowering them to where they should otherwise be, then raising them back.

Let’s take away that easy out, and say one of the good reasons applies. You can’t raise the price, and demand exceeds supply.

This is pretty terrible even if you don’t then do value capture. Destructive value allocation is bad enough, via making people wait on lines or make commitments or virtue signal or what have you – anything where the auction involves incinerating rather than redistributing the bids, often all-pay auctions at that. One can think of this as balancing supply and demand by making quality of the supply sufficiently worse. 

Thus we have two mostly distinct problems. We need to pay for the creation and maintenance of nice things without destroying what makes them nice. And we need to do efficient allocation of those nice things, that balances supply and demand and gets the product to the right people.

Letting the price float is the best way to do both, but what happens when you can’t do it? Are we now stuck with terrible seating and massive deadweight loss? What about other situations where the price is stuck? A life lived under advertising’s increasingly long and intrusive shadow? Or worse, the evil bastard children of microtransactions and free to play games?

We seem to be headed that way. I think there are promising answers, which I hope to explore further. That starts with defaulting to price adjustment, and finding creative ways to do price adjustment, and viewing destruction of value as a failure rather than normality or ‘the way of business.’



Posted in Uncategorized | 7 Comments

Front Row Center

Epistemic Status: Lightweight

Related: Choices are Bad, Choices Are Really Bad

Yesterday, my wife and I went out to see Ocean’s Eight (official review: as advertised). The first place we went was a massively overpriced theater (thanks MoviePass!) with assigned seating, but they were sold out (thanks MoviePass!) so we instead went to a different overpriced theater without assigned seating, and got tickets for a later show. We had some time, so we had a nice walk and came back for the show.

When we got back, there was nowhere for us to sit together outside of the first two rows. They’re too close, up where you have to strain your neck to see the screen. My wife took the last seat we could find a few rows behind that, and I got a seat in the second row. It was fine, but I’d have much preferred to sit together.

It was, of course, our fault for showing up on time rather than early to a sold out screening. I mention it because it’s a clean example of how offering less can provide more value.

The theater should, if they don’t want to do assigned seating, rip out the first two rows.

At first this seems crazy. Many people prefer sitting in the first two rows to being unable to attend the show, so the seats create value while increasing profits. What’s the harm?

The harm is introducing risk, and creating an expensive auction.

The risk is that if you go to the movies, especially the movies you most want to see, you’ll be stuck in the first two rows. So when you buy a ticket and go upstairs, you might get a bad experience. An outright sold-out show might even be better, since then you could buy a different ticket or none at all.

The auction is worse. Seats are first come, first served. So if it’s important to get served first, you need to come first. If it’s very important not to be last, to avoid awful seats, you need to come early, and so does everyone else, bidding up the price of not-last the same way you’d bid up being first.

With no awful seats, those who care a lot about better seats will still come early, but most people care a lot less. So everyone can come substantially later, and not feel pressure. Many will show at the last minute, and be totally fine.

The deadweight loss in time from adding those forty extra seats is massive, distributed throughout the theater. Everyone feels pressure to get there early even when they already have a ticket, so even if their seat ends up good, they stressed out about it, and not only burned time but felt bad about being pressured.

Avoiding time-based auctions and signals, or at least minimizing the value at stake in them, is an important and underappreciated problem.



Posted in Uncategorized | 8 Comments

Simplified Poker Conclusions

Previously: Simplified Poker, Simplified Poker Strategy

Related (Eliezer Yudkowsky): Meta-Honesty: Firming Up Honesty Around Its Edge-Cases

About forty people submitted programs that used randomization. Several of those random programs correctly solved for the Nash equilibrium, which did well.

I submitted the only deterministic program.

I won going away.

I broke even against the Nash programs, utterly crushed vulnerable programs, and lost a non-trivial amount to only one program, a resounding heads-up defeat handed to me by the only other top-level gamer in the room, fellow Magic: the Gathering semi-pro player Eric Phillips.

Like me, Eric had an escape hatch in his program that reversed his decisions (rather than retreating to Nash) if he was losing by enough. Unlike me, his actually got implemented – the professor decided that given how well I was going to do anyway, I’d hit the complexity limit, so my escape hatch was left out.

Rather than get into implementation details, or proving the Nash equilibrium, I’ll discuss two things: How few levels people play on, and the motivating point: How things are already more distinct and random than you think they are, and how to take advantage of that.

Next Level

In the comments to the first two posts, most people focused on finding the Nash equilibrium. A few people tried to do something that would better exploit obviously stupid players, but no one tried to discover the opponents’ strategy.

The only reason not to play an exploitable strategy is if you’re worried someone will exploit it!

Consider thinking as having levels. Level N+1 attempts to optimize against Levels N and below, or just Level N.

Level 0 isn’t thinking or optimizing, so higher levels all crush it, mostly.

Level 1 thinking picks actions that are generically powerful, likely to lead to good outcomes, without considering what opponents might do. Do ‘natural’ things.

Level 2 thinking considers what to do against opponents using Level 1 thinking. You try to counter the ‘natural’ actions, and exploit standard behaviors.

Level 3 counters Level 2. You assume your opponents are trying to exploit basic behaviors, and attempt to exploit those trying to do this.

Level 4 counters Level 3. You assume your opponents are trying to exploit exploitative behavior, and acting accordingly. So you do what’s best against that.

And so on. Being caught one level below your opponent is death. Being one level ahead is amazing. Two or more levels different, and strange things happen.
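The level cycle can be illustrated with a toy zero-sum matrix game (the actions and payoffs here are made up for illustration, not from the tournament): Level 0 plays uniformly at random, and each higher level plays a pure best response to the level below it.

```python
# Toy level-k reasoning on a made-up zero-sum matrix game.
ACTIONS = ["aggressive", "standard", "trappy"]

# PAYOFF[i][j]: row player's payoff when row plays ACTIONS[i] vs column ACTIONS[j]
PAYOFF = [
    [0, 2, -3],
    [-2, 0, 1],
    [3, -1, 0],
]

def level_strategy(k):
    """Distribution over actions for a level-k player."""
    if k == 0:
        # Level 0 isn't thinking: uniform random.
        return [1 / 3, 1 / 3, 1 / 3]
    # Level k plays a pure best response to level k-1.
    opp = level_strategy(k - 1)
    evs = [sum(PAYOFF[i][j] * opp[j] for j in range(3)) for i in range(3)]
    best = max(range(3), key=lambda i: evs[i])
    return [1.0 if i == best else 0.0 for i in range(3)]
```

In this toy game the levels chase each other in a cycle: being exactly one level ahead wins, and two levels apart produces the strange matchups described above.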

Life is messy. Political campaigns, major corporation strategic plans, theaters of war. The big stuff. A lot of Level 0. Level 1 is industry standard. Level 2 is inspired, exceptional. Level 3 is the stuff of legend.

In well-defined situations where losers are strongly filtered out, such as tournaments, you can get glimmers of high level behavior. But mostly, you get it by changing the view of what Level 1 is. The old Level 2 and Level 3 strategies become the new ‘rules of the game’. The brain chunks them into basic actions. Only then can the cycle begin again.

Also, ‘getting’ someone with Level 3 thinking risks giving the game away. What level should one be on next time, then?

Effective Randomization

There is a strong instinct that whenever predictable behavior can be punished, one must randomize one’s behavior.

That’s true. But only from another’s point of view. You can’t be predictable, but that doesn’t mean you need to be random.

It’s another form of illusion of transparency. If you think about a problem differently than others, their attempts to predict or model you will get it wrong. The only requirement is that your decision process is complex, and doesn’t reduce to a simple model.

If you also have different information than they do, that’s even better.

When analyzing the hand histories, I know what cards I was dealt, and use that to deduce what cards my opponent likely held, and in turn guess their behaviors. Thus, my opponent likely has no clue what process I’m using, how I implemented it, or what data I’m feeding into it. All of that is effective randomization.

If that reduces to me always betting with a 1, they might catch on eventually. But since I’m constantly re-evaluating what they’re doing, and reacting accordingly, on an impossible-to-predict schedule, such catching on might end up backfiring. It’s the same at a human poker table. If you’re good enough at reading people to figure out what I’m thinking and stay one step ahead, I need to retreat to Nash, but that’s rare. Mostly, I only need to worry, at most, if my actions are effectively doing something simple and easy to model.

Playing the same exact scenarios, or with the same exact people, or both, for long enough, both increases the amount of data available for analysis, and reduces the randomness behind it. Eventually, such tactics stop working. But it takes a while, and the more you care about long histories in non-obvious ways, the longer it will take.

Rather than be actually random, instead one adjusts when one’s behavior has sufficiently deviated from what would look random, such that others will likely adjust to account for it. That adjustment, too, need not be random.

Rushing into doing things to mix up your play, before others have any data to work with, only leaves value on the table.

One strong strategy when one needs to mix it up is to do what the details favor. Thus, if there’s something you need to occasionally do, and today is an unusually good day for it, or now an especially good time, do it now, and adjust your threshold for that depending on how often you’ve done it recently.

A mistake I often make is to choose actions as if I was assuming others know my decision algorithm and will exploit that to extract all the information. Most of the time this is silly.

This brings us to the issue of Glomarization.


Are you harboring any criminals? Did you rob a bank? Is there a tap on my phone? Does this make me look fat?

If when the answer is no I would tell you no, then refusing to answer is the same as saying yes. So if you want to avoid lying, and want to keep secrets, you need to sometimes refuse to answer questions, to avoid making refusing to answer too meaningful an action. Eliezer discussed such issues recently.

This section was the original motivation for writing the poker series up now, but having written it, I think a full treatment should mostly just be its own thing. And I’m not happy with my ability to explain these concepts concisely. But a few thoughts here.

The advantage of fully explicit meta-honesty, telling people exactly under what conditions you would lie or refuse to share information, is that it protects a system of full, reliable honesty.

The problem with fully explicit meta-honesty is that it vastly expands the necessary amount of Glomarization to say exactly when you would use it. 

Eliezer correctly points out that if the Feds ask you where you were last night, your answer of ‘I can neither confirm nor deny where I was last night’ is going to sound mighty suspicious regardless of how often you answer that way. Saying ‘none of your goddamn business’ is only marginally better. Also, letting them know that you always refuse to answer that question might not be the best way to make them think you’re less suspicious.

This means that full Glomarization isn’t practical, unless (this actually does come up) your response to a question can reliably be ‘that’s a trap!’.

However, partial Glomarization is fine. As long as you mix in some refusing to answer when the answer wouldn’t hurt you, people don’t know much. Most importantly, they don’t know how often you’d refuse to answer. 

If the last five times you refused to answer whether there was a dragon in your garage, there was a dragon in your garage, your refusal to answer is rather strong evidence there’s a dragon in your garage.

If it only happened one of the last five times, then there’s certainly a Bayesian update one can make, but you don’t know how often there’s a Glomarization there, so it’s hard to know how much to update on that. The key question is, what’s the threshold where they feel the need to look in your garage? Can you muddy the waters enough to avoid that?
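The size of that update is just Bayes’ rule. A minimal sketch (the numbers are illustrative, not from the post):

```python
def p_dragon_given_refusal(prior, p_refuse_if_dragon, p_refuse_if_clear):
    """Posterior probability of a dragon, given that you refused to answer."""
    dragon = prior * p_refuse_if_dragon
    clear = (1 - prior) * p_refuse_if_clear
    return dragon / (dragon + clear)

# Refusing only when there's a dragon gives the game away entirely:
p_dragon_given_refusal(0.2, 1.0, 0.0)  # -> 1.0
# Mixing in refusals on harmless questions muddies the waters:
p_dragon_given_refusal(0.2, 1.0, 0.5)  # -> about 1/3
```

The more often you Glomarize when the answer is harmless, the less a refusal moves the listener toward the dragon hypothesis, and the less likely they cross whatever threshold makes them look in your garage.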

Once you’re doing that, it is almost certainly fine to answer ‘no’ when it especially matters that they know there isn’t a dragon there, because they don’t know when it’s important, or what rule you’re following. If you went and told them exactly when you answer the question, it would be bad. But if they’re not sure, it’s fine.

One can complement that by understanding how conversations and topics develop, and not set yourself up for questions you don’t want to answer. If you have a dragon in your garage and don’t want to lie about it or reveal that it’s there, it’s a really bad idea to talk about the idea of dragons in garages. Someone is going to ask. So when your refusal to answer would be suspicious, especially when it would be a potential sign of a heretical belief, the best strategy is to not get into position to get asked.

Which in turn, means avoiding perfectly harmless things gently, invisibly, without saying that this is what you’re doing. Posts that don’t get written, statements not made, rather than questions not answered. As a new practitioner of such arts, hard and fast rules are good. As an expert, they only serve to give the game away.

Remember the illusion of transparency. Your counterfactual selves would need to act differently. But if no one knows that, it’s not a problem.






Posted in Uncategorized | 8 Comments

Simplified Poker Strategy

Previously in Sequence (Required): Simplified Poker

I spent a few hours figuring out my strategy. This is what I submitted.

If you start with a 2, you never want to bet, since your opponent will call with a 3 but fold with a 1. So we can assume no one who bets ever has a 2. But you might want to call a bet.

If you start with a 1, you never call a bet, but sometimes want to bet as a bluff.

If you start with a 3 in first position, sometimes you may want to check to allow your opponent to bet with a 1. If you have a 3 in second position, you have no decisions.

Thus, a non-dominated strategy can be represented by five probabilities: The chance you bet with a 1 in first position, chance you bet with a 3 in first position, chance you bet with a 1 in second position, chance you call with a 2 in first position, and chance you call with a 2 in second position. Call a set of these five numbers a strategy.

There were likely to be a few players bad enough to bet with a 2 or perhaps make the other mistakes, but I chose for complexity reasons not to worry about that, assuming I’d still do something close to optimal. If I was confident complexity was free, I’d have included a check to see if we ever caught the opponent doing something crazy, and adjust accordingly.

If you know the opposing strategy, what to do is obvious. Thus, I defined a function called ‘best response’ that takes a strategy, and outputs the strategy that maximizes against that strategy.
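A minimal sketch of such a function, assuming stakes of one-unit antes and a one-unit bet (the original post’s rules; the stakes here are my assumption) and the five-probability representation above. Since expected value is linear in each of our five probabilities, some pure (all 0-or-1) strategy is always a best response, so enumerating the 32 pure strategies suffices:

```python
from itertools import product

# A strategy is five probabilities, per the representation above:
# (bet 1 first, bet 3 first, bet 1 second after a check,
#  call with 2 first, call with 2 second)
DEALS = [(a, b) for a in (1, 2, 3) for b in (1, 2, 3) if a != b]
ANTE, BET = 1, 1  # assumed stakes

def ev_seat(s1, s2, c1, c2):
    """First player's expected profit holding c1 against c2, under the
    dominance assumptions (never bet a 2, never call a bet with a 1,
    always call with a 3, always bet a 3 in second position)."""
    b1f, b3f, _, c2f, _ = s1
    _, _, o_b1s, _, o_c2s = s2
    p_bet = {1: b1f, 2: 0.0, 3: b3f}[c1]
    win = 1 if c1 > c2 else -1
    # Branch: first player bets; opponent calls with a 3, sometimes a 2.
    p_call = {1: 0.0, 2: o_c2s, 3: 1.0}[c2]
    ev_bet = p_call * win * (ANTE + BET) + (1 - p_call) * ANTE
    # Branch: first player checks; opponent bets a 3, sometimes bluffs a 1.
    p2_bet = {1: o_b1s, 2: 0.0, 3: 1.0}[c2]
    p_call_back = {1: 0.0, 2: c2f, 3: 1.0}[c1]
    ev_faced = p_call_back * win * (ANTE + BET) + (1 - p_call_back) * -ANTE
    ev_check = p2_bet * ev_faced + (1 - p2_bet) * win * ANTE
    return p_bet * ev_bet + (1 - p_bet) * ev_check

def ev(s1, s2):
    """Average profit per hand for s1, playing both seats equally often."""
    total = sum(ev_seat(s1, s2, a, b) - ev_seat(s2, s1, a, b)
                for a, b in DEALS)
    return total / (2 * len(DEALS))

PURE = [tuple(map(float, bits)) for bits in product((0, 1), repeat=5)]

def best_response(s):
    """Some pure strategy always maximizes, since EV is linear per probability."""
    return max(PURE, key=lambda r: ev(r, s))
```

Against an opponent who never bluffs and never calls with a 2, for example, this finds you should bluff every 1 and never pay off their bets, and that doing so shows a healthy profit.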

My goal was to derive the opponents’ strategy, then play the best response to that strategy.

As a safeguard against opponents who were anticipating such a strategy, I included an escape hatch: If at any point, my opponent got ahead by 10 or more chips, assume they were a level ahead of me, and playing the best response to what I would otherwise do. So derive what that is, and play the best response to that!

That skipped over the key puzzle, which is figuring out what the opponent is doing. On the first turn, I guessed opponents would pursue reasonable mixed strategies: bet a 1 about a third of the time, bet a 3 in first position about two thirds of the time, call with a 2 about half the time. I represented this with a virtual hand history that I included until I had enough real ones.
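One way to sketch that virtual hand history (the decision names and virtual sample sizes here are my own illustration, not the original implementation): seed each of the opponent’s five decisions with pseudo-counts matching the prior, then let real observations wash them out.

```python
class OpponentModel:
    """Estimates the opponent's five decision frequencies from tallied hands,
    seeded with virtual hands so early estimates match the priors above."""

    def __init__(self):
        # (opportunities, times the action was taken), seeded so starting
        # estimates are: bluff a 1 ~1/3, bet a 3 first ~2/3, call with a 2 ~1/2.
        # Twelve virtual opportunities per decision is an arbitrary choice of
        # how fast real data takes over.
        self.counts = {
            "bet_1_first": [12, 4],
            "bet_3_first": [12, 8],
            "bet_1_second": [12, 4],
            "call_2_first": [12, 6],
            "call_2_second": [12, 6],
        }

    def observe(self, decision, taken):
        """Record one real opportunity where the action was or wasn't taken."""
        self.counts[decision][0] += 1
        self.counts[decision][1] += int(taken)

    def estimate(self, decision):
        n, t = self.counts[decision]
        return t / n
```

A fresh model estimates a one-third bluffing frequency; a dozen observed bluffs in a row pulls that estimate up to two-thirds, so real hands dominate the prior quickly.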

On subsequent turns, I looked at the hand history.

If the opponents’ card was revealed, that was a pure data point – if we knew they bet with a 1, that’s a hand where they did that.

If the opponents’ card wasn’t revealed, but only one card made any sense, I assumed they had that card. Thus, if I bet with a 1 and they fold, I assume they had a 2.

If the opponents’ card wasn’t revealed, and they could have had either card – because you bet a 3 and they folded, or they bet and you folded a 2 – that’s trickier. The probability of them having each card in that spot depends on their strategy. And again, there was an (unknown, soft) complexity limit.

My solution was to assume that in each unique starting position (your position plus your card), half the time my opponent would draw the higher of the two cards I hadn’t drawn, and half the time he’d draw the lower one. So when I have a 2 in first position, half the time he has a 3, and half the time he has a 1.

That was definitely not ideal, and I don’t remember exactly how I did it, but it definitely did the thing it was designed to do: Identify exploitable agents lightning fast, and do something reasonable against reasonable ones. Trying to optimize the details of this type of approach is an interesting puzzle, both with and without a complexity limitation.





Posted in Uncategorized | 8 Comments