Apply for Emergent Ventures

Original Post (Marginal Revolution): Emergent Ventures: A New Project to Help Foment Enlightenment

Today, two new philanthropic projects announced funding.

Jeff Bezos

In the first project, Jeff Bezos gave two billion dollars to fund services for poor families and the homeless. This will be his fourth most valuable charitable project, behind The Washington Post, Blue Origin and of course the world’s greatest charity. He is truly a great man.

In exchange for giving two billion dollars to help those in need, he was roundly criticized throughout the internet for not giving away more of his money faster. Every article I’ve seen emphasizes how little of his wealth this is, and how much less generous this is than Warren Buffett or Bill Gates. The top hit when I googled was a piece entitled “Jeff Bezos donates money to solve problem he helped create,” because he’s a capitalist and exploits the workers, don’t you know? Never mind that he’s the biggest source of new consumer surplus we have.

This isn’t quite as big an abomination as the Peter Singer claim that someone who gives away almost all of their vast fortune is basically the worst person ever for not giving away the rest of it. Or even that they’re the worst person in the world for giving away that money in a slightly less than optimal fashion. Won’t someone think of the utils?

But it’s not that far behind.

That doesn’t mean we don’t wish he’d do more, or that we don’t want to help him optimize to have the best impact. It’s important to help everyone play their best game, and I could definitely think of better uses for the money. The hot takes using that angle will doubtless follow shortly.

But, seriously: If you see someone donating two billion dollars to help the less fortunate and you can’t on net say anything nice about it, the least you can do is shut up.

Tyler Cowen’s Emergent Ventures

Now on to the more exciting project that I suspect will have more lasting impact, despite having funding three orders of magnitude smaller.

Emergent Ventures is brilliant. I heartily endorse this service or product.

One of Tyler’s greatest strengths is his ability and willingness to consume and integrate vast amounts of information. He already reads not only a number of books that boggles my mind, but also everything that arrives in his inbox, which is doubtless a criminally underused way to introduce important things into the public discourse. This is how he has a well-thought-out, interesting answer to almost any question you can ask him, as I’ve observed at two in-person audience Q&A sessions.

Now he’s offering the world a golden opportunity. I beseech you to take full advantage of it!

By applying you get to put a 1500 word proposal in front of a brilliant polymath who will at a minimum seriously think about your proposal, and likely offer you feedback. Maybe a dialogue will start. He’ll also incorporate your ideas into his brain, and then perhaps pass them on to his readership.

That’s if you’re not funded. If you are funded, not only will you get Free Money, you’ll get his support helping you succeed. That’s huge.

Emergent Ventures recognizes at least five important facts about the world.

Fact one: Serious attempts at high upside projects (aka ‘moonshots’) where people attempt to do a big thing have ludicrously good rates of return. It is totally fine to not only have most of them fail, but to have most of them have been bad ideas so long as some good ones are mixed in.

Fact two: Having to fit such proposals into boxes that can be approved by bureaucratic systems that in Tyler’s words would ‘keep the crazy off my desk’ is orders of magnitude more damaging to this than most people realize. Such systems have to be game theoretically defended against attempts to extract money. This forces there to be concrete systems guarding the money, so people get paid for ‘hours worked’ and such – lots is wasted on documentation. These systems force applicants to check off tons of boxes, and to pass multiple layers of convincing people, including convincing them that others will become convinced, forcing all projects to become trapped in signaling games and in looking normal, reputable and credible. Even when you could push through something special, there’s little way to know that without trying. So there’s a huge push to do normal-looking things, and tons of time, effort and money end up wasted.

Fact three: The judgment of a smart individual human can do a better job, and can use ‘watch out for patterns’ and general paying attention to prevent becoming too exploited. There’s no reason to focus on things being non-profits, or not being attempts to make money, or having formal measurable goals at all, if you trust your judgment and are willing to accept mistakes.

Fact four: If the cost of application and the cost of documentation, before and after the grant, is dramatically lowered, then a lot of stuff that wasn’t viable and wouldn’t even have been considered is now on the table.

Fact five: People coming to you with short (max 1500 words) descriptions of their best ideas for what is worth doing in the world creates an awesome feed of cool new ideas and proposals and people, leading to food for thought, to new collaborations and connections, and to unknown unknown upsides. Yes, there will also be a lot of ‘money, please’ in there, but in my experience much less than you would naively expect.

You can apply here. I encourage you to do so.








On Robin Hanson’s Board Game

Previously: You Play to Win the Game; Prediction Markets: When Do They Work?; Subsidizing Prediction Markets

An Analysis Of (Robin Hanson at Overcoming Bias): My Market Board Game

Robin Hanson’s board game proposal has a lot of interesting things going on. Some of them are related to calibration, updating and the price discovery inherent in prediction markets. Others are far more related to the fact that this is a game. You Play to Win the Game.

Rules Summary

Dollars are represented by poker chips.

Media that contains an unknown outcome, such as that of a murder mystery, is selected, and suspects are picked. Players are given $200 each. At any time, players can exchange $100 for a contract in all possible suspects (one of which will pay $100, the rest of which will pay nothing).

A market is created for each suspect, with steps at 5, 10, 15, 20, 25, 30, 40, 50, 60 and 80 percent. At any time, each step in the market either contains dollars equal to its probability, or it has a contract good for $100 if that suspect is guilty. At any time, any player can exchange one for the other: if a step holds a contract, they can buy it for the listed probability; if it holds chips, they can exchange a contract for the chips. Whoever physically makes the exchange first wins the trade.

At the end of the game, the winning contract pays out, and the player with the most dollars wins the game.
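The ladder mechanism is simple enough to sketch in a few lines of code. This is my own toy model of the rules above, not an official implementation; the function names are mine:

```python
# Toy model of one suspect's market in Robin Hanson's game.
# Each price step holds either chips (cash) or a $100-if-guilty contract.
# Buying: take the contract at a step, leave chips equal to its price.
# Selling: take the chips at a step, leave a contract in their place.

STEPS = [5, 10, 15, 20, 25, 30, 40, 50, 60, 80]

def new_market(start_price=25):
    """Steps below the start price hold chips; steps at or above hold contracts."""
    return {p: ("chips" if p < start_price else "contract") for p in STEPS}

def buy(market, price):
    """Pay `price` in chips, take the contract sitting at that step."""
    assert market[price] == "contract", "no contract at this step"
    market[price] = "chips"
    return -price            # cash change: you spent `price`

def sell(market, price):
    """Hand over a contract, take the `price` in chips at that step."""
    assert market[price] == "chips", "no chips at this step"
    market[price] = "contract"
    return price             # cash change: you received `price`

alice = new_market()
cash = 0
cash += buy(alice, 25)       # buy Alice at $25
cash += buy(alice, 30)       # buy Alice at $30
cash += sell(alice, 20)      # later, sell one contract back at $20
print(cash)                  # net cash change so far: -35
```

Note that each step can flip back and forth all game: after the $20 sale above, the $20 step holds a contract again, available for anyone to buy.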

Stages of Play

We can divide playing Robin’s game into four distinct stages.

In stage one, Setup, the source material we’ll be betting on is selected, and the suspects are generated.

In stage two, the Early Game, players react to incremental information and try to improve their equity, while keeping an eye out for control of various suspects.

In stage three, the Late Game, players commit to which suspects they can win with and lock them up, selling off anything that can’t help them win.

In stage four, Resolution, players again scramble to dump now-worthless contracts for whatever they can get and to buy up the last of the winning contracts. Then they see who won.


Not all mysteries will be good source material. Nor do you obviously want a ‘certified good’ source. That’s because knowing that the source material creates a good game is itself a huge update.

A proper multiple-suspect whodunit that keeps everyone in suspense by design keeps the scales well-balanced, ensuring that early resolutions are fake-outs. That can still make a good game, but an even more interesting game carries at least some risk that suspects will be definitively eliminated early, or even that the case will be solved quickly. Comedy routines sometimes refer to the issue where they arrest someone on Law & Order too early in the episode, so you know they didn’t do it!

When watching sports, a similar dilemma arises. If you watch ‘classic games’ or otherwise ensure the games will be good, then the first half or more of the game is not exciting. Doing well early means the other team will catch up. So you want to choose games likely to be good, but not filter out bad games too aggressively, and learn to enjoy the occasional demolition.

The setup is also a giveaway, if it was selected by someone with knowledge of the material. At a minimum, it tells us that the list is reasonably complete. We can certainly introduce false suspects that should rightfully trade near zero from the start, to mix things up, and likely should do so.

One solution would be to have an unknown list of contracts at the start, and introduce the names as you go along. This would also potentially help with preventing a rush of trades at the very start.

In this model, you can always exchange $100 for a contract on each existing suspect, and a contract for ‘Someone You Least Suspect!’ Then, when a new suspect is introduced, everyone with a ‘Someone You Least Suspect!’ contract gets a contract in the new suspect for free for each such contract they hold. There are several choices for how one might introduce new suspects. They might unlock at fixed times, or players could be allowed to introduce them by buying a contract.
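The bookkeeping for that wildcard is straightforward; here is a sketch, where the function and data layout are my own invention rather than part of Robin’s proposal:

```python
# Sketch of hidden-suspect bookkeeping: when a new suspect is revealed,
# every holder of the 'Someone You Least Suspect!' wildcard receives one
# contract in the new suspect per wildcard held.

WILDCARD = "Someone You Least Suspect!"

def reveal_suspect(holdings, new_suspect):
    """Grant each player one `new_suspect` contract per wildcard they hold."""
    for player in holdings:
        wildcards = player.get(WILDCARD, 0)
        if wildcards:
            player[new_suspect] = player.get(new_suspect, 0) + wildcards

players = [
    {"Bob": 2, WILDCARD: 2},   # bought two complete sets
    {"Bob": 1},                # sold off their wildcards earlier
]
reveal_suspect(players, "Carol")
print(players[0].get("Carol", 0), players[1].get("Carol", 0))  # 2 0
```

This keeps every complete set worth exactly $100 at resolution, no matter how many suspects are revealed after it was purchased.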

The complexity cost of hiding the suspects, or letting them be determined by the players, seems too high for the default version. It protects the fun of the movie and has some nice properties, but for the base game you almost certainly want to lay out the suspects at the start. This gives a lot away, but that’s also part of the game.

For the first few games played, it probably makes sense to choose mysteries ‘known to be good’ such as a classic Agatha Christie.

The game would presumably come with a website that allowed you to input a movie, show or other media, and output a list of suspects. It would also want to advise players on whether their selection was a good choice, or suggest good choices based on selected criteria. Both will need to be balanced to avoid giving too much away, as noted above; I’ll talk more about the general version of this problem another time.

If you are in charge of setup, I would encourage including at least one suspect that obviously did not do it, in a way that is easy to recognize early. This prevents players from assuming that all suspects will remain in play the whole time, and rewards those paying attention early. Keep people on their toes.

The Early Game

The market maker is intentionally dumb, although in default mode they are smart enough to know who the suspects are. All suspects start out equal.

There are a bunch of good heuristics, many of which should be intuitive to many viewers of mysteries, that create strong trading opportunities right away. To state the most basic, the earlier a suspect first appears on the screen, the more likely they are to have done the deed. So the moment one of the suspects appears – ‘That’s Bob!’ – everyone should rush to buy Bob, and perhaps sell everyone else if trading costs make that a good idea. How far up to buy him, or sell others, is an open question.

That will be the first of many times when there will be an ‘obvious’ update. There will also be non-obvious updates. Staying physically close to the board, chips and/or contracts ready to go, is key to make sure you get the trade first. This implies that making a race depend on the physical exchange of items might be a problem. Letting it be verbal (e.g. whoever first says ‘I buy Bob’) prevents that issue, but risks ambiguity.

What characterizes the early game, as opposed to the late game, is that the focus is on ‘make good trades’ rather than on winning. There’s no reason to worry too much about who owns how many of each contract, unless someone is invested heavily in one particular suspect. We can think of that as a player choosing to enter the endgame early.


Robin notes an interesting phenomenon, that players got caught up in the day trading and neglected to watch the mystery. Where should the smart player direct the bulk of their attention?

That depends upon your model of murder mysteries.

One model says that murder mysteries are ‘fair’. Clues are introduced. If you pay attention to those clues, you can figure out who did it before the detective does. When the detective solves the mystery, you can verify that the solution is correct once you hear their logic. If you can solve the mystery first, you can sell every worthless contract and buy all the worthwhile contracts. Ideally, that should be good enough to win the game, especially if you execute properly, selling and buying in balance without giving away that you believe you’ve solved the mystery.

Another related model says that murder mysteries follow the rules of murder mysteries, and that this often is good enough to narrow down or identify the killer. That way-too-famous-for-his-role actor is obviously the killer. Another would-be suspect was introduced at the wrong time, so she’s out. A third could easily have done it, but that wouldn’t work with the thematic elements on display.

A third model says that the detective, or others in the movie, have a certain credibility. Thus, when Sherlock Holmes says that Bob is innocent, that is that. Bob is innocent. You don’t need to know why. Evidence otherwise might not mean much, but there’s someone you can trust.

Functionally, these three are identical once you know what factors you’re reacting to. They say that (some kinds of) evidence count as evidence, and resulting updates are real. The more you believe this, the more you should pay attention to the movie. This includes trying early on to figure out what type of movie this is. Be Genre Savvy! Until you know what rules apply, don’t worry too much about day trading, unless people are going nuts.

A fourth model says that the mystery was chosen for a reason, and written to keep up suspense, so nothing you learn matters much beyond establishing who the suspects are. The game already did that for you. Unless the game followed my advice and included an obviously fake suspect or two, to punish players who only look at trading.

If you believe this model, and don’t think there is news and there aren’t fake suspects (or that they will be sufficiently obvious you’ll know anyway, if only by how others talk and act), then you won’t put as much value on watching the movie. If trading has good action, trading might be a better bet.

A fifth model, which can overlap with the previous models, says that others will watch the movie and process that information, so there’s no need to watch yourself if there are enough other players. You might think that there is then a momentum effect, where players are unwilling to trade aggressively enough on new information. Or you might think that players overreact to new information, especially if you’re a fourth-model (nothing matters, eat at Arby’s) kind of theorist.

If you feel others can be relied on to react to news, you might trade on news even if you don’t think it matters, because others will trade after you, and you can then cash out at a quick profit. Just like in the real markets.

Or you might concentrate on arbitrage. Robin observed that players would focus on buying good suspects rather than selling poor suspects, and this often resulted in probabilities that summed to more than 100%. This offers the chance for risk-free profits, plus the chance to place whatever bet you like best while cashing in.
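The arbitrage here is concrete: a complete set costs $100 from the bank and always pays out exactly $100, since exactly one contract in it wins. So whenever the sale prices available across the suspects sum to more than 100, buying a set and selling it off piece by piece is risk-free profit. A minimal sketch, with hypothetical prices:

```python
# Set arbitrage: a complete set costs $100 and pays exactly $100 at the end
# (one contract wins, the rest lose). If the prices you can sell at sum to
# more than 100, buying a set from the bank and selling it off locks in a
# risk-free profit.

def set_arbitrage(sale_prices, set_cost=100):
    """Risk-free profit from buying one set and selling every contract in it."""
    proceeds = sum(sale_prices.values())
    return max(0, proceeds - set_cost)

# Hypothetical mid-game prices that sum to 115%:
prices = {"Alice": 40, "Bob": 30, "Carol": 25, "Dan": 20}
print(set_arbitrage(prices))  # 15
```

The same check run in reverse (prices summing to under 100) tells you when to buy one contract of everything and redeem the set, if the rules allow redemption.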

In my mind, the question boils down to where the game will be won and lost. Is there enough profit in day trading to beat the person who placed the largest bet on the guilty party? What happens in the endgame?

The End Game

A player enters the endgame when they attempt to ensure that they win if a particular suspect is guilty.

This is not as difficult as it looks, and could be quite difficult to fight against. Suppose I want to bet on Alice being the culprit. I could sell all other suspects and buy her contracts. As a toy example, let’s say there are four suspects, I exchange my $200 for two complete sets, and I decide to butcher my executions. I sell both contracts in each of the other suspects for $20 and $15, and buy three more Alice contracts for $25, $30 and $40.

If the game ends and Alice is guilty, I made $105 selling worthless contracts, my two set contracts in Alice recoup the cost of the sets, and I made $205 on the Alice contracts I bought, for a net profit of $310 and a final stack of $510. If she’s innocent, my five Alice contracts collect nothing, so I’m left with only the $10 in cash I have on hand and die broke. That’s really, really terrible odds if I chose Alice at random!

But if it’s a 10 person game and I do that, even if I chose at random, 25% of the time Alice is guilty. Can someone else end with more than $510 to beat me?
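Working those trades through explicitly (two $100 sets, six sales at $20 and $15, three Alice buys at $25, $30 and $40):

```python
# Worked version of the all-in-on-Alice example: four suspects, start with
# $200, buy two complete sets, dump everyone but Alice, load up on her.

start = 200
sets_bought = 2                      # $100 each -> 2 contracts per suspect
cash = start - 100 * sets_bought     # 0

# Sell both contracts in each of the three other suspects at $20 and $15:
cash += 3 * (20 + 15)                # +105

# Buy three more Alice contracts at $25, $30 and $40:
cash -= 25 + 30 + 40                 # -95

alice_contracts = sets_bought + 3    # 5 contracts total

if_guilty = cash + 100 * alice_contracts   # contracts pay $100 each
if_innocent = cash                         # all Alice contracts expire worthless
print(if_guilty, if_innocent)        # 510 10
```

Even with deliberately bad execution, the all-in player roughly turns $200 into $510 a quarter of the time, which is the bar everyone else has to clear.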

If after I finish, others return all the prices to normal, then someone else could profit from my initial haste, then execute the same trades I did at better prices. If that happens, I’m shut out.

That works if you jump the gun, and enter the endgame too early. That’s true even if Alice is the most likely suspect.

In particular, others now need to make a choice. Let’s say I went all in on Alice. There are three basic ways to respond:

  1. Abandon Alice. If Alice is guilty, you’ve lost. So it’s safe to assume Alice is innocent, and sell any Alice contracts at their new higher prices, especially once you’re broke and no longer can buy any more. If this encourages someone else to take option 2 and also move in on Alice, even better, that’s one less person who can beat you if Alice is innocent. The majority of players should do this.
  2. Attack Alice. If others are abandoning Alice in droves, her price might collapse to below $25 as people rush to sell. You can then pick contracts up cheap, sell other contracts at better prices, and have a strictly better position.
  3. Arbitrage. Try to make as much money off the situation as possible, without committing to a direction. If people are being ‘too strategic’ and too eager to get where they want to go, rather than focusing on getting the best price, then by making good trades (selling Alice when the committed player buys too quickly, buying when others sell too quickly) and forcing others to get worse prices, you can end up with more value, then decide later what to do.

If you only engage in arbitrage, and others commit to suspects, you’ll be in a lot of trouble unless you’ve already made a ton, because you won’t have anyone to trade against. Your only option becomes to trade with the market, which limits how much you can get on the suspect you finally decide to go with, even if the mystery is solved while the market is open, and you’re the only one left with cash.

The good and bad news is that’s unlikely to happen, as others will also ‘stick around’ with flexible portfolios. That means that you won’t be able to make that much when the mystery gets solved, but it does mean you can divide the spoils. If six players commit to suspects while four make good trades, not only are two of the six already shut out, it’s likely the remaining four can coordinate (or just do sensible trades) to win if two or three of the suspects are found guilty, and sometimes should be able to nail all four.

When you have four (or six) suspects and ten players, there are not enough suspects for everyone to own one, and there certainly aren’t enough for anyone to own two. That means that even if a suspect looks likely to be guilty, if you know you can’t win that scenario, you’ll be dumping, and that means at least seven of ten people are dumping any given suspect if they understand the game.

The logical response to this is to stay far enough ahead on your suspect that you clearly win if they’re guilty, and if you get a good early opportunity to dump other contracts you should definitely do that. Good trades are generally good, and those trades just got even better, especially if everyone focuses on buying rather than selling. What you don’t want to do is overpay, or run out of cash (and/or run out of things you can sell).

Thus, I might buy the Alice $20, $25 and $30 contracts, and start selling contracts on suspects I think are trading rich. What I’m worried about is competition – I don’t want other players buying Alice contracts, so if they do, I’ll make sure they don’t get size off by buying at least at their price, and I’ll make sure to stay ahead of them on size. I’ll also think about whether the remaining players are sophisticated enough to sell what they have, even at lousy prices; if they are, I’ll be careful to hold a bunch of cash in reserve. If there are ten players, I can expect there to exist 16-25 Alice contracts, and I want to be sure not to run out of money.

Rank Ordering

This suggests each player has a few different goals.

You want to accumulate contracts in suspects you ‘like’ (which mostly means the ones you think are good bets), so you can get ‘control’ of one or more of them. Control means that if they did it, you win.

You want to get rid of contracts in the suspects you don’t like. The trick here is that sometimes the price will go super high (relative to the probability they did it) as multiple players compete to gain ‘control’ of the suspect. Other times, the price will collapse because there is only one bidder for control of that suspect. If one player gets a bunch of contracts, and is in good overall shape, then no one else will compete.

That in turn might drive the price so low – $10 or even $5 – that the value of their portfolio shrinks a lot, tempting another player to enter, but doing so would drive the price up right away, so it often doesn’t make sense to compete. If Bob is buying up Alice contracts and Carol now buys one at $10, who is going to sell one now? Much better to wait to see if the price goes higher, which in turn puts Bob back in control. The flip side of that is, if Carol can buy a $10 and a $15 contract, and force Bob to then pay $20, Carol can sell back to Bob at a profit. It’s a risky bluff if others are actively selling, but it can definitely pay off.

The key in these fights is who has more overall portfolio value, plus the transaction costs of moving into more contracts. If Carol can make $100 trading back and forth in other contracts, Bob is going to have a tough time keeping control, and mostly has to hope that Carol chooses to go after a different suspect. By being in as good shape as possible, Bob is both more likely to win the fight and (if others realize this) more likely to avoid the fight.

With a lot of players engaged in active day trading who aren’t strategically focused, transaction costs could be low. If they’re sufficiently low, then it could be a long time before it is hard to buy and sell what you want at a reasonable price, postponing the end game until quite late. The more other players are strategically focused, and the more strategy determines price, the harder it is to trade, the more existing positioning matters and the less you can day trade for profit, beyond anticipating a fight over a suspect, or a dump of one.

Rich Player, Poor Player

Suppose you’re a poor player. You made some trades, and they didn’t work out. Perhaps you held on to a suspect or two too long, and others dumped them, either strategically or for value. Perhaps you had a hunch, got overexcited, and others disagreed, and now you’re looking foolish. Now you only have (let’s say) $120 in equity, down from $200.

You Play to Win the Game. How do you do that? There are more players than suspects, several of whom have double your stake. So you’ll need to find a good gambit.

A basic gambit would be to buy up all the contracts you can of a suspect everyone has dismissed. Even if there are very good reasons they seem impossible and are trading at $5, you can still get enough of them to win if it works out, and you might have no competition since players in better shape have juicier targets. Slim chance beats none.

But if even that’s out of the question, you’ll have to rebuild your position. You will need to speculate via day trading. Any other play locks in a loss. You find yourself in a hole, and have no choice but to keep digging. Small arbitrage won’t work. Your best bet is likely to watch the screen and listen, and try to react faster than everyone else in the hopes that the latest development is huge or seen as huge, then turn around and sell your new position to others to make a quick buck. Then hope there’s still enough twists to do this again.

If the endgame has arrived, and rich players are sitting on or fighting for all the suspects, you’ve lost. Your best bet is to consolidate into cash, and hope some suspect crashes down to $5 for you.

Now suppose you’re a rich player. You have $300 in equity. How do you maximize your chance of winning?

The basic play is to corner the market on the most likely suspect, or whoever you think is most likely. If you make a strong move in, you should be able to scare off competition, and even if you don’t do so, you can use that as an opportunity to make more profit if they drive the price up. At some point, others will have to dump, and you can afford to give them a good price if you have to. It’s hard to win a fight when outgunned. The key is not to engage too much too soon, as this risks letting a third player take advantage of an asset dump later. So you’ll want to hold some cash for that, if possible. Remember that you’ll need something like 12-14 contracts to feel safe from a dump, depending on how much equity you’ve built, if you’re out of cash. That shuts out other players.

The advanced play, if you’re sufficiently far ahead, is to try to win on multiple suspects. That’s super hard. Even if you had $400 in equity, if you divide it in half, there are still multiple other players over $200. It seems unlikely you can get control of multiple worthwhile suspects. There’s no point in trying for multiple bargain basement suspects at the expense of one good one, even if it works. So is there any hope here?

I think there is, in the scenario where there is a clear prime suspect.

In this scenario, the prime suspect was bid high early on. Given Robin’s notes about player behavior and tendency to push prices too high, and the battle for control of the suspect, prices might get very high very quickly. There also may be players who will refuse to sell their contracts in the prime suspect, because they don’t realize that they’re shut out of winning in that case. Either they’re maximizing expected value rather than chance of winning, or they don’t realize the problem, or both.

This could open up an opportunity where the ‘net profit’ on the prime suspect isn’t that high for any player. Suppose they start at $25, and everyone starts with their two contracts. They then trade at $30, $40, $50 and $60 in a row, not all to the same player. So there are minimal chances to buy contracts that make you that much money. If you buy an $80 contract your maximum profit is $20, which is easy to beat by day trading.

So what you can do is go for the block. Hopefully you helped drive the price up early, which is part of how you got your equity. Then contracts only really traded at $60 and $80. So even if the suspect is guilty, someone who moved in on this without day trading first is not going to end up with many contracts. They start with $200, so let’s say they end up with 3 contracts and a little cash.

It’s not crazy for you to sell the suspect at the top, do some successful day trading, and then have over $300 in cash. You could win without any contracts that pay out, if you know you’re the most successful day trader and no one can have that many contracts.

That’s a better position than having 5 or 6 contracts in the prime suspect, since you still have cash if they’re innocent. The trick is then having that be enough to win on another suspect as well, or splitting your efforts by holding onto contracts elsewhere. Tricky. But perhaps not impossible, especially if people are dumping contracts at $5.

At a minimum, what you can do is be in a strong position to respond to new developments, and be able to choose which other suspect to back later in the game if you now think they’re more likely, while still winning if the situation doesn’t change. That’s very strong.

A final note is that it is legal, in the game, to trade with another player without going through the market. This could be used to buy out a player’s position in a suspect, shifting control of that suspect. It avoids the issue where, once a player starts dumping a position, the price collapses, and it prevents other players from ‘intercepting’ the transfer and ruining the buyer’s attempt to accumulate a new position and take control. Thus, players should learn that if they have a bunch of contracts and want out, they should check for a bulk buyer, and if they want in they should consider doing the same. The risk of course is that you tip your hand, which makes doing it on the buy side less attractive.

Flexible Structures

It’s also worth noting that you can extend the idea easily to other prediction markets, and to an online version.

You could trade on the outcome of a sporting event, or an election, or any other real-world prediction market, using the same rules. You could play a board game, and also play the contract game on the outcome of your board game. That gives players something to do between turns and extra things to think about, and gives extra players or eliminated players something to do.

You could trade over a series of outcomes or events (for example, all the football games played today, or both the winner of the game and the combined number of points scored, or even obscure stuff like the number of punts) in order to reward more trading ‘for value’ and place less emphasis on being right. Or just keep track of funds between games, watch multiple shows, and reward the overall winner.

That raises the question of what we can learn about prediction markets from the game.

Market Accuracy and Implications

Early in the game, market prices should roughly reflect fair probabilities of being guilty. Anyone who jumps the gun for strategic positioning will lose out to a more patient player. That won’t stop players from being overeager, and bidding suspects up too high, but as Robin noted that opens the door for others to do arbitrage and sell the contracts back down to reasonable prices.

Later in the game, prices will grow increasingly inaccurate as players jockey for position, and let strategic considerations override equity considerations.

This is a phenomenon we see in many strategic games. Early in the game, players mostly are concerned about getting good value, as most assets are vaguely fungible and will at some point be useful or necessary. As the game progresses, players develop specialized strategies, and start to do whatever they need to do to get what they need, often at terrible exchange rates, and dump or ignore things they don’t need, also at terrible exchange rates.

If we wanted to improve accuracy, we’d need to make the game less strategic and more tactical, by rewarding players who maximize expected profits. There’s a dumb market that is handing out Free Money when news occurs. We’d like players to battle for a share of that pie, rather than competing for control of suspects. If the game was played over many rounds, the early rounds would mostly focus on expected value and doing good trades. If the game was played for real money, and settled in actual dollars, then we’d definitely have a lot more accurate pricing!

If a market has traders purely motivated by expected value and profit, then its pricing will be as good as the pricing ability of the traders.

If a market has a few ‘motivated’ traders, or noise traders, that are doing something for reasons other than expected value, that is good. You need a source of free money to make the market work. Thus, the existence of the bank, as a source of free money, is great, because it motivates the game. You can imagine a version of the game where players can only trade when they agree to it. There would still be trades, since the prize for winning should overcome frictions and adverse selection, but volume of trading would plummet.

If a market has a few people with poor fair values, that works much like having motivated traders.

If a market has too many traders who have poor fair values, or in context they have fair values that are not based on the expected payout, then relationships break down. There’s now profit for those who bet against them, but that doesn’t mean there’s enough money in the wings to balance things out. At some point that money is exhausted, and no one paying attention has spare funds. Prices stop being accurate, to varying degrees.

In particular, this illustrates that if those managing the money have payouts that are non-linear functions of their profits, then very weird things will happen. If I get fired for missing my benchmark, and so do my colleagues, but we don’t get extra fired for missing them by a lot, then this will lead us to discount tail risks. In the game, this takes the form of dumping suspects you can’t control – if Alice did it, you’ve already lost, and third prize is you’re fired. There are many other similar scenarios. If we want accurate prices, we need traders to have linear utility functions, or reasonable approximation thereof.
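The fired-for-missing-the-benchmark point can be made concrete with a toy calculation. Everything below (the numbers, the function names, the payoff structure) is my own illustration, not from the post: a trader whose penalty is capped at "fired" will prefer a rare blowup over a certain moderate loss, even when the blowup bet has much worse expected value.

```python
def utility(pnl, fired_at=-10):
    """Linear in profit, except every outcome at or below the firing
    threshold collapses to the same utility: you can't get extra fired."""
    return max(pnl, fired_at)

def expected_value(outcomes):
    """outcomes: list of (probability, pnl) pairs."""
    return sum(p * x for p, x in outcomes)

def expected_utility(outcomes):
    return sum(p * utility(x) for p, x in outcomes)

steady = [(1.0, -5)]               # a certain small loss
tail   = [(0.9, 1), (0.1, -100)]   # usually a small win, rarely a blowup

# The tail bet is much worse in expected value (-9.1 vs. -5.0), yet the
# capped downside makes it better in expected utility (about -0.1 vs. -5.0),
# so the benchmark-chasing trader takes the tail risk.
```

This is the sense in which non-linear payouts make traders discount tail risks; a linear utility function would rank the two bets by expected value and get the prices right.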


This game sounds like a lot of fun, and seems to have lots of opportunities for deep tactical and strategic play, for bluffing, and to do things that players should find fun. I really hope that it gets made. You can take one or more of many roles – arbitrageur, mystery solver, genre-savvy logician, momentum or value trader, tactician or strategist – or just hang out and watch the fun and/or the mystery, and if you later get a hunch, you can go for it.

I hope to talk to a few friends of mine who have small game companies, in the hopes one of them can help. Kickstarter ho?

If anyone out there is interested in the game and making it happen, please do talk to Robin Hanson about it. I’m sure he’d be happy to help make it a reality. And if you’re looking to play, I encourage you to give it a shot, and report back.

















You Play to Win the Game

Previously (Putanumonit): Player of Games

Original Words of Wisdom:

Quite right, sir. Quite right.

By far the most important house rule I have for playing games is exactly that: You Play to Win the Game.

That doesn’t mean you always have to take exactly the path that maximizes your probability of winning. Style points can be a thing. Experimentation can be a thing. But in the end, you play to win the game. If you don’t think it matters, do as Herm Edwards implores us: Retire.

It’s easy to forget, sometimes, what ‘the game’ actually is, in context.

The most common and important mistake is to maximize expected points or point differential, at the cost of win probability. AlphaGo brought us many innovations, but perhaps its most impressive is its willingness to sacrifice territory it doesn’t need in order to minimize the chance that something will go wrong. Thus it often wins by the narrowest of point margins, but in ways that are very secure.

The larger-context version of this error is to maximize winning or points in the round rather than chance of winning the event.

In any context where points are added up over the course of an event, the game that matters is the entire event. You do play to win each round, to win each point, but strategically. You’re there to hoist the trophy.

Thus, when we face a game theory experiment like Jacob faced in Player of Games, we have to understand that we’ll face a variety of opponents with a variety of goals and methods. We’ll play a prisoner’s dilemma with them, or an iterated prisoner’s dilemma, or a guess-the-average game.

To win, one must outscore every other player. Our goal is to win the game.

Unless or until it isn’t. Jacob explicitly wasn’t trying to win at least one of the games by scoring the most points, instead choosing to win the greater game of life itself, or at least a larger subgame. This became especially clear once winning was beyond his reach. At that point, the game becomes something odd – you’re scoring points that don’t matter. It’s not much of a contest, and it doesn’t teach you much about game theory or decision theory.

It teaches you other things about human nature, instead.

A key insight is what happens when a prize is offered for the most successful player of one-shot prisoner’s dilemmas, or a series of iterated prisoner’s dilemmas.

If you cooperate, you cannot win. Period. Someone else will defect while their opponents cooperate. Maybe they’ll collude with their significant other. Maybe they’ll lie convincingly. Maybe they’ll bribe with out-of-game currency. Maybe they’ll just get lucky and face several variations on ‘cooperate bot’. Regardless of how legitimate you think those tactics are, with enough opponents, one of them will happen.

That means the only way to win is to defect and convince opponents to cooperate. Playing any other way means playing a different game.
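The "cooperation cannot win a top-score tournament" claim is easy to verify with a toy round-robin. This is my own minimal construction (standard prisoner's dilemma payoffs, made-up player pool), not anything from the post: once a single defector finds cooperative opponents, every always-cooperate player is strictly outscored.

```python
def payoff(me, them):
    # Standard one-shot prisoner's dilemma payoffs: T=5, R=3, P=1, S=0.
    table = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}
    return table[(me, them)]

def tournament(strategies):
    """Round-robin of one-shot games; returns total score per player."""
    scores = [0] * len(strategies)
    for i in range(len(strategies)):
        for j in range(i + 1, len(strategies)):
            a, b = strategies[i](), strategies[j]()
            scores[i] += payoff(a, b)
            scores[j] += payoff(b, a)
    return scores

# Nine cooperate-bots and one defector: each cooperator scores 8*3 = 24,
# while the lone defector scores 9*5 = 45 and wins the prize outright.
players = [lambda: 'C'] * 9 + [lambda: 'D']
scores = tournament(players)
```

With enough opponents, some defector always finds enough cooperators, which is exactly why playing to win the prize means playing a different game than maximizing joint points.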

When scoring points, make sure the points matter.

These issues will also be key to the next post, where we will analyze a trading board game proposed by Robin Hanson.



Unknown Knowns

Previously (Marginal Revolution): Gambling Can Save Science

A study attempted to replicate 21 studies published in Science and Nature.

Beforehand, prediction markets were used to see which studies would be predicted to replicate with what probability. The results were as follows (from the original paper):

Fig. 4: Prediction market and survey beliefs.

The prediction market beliefs and the survey beliefs of replicating (from treatment 2 for measuring beliefs; see the Supplementary Methods for details and Supplementary Fig. 6 for the results from treatment 1) are shown. The replication studies are ranked in terms of prediction market beliefs on the y axis, with replication studies more likely to replicate than not to the right of the dashed line. The mean prediction market belief of replication is 63.4% (range: 23.1–95.5%, 95% CI = 53.7–73.0%) and the mean survey belief is 60.6% (range: 27.8–81.5%, 95% CI = 53.0–68.2%). This is similar to the actual replication rate of 61.9%. The prediction market beliefs and survey beliefs are highly correlated, but imprecisely estimated (Spearman correlation coefficient: 0.845, 95% CI = 0.652–0.936, P < 0.001, n = 21). Both the prediction market beliefs (Spearman correlation coefficient: 0.842, 95% CI = 0.645–0.934, P < 0.001, n = 21) and the survey beliefs (Spearman correlation coefficient: 0.761, 95% CI = 0.491–0.898, P < 0.001, n = 21) are also highly correlated with a successful replication.


That is not only a super impressive result. That result is suspiciously amazingly great.

The mean prediction market belief of replication is 63.4%, the survey mean was 60.6% and the final result was 61.9%. That’s impressive all around.

What’s far more striking is that they knew exactly which studies would replicate. Every study that would replicate traded at a higher probability of success than every study that would fail to replicate.

Combining that with an almost exactly correct mean success rate, we have a stunning display of knowledge and of under-confidence.
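The "exactly correctly sorted" claim is equivalent to saying the market achieved a perfect ranking: every replicating study's price exceeded every non-replicating study's price, i.e. an AUC of 1.0. The sketch below uses made-up prices purely to illustrate the property being claimed, not the paper's actual numbers.

```python
def rank_separation_auc(prices_replicated, prices_failed):
    """Fraction of (replicated, failed) pairs that the market priced in the
    correct order; 1.0 means every replicator traded above every failure."""
    pairs = [(r, f) for r in prices_replicated for f in prices_failed]
    return sum(r > f for r, f in pairs) / len(pairs)

replicated = [0.95, 0.80, 0.70, 0.65]   # hypothetical prices: studies that replicated
failed     = [0.55, 0.40, 0.25]         # hypothetical prices: studies that failed
auc = rank_separation_auc(replicated, failed)   # 1.0: a clean cut
```

Getting the mean right is calibration; getting the sort perfect on top of that is the genuinely startling part.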

Then we combine that with this fact from the paper:

Second, among the unsuccessful replications, there was essentially no evidence for the original finding. The average relative effect size was very close to zero for the eight findings that failed to replicate according to the statistical significance criterion.

That means there was a clean cut. Thirteen of the studies successfully replicated. Eight of them not only didn’t replicate, but showed very close to no effect.

Now combine these facts: The rate of replication was estimated correctly. The studies were exactly correctly sorted by whether they would replicate. None of the studies that failed to replicate came close to replicating, so there was a ‘clean cut’ in the underlying scientific reality. Some of the studies found real results. All others were either fraud, p-hacking or the light p-hacking of a bad hypothesis and small sample size. No in between.

The implementation of the prediction market used a market maker who began anchored to a 50% probability of replication. This, and the fact that participants had limited tokens with which to trade (and thus, had to prioritize which probabilities to move) explains some of the under-confidence in the individual results. The rest seems to be legitimate under-confidence.
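For concreteness, the standard way to build an automated market maker anchored at 50% is Hanson's logarithmic market scoring rule (LMSR); I am assuming, not asserting, that the paper's implementation was LMSR-like. The sketch below shows why such a market opens at exactly 50% and why limited tokens produce under-confident prices: it takes real volume to push the price far from the anchor.

```python
import math

def lmsr_price(q_yes, q_no, b=100.0):
    """Instantaneous YES price under the logarithmic market scoring rule.
    b is the liquidity parameter (illustrative value); with no trades yet
    (q_yes == q_no == 0) the price sits at the 0.50 anchor."""
    e_yes, e_no = math.exp(q_yes / b), math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

lmsr_price(0, 0)    # 0.5 exactly: the anchored starting point
lmsr_price(50, 0)   # roughly 0.62: 50 units of YES buying moves it only modestly
```

A trader with limited tokens who is confident about several studies must spread those tokens, so each price moves less than their true belief would imply, which looks like under-confidence in the final prices.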

What we have here is an example of that elusive object, the unknown known: Things we don’t know that we know. This completes Rumsfeld’s 2×2. We pretend that we don’t have enough information to know which studies represent real results and which ones don’t. We are modest. We don’t fully update on information that doesn’t conform properly to the formal rules of inference, or the norms of scientific debate. We don’t dare make the claim that we know, even to ourselves.

And yet, we know.

What else do we know?




Chris Pikula Belongs in the Magic Hall of Fame

Epistemic Status: Fixing old mistakes, appreciating great old deeds.

(Note to those who do not play Magic: While this is nominally about Magic, I find these points to be of general interest, and wrote the post with that in mind.)

Chris Pikula belongs in the Magic Hall of Fame. Vote for him.

His results, while better than they look, fall short. I do worry about the precedent that will set. But it doesn’t matter. He belongs in the Hall. Vote for him.

Why? Because he did something for the game that needs to be honored and remembered as much as possible. Something that is more important than we realize, and which we need to keep fighting for. Something the world needs more than anything.

All groups, and all people, must decide who to reward, honor and ally with, and who to ignore, shame and oppose.

The default state of the world, the default state of any group, or culture, or industry or profession, is to want to choose strong allies, with high status, so they can help us. We reward what we believe succeeds and can help us. Knowing this, we all strive to emulate and signal these same characteristics, and to judge others as we observe them being judged.

This allows for multiple equilibria.

In the good equilibrium, good behavior like being nice, honoring your commitments, helping others and contributing to the community is recognized and rewarded. Bad behavior like cheating, lying, backstabbing and bullying is punished. Since good behavior is rewarded, (almost) everyone strives to exhibit good behavior. Bad behavior is seen not only as wrong, but also as stupid and weak. The system is (hopefully) stable, as reinforcing this good behavior is also good behavior and rewarded, while failing to do so or undermining it is also bad behavior and punished.

In the bad equilibrium, good behavior like being nice, honoring your commitments, helping others and contributing to the community is recognized and punished. Bad behavior like cheating, lying, backstabbing and bullying is rewarded. Since bad behavior is rewarded, (almost) everyone strives to exhibit bad behavior. Good behavior is seen as stupid and weak. It therefore also gets thought of as wrong. To succeed, one must not only engage in bad behavior, but make others believe that you do so, while nominally pretending to authorities and/or the naïve public that you’re not doing that. The system is again stable. If you don’t exhibit and associate with winners who do winning things, then you’re a loser, and we need to shun you and punish you for such bad behavior. Only the wicked succeed.

The more success is dependent on the judgments of others, the more stable, extreme and perverse such systems can be, with a variety of goals and optimization pressures. Usually such systems reward some mix of the good, the bad and the just plain weird.

I have observed in the past that much of business operates in the bad equilibrium. As do many other major aspects of our world. At the risk of mentioning politics, what we observe today is a deliberate attempt by someone I need not name, to move us from one equilibrium to a much worse one, from cooperation to needless conflict, from honor to dishonor, to judging people as winners and losers, as tough and weak, even their version of smart and stupid, and to see the world as zero sum rather than positive sum. To move us from a not especially great equilibrium to something much, much worse.

Chris Pikula did something that almost never happens. He moved the Magic: The Gathering community from the bad equilibrium to the good equilibrium.

In the early days of Magic, cheating was the order of the day. The rules didn’t punish it much – once, at a local tournament, Steve Mahoney Schwartz complained that he ‘forgot to cheat’ at Nationals, because the punishment for being caught was having to undo the cheat. The players tolerated it. More than tolerating it, they honored those who were good at it. People looked up to cheaters and known all-around terrible people like Mike Long and Mark Justice. Those in charge knew and actively promoted them as our stars!

Rather than operating on intent and good faith, the rules operated on technicalities. Rules lawyering was epidemic. Opponents would constantly try to trick you into saying the wrong thing, letting go of a card, or otherwise win the game through lying and trickery.

It was not a few bad apples. It was half the apples. Training for the Pro Tour was largely about defending yourself against cheating. You learned how to make sure your opponent didn’t stack their deck or palm cards. You constantly counted their hand, made sure you had their life total in ironclad form so they couldn’t lie about it. You learned all the right terms to say to make sure you didn’t suddenly fail to block, or take mana burn, or pass up your ability to cast a spell. I’d estimate that at least a quarter of my optimization pressure during a tournament was making sure I didn’t get cheated or rules lawyered.

Even the honest major teams had spies and scouts around the world trying to figure out what the other major teams were working on, and steal their technology or alert others to the threat to gain positional advantage. This burned me and my teams badly multiple times.

It says a lot about how awesome Magic was and is, that we didn’t all just take our decks and go home. We endured it all.

Slowly, things got better. Cheaters got called out more and more. They got caught more often. The rules started punishing real cheating more, while punishing harmless mistakes less and rewarding rules lawyering less. Even more important, if you had been cheating, people made sure everyone knew, and everyone started shunning you for it. Cheating is bad and you should feel bad, even for associating with a known cheater. The best players know each other, we treat each other with honor and respect, and we work together to create a great competition and culture.

We take that for granted now. It wasn’t then.

Others are better positioned to tell the story of how he did that, but that story needs to be told, more often and in more detail. The world needs to know that it was done here, that it can be done, and know how to go about doing it.

Today, there are still a few bad apples. We must remain on guard. But when I sit down to play a match, until I have reason to be suspicious, I assume my opponent is an honest player there to compete with honor.

Still, there remain strong marginal rewards for doing better in tournaments. Some will fail to resist that temptation, as crazy as it is. When that happens, we must continue to catch them and give them their due. The good equilibrium is self-reinforcing, but we can lose it.

To help remember what happened and how it happened, to help keep what he helped create, and to encourage and assist such quests in other realms, we should elect Chris Pikula to the Hall of Fame. It is time.

When I see people talk about voting players they believe are cheaters into the Hall of Fame, it boggles my mind. I don’t care what the person has accomplished. I don’t care what the person would have accomplished without the cheating. Doesn’t matter. If you think a player is or was a cheater, do not vote for them. Ever. Ever. Ever. Period. End of story.

There are a number of other Hall-worthy resumes on this year’s ballot, if you judge the players clean. Three players each have five PT top eights and a strong overall resume of results, and I see several other potentially worthy resumes as well. Whether or not you think each of those players has the character and integrity necessary for the Hall of Fame is up to you. And you can and should form your own opinion about each of them. But if you think the answer is no, then I don’t care what they did. Don’t vote for them.





Subsidizing Prediction Markets

Epistemic Status: Sadly Not Yet Subsidized

Robin Hanson linked to my previous post on prediction markets with the following note:

I did briefly mention subsidization, as an option for satisfying the fifth requirement: Sources of Disagreement and Interest, also known as Suckers At The Table. The ultimate sucker is an explicit, intentional one. It can serve that role quite well, and a sufficiently large subsidy can make up for a lot. Any sufficiently large sucker can do that – give people enough profit to chase, and suddenly not being so well-defined or so quick or probable to resolve, or even being not safe from key insider information, starts to sound like it is worth the risk.

Suppose one wants to create a subsidized prediction market. Your goal presumably is to get a good estimate for the probability distribution of an event, and to do so without paying more than necessary. Secondary goals might include building up interest and a marketplace for this and future prediction markets, and getting a transparently robust result, so others or even the media are more likely to take the outcome seriously. What is the best way to go about doing this?

Before looking at implementation details, I’ll look at the five things a prediction market needs.


Well Defined

The most cost-efficient subsidy for a market is to ensure that the market is well defined. Someone has to make sure everyone understands exactly what happens under every scenario, and that someone is you. Careful wording and consideration of corner cases is vital. Taking the time to do this right is a lot more efficient than throwing money at the problem, especially if you’re trying to build a system and brand over time.

If you’re going to subsidize a market, step one is to write good careful rules, make sure people understand them, and to commit to making it right for everyone if something goes wrong, if necessary by paying multiple sides as if they had won. This is potentially quite expensive even if it rarely happens, so it’s hard to budget for it, and it feels bad in the moment so often people don’t pull the trigger. Plus, if you do it sometimes, people will argue for it all the time.

But if you’re in it to win it, this is where you start.


Quick Resolution

Once you’ve got your definitions settled, your next job is to pay the winners quickly once the event happens. People care about this more than you can possibly imagine. The difference between paying out five seconds after the final play, and five minutes after the final play, is a big deal to many. Make them wait an hour and they’ll be furiously complaining on forums. When the outcome is certain, even if it hasn’t actually happened yet, it’s often a great move to pay people in advance. People love it. Of course, occasionally someone does something like pay out on bets on Hillary Clinton two weeks early, in which case you end up paying both sides. But great publicity, and good subsidy!

Another key service is to make sure your system recognizes when a profit has been locked in, or risk has been hedged, and does not needlessly tie up capital.

This one is otherwise tough to work around. If you want to know what happens twenty years from now, nothing is going to make resolving the question happen quickly. You can help a lot by ensuring that the market is liquid. If I buy in at 50% now, then a year from now the market is at 75% and is liquid enough, I can take my profits in one year rather than twenty. That’s still a year, and it’s still unlikely the price will ‘catch up’ fully to my new opinion by that time. It helps a lot, though.


Probable Resolution

It is a large feel bad, and a real expense, when capital is tied up and odds look good but then the event doesn’t happen, and funds are returned. It hurts most when you’ve pulled off an arbitrage, and you win money on any result.

If, when this happens, you subsidize people for their time and capital, they’d be much more excited to participate. I think this would have a stronger effect than a similar subsidy to the market itself, once you get enough liquidity to jump start trading. Make sure that if money gets tied up for months or years, that it won’t be for nothing.


Limited Hidden Information

If your goal is to buy the hidden information, you might be all right with others not being interested in your market, as long as the subsidy brings the insiders to the table to mop up the free money. That approach is quite expensive. If the regular traders are still driven away, you’ll end up paying a lot to get the insiders to show their hand, because they can’t make money off anyone else. Even insiders start to worry that others have more and better inside information than they do, which could put them at a disadvantage. So it’s still important to bring in the outsiders.

One approach is to make the inside information public. Do your own investigations, require disclosure from those participating in the events themselves, work to keep everyone informed. That helps but when what you want to get at is the inside information it only goes so far.

That means that when this is your problem, and you can’t fix it directly through action and disclosure, you’re going to have to spend a lot of money. The key is to give that money to the outsiders as much as possible. They are the ones you need at the table, to get yourself a good market. The insiders can then prey on the outsiders, but that’s much better than preying on you directly.

The counterargument, especially if you don’t need to show liquidity or volume, is that if you buy the information directly there’s less noise, so perhaps you want to design the system to get a small number of highly informed traders and let everyone else get driven away. In cases where the outsiders would be pure noise, where the insiders outright know the answer, and where getting outsiders to be suckers that take a loss isn’t practical, that can be best.


Disagreement and Interest

This one’s easy. You are paying a subsidy, so you’re the sucker. Be loud about it so everyone knows you’re the sucker, and then they can fight to cash in. Excellent.

The other half, disagreement, is still important. Many people, whose analysis and participation you want, still benefit from a story that explains why they are being paid to express an opinion, rather than fighting to be slightly more efficient at capturing the subsidy. And of course, if no one disagrees about the answer, then your subsidy was wasted, since you already knew the answer!

In light of those issues, what are the best ways to subsidize the market?

Option 0: Cover Your Basics

Solve the issues noted above. Choose a market people want to participate in to begin with. Ensure there are carefully written rules with no ambiguity, that any problems there are covered. Make sure you’ll get things resolved and paid quickly, that capital won’t be tied up one minute longer than necessary. When possible, disclose all the relevant information, on all levels. If things don’t resolve, it would be great if you could compensate people for their time and capital.

And also, make sure everyone is confident the winners will be paid! Nothing kills a market like worrying you can’t collect if you win. That’s often as important as, or even more important than, providing strong, reliable liquidity.

If you can improve your interface, usability, accessibility, user’s tax liability or anything like that, definitely do that. If your market design is poor, such as having the wrong tick size, make sure to fix that. Tick sizes that are too small discourage the providing of liquidity to the market, and are in my experience a bigger and more common mistake than ticks that are too big.

Finally, waive the fees. All of them. Deposit fees, withdrawal fees, trading fees, you name it. At most, there should be a fee when taking liquidity that is paid entirely to the trader providing liquidity. People hate paying fees a lot more than they like getting subsidies. They won’t cancel out.

With that out of the way, what are your options for the main subsidy?

Option 1: Be a Market Maker and Provide Liquidity Directly

As the subsidizer of the market, commit to being the market maker with well-defined rules.

The standard principle is, let everyone know that there will always be $X of liquidity available on both sides, and at a fixed cost of Y% price difference between your bid and your offer. So for example, you might agree to offer $1,000 on each side with a difference of 5% at all times, starting with a 48% bid and a 53% offer. You’d then adjust as you did trades.

A simple rule to protect yourself from unlimited downside is if you do a trade for some percent of your liquidity, you adjust your price that percentage of its width. So in this example, if someone took 40% of your offer, you’d adjust by 40% of 5%, which is 2%, and now have a 50% bid and a 55% offer. If you follow such a rule, your maximum loss is what it takes to move the odds to 0% or 100% (and if you let people keep trading until the event is done, you will take that loss). People trading against you in opposite directions can make you money, but can’t cost you money.
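The rule above is simple enough to state in a few lines of code. This is a minimal sketch of that exact mechanism (the class and method names are mine): quote a fixed size on each side at a fixed width, and after a fill of some fraction of the quoted size, shift both quotes by that fraction of the width.

```python
class LinearMarketMaker:
    """The 'dumb' market maker described above: $size on each side at a
    fixed width, with quotes shifted after every fill."""

    def __init__(self, size=1000.0, width=0.05, bid=0.48, offer=0.53):
        self.size = size      # dollars quoted on each side
        self.width = width    # offer minus bid, in probability points
        self.bid = bid
        self.offer = offer

    def on_trade(self, side, amount):
        """side='buy' means a trader lifted our offer; 'sell' means they hit our bid."""
        frac = amount / self.size   # fraction of the quoted liquidity taken
        move = frac * self.width    # shift quotes by that fraction of the width
        if side == 'buy':
            self.bid += move
            self.offer += move
        else:
            self.bid -= move
            self.offer -= move

mm = LinearMarketMaker()
mm.on_trade('buy', 400)   # someone takes 40% of the $1,000 offer
# Quotes move by 40% of the 5% width, i.e. 2%: now a 50% bid and a 55% offer.
```

The bounded-loss property falls out directly: each dollar of one-directional flow buys a fixed amount of price movement, so the total cost of being pushed from the opening price to 0% or 100% is fixed in advance.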

For convenience, you can post additional bids and offers so that if someone wants to move the odds a lot, they can see what liquidity they would get from you, and have the option to take it all at once. You’ll lose money every time the fair probability changes, but that’s why they call it a subsidy, and this encourages people to show their information quickly and efficiently.

There are ways to make that smarter, so you can lose less (or make more!) money while offering better liquidity, which will be left as an exercise for the reader. Generally they sacrifice simplicity and transparency in order to make the subsidy ‘more efficient.’ The danger is that if the subsidy is attempting to ensure a sucker is at the table, it does not do that if it stops being the sucker, or if it becomes too hard to tell whether it is one.

Then again, the dream is to offer a subsidy that doesn’t cost you anything, or even makes you money! Market making can be highly profitable when done skillfully, while also building up a marketplace.

Option 2: Take Liquidity

If you provide liquidity, others will take advantage, but in some ways you make it harder to provide liquidity. If you take liquidity, you make it more profitable to provide it, at the risk of making the market look less liquid.

It also loses money. The more clear you are about what you are up to, the better.

There are a few fun variants of this, if you’re all right with the expense.

One strategy is to periodically take liquidity in both directions. At either fixed or random intervals, examine the order books in the market. If they meet required conditions (e.g. there is at least $X on the bid and offer within Y% of each other) then you hit the bid and lift the offer for $Z.

This costs you money, since your trades net out at a loss. If someone else was both the best bid and best offer, they made money.

That’s the idea. You’re directly subsidizing people to aggressively provide liquidity.

Traders compete to be on the bid and offer to trade with you, the virtual customer, which in turn gives those with an opinion a liquid market to trade against. Sometimes people get far too aggressive providing in such situations, and those trying to capture the subsidy end up losing money because they make bad trades against others, especially if they don’t then hedge.

You can also do this in a more random or unbalanced fashion. If you flip a coin each day and decide whether to be a buyer or a seller, that will cause the price to temporarily become ‘unfair’ to satisfy your demand – you’ll get a bad price. But that creates a trading opportunity for others. It can also make the results hard to interpret, which is a risk.

Option 3: Subsidize Trading / Give Free Money

Often you’ll see crypto exchanges do this as a promotion, offering a prize to whoever trades the most of some coin. By paying for trades, you’re encouraging exactly what you want.

Except that you’re probably not doing that. Remember Goodhart’s Law.

The problem is ‘wash’ trading, where people trade with each other or themselves without taking on positions. This is bad on every level. It misleads everyone about the volume and price, and doesn’t help at all with finding out the answer to the question the market is trying to answer. The last thing you want to do is encourage it!

For that reason, subsidizing trading itself is a dangerous game. But it can be done, if you’re careful with the design.

Many online sites have tried this in the form of the classic ‘deposit bonus’ or even the free play. Anyone can sign up and get Free Money in exchange for engaging in a minimum amount of trading activity – and, of course, most of the time, making a matching deposit, if the offer is more than a small ‘free play.’ In for-profit markets the goal is to have the required activity make up for the subsidy, then hopefully hook the customer to keep them trading. There are always those looking to game these offerings if you leave them vulnerable.

That can work for you. Getting those same people, who are often quite creative and clever, thinking about how to come out ahead in your system can be a big win if your end goal isn’t profit! So long as you make it sufficiently difficult to do wash trading or sign up for tons of copies of the bonuses, you can give them a puzzle worth maximizing (from their perspective) and effectively rent their labor to see what they think of the situation.

Option 4: Subsidize Market Making

You can also subsidize market making activity, as an alternative to doing the job yourself and butchering it. That’s activity you can’t fake, provided you set the rules carefully. Paying people who provide rather than take liquidity is good, and often paying for real two-sided market making activity is better. As always, make sure you’re not vulnerable to wash trading or other forms of collusion.

Option 5: Advertising

People can’t trade what they aren’t thinking about or don’t know about.

Putting It All Together

Which of these strategies is most efficient and what circumstances change that answer?

It’s expensive to change or clarify your rules and conditions once trading has begun, so invest in doing that first. Other quality of life improvements are great, but take a back seat to establishing good liquidity.

I list Option 0 first because it’s things you definitely should do if you’re taking the operation seriously, but that doesn’t mean you always do all of them first before the direct subsidy. It’s great if you can, but often you need to establish liquidity first.

If ‘no liquidity’ is the pain point and bad experience, there isn’t much that will overcome that. There’s no market. So if you don’t have liquidity yet, providing at least a reasonable amount, or paying someone else to do it, is the best thing you can do. Just throw something out there and see what happens. This makes intuitive sense all around – as an easy intuition pump, if you want to know if something is more likely than not, offering someone a 50/50 bet on it is a great way to get their real opinion.

Once liquidity isn’t a full deal breaker, it’s time to go with Option 0, then return to increasing the subsidy and spreading the word.

What form should the direct subsidy take?

I’d advise to continue to take away bad experiences and barriers first.

The best subsidy is paying to produce reliable, safe and easy to use software, getting ironclad rules in place, and being ready to handle deposits, withdrawals, evaluation of results and other hassles. Make sure people can find your markets and set up the markets people want to find.

Next best is to avoid fees. People hate fees more than they love subsidies. Yes, you can trick people with deposit bonuses and then charge them a lot on their trades, but the best way to get away with that is to bake the fees into the trade prices, so they don’t look like fees.

At a minimum, you shouldn’t be charging fees for deposits or withdrawals, or for providing liquidity in the market.

Next up, make trades net out to zero in fees. Either charge nothing to provide or to take liquidity, or charge a fee to take liquidity but pay it out to those who provide.
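Arithmetically the second version is a pure pass-through: the taker's fee funds the maker's rebate and the operator nets zero. A sketch (the names are mine, not from the post):

```python
def settle_fees(trade_value, taker_fee_rate):
    """Net-zero fee schedule: the taker pays a fee on the trade's
    notional value, the entire fee is rebated to the maker, and
    the operator collects nothing."""
    taker_fee = trade_value * taker_fee_rate
    maker_rebate = taker_fee
    operator_take = taker_fee - maker_rebate
    return taker_fee, maker_rebate, operator_take
```

The point of routing the fee rather than charging nothing is incentive design: it rewards resting orders over crossing the spread, which is what a thin market needs.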

After that, my opinions are less confident, but here’s my best guess.

If that’s still not good enough, provide liquidity. Either pay someone else to be a market maker, or provide the service yourself. I like the idea of a ‘dumb’ market maker everyone knows is dumb, and that operates with known rules that hamstring it. If you’re looking to provide a subsidy, this is a great way to do that. A smarter market maker is cheaper, and can provide better liquidity, but is less obviously a target. As the market matures, you’ll want to transition to something smarter. Thin markets want obviously dumb providers.
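The canonical ‘dumb’ market maker with fully known rules is Hanson’s logarithmic market scoring rule (LMSR). The post doesn’t name it, but it fits the description: its quotes are a mechanical function of net shares sold, everyone can compute exactly how to exploit it, and its maximum loss in a two-outcome market is capped at b·ln(2), which is precisely the subsidy you’re choosing to spend. A minimal sketch:

```python
import math

def lmsr_cost(q_yes, q_no, b):
    """LMSR cost function for a binary market. The liquidity
    parameter b caps the operator's worst-case loss at b*ln(2)."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def lmsr_price_yes(q_yes, q_no, b):
    """Instantaneous price of the YES outcome, between 0 and 1."""
    e_yes = math.exp(q_yes / b)
    return e_yes / (e_yes + math.exp(q_no / b))

def cost_to_buy_yes(q_yes, q_no, b, shares):
    """What a trader pays to buy `shares` of YES at the current state."""
    return lmsr_cost(q_yes + shares, q_no, b) - lmsr_cost(q_yes, q_no, b)
```

A fresh market quotes 50/50, buying YES pushes the price up, and the larger you set b the deeper (and more expensive to fund) the liquidity.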

Once you’ve done a healthy amount of that, then you’ll want to give away Free Money. Give people some cash in exchange for participating in the market at all, or trading a minimum amount. Or give people bonuses on deposited funds so long as they use them to trade, or similar.

You have to watch for abuse. If you can respond to abuse by changing the system, it’s fine to be vulnerable to abuse in theory, and even allow small amounts of it. If you’re going to release a cryptographic protocol you can’t alter, you’ll need to be game theoretically robust, so this won’t be an option, and you’ll have to retreat to taking liquidity.

Taking liquidity seems less likely to motivate the average potential participant, and costs you weirdness points, but does provide a strong incentive for the right type of trader. The best reason I can think of to use such a strategy is that it is robust to abuse. That’s a big gain if you can’t respond dynamically to unfriendly players.

At the end of the day, your biggest barriers are that people’s attention is limited, complexity is bad, opportunity cost is high and people don’t do things. I keep meaning to get around to bothering with HyperMind and/or PredictIt, and keep not doing it, and I’m guessing I am far from alone in that. Subsidy can get people excited and make markets work that wouldn’t otherwise get off the ground. What I think it can’t do at reasonable cost is fix fundamental problems. If you don’t have a great product behind the subsidy, it’s going to be orders of magnitude more expensive to motivate participation.





Tidying One’s Room

Previously (Compass Rose): Culture, Interpretive Labor, and Tidying One’s Room

Epistemic Status: A bit messy

“She’s tidied up and I can’t find anything! All my tubes and wires, careful notes!” – Thomas Dolby, She Blinded Me With Science

From Compass Rose:

Why would tidying my room involve interpretive labor? 

It turns out, every item in my room is a sort of crystallized intention, generally of past-me. (We’ve all heard the stories of researchers with messy rooms who somehow knew where everything was, and lost track of everything when someone else committed the violent act of reorganizing the room, thus deindexing it from its owner’s mind.) As I decided what to do with an item, I wanted to make sure I didn’t lose that information. So, I tried to Aumann with my past self – the true way, the way that filters back into deep models, so that I could pass my past self’s ideological turing test. And that’s cognitively expensive.

It’s generally too aggressive to tidy someone’s room without their permission, unless they’re in physical danger because of it. But to be unwilling to tidy my own room without getting very clear explicit permission from my past self for every action – or at least checking in – is pathologically nonaggressive.

From my wife, upon seeing the draft up to this point:

You know, in the time it takes you to write this, you could actually tidy your room.

Proof that the subject of cleaning, and cleaning that which does not belong to you, can escalate quickly in aggressiveness!

There are a few dynamics I’d like to talk about here. I won’t (today) be relating them back to Ben’s larger questions of how generally to deal with the intentions of the environment, instead choosing a more narrow scope.



Your past self left you an ideological Turing test, of a sort, by leaving items in seemingly random locations.

Good news! I have the cheat sheet.

Close, but wrong: “I’ll remember that it’s there and I’m too lazy to optimize its location further.”

Usually correct: “I’m done with this, I should put it somewhere. This is somewhere.”

Don’t give your past self too much credit.

Most things are (hopefully) where they are because you put them there on purpose. That’s where they ‘live’. If they’re not in a permanent location, they’re probably in an arbitrary location.

One should think about intention behind the current location of a thing if and only if the location was clearly chosen intentionally. 

If the location doesn’t seem random, this is probably why: “I predicted I’d look here for this item in the future. This is where I seemed to have indexed it.”

Ben worried he needed to pass an ITT against his past self before he could alter his past self’s wishes.

I think that’s backwards. Past you’s work is done. The key ITT is against your future self!



Whether or not a location was chosen carefully, it has the great advantage of memory. If you put something somewhere, there’s a good chance that’s where you will look for it. If you put it there regularly, that chance is better still.

This is why ‘tidying’ someone’s room for them is an act of aggression. 

If I’m the one who put a thing somewhere, I could figure out where it is by remembering where I put it, or asking where I had it last (which my family called ‘The Papa Josh method’ as if it wasn’t universal, but specific names are still useful, and Papa Josh was apparently kind of an annoying jerk about it). I could also pass the ideological Turing test of my past self and figure out where I would have chosen to put it.  Since, philosophical objections aside, I am me, my chances are often very good.

If I have a strong indexing of an item to a location, I’ll instinctively put it back in the same location, confident I can find it in the future. My ability to automatically look in the right place, and find it now, is good evidence of that. If it was hard to find, I should probably move it. Over time, this improves indexing.

If someone else puts the object somewhere, I now have to figure out where someone else would place the object. Over time, if they keep doing this, I’ll figure out where they put it, but when a new person starts cleaning a location, chaos reigns. What is logical to them is not what is logical to you.

An especially nasty trap is when you’re not sure you know where an object is, so you check, find it right where you looked, and then put it back in a different location. Oh good, you think, I have it, I’ll now put it over here. Classic mistake. If an object is in the first place you look, and you need to find it soon, put it back exactly where you found it! If an object isn’t in the first place you look, put it in the first place you looked! You’ll look there again.

Otherwise, what you are doing is systematically taking things you can find, and moving them to locations where you might not find them. Whereas if you fail to find them, you won’t move them, and they’ll stay not found. This is why you can’t find the remote – it keeps moving randomly until it finds a place where you can’t find it, then stays there until you figure that one out. Repeat.

It took way too many incidents of the one thing I needed being reliably in the wrong pocket before I figured out how this works.



As a child in the days before the internet, I would keep stacks of sports and gaming magazines in my room. In order to quickly locate the one I wanted, I’d spread them out so part of the cover was visible on each copy, allowing a quick visual scan.

Then someone would, against my will, come in and ‘clean’ the room, stacking them all into one pile with no way to tell which one was which.

So the moment I came back, I’d undo the pile and spread them back out again, since the pile was almost optimizing for lack of legibility.

I’d complain about this all the time, and make my wishes clear, and the stack would reassemble twice a week anyway.

Space, especially visual space, is a resource. Using it draws things to your attention. That’s good if you want to find them! It also threatens to distract. It gives the appearance of clutter, and threatens to clutter the mind.



It is tempting to ‘tidy’ one’s room, to give appearance of tidiness, or to clear necessary space, by accumulating debt. You shove things aside or into closets, rather than putting them in a place that is helpful. Even sorting things into seemingly organized piles is still debt, if you don’t know the indexing and won’t be able to find them. At some point you’ll be paying search costs.

If you are not careful, this debt will accumulate, and interest on it will add up. It is hard to get motivated to pay down such debts, even when returns are good.

It is also tempting to ‘tidy’ that which does not need to be fixed, or to let this task distract you as a way to procrastinate other tasks.

My solution is simple. Any time you look for something, you give yourself a reasonable amount of time to find it. If after that time you cannot find it, but you are confident it is there to be found, you stop looking for the item and instead clean the room (at least) until you find the item. This inevitably finds the item and creates equilibrium – the more you need to clean, the more likely you are to do so. If you can always find everything, then everything is fine.


