There's an interesting debate in progress between Nate Silver and Matt Yglesias on the merits of introducing prediction markets for climate change. Nate is enthusiastic about Robin Hanson's proposal that such markets be developed, Matt is concerned about manipulation of prices by coal and oil interests, and Nate thinks that these concerns are a bit overblown and could be overcome by creating markets that have broad participation and high levels of liquidity.
Nate's argument is roughly as follows: the broader the participation and the greater the volume of trade, the more expensive it will be for an individual or organization to consistently manipulate prices over a period of months or years. If this argument is correct, then markets with limited participation and low volume (such as the Iowa Electronic Markets) should be less efficient at aggregating information than markets with relatively broad participation and much higher volume (such as Intrade). The logic of this argument is so compelling that I was once certain it must be true. But after watching these two markets closely during the 2008 election season, I became convinced that it was IEM rather than Intrade that was sending the more reliable signals, and for some very interesting and subtle reasons.
First of all, let's think for a minute about how one might determine which of two markets is aggregating information more efficiently. We can't just look at events that occurred and examine which of the two markets assigned such events greater probability, because low probability events do indeed sometimes occur. If we had a very large number of events (as in weather forecasting) then one could construct calibration curves to compare markets, but the number of contracts on IEM is very small and this option is not available. So what do we do?
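To make the calibration idea concrete, here is a minimal sketch of how such a comparison would work if enough contracts existed. The data below are purely illustrative, not actual market prices: bin the forecast probabilities, then compare each bin's average forecast with the observed frequency of the event. A well-calibrated market's curve lies near the diagonal.

```python
from collections import defaultdict

def calibration_curve(forecasts, outcomes, n_bins=10):
    """Group forecast probabilities into bins and compare each bin's
    average forecast with the observed frequency of the event."""
    bins = defaultdict(list)
    for p, hit in zip(forecasts, outcomes):
        # Assign each forecast to a bin by its predicted probability.
        b = min(int(p * n_bins), n_bins - 1)
        bins[b].append((p, hit))
    curve = []
    for b in sorted(bins):
        pairs = bins[b]
        mean_forecast = sum(p for p, _ in pairs) / len(pairs)
        observed_freq = sum(hit for _, hit in pairs) / len(pairs)
        curve.append((mean_forecast, observed_freq))
    return curve

# Illustrative data only: seven contracts is nowhere near enough in
# practice, which is precisely the problem with applying this to IEM.
forecasts = [0.1, 0.15, 0.4, 0.45, 0.8, 0.85, 0.9]
outcomes  = [0,   0,    0,   1,    1,   1,    1]
print(calibration_curve(forecasts, outcomes))
```

The sketch makes the limitation obvious: with only a handful of contracts, most bins contain one or two observations and the observed frequencies are meaningless.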
Fortunately, there is a reliable method for comparing the efficiency of the two markets, by looking for and exploiting cross-market arbitrage opportunities. Here's how it works. Open an account in each market with the same level of initial investment. There is a limit of $500 on initial account balances at IEM, so let's take this as our initial investment also at Intrade. Next, look for arbitrage opportunities: differences in prices for the same asset across markets that are large enough for you to make a certain profit, net of trading fees (these are zero on IEM but not on Intrade). Such opportunities do arise, and sometimes last for hours or even days: here's an example. Act on these opportunities, by selling where the price is high and buying where it is low. When prices in the two markets converge, reverse these trades: buy where you initially sold and sell where you initially bought. You will not make much money doing this, since the price differences in general will be small. But what you will do is transfer funds across accounts without making a loss.
How does this help in answering the question of which market is more efficient? After a few weeks or months have passed, your overall balance will have grown slightly, but it will now be unevenly distributed across markets. The market in which you have made more money is the less efficient one. This is because, on average, prices in the less efficient market will move towards those in the more efficient one, and when you reverse your arbitrage position, your profit will be concentrated in the market in which the price has moved most.
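The bookkeeping behind this argument can be sketched under stylized assumptions: no trading fees, a single contract traded in both markets, and prices that eventually converge. The market names and prices below are hypothetical.

```python
def arbitrage_round_trip(p_a, p_b, p_conv, qty=100):
    """Buy where the contract is cheap, sell where it is expensive,
    then unwind both legs once prices converge. Returns the profit
    realized in each market separately."""
    if p_a < p_b:
        buy_mkt, sell_mkt = "A", "B"
        buy_px, sell_px = p_a, p_b
    else:
        buy_mkt, sell_mkt = "B", "A"
        buy_px, sell_px = p_b, p_a
    # Unwind at the common converged price: sell the long leg,
    # cover the short leg.
    return {
        buy_mkt: (p_conv - buy_px) * qty,
        sell_mkt: (sell_px - p_conv) * qty,
    }

# Hypothetical numbers: market B is overpriced at 0.62, market A trades
# at 0.55, and prices converge to 0.56, close to A's original price.
# The profit concentrates in B, flagging B as the less efficient market.
print(arbitrage_round_trip(p_a=0.55, p_b=0.62, p_conv=0.56))
```

Note that the total profit is the same regardless of where prices converge; what the per-market split reveals is which side did most of the moving.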
Let me state for the record that I did not, in fact, carry out this experiment, although I think it would be a good (and probably publishable) research project. But I did try informally to see which market was better at predicting future prices in the other, and came to the conclusion that it was IEM. This surprised me, and I started to wonder how a small, illiquid market with severe restrictions on participation and account balances could be more efficient.
There are two possible reasons. First, Intrade was highly visible in the news media, and changes in prices were regularly reported on blogs across the political spectrum. A fall in the price of a contract could signal weakness in a campaign, generate pessimism about its viability, and result in a collapse in fundraising. Propping up the price during a difficult period therefore made a lot of sense, and could pay for itself several times over with its impact on donations. Dollar for dollar, it was probably a much better investment than television advertising in prime time. I'm not suggesting that the campaigns themselves did or encouraged this, but it does seem likely that some well-financed supporters took it upon themselves to help out in this way.
The second reason is more interesting. The extent of participation and the volume of trade in a market are not determined simply by the market design; they also depend on the availability of profit opportunities, which itself depends in part on the extent of attempted manipulation. There is an active users' forum on Intrade, and it was clear at the time that a small, smart group of traders was on the lookout for mispriced assets, well aware that such mispricing could arise out of political enthusiasm (as in the nominee contract for Ron Paul) or through active manipulation (as in the Obama and McCain contracts discussed by Nate here).
In other words, the breadth of participation and the volume of trade will be higher when market manipulation is suspected than when it is not. If the climate change futures market is assumed to be efficient, it will probably attract fewer traders and lower volumes of investment. So Nate's solution - the design of a market with high participation and liquidity in order to generate efficiency - contains at its heart a paradox. It is inefficiency that will generate high participation and liquidity should such a market come into existence.
I do believe that the introduction of prediction markets for climate change is a good idea. But I would like to see similar contracts offered across multiple markets, including at least one like the IEM in which participation is limited with respect to both membership and initial balance. This will allow us to carry out an ongoing evaluation of the reliability of market signals, as well as the effectiveness of different market designs.
Update (12/12): Thanks to Paul Hewitt for an extended discussion of this post, and to Chris Masse for linking both here and to Paul's commentary.
---
Hi Rajiv...
I found your post very interesting and thought provoking. I have a blog on prediction markets at torontopm.wordpress.com. I have just posted my review of your post with my comments. If you are interested, I have written several posts on the current debate.
I'd be most interested in your comments on some of the topics raised on my blog.
Paul Hewitt, CA
pshewitt@rogers.com
Paul, thanks for your comments (both here and on your blog). I look forward to reading your other posts on the topic.