Saturday, June 19, 2010

On Tail Risk and the Winner's Curse

Richard Thaler used to write a wonderful column on anomalies in the Journal of Economic Perspectives. Here's an extract from a 1988 entry on the winner's curse:
The winner's curse is a concept that was first discussed in the literature by three Atlantic Richfield engineers, Capen, Clapp, and Campbell (1971). The idea is simple. Suppose many oil companies are interested in purchasing the drilling rights to a particular parcel of land. Let's assume that the rights are worth the same amount to all bidders, that is, the auction is what is called a common value auction. Further, suppose that each bidding firm obtains an estimate of the value of the rights from its experts. Assume that the estimates are unbiased, so the mean of the estimates is equal to the common value of the tract. What is likely to happen in the auction? Given the difficulty of estimating the amount of oil in a given location, the estimates of the experts will vary substantially, some far too high and some too low. Even if companies bid somewhat less than the estimate their expert provided, the firms whose experts provided high estimates will tend to bid more than the firms whose experts guessed lower... If this happens, the winner of the auction is likely to be a loser.
Thaler goes on to point out that the winner's curse would not arise if all bidders were rational, for they would take into account, when bidding, that conditional on winning the auction their experts' valuations are likely to have been inflated. But he also presents evidence (from laboratory experiments as well as field data on offshore oil and gas leases and corporate takeovers) that bidders are not rational to this degree, and that the winner's curse is therefore an empirically relevant phenomenon. Many observers of the free agent market in baseball would agree.
In Thaler's description, the winner's curse arises despite the fact that bidder estimates are unbiased: their valuations are correct on average, even though the winning bid happens to come from someone with excessively optimistic expectations. Someone familiar with this phenomenon would therefore never conclude that all bidders are excessively optimistic simply by observing the fact that winning bidders tend to wish that they had lost.
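To see the mechanism numerically, here is a minimal Monte Carlo sketch in Python; the common value, the noise in the experts' estimates, and the bid-shading fraction are all assumed purely for illustration. Each firm's estimate is unbiased, yet the winning bid comes from the most optimistic expert and systematically overshoots the true value.

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_VALUE = 100.0    # common value of the tract (assumed)
NOISE_SD = 20.0       # spread of the experts' unbiased estimates (assumed)
N_BIDDERS = 10
N_AUCTIONS = 100_000
SHADING = 0.9         # each firm bids this fraction of its expert's estimate

estimates = rng.normal(TRUE_VALUE, NOISE_SD, size=(N_AUCTIONS, N_BIDDERS))
bids = SHADING * estimates
winning_bids = bids.max(axis=1)

print("Mean estimate across all firms:", round(estimates.mean(), 2))    # ~100: unbiased
print("Mean winning bid:", round(winning_bids.mean(), 2))               # well above 100
print("Share of auctions won at a loss:", round((winning_bids > TRUE_VALUE).mean(), 3))
```

With ten bidders and this much noise, the winner overpays in the great majority of simulated auctions even after shading its bid by ten percent; shading enough to escape the curse requires conditioning on the event of winning, which is precisely what Thaler argues real bidders fail to do.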
By the same token, when firms like BP and AIG are revealed to have underestimated the extent to which their actions exposed them (and numerous others) to tail risk, one ought not to presume that they were acting under the influence of a psychological propensity to which we are all vulnerable. Those who had more realistic (or excessively pessimistic) expectations regarding such risks simply avoided them, and by doing so also avoided coming to our attention.
And yet, here is the very same Richard Thaler arguing that a behavioral propensity to accept "risks that are erroneously thought to be vanishingly small" was responsible for both the financial crisis and the oil spill:
The story of the oil crisis is still being written, but it seems clear that BP underestimated the risk of an accident. Tony Hayward, its C.E.O., called this kind of event a “one-in-a-million chance.” And while there is no way to know for sure, of course, whether BP was just extraordinarily unlucky, there is much evidence that people in general are not good at estimating the true chances of rare events, especially when human error may be involved.
There is certainly a grain of truth in this characterization, but I feel that it misses the real story. As the analysis underlying the winner's curse teaches us, those with the most optimistic expectations will take the greatest risks and suffer the most severe losses when the low probability events that they have disregarded eventually come to pass. But tail risks are unlike auctions in one important respect: there can be a significant time lag between the acceptance of the risk and the realization of a catastrophic event. In the interim, those who embrace the risk will generate unusually high profits and place their less sanguine competitors in the difficult position of either following their lead or accepting a progressively diminishing market share. The result is herd behavior with entire industries acting as if they share the expectations of the most optimistic among them. It is competitive pressure rather than human psychology that causes firms to act in this way, and their actions are often taken against their own better judgment. 
This ecological perspective lies at the heart of Hyman Minsky's analysis of financial instability, and it can be applied more generally to tail risks of all kinds. As an account of the (environmental and financial) catastrophes with which we continue to grapple, I find it more compelling and complete than the psychological story. And it has the virtue of not depending for its validity on systematic, persistent, and largely unexplained cognitive biases among professionals in high-stakes situations.
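To make the time-lag mechanism concrete, here is a stylized simulation; the hazard rate, the return premium, and the size of the loss are all hypothetical. A firm that ignores a small annual probability of catastrophe earns a modest premium over a cautious competitor and looks better year after year, right up until the event arrives.

```python
import numpy as np

rng = np.random.default_rng(1)

YEARS = 30
P_DISASTER = 0.04      # annual probability of the tail event (assumed)
SAFE_RETURN = 0.05     # annual return of the cautious firm (assumed)
RISK_PREMIUM = 0.03    # extra return earned by ignoring the tail risk (assumed)
DISASTER_LOSS = 0.90   # fraction of value destroyed when the event hits (assumed)

disasters = rng.random(YEARS) < P_DISASTER

cautious = np.cumprod(np.full(YEARS, 1 + SAFE_RETURN))
risky_returns = np.where(disasters, -DISASTER_LOSS, SAFE_RETURN + RISK_PREMIUM)
risky = np.cumprod(1 + risky_returns)

first_hit = int(disasters.argmax()) if disasters.any() else None
print("Year of first tail event:", first_hit)
print("Years in which the risk-taker is ahead:", int((risky > cautious).sum()), "of", YEARS)
print("Final value, cautious:", round(cautious[-1], 2), "risk-taker:", round(risky[-1], 2))
```

Until the first bad draw, the risk-taker outperforms in every single year, which is exactly the competitive pressure described above; nothing in the simulation requires anyone to misjudge the probability of the event.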
Both James Kwak and Maxine Udall have also taken issue with Thaler's characterization (though on somewhat different grounds). James also had this to say about behavioral economics more generally:
Don’t get me wrong: I like behavioral economics as much as the next guy. It’s quite clear that people are irrational in ways that the neoclassical model assumes away... But I don’t think cognitive fallacies are the answer to everything, and I don’t think you can explain away the myriad crises of our time as the result of them.
I agree completely. As I said in an earlier post, I can't help thinking that too much is being asked of behavioral economics at this time, much more than it has the capacity to deliver.

---

Update (6/20). In a response to this post, Brad DeLong makes two points. First, he observes that those who underestimate tail risk can make unusually high profits not just in the interim period before a catastrophic event occurs, but also if one averages across good and bad realizations:
To the extent that the optimism of noise traders leads them to hold larger average positions in assets that possess systemic risk, their average returns will be higher in a risk-averse world--not just in those states of the world in which the catastrophe has not happened yet, but quite possibly averaged over all states of the world including catastrophic states.
This is logically correct, for reasons that were discussed at length in Brad's 1990 JPE paper with Shleifer, Summers and Waldmann. But (as I noted in my comment on his post) I don't think the argument applies to the risks taken by BP and AIG, which could easily have proved fatal to the firms. One could try to make the case that even with bankruptcy, the cumulative dividend payouts would have resulted in higher returns than those earned by less exposed competitors, but the claim seems empirically dubious to me.
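A back-of-the-envelope calculation, with entirely hypothetical numbers, illustrates the point: once the bad state wipes out the firm, the extra interim payout has to be large relative to the hazard rate for average returns to come out ahead.

```python
# All numbers below are assumed purely for illustration.
P_CATASTROPHE = 0.05    # probability the tail event arrives in a given year
BASE_PAYOUT = 0.04      # annual dividend yield of a less exposed competitor
EXTRA_PAYOUT = 0.02     # extra annual yield earned by bearing the tail risk
HORIZON = 20            # years over which payouts are accumulated

# Less exposed competitor: pays BASE_PAYOUT every year with certainty.
safe_total = BASE_PAYOUT * HORIZON

# Exposed firm: pays BASE_PAYOUT + EXTRA_PAYOUT while it survives,
# and nothing after the catastrophe wipes out its equity holders.
exposed_total = 0.0
survival = 1.0
for year in range(HORIZON):
    exposed_total += survival * (BASE_PAYOUT + EXTRA_PAYOUT)
    survival *= 1 - P_CATASTROPHE

print("Expected cumulative payout, less exposed firm:", round(safe_total, 3))
print("Expected cumulative payout, exposed firm:     ", round(exposed_total, 3))
```

With these particular numbers the exposed firm's expected cumulative payout falls short of its competitor's; a large enough premium or a small enough hazard would reverse the ordering, which is exactly what the empirical question turns on.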

Brad's second point is that my distinction between the ecological and psychological approaches is unwarranted, and that the two are in fact complementary. Here he quotes Charles Kindleberger:
Overestimation of profits comes from euphoria, affects firms engaged in the production and distributive processes, and requires no explanation. Excessive gearing arises from cash requirements that are low relative both to the prevailing price of a good or asset and to possible changes in its price. It means buying on margin, or by installments, under circumstances in which one can sell the asset and transfer with it the obligation to make future payments. As firms or households see others making profits from speculative purchases and resales, they tend to follow: "Monkey see, monkey do." In my talks about financial crisis over the last decades, I have polished one line that always gets a nervous laugh: "There is nothing so disturbing to one’s well-being and judgment as to see a friend get rich."
The Kindleberger quote is wonderful, but the claim is about interdependent preferences, not cognitive limitations. I don't doubt that cognitive limitations matter (I started my post with the winner's curse, after all), but I was trying to shift the focus to interactions and away from psychology. In general I think that the Minsky story can be told with very modest departures from rationality, which to me is one of the strengths of the approach.

6 comments:

  1. Similar to the tactical symmetry problem in venture capital.

    Briefly, suppose two startups with identical resources simultaneously spot a natural monopoly opportunity. Rational agents would each invest up to half the present value of this monopoly over its lifetime, resulting in a lifetime NPV of around zero for the industry (see the sketch below).

    Real-world agents will instead wildly overspend, each betting on success, resulting in negative industry NPV.

    As a result, it is critical for the VC to identify not just a strategic moat, but a significant initial tactical advantage before investing, so that the contest is not symmetric.
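    A minimal sketch of the payoff arithmetic above, with the monopoly value and spending levels assumed purely for illustration:

    ```python
    # Two identical startups contest a natural monopoly worth V over its lifetime.
    V = 100.0                 # lifetime present value of the monopoly (assumed)

    # Symmetric contest: each firm wins with probability 1/2, so each would
    # rationally commit at most V / 2.
    rational_spend = V / 2
    print("Industry NPV, rational spending:", V - 2 * rational_spend)        # 0.0

    # If both firms bet on winning outright and overspend, industry NPV turns negative.
    overconfident_spend = 0.8 * V
    print("Industry NPV, overspending:", V - 2 * overconfident_spend)        # -60.0
    ```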

  2. Rajiv,

    One can get herding with fully rational agents, as Chari and Kehoe showed in 2004 in JET. One needs noise, but not noise traders as such. The problem is that gang does not carry this insight over into their DSGE models.

  3. ^ The problem is that those herds don't survive in DSGE. If you come up with a DSGE model where they exist, you'll easily get it into Econometrica.

  4. One feature that the BP and AIG situations share to a much greater extent than the oil lease auction context is the ability to externalize losses.

    Any claims-based compensation process will leave some of the harm from an oil-spill disaster like the one BP caused, through what in hindsight turned out to be excessive risk taking, uncompensated in marginal cases.

    Presumably, BP would not have ceased to drill even if it had known the exact ex ante probability of the disaster we're experiencing now. It is in the offshore drilling business and is one of the most profitable companies in the world as a result. But it might have procured more insurance.

    Also, in the BP case, the de facto other party (the public) was represented by the Minerals Management Service, which was notably dysfunctional at the critical moments in the Deepwater Horizon approval and regulatory process.

    AIG's losses from what in hindsight turned out to be excessive risk taking were externalized through a government bailout, but if they had not been externalized that way, they would have been externalized through the bankruptcy process, with creditors taking the losses.

    Any time it is known that losses will be partially externalized while gains will be internalized, there is an incentive to take excessive risks. This is why limited liability entities were initially allowed only by legislative decree, and why pre-FDIC banks had such stunningly high failure rates (typically 50% or so over periods that included a financial panic, roughly a hundred times higher than in the post-FDIC era).

    AIG too had an ineffectual regulator (the Fed), and failed to disclose its exposure to the counterparties to its transactions. Had its exposure been adequately disclosed, its bond rating might not have been as high and its counterparties might have sought to diversify their risks.

    Another feature, true of BP and AIG and of almost every other large publicly held company that has been in existence for many years, is that the downside risk to the individual human beings actually making the decisions was very modest: a kept job and a smaller bonus in a good scenario, a lost job in a bad scenario, and a lost job plus a monetary loss that is a tiny fraction of the loss caused (or of the potential upside gain) in a worst-case scenario.

    At both the institutional and the individual level, unbalanced incentives and ineffectual counterparts in the negotiation process lead rational actors to make risky choices.

    As with tail risk and the winner's curse, none of these explanations requires behavioral economics adjustments on the part of the bad actors; just a certain amount of carelessness by the future victims, which may be rational in the big picture for those victims given the costs of vigilance.

  5. Andrew, thanks for your comment (and the many others you have posted - I read them all even if I can't always respond). I agree with much of what you say, but I don't view counterparty risk as an externality. This is something that is accepted voluntarily by those exposed to it, and should be capitalized into CDS spreads. Both AIG and its counterparties substantially underestimated this risk, which allowed AIG to corner the market.

  6. Rajiv - obviously you're preaching to the converted here but "irrational" explanations of the crisis don't convince me. Your post motivated me to put together some of my thoughts on the subject here http://www.macroresilience.com/2010/06/24/agent-irrationality-and-macroeconomics/ .

    Not much that you haven't mentioned before on this blog but just one point that I feel doesn't get enough attention - part of the problem with the rational vs irrational debate is that the definition of rationality assumed by both camps is a utopian ideal. I find Gerd Gigerenzer's work on this topic and his insistence on an ecological rationality rather than homo economicus incredibly convincing and can't recommend his recent book "Rationality for Mortals: How People Cope with Uncertainty" highly enough.
