Saturday, July 03, 2010

Innovation, Scaling, and the Industrial Commons

When Yves Smith makes a strong reading recommendation, I usually take notice. Today she directed her readers to an article by Andy Grove calling for drastic changes in American policy towards innovation, scaling, and job creation in manufacturing. The piece is long, detailed and worth reading in full, but the central point is this: an economy that innovates prolifically but consistently exports its jobs to lower-cost overseas locations will lose not only its capacity for mass production but eventually also its capacity for innovation:
Bay Area unemployment is even higher than the... national average. Clearly, the great Silicon Valley innovation machine hasn’t been creating many jobs of late -- unless you are counting Asia, where American technology companies have been adding jobs like mad for years.

The underlying problem isn’t simply lower Asian costs. It’s our own misplaced faith in the power of startups to create U.S. jobs... Startups are a wonderful thing, but they cannot by themselves increase tech employment. Equally important is what comes after that mythical moment of creation in the garage, as technology goes from prototype to mass production. This is the phase where companies scale up. They work out design details, figure out how to make things affordably, build factories, and hire people by the thousands. Scaling is hard work but necessary to make innovation matter.
The scaling process is no longer happening in the U.S. And as long as that’s the case, plowing capital into young companies that build their factories elsewhere will continue to yield a bad return in terms of American jobs...

There’s more at stake than exported jobs... A new industry needs an effective ecosystem in which technology knowhow accumulates, experience builds on experience, and close relationships develop between supplier and customer. The U.S. lost its lead in batteries 30 years ago when it stopped making consumer-electronics devices. Whoever made batteries then gained the exposure and relationships needed to learn to supply batteries for the more demanding laptop PC market, and after that, for the even more demanding automobile market. U.S. companies didn’t participate in the first phase and consequently weren’t in the running for all that followed...

How could the U.S. have forgotten [that scaling was crucial to its economic future]? I believe the answer has to do with a general undervaluing of manufacturing -- the idea that as long as “knowledge work” stays in the U.S., it doesn’t matter what happens to factory jobs... I disagree. Not only did we lose an untold number of jobs, we broke the chain of experience that is so important in technological evolution... our pursuit of our individual businesses, which often involves transferring manufacturing and a great deal of engineering out of the country, has hindered our ability to bring innovations to scale at home. Without scaling, we don’t just lose jobs -- we lose our hold on new technologies. Losing the ability to scale will ultimately damage our capacity to innovate.
Grove recognizes, of course, that companies will not unilaterally change course unless they face a different set of incentives, and that this will require a vigorous industrial policy:
The first task is to rebuild our industrial commons. We should develop a system of financial incentives: Levy an extra tax on the product of offshored labor. (If the result is a trade war, treat it like other wars -- fight to win.) Keep that money separate. Deposit it in the coffers of what we might call the Scaling Bank of the U.S. and make these sums available to companies that will scale their American operations. Such a system would be a daily reminder that while pursuing our company goals, all of us in business have a responsibility to maintain the industrial base on which we depend and the society whose adaptability -- and stability -- we may have taken for granted... Unemployment is corrosive. If what I’m suggesting sounds protectionist, so be it... If we want to remain a leading economy, we change on our own, or change will continue to be forced upon us.
Neither Grove's diagnosis nor his proposed solutions will persuade those who are convinced that protectionism of any kind is folly. I am not entirely convinced myself, and suspect that he may be underestimating the likelihood (and consequences) of cascading retaliatory actions and a collapse in international trade. But the argument must be taken seriously, and anyone opposed to his proposals really ought to come up with some alternatives of their own.

---

Update (7/4). In an email (posted with permission) Yves adds:
On the one hand, you are right, any move towards protectionism (or even permitted-within-WTO pushback against mercantilist trade partners) could very quickly get ugly. But the flip side is I wonder if we have a level of global integration that is inherently unstable (both for Rodrik trilemma reasons, international economic integration with insufficient government oversight creates political problems, plus the Reinhart/Rogoff finding that high levels of international capital flows are associated with financial crises). If so, we may have a short run (messiness of reconfiguration) v. long term (costs of really big financial crises) tradeoff.
This is a good point. The purpose of my post was to highlight Grove's analysis of the symbiotic relationship between innovation and scaling (which I think is both interesting and valid), and to challenge those who are opposed to his reform proposals to explain how they would deal with the situation in which we find ourselves. Passive tolerance of mass unemployment, widening income inequality, and withering innovative capacity is not an option.

---

Update (7/4). Tyler Cowen is predictably dismissive of Grove's article, but (less predictably) seems not to have read it very closely. What Grove means by scaling is the process by which "technology goes from prototype to mass production" as companies "work out design details, figure out how to make things affordably, build factories, and hire people by the thousands." This is not about increasing returns to scale as economists normally use the term (declining average costs as a function of output). So Tyler's claim that "at best, given the logic of [Grove's] argument, this would imply a tax only on the increasing returns industries" is not correct. And I cannot imagine what he means when he says that the "big exporting success these days is Germany, which has less 'scale' than does the United States." Less scale in what sense? Population or per-capita income differences between the two countries are entirely irrelevant here. Is he trying to say that Germany engages in less scaling (and hence more offshoring) than does the United States? This would be relevant, but is empirically dubious.

Like Tyler, I am not convinced that Grove's policy proposals are wise. But his analysis of the relationship between innovation and scaling and the need for a policy response really does deserve to be read with more care.

---

Update (7/6). Tim Duy follows up with a characteristically detailed and thoughtful post. His bottom line:
Something more than cyclical forces is weighing on the American jobs machine. Here I have tried to extend the Grove/Smith/Sethi discourse with additional focus on absolute declines in manufacturing jobs and distressing declines in capacity growth rates. These trends may be critically important in understanding the dismal performance of US labor markets. If they are in fact critical, they raise serious questions about US trade policy – questions that few in Washington want to address. Given the extent to which manufacturing capacity has already been offshored, those questions go far beyond the recently announced tiny shift in Chinese currency policy. Simply put, accepting the importance of manufacturing capacity and the possibility that offshoring has had a much more deleterious impact on the US economy than commonly accepted would require a significant paradigm shift in the thinking of US policymakers. If you scream “protectionist fool” in response, then you need to have a viable policy alternative that goes beyond the empty rhetoric of “we need to teach better creative thinking skills in schools.” That answer is simply too little too late.
It's worth reading the entire post to see the data and reasoning that drives him to this conclusion.

---

I'll be away at a (very interesting) conference for the next couple of days and will be slow to respond to comments and emails.

Friday, July 02, 2010

Market Microstructure and Capital Formation

In an earlier post I argued that recent changes in technology have altered the distribution of trading strategies in asset markets, with information extracting strategies becoming more prevalent at the expense of information augmenting strategies. Specifically, there has been a dramatic increase in the market share of strategies based on rapid responses to market data using algorithms and co-location facilities. One consequence is that the data itself becomes less reliable over time, resulting in greater price volatility and occasional severe disruptions. The flash crash of May 6 was a striking example. 
While my focus has been on market stability, this kind of transformation in microstructure probably has a number of other important effects. In recent testimony before the joint CFTC-SEC committee on emerging regulatory issues, David Weild has argued that one such effect concerns the size distribution of publicly traded companies, and capital formation more generally:
There has been a computer arms race unleashed on Wall Street by changes in regulation and technology... [This] is displacing fundamental investing with computer‐trading based strategies and has created new forms of systemic risk, a loss of investor confidence, and a disastrous decline in primary (IPO) capital formation and the number of publicly listed companies in the United States.

From 1997 to Year End 2009 there has been a 40% decline in the number of publicly listed (i.e., NYSE, AMEX and NASDAQ) companies in the United States. On a GDP weighted basis, we have seen a more than 55% decline in the number of publicly listed companies. Today’s market structure has lost the ability to support small capitalization companies and initial public offerings (IPOs) on the scale necessary to help drive the US economy. The U.S. now annually delists twice as many companies as it lists and this trend has been going on since the advent of electronic trading... the unemployment crisis in the United States has been partly caused by changes to debt and equity capital market structure and the events of May 6 may give us an opportunity to come to grips with the notion that we have entered into an era where trading interests are eclipsing fundamental investment and economic interests.

Fundamental investing, or so‐called “information increasing” activities, are being displaced by trading, or so‐called “information mining” activities. The growth in indexing and ETFs may be exacerbating this problem.

In addition, stock market structure today is geared for large‐capitalization stocks with typically symmetrical order books but disastrous for the vast majority of small‐capitalization stocks with asymmetrical order books (where there is not naturally an offsetting buy order to match against a sell order and vice versa)... The “Flash Crash” was an example of where even normally liquid securities went to a state of “asymmetry” and price discovery broke down...
[Until] all trades, quotes and other messages in all interrelated markets are tagged and traceable to the trading venue, broker and ultimate investor, and disclosed to the market, markets will not be perceived as fair... With full tagging, tracking and reporting and the application of posttrade analysis and test bed techniques such as Agent‐Based Models, regulators and market participants will... once and for all be in a position to judge the impact of other participants and to regulate and plan accordingly...
It may be time to admit that what works for large, naturally visible companies, is the antithesis of what is needed by small companies and it is these small companies that are essential to grow our markets, reduce unemployment, restore US competitiveness and drive the US economy.
I am not aware of any academic research that links market microstructure to the size distribution of publicly listed companies in the manner suggested here, and I am grateful to David for bringing his testimony and supporting documents to my attention. The issue is clearly of considerable importance and deserving of greater scrutiny.

---

Update (7/2). In an email (posted with permission) David adds:
I did a presentation to the ISEEE (International Stock Exchange Executives Emeriti) at the end of April.  The audience consisted of about 25 mostly former senior stock exchange executives... I was taken aback by the reaction of people from places like the Zurich Stock Exchange, Australian, New Zealand, Bovespa and others who were of the opinion that these electronic market structures (specifically, compressed spread-trading centric electronic continuous auction markets) are hurting primary capital formation in many of their countries as well.

For me, having run strategy for investment banking, research, institutional sales and trading at a major Wall Street firm, it is pretty simple - If one can't make money supporting small cap stocks, one won't support small cap stocks...
This has had two effects:
  1. The investment banks tell issuers that they have to do a much larger ($75 million) IPO; minimum IPO sizes have increased much faster than the rate of inflation.
  2. Aftermarket support for IPOs has withered because issuers lose money providing it (unless the companies are much larger).
It is commonly argued that the rise of algorithmic trading has resulted in increased liquidity, although this claim is by no means universally accepted. David (if I understand him correctly) is arguing that even if liquidity has increased for some classes of securities, it has declined for others, with detrimental net effects on capital formation.

Happiness and the World Cup

Tyler Cowen considers the question of which team's victory in the World Cup would result in the greatest overall happiness, and concludes (based on the number and intensity of fans) that it would be Brazil. As far as the immediate effects of a victory are concerned, this is probably about right. But could there not also be consequences for global economic growth and financial stability? 
Hein Schotsman of ABN AMRO has looked at these broader economic effects and comes to the conclusion that a victory by a large economy currently running a significant trade surplus would be best. This leads him to the one obvious candidate:
According to a detailed analysis of the 32 countries in this year’s tournament, Mr. Schotsman is convinced that a win by the Germans would boost the global economy. Here’s how: Germany is among the world’s biggest economies and has a large trade surplus. A win by the Germans would boost domestic confidence and spending, thus increasing imports from other countries.

“A German victory will result in a relatively big dent in the German trade surplus, which is best for the stability of the world economy. This is just what is badly needed after the credit crisis,” Mr. Schotsman said in a report released Tuesday called Soccernomics 2010.
Maybe so. But as far as my own happiness is concerned, I would like to see Argentina prevail against Germany tomorrow. Lionel Messi has been the player of the tournament so far and I would hate to see his team eliminated.
---
I thank Ingela Alger for alerting me to this story and sending me references. For those not fully fluent in Dutch, Schotsman's paper may be uploaded to Google Translate for a reasonably comprehensible rendering.

Tuesday, June 29, 2010

On Blogs and Economic Discourse

I was making my way back from a conference yesterday and completely missed the uproar over Kartik Athreya's provocative essay on economics blogs. Athreya argued, in effect, that most such blogging is done by ill-informed hacks who ought to be ignored while properly trained experts (such as himself) are left in peace to do the difficult work of making progress in the field. The original post has been taken down but (as a telling reminder that no public statement can subsequently be made private in this day and age) a copy may be viewed here.

The response from the accused was swift and brutal (see Thoma, DeLong, Sumner, Rowe, Cowen, Kling, Avent, Yglesias and Wilkinson for a sample). I don't want to pile on, and there's little I can add to what others have already said. But I'd like to take this opportunity to reiterate and expand upon a couple of points that I have made in previous posts about the rapidly changing role of blogs in economic discourse.

My view of the matter is almost diametrically opposed to that of Athreya: I consider these changes to be both irreversible and potentially very healthy. In a post commemorating the birthdays of two excellent economics blogs, I made this point as follows (see also Andrew Gelman's follow-up):
The community of academic economists is increasingly coming to be judged not simply by peer reviewers at journals or by carefully screened and selected cohorts of students, but by a global audience of curious individuals spanning multiple disciplines and specializations. Voices that have long been silenced in mainstream journals now insist on being heard on an equal footing. Arguments on blogs seem to be judged largely on their merits, independently of the professional stature of those making them. This has allowed economists in far-flung places with heavy teaching loads, or those who pursued non-academic career paths, to join debates. Even anonymous writers and autodidacts can wield considerable influence in this environment, and a number of genuinely interdisciplinary blogs have emerged...
This has got to be a healthy development. One might persuade a referee or seminar audience that a particular assumption is justified simply because there is a large literature that builds on it, or that tractability concerns preclude reasonable alternatives. But this broader audience is not so easy to convince. Persuading a multitude of informed, thoughtful, intelligent readers of the relevance and validity of one's arguments using words rather than formal models is a far more challenging task than persuading one's own students or peers. If one can separate the wheat from the chaff, the reasoned argument from the noise, this process should result in a more dynamic and robust discipline in the long run.
In fact, the refereeing process for blog posts is in some respects more rigorous than that for journal articles. Reports are numerous, non-anonymous, public, rapidly and efficiently produced, and collaboratively constructed. It is not obvious to me that this process of evaluation is any less legitimate than that for journal submissions, which rely on feedback from two or three anonymous referees who are themselves invested in the same techniques and research agenda as the author.

I suspect that within a decade, blogs will be a cornerstone of research in economics. Many original and creative contributions to the discipline will first be communicated to the profession (and the world at large) in the form of blog posts, since the medium allows for material of arbitrary length, depth and complexity. Ideas first expressed in this form will make their way (with suitable attribution) into reading lists, doctoral dissertations and more conventionally refereed academic publications. And blogs will come to play a central role in the process of recruitment, promotion and reward at major research universities. This genie is not going back into its bottle.

---

Update (6/30). Andrew Gelman follows up with a long and thoughtful post on the role of blogs in academic research across different fields:
Sethi points out that, compared to journal articles, blog entries can be subject to more effective criticism. Beyond his point (about a more diverse range of reviewers), blogging also has the benefit that the discussion can go back and forth. In contrast, the journal reviewing process is very slow, and once an article is published, it typically just sits there...

Can/should the blogosphere replace the journal-sphere in statistics? I dunno. At times I've been able to publish effective statistical reactions in blog form... or to use the blog as a sort of mini-journal to collect different viewpoints... And when it comes to pure ridicule... maybe blogging is actually more appropriate than formally writing a letter to the editor of a journal.

But I don't know if blogs are the best place for technical discussions. This is true in economics as much as in statistics, but the difference is that many people have argued (perhaps correctly) that econ is already too technical, hence the prominence of blog-based arguments is maybe a move in the right direction...

Statistics, though, is different... even the applied stuff that I do is pretty technical--algebra, calculus, differential equations, infinite series, and the like... Can this sort of highly-technical material be blogged? Maybe so. Igor Carron does it, and so does Cosma Shalizi--and both of them, in their technical discussions, clearly link the statistical material to larger conceptual questions in scientific inference and applied questions about the world. But this sort of blogging is really hard--much harder, I think, than whatever it takes for an economics professor with time on his or her hands to regularly churn out readable and informative blogs at varying lengths commenting on current events, economic policy, the theories of micro- and macro-economics, and all the rest...

On the other hand, the current system of scientific journals is, in many ways, a complete joke. The demand for referee reports of submitted articles is out of control, and I don't see Arxiv as a solution, as it has its own cultural biases. I agree with Sethi that some sort of online system has to be better, but I'm guessing that blogs will play more of a role in facilitating informal discussions rather than replacing the repositories of formal research. I could well be wrong here, though: all I have are my own experiences, I don't have any good general way of thinking about this sort of sociology-of-science issue.
One minor point of clarification: I did not say (or mean to imply) that blogs would replace journals as the primary repositories of academic research. My point was simply that blogs are fast becoming an integral part of the research infrastructure and that, looking ahead, many innovative ideas will find initial expression in this format before being subject to further development along more traditional lines.

Tuesday, June 22, 2010

Gamesmanship and Collective Reputation

I've often wondered why diving is so prevalent in football. Even if one manages to fool a referee occasionally, the act is captured on video for all to see and inevitably hurts the reputation of the player and his team. Quite apart from the resulting ridicule, there are also long term costs on the field. Referees are more likely to be suspicious when they see players with tarnished reputations tumbling like bowling pins with little apparent contact. Some legitimate fouls may not be called as a result, and there's always the possibility that a player may be cautioned or sent off for unsportsmanlike conduct. So the whole culture of diving, and the fact that it has been embraced so thoroughly by certain teams while being avoided and frowned upon by others, has always been a bit of a puzzle to me.
In a fascinating article, Andrea Tallarita provides some rationalization for this behavior. He explains that diving is a part of a broad range of calculated tactics that are used to get into an opponent's head, inducing frustration, loss of concentration and overreaction. Zidane's costly headbutt of Materazzi in the 2006 World Cup final is the most famous of many examples. Here's how Tallarita explains the approach: 
Perhaps nothing has been more influential in determining the popular perception of the Italian game than furbizia, the art of guile... The word ‘furbizia’ itself means guile, cunning or astuteness. It refers to a method which is often (and admittedly) rather sly, a not particularly by-the-book approach to the performative, tactical and psychological part of the game. Core to furbizia is that it is executed by means of stratagems which are available to all players on the pitch, not only to one team. What are these stratagems? Here are a few: tactical fouls, taking free kicks before the goalkeeper has finished positioning himself, time-wasting, physical or verbal provocation and all related psychological games, arguably even diving... Anyone can provoke an adversary, but it takes real guile (real furbizia) to find the weakest links in the other team’s psychology, then wear them out and bite them until something or someone gives in - all without ever breaking a single rule in the book of football. 
Viewed in this light, the prevalence of diving starts to make a bit more sense. Even if one doesn't win the immediate foul or penalty, the practice can unsettle an opponent and induce errors. And a reputation for diving can cause an opponent to avoid even minimal, routine contact. This is gamesmanship, pure and simple.
But if gamesmanship is so rewarding, why are some teams reluctant to embrace it? Why do the Spanish play such a clean version of the game and consider these tactics to be beneath them, while their closest neighbors, the Italians and Portuguese, have no such qualms? Here is Tallarita's explanation:
Ultimately, these differences come from two irreconcilable visions of the game. The Spanish style understands football as something like a fencing match, a rapid and meticulous art of noble origins where honour is the brand of valour. To the Italians, football is more like an ancient battle, a primal and inclement bronze-age scenario where survival rules over honour.
But this only pushes the question back a step: why are the visions of the game so different in nations that are geographically and culturally so close? I think that the answer (or at least part of it) lies in the fact that once a collective reputation has been established, it becomes individually rational for new entrants to the group to act in ways that preserve it. This mechanism was explored in a very interesting 1996 paper by Jean Tirole in which he explains why "new members of an organization may suffer from an original sin of their elders long after the latter are gone."
The reason why the past behavior of the group affects the incentives of current and future members is that past behavior is not perfectly observable at the level of the individual. Groups consist of overlapping cohorts, with older members mixed in with newer ones. Those older members who have behaved "badly" in the past and thus ruined their reputations have no incentive to behave "well" currently. But suspicion also falls on the newer members, who cannot be perfectly distinguished from the older ones. This suspicion alters incentives in such a manner as to make it self-fulfilling. Even if the entire group would benefit from a change in reputation, this may be impossible to accomplish. Lifting the reputation of the group would require several cohorts to behave well despite being presumed to behave badly, and this is a sacrifice that does not serve their individual interests.
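Tirole's mechanism can be illustrated with a toy simulation. To be clear, this is my own stripped-down sketch, not his actual model: I assume outsiders extend trust in proportion to the group's past average behavior (since individuals cannot be distinguished from past members), that a member behaves well only when that trust makes good behavior worth its cost, and that reputation is a slow-moving average over overlapping cohorts.

```python
# Hypothetical sketch of a collective-reputation trap in the spirit of
# Tirole (1996). All parameter values are illustrative assumptions.

def simulate(initial_reputation, cost_of_good_behavior=0.5,
             turnover=0.2, periods=30):
    reputation = initial_reputation  # fraction believed to behave well
    for _ in range(periods):
        # Trust extended to any individual equals the group's reputation,
        # because outsiders cannot tell new members from old ones.
        behave_well = reputation > cost_of_good_behavior
        new_behavior = 1.0 if behave_well else 0.0
        # Reputation updates slowly: each period only a fraction of the
        # group (the entering cohort) contributes new observed behavior.
        reputation = (1 - turnover) * reputation + turnover * new_behavior
    return reputation

# A group that starts trusted converges to full trust; a group that starts
# distrusted cannot escape, even though all members would prefer it did.
print(round(simulate(0.9), 3))  # stays near 1.0
print(round(simulate(0.3), 3))  # decays toward 0.0
```

The point of the sketch is the asymmetry of the two trajectories: both groups face identical incentives period by period, yet history alone determines which equilibrium they inhabit.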
While I have used Tirole's model here to account for variations across teams in their levels of gamesmanship, his own motivation is much broader: he is interested in understanding variations across societies in levels of corruption and differences among firms in their reputation for product quality. And one can think of numerous other examples in which history has saddled a group with a reputation that is hard to shake because doing so requires significant and sustained collective sacrifices from current and future members.

---

Update (6/25). An excellent comment (as usual) by Andrew Oh-Willeke:
The notion that cultural founder effects have great institutional legacies also has strong implications for bankruptcy policy and for policy related to government bureaucracies.

It suggests that completely shutting down one organization, even if it will be replaced by a new organization doing the same thing with the same technology should often be preferred to trying to reorganize existing organizations, because the failure of the troubled firm or bureaucratic unit may be a problem with organizational culture that would otherwise persist, rather than more "objective" factors.

This might also suggest that seemingly absurd economic development strategies, like Atatürk's law mandating that all men wear bowler hats, may have more merit to them than they seem to at an obvious level. The example Malcolm Gladwell used of this phenomenon was the increased safety record that was observed at Korean Airlines when flight crews started to use English rather than Korean.
I hope to say more about this in a subsequent post.

An alternative (and perhaps complementary) perspective on heterogeneity in behavior across teams comes from Cyril Hedoin at Rationalité Limitée, who argues that there are major differences across national leagues in gamesmanship norms, sustained by the sanctioning of those who fail to conform to local expectations.

I'm in Istanbul for a conference at the moment and will be slow to respond to emails and comments for a few days.

Sunday, June 20, 2010

The Diving Champions of the (Football) World

Aside from early losses by Germany and Spain, the biggest surprise of the World Cup so far is probably the inability of Italy (the reigning champions) to win either of their first two games. First they drew with Paraguay, ranked 31st in the world, and then again today against 78th ranked New Zealand.
In both cases the Italians came back from a goal behind, and in the latter game did so on the basis of a dubious penalty. De Rossi's spectacular dive after getting his shirt gently tugged by Smith was a wonder to behold, revealing yet again that the Italians are undisputed masters of the simulated foul. Even the Wikipedia entry on the art of diving acknowledges this:
Diving (or simulation - the term used by FIFA) in the context of association football is an attempt by a player to gain an unfair advantage by diving to the ground and possibly feigning an injury, to appear as if a foul has been committed. Dives are often used to exaggerate the amount of contact present in a challenge. Deciding on whether a player has dived is very subjective, and one of the most controversial aspects of football discussion. Players do this so they can receive free kicks or penalty kicks, which can provide scoring opportunities, or so the opposing player receives a yellow or red card, giving their own team an advantage. The Italian national football team have been well known to use this tactic... In fact, their victory at the 2006 FIFA World Cup has been overshadowed by the sheer volume of controversial dives.
While the anecdotal (and video) evidence against Italy is strong, it would be useful to have a statistical measure of diving on the basis of which international comparisons could be made. One possibility is to use data on fouls suffered. For instance, in the latest game, Italy was fouled 23 times while New Zealand suffered just 10 fouls. Either New Zealand is an unusually aggressive (or clumsy) team, or a number of the "fouls" suffered by Italy were simulated.
Since data on fouls committed and suffered is readily available for all World Cup games, it should be possible to sort all this out statistically. Suppose that in any game, the total number of fouls suffered by a team depends on three factors: its propensity to dive (without detection), the opponent's propensity to foul, and idiosyncratic factors independent of the identity of the teams. Then, with a rich enough data set, it should be possible to identify the diving propensity of each team. There are subtleties that could confound the analysis, but a good forensic statistician should be able to handle these. Perhaps Nate Silver will take up the challenge?
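As a rough illustration of how such a decomposition might work, here is a minimal sketch with entirely synthetic data and made-up parameter values (the team labels and numbers are hypothetical, not real World Cup statistics). Fouls suffered in a game are modeled as the sum of the fouled team's diving propensity, the opponent's fouling propensity, and noise; the propensities are then recovered by least squares with team dummies. Since the two sets of propensities are identified only up to a constant shift, one diving dummy and one fouling dummy are dropped as reference categories:

```python
import numpy as np

rng = np.random.default_rng(0)

teams = ["A", "B", "C", "D"]                # placeholder labels, not real teams
n = len(teams)
true_dive = np.array([3.0, 0.5, 1.0, 0.8])  # extra fouls "suffered" per game via diving
true_foul = np.array([1.0, 2.5, 1.5, 2.0])  # fouls genuinely committed by the opponent

# Simulate fouls suffered by team i against opponent j, many games per pairing
X, y = [], []
for game in range(200):
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            y.append(10 + true_dive[i] + true_foul[j] + rng.normal(0, 1))
            row = np.zeros(2 * n)
            row[i] = 1.0       # dummy for the fouled team's diving propensity
            row[n + j] = 1.0   # dummy for the opponent's fouling propensity
            X.append(row)
X, y = np.array(X), np.array(y)

# Drop one dummy from each group (team A is the reference category in both)
design = np.column_stack([np.ones(len(y)), X[:, 1:n], X[:, n + 1:]])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)

dive_hat = np.concatenate([[0.0], coef[1:n]])  # diving propensities relative to team A
print(dict(zip(teams, dive_hat.round(2))))
```

With enough games per pairing the relative diving propensities are recovered quite precisely; the real-world subtleties mentioned above (unbalanced schedules, referee effects, score-dependent incentives) would of course require a richer specification.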
In the meantime, for a lesson on how not to dive, enjoy this legendary "posthumous" effort by Gilardino in a 2007 game between AC Milan and Celtic:

Saturday, June 19, 2010

On Tail Risk and the Winner's Curse

Richard Thaler used to write a wonderful column on anomalies in the Journal of Economic Perspectives. Here's an extract from a 1988 entry on the winner's curse:
The winner's curse is a concept that was first discussed in the literature by three Atlantic Richfield engineers, Capen, Clapp, and Campbell (1971). The idea is simple. Suppose many oil companies are interested in purchasing the drilling rights to a particular parcel of land. Let's assume that the rights are worth the same amount to all bidders, that is, the auction is what is called a common value auction. Further, suppose that each bidding firm obtains an estimate of the value of the rights from its experts. Assume that the estimates are unbiased, so the mean of the estimates is equal to the common value of the tract. What is likely to happen in the auction? Given the difficulty of estimating the amount of oil in a given location, the estimates of the experts will vary substantially, some far too high and some too low. Even if companies bid somewhat less than the estimate their expert provided, the firms whose experts provided high estimates will tend to bid more than the firms whose experts guessed lower... If this happens, the winner of the auction is likely to be a loser.
Thaler goes on to point out that the winner's curse would not arise if all bidders were rational, for they would take into account when bidding that conditional on winning the auction, the valuation of their experts is likely to have been inflated. But he also presents evidence (from laboratory experiments as well as field data on offshore oil and gas leases and corporate takeovers) that bidders are not rational to this degree, and that the winner's curse is therefore an empirically relevant phenomenon. Many observers of the free agent market in baseball would agree.
In Thaler's description, the winner's curse arises despite the fact that bidder estimates are unbiased: their valuations are correct on average, even though the winning bid happens to come from someone with excessively optimistic expectations. Someone familiar with this phenomenon would therefore never conclude that all bidders are excessively optimistic simply by observing the fact that winning bidders tend to wish that they had lost.
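The mechanics of this selection effect are easy to verify numerically. In the minimal sketch below (all numbers are made up for illustration), each bidder receives an unbiased estimate of a common value and naively bids its estimate less a fixed shade, ignoring the information contained in winning. The estimates are correct on average, yet the winning bidder systematically overpays:

```python
import numpy as np

rng = np.random.default_rng(1)

V = 100.0             # common value of the drilling rights (made-up units)
n_bidders = 10
sigma = 20.0          # dispersion of the experts' estimates
shade = 5.0           # naive discount that ignores the selection effect
trials = 20_000

estimates = rng.normal(V, sigma, size=(trials, n_bidders))  # unbiased on average
winning_bids = (estimates - shade).max(axis=1)              # highest bid wins
winner_profit = V - winning_bids

print(f"mean estimate:        {estimates.mean():.2f}")   # close to 100
print(f"mean winning bid:     {winning_bids.mean():.2f}")
print(f"winner's mean profit: {winner_profit.mean():.2f}")
```

The average estimate sits at the true value, but the average winning bid exceeds it, so the winner's mean profit is negative: the curse arises purely from selection, not from any bias in the estimates themselves.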
By the same token, when firms like BP and AIG are revealed to have underestimated the extent to which their actions exposed them (and numerous others) to tail risk, one ought not to presume that they were acting under the influence of a psychological propensity to which we are all vulnerable. Those who had more realistic (or excessively pessimistic) expectations regarding such risks simply avoided them, and by doing so also avoided coming to our attention.
And yet, here is the very same Richard Thaler arguing that a behavioral propensity to accept "risks that are erroneously thought to be vanishingly small" was responsible for both the financial crisis and the oil spill:
The story of the oil crisis is still being written, but it seems clear that BP underestimated the risk of an accident. Tony Hayward, its C.E.O., called this kind of event a “one-in-a-million chance.” And while there is no way to know for sure, of course, whether BP was just extraordinarily unlucky, there is much evidence that people in general are not good at estimating the true chances of rare events, especially when human error may be involved.
There is certainly a grain of truth in this characterization, but I feel that it misses the real story. As the analysis underlying the winner's curse teaches us, those with the most optimistic expectations will take the greatest risks and suffer the most severe losses when the low probability events that they have disregarded eventually come to pass. But tail risks are unlike auctions in one important respect: there can be a significant time lag between the acceptance of the risk and the realization of a catastrophic event. In the interim, those who embrace the risk will generate unusually high profits and place their less sanguine competitors in the difficult position of either following their lead or accepting a progressively diminishing market share. The result is herd behavior with entire industries acting as if they share the expectations of the most optimistic among them. It is competitive pressure rather than human psychology that causes firms to act in this way, and their actions are often taken against their own better judgment. 
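The timing argument can be made concrete with a toy simulation (the parameters below are illustrative, not calibrated to BP, AIG, or any real firm). A firm that disregards a small per-period catastrophe probability earns a premium in every period in which the event does not occur; over a finite horizon, most sample paths therefore show it outperforming a prudent competitor, even when its expected terminal wealth is lower:

```python
import numpy as np

rng = np.random.default_rng(2)

T = 10          # periods before we "look" at the industry
p = 0.05        # per-period probability of the disregarded tail event
r_risky = 1.04  # per-period growth while the tail event does not occur
r_safe = 1.00   # growth of the firm that avoids the exposure entirely
trials = 50_000

hit = rng.random((trials, T)) < p                    # tail event realizations
survived = ~hit.any(axis=1)                          # no catastrophe on the path
risky_wealth = np.where(survived, r_risky**T, 0.0)   # wiped out on any hit
safe_wealth = r_safe**T

print(f"risky firm ahead in {survived.mean():.1%} of paths")
print(f"mean risky wealth: {risky_wealth.mean():.3f} vs safe: {safe_wealth:.3f}")
```

In this parameterization the risk-taker is ahead on a majority of paths while its mean wealth is below that of the safe firm: exactly the configuration in which competitive pressure pushes the prudent to imitate the sanguine before the catastrophe arrives.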
This ecological perspective lies at the heart of Hyman Minsky's analysis of financial instability, and it can be applied more generally to tail risks of all kinds. As an account of the (environmental and financial) catastrophes with which we continue to grapple, I find it more compelling and complete than the psychological story. And it has the virtue of not depending for its validity on systematic, persistent, and largely unexplained cognitive biases among professionals in high-stakes situations.
Both James Kwak and Maxine Udall have also taken issue with Thaler's characterization (though on somewhat different grounds). James also had this to say about behavioral economics more generally:
Don’t get me wrong: I like behavioral economics as much as the next guy. It’s quite clear that people are irrational in ways that the neoclassical model assumes away... But I don’t think cognitive fallacies are the answer to everything, and I don’t think you can explain away the myriad crises of our time as the result of them.
I agree completely. As I said in an earlier post, I can't help thinking that too much is being asked of behavioral economics at this time, much more than it has the capacity to deliver.

---

Update (6/20). In a response to this post, Brad DeLong makes two points. First, he observes that those who underestimate tail risk can make unusually high profits not just in the interim period before a catastrophic event occurs, but also if one averages across good and bad realizations:
To the extent that the optimism of noise traders leads them to hold larger average positions in assets that possess systemic risk, their average returns will be higher in a risk-averse world--not just in those states of the world in which the catastrophe has not happened yet, but quite possibly averaged over all states of the world including catastrophic states.
This is logically correct, for reasons that were discussed at length in Brad's 1990 JPE paper with Shleifer, Summers and Waldmann. But (as I noted in my comment on his post) I don't think the argument applies to the risks taken by BP and AIG, which could easily have proved fatal to the firms. One could try to make the case that even with bankruptcy, the cumulative dividend payouts would have resulted in higher returns than less exposed competitors, but the claim seems empirically dubious to me.

Brad's second point is that my distinction between the ecological and psychological approaches is unwarranted, and that the two are in fact complementary. Here he quotes Charles Kindleberger:
Overestimation of profits comes from euphoria, affects firms engaged in the production and distributive processes, and requires no explanation. Excessive gearing arises from cash requirements that are low relative both to the prevailing price of a good or asset and to possible changes in its price. It means buying on margin, or by installments, under circumstances in which one can sell the asset and transfer with it the obligation to make future payments. As firms or households see others making profits from speculative purchases and resales, they tend to follow: "Monkey see, monkey do." In my talks about financial crisis over the last decades, I have polished one line that always gets a nervous laugh: "There is nothing so disturbing to one’s well-being and judgment as to see a friend get rich."
The Kindleberger quote is wonderful, but the claim is about interdependent preferences, not cognitive limitations. I don't doubt that cognitive limitations matter (I started my post with the winner's curse after all) but I was trying to shift the focus to interactions and away from psychology. In general I think that the Minsky story can be told with very modest departures from rationality, which to me is one of the strengths of the approach.