Sunday, July 11, 2010

Rationality and Fragility in Financial Markets

In a recent paper on financial innovation and fragility, Gennaioli, Shleifer and Vishny argue that investors (and often also financial intermediaries) are hobbled by certain systematic cognitive biases that cause them to neglect unlikely events when assessing asset values. They argue that such "local thinking" results in the creation and excessive issuance of engineered securities that are widely believed to be close substitutes for more traditional safe assets, but turn out to be much riskier than initially anticipated. This psychological regularity, they believe, accounts for a number of historical episodes of financial instability:
Many recent episodes of financial innovation share a common narrative. It begins with a strong demand from investors for a particular, often safe, pattern of cash flows. Some traditional securities available in the market offer this pattern, but investors demand more (so prices are high). In response to demand, financial intermediaries create new securities offering the sought after pattern of cash flows, usually by carving them out of existing projects or other securities that are more risky. By virtue of diversification, tranching, insurance, and other forms of financial engineering, the new securities are believed by the investors, and often by the intermediaries themselves, to be good substitutes for the traditional ones, and are consequently issued and bought in great volumes. At some point, news reveals that new securities are vulnerable to some unattended risks, and in particular are not good substitutes for the traditional securities. Both investors and intermediaries are surprised by the news, and investors sell these “false substitutes,” moving back to the traditional securities with the cash flows they seek. As investors fly for safety, financial institutions are stuck holding the supply of the new securities (or worse yet, having to dump them as well in a fire sale because they are leveraged). The prices of traditional securities rise while those of the new ones fall sharply.
The authors claim that this sequence of events describes not only the recent experience with collateralized debt obligations and money market funds, but also earlier episodes of financial innovation, including prepayment tranching of collateralized mortgage obligations in the 1980s.
In order to explore precisely the implications of local thinking in the context of financial innovation, the authors construct a model based on a number of stark, simplifying assumptions. There are two assets: a traditional safe security and a risky asset that has three possible terminal payoffs. The worst case outcome for the risky asset is also the least likely to occur (this is a crucial assumption). Investors are homogeneous and highly risk averse. Financial innovation takes the form of separating the cash flows from the risky asset into two components: a "safe" security that earns the worst case payoff regardless of the actual outcome, and a risky residual claim. Under rational expectations this innovation is welfare improving, and the quantity of the substitute issued is precisely such that all such claims would be covered even if the worst case loss were to materialize. That is, the substitute security really is safe.
Under local thinking, the least likely event (which is also the worst case outcome) is simply neglected, and beliefs about the other two outcomes are correspondingly inflated. The intermediate outcome is now (mistakenly) perceived to be the worst, and a greater quantity of the substitute security is issued than could be honored if the actual worst case outcome were to be realized. Now suppose that some bad news arrives, conditional on which the objective probabilities of the three outcomes are altered in such a manner as to make the intermediate outcome the least likely. Local thinking then causes investors to become excessively pessimistic: the worst case outcome not only becomes suddenly salient, but the less disastrous intermediate outcome is neglected and the decline in the price of the asset previously thought to be safe is greater than it would be under rational expectations.
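The mechanics of local thinking can be made concrete with a small numerical sketch. The payoffs and probabilities below are my own illustrative choices, not the paper's calibration, but they reproduce the qualitative pattern: overvaluation before the news, excessive pessimism after it.

```python
# Illustrative sketch of the Gennaioli-Shleifer-Vishny "local thinking"
# mechanism. All payoffs and probabilities are invented for illustration.

def expected_value(payoffs, probs):
    """Rational-expectations valuation: the true expected payoff."""
    return sum(x * p for x, p in zip(payoffs, probs))

def local_thinking_value(payoffs, probs):
    """A local thinker neglects the least likely state entirely and
    renormalizes beliefs over the remaining states."""
    drop = min(range(len(probs)), key=lambda i: probs[i])
    kept = [(x, p) for i, (x, p) in enumerate(zip(payoffs, probs)) if i != drop]
    total = sum(p for _, p in kept)
    return sum(x * p / total for x, p in kept)

payoffs = [20, 80, 100]  # worst, intermediate, good terminal payoff

# Before the news: the worst outcome is also the least likely.
# Local thinkers neglect it, overvalue the asset, and perceive the
# intermediate payoff (80) as the floor -- so the "safe" tranche is
# sized against 80 rather than the true worst case of 20.
probs_before = [0.05, 0.15, 0.80]
print(expected_value(payoffs, probs_before))        # 93.0
print(local_thinking_value(payoffs, probs_before))  # ~96.84 (too high)

# After the news: the intermediate outcome becomes the least likely.
# Now the worst state is salient, the intermediate state is neglected,
# and the local-thinking valuation falls below the rational one.
probs_after = [0.40, 0.10, 0.50]
print(expected_value(payoffs, probs_after))         # 66.0
print(local_thinking_value(payoffs, probs_after))   # ~64.44 (too low)
```

The overshooting in both directions is exactly the pattern the paper emphasizes: the price decline on bad news is larger than rational expectations would warrant.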
The development of a theoretical framework within which common elements of various historical episodes can be examined is clearly a worthwhile exercise. But what troubles me about this paper (and much of the behavioral finance literature) is that the rational expectations hypothesis of identical, accurate forecasts is replaced by an equally implausible hypothesis of identical, inaccurate forecasts. The underlying assumption is that financial market participants operating under competitive conditions will reliably express cognitive biases identified in controlled laboratory environments. And the implication is that financial instability could be avoided if only we were less cognitively constrained, or constrained in different ways -- endowed with a propensity to overestimate rather than discount the likelihood of unlikely events, for example.
This narrowly psychological approach to financial fragility neglects two of the most analytically interesting aspects of market dynamics: belief heterogeneity and evolutionary selection. Even behavioral propensities that are psychologically rare in the general population can become widespread in financial markets if they result in the adoption of successful strategies. As a result, asset prices disproportionately reflect the beliefs of investors who have been most successful in the recent past. There is no reason why these beliefs should consistently conform to those in the general population.
I have argued previously for the further development of this ecological perspective on financial instability, and similar themes have been explored elsewhere; see especially Macroeconomic Resilience and David Murphy. As I said in an earlier post, a bit too much is being asked of behavioral economics at this time, more than it has the capacity to deliver.

---

Update (7/11). David Murphy follows up with characteristic clarity:
I would even go further, because this argument neglects the explicitly reflexive nature of market participants’ thinking. (Call it social metacognition if you really want some high end jargon.) Traders can both absolutely understand that a behavioral propensity is rare and likely to lead to catastrophe and behave that way: they do this because they believe that other market participants will too, and behaving that way if others do will make money in the short term. Even if you think that it is crazy for (pick your favourite bubblicious asset) to trade that high, providing you also believe others will buy it, then it makes sense for you to buy it along with the crowd. Moreover, worse, you may well believe that they too think it is crazy: but all of you are in a self-sustaining system and the first one to get off looks the most foolish (for a while). Most people are capable of spotting a bubble if it lasts long enough: the hard part is timing your exit to account for the behaviour of all the other smart people trying to time their exit too.
I agree completely. There are many examples of prominent fund managers trying to grapple with this problem during the bubble in technology stocks a decade ago. This is why markets can (approximately) satisfy what James Tobin called information arbitrage efficiency while failing to satisfy fundamental valuation efficiency.

Saturday, July 03, 2010

Innovation, Scaling, and the Industrial Commons

When Yves Smith makes a strong reading recommendation, I usually take notice. Today she directed her readers to an article by Andy Grove calling for drastic changes in American policy towards innovation, scaling, and job creation in manufacturing. The piece is long, detailed and worth reading in full, but the central point is this: an economy that innovates prolifically but consistently exports its jobs to lower cost overseas locations will lose not only its capacity for mass production, but eventually also its capacity for innovation:
Bay Area unemployment is even higher than the... national average. Clearly, the great Silicon Valley innovation machine hasn’t been creating many jobs of late -- unless you are counting Asia, where American technology companies have been adding jobs like mad for years.

The underlying problem isn’t simply lower Asian costs. It’s our own misplaced faith in the power of startups to create U.S. jobs... Startups are a wonderful thing, but they cannot by themselves increase tech employment. Equally important is what comes after that mythical moment of creation in the garage, as technology goes from prototype to mass production. This is the phase where companies scale up. They work out design details, figure out how to make things affordably, build factories, and hire people by the thousands. Scaling is hard work but necessary to make innovation matter.
The scaling process is no longer happening in the U.S. And as long as that’s the case, plowing capital into young companies that build their factories elsewhere will continue to yield a bad return in terms of American jobs...

There’s more at stake than exported jobs... A new industry needs an effective ecosystem in which technology knowhow accumulates, experience builds on experience, and close relationships develop between supplier and customer. The U.S. lost its lead in batteries 30 years ago when it stopped making consumer-electronics devices. Whoever made batteries then gained the exposure and relationships needed to learn to supply batteries for the more demanding laptop PC market, and after that, for the even more demanding automobile market. U.S. companies didn’t participate in the first phase and consequently weren’t in the running for all that followed...

How could the U.S. have forgotten [that scaling was crucial to its economic future]? I believe the answer has to do with a general undervaluing of manufacturing -- the idea that as long as “knowledge work” stays in the U.S., it doesn’t matter what happens to factory jobs... I disagree. Not only did we lose an untold number of jobs, we broke the chain of experience that is so important in technological evolution... our pursuit of our individual businesses, which often involves transferring manufacturing and a great deal of engineering out of the country, has hindered our ability to bring innovations to scale at home. Without scaling, we don’t just lose jobs -- we lose our hold on new technologies. Losing the ability to scale will ultimately damage our capacity to innovate.
Grove recognizes, of course, that companies will not unilaterally change course unless they face a different set of incentives, and that this will require a vigorous industrial policy:
The first task is to rebuild our industrial commons. We should develop a system of financial incentives: Levy an extra tax on the product of offshored labor. (If the result is a trade war, treat it like other wars -- fight to win.) Keep that money separate. Deposit it in the coffers of what we might call the Scaling Bank of the U.S. and make these sums available to companies that will scale their American operations. Such a system would be a daily reminder that while pursuing our company goals, all of us in business have a responsibility to maintain the industrial base on which we depend and the society whose adaptability -- and stability -- we may have taken for granted... Unemployment is corrosive. If what I’m suggesting sounds protectionist, so be it... If we want to remain a leading economy, we change on our own, or change will continue to be forced upon us.
Neither Grove's diagnosis nor his proposed solutions will persuade those who are convinced that protectionism of any kind is folly. I am not entirely convinced myself, and suspect that he may be underestimating the likelihood (and consequences) of cascading retaliatory actions and a collapse in international trade. But the argument must be taken seriously, and anyone opposed to his proposals really ought to come up with some alternatives of their own.

---

Update (7/4). In an email (posted with permission) Yves adds:
On the one hand, you are right, any move towards protectionism (or even permitted-within-WTO pushback against mercantilist trade partners) could very quickly get ugly. But the flip side is I wonder if we have a level of global integration that is inherently unstable (both for Rodrik trilemma reasons, international economic integration with insufficient government oversight creates political problems, plus the Reinhart/Rogoff finding that high levels of international capital flows are associated with financial crises). If so, we may have a short run (messiness of reconfiguration) v. long term (costs of really big financial crises) tradeoff.
This is a good point. The purpose of my post was to highlight Grove's analysis of the symbiotic relationship between innovation and scaling (which I think is both interesting and valid), and to challenge those who are opposed to his reform proposals to explain how they would deal with the situation in which we find ourselves. Passive tolerance of mass unemployment, widening income inequality, and withering innovative capacity is not an option.

---

Update (7/4). Tyler Cowen is predictably dismissive of Grove's article, but (less predictably) seems not to have read it very closely. What Grove means by scaling is the process by means of which "technology goes from prototype to mass production" as companies "work out design details, figure out how to make things affordably, build factories, and hire people by the thousands." This is not about increasing returns to scale as economists normally use the term (declining average costs as a function of output). So Tyler's claim that "at best, given the logic of [Grove's] argument, this would imply a tax only on the increasing returns industries" is not correct. And I cannot imagine what he means when he says that the "big exporting success these days is Germany, which has less "scale" than does the United States." Less scale in what sense? Population or per-capita income differences between the two countries are entirely irrelevant here. Is he trying to say that Germany engages in less scaling (and hence more offshoring) than does the United States? This would be relevant, but is empirically dubious.

Like Tyler, I am not convinced that Grove's policy proposals are wise. But his analysis of the relationship between innovation and scaling and the need for a policy response really does deserve to be read with more care.

---

Update (7/6). Tim Duy follows up with a characteristically detailed and thoughtful post. His bottom line:
Something more than cyclical forces is weighing on the American jobs machine. Here I have tried to extend the Grove/Smith/Sethi discourse with additional focus on absolute declines in manufacturing jobs and distressing declines in capacity growth rates. These trends may be critically important in understanding the dismal performance of US labor markets. If they are in fact critical, they raise serious questions about US trade policy – questions that few in Washington want to address. Given the extent to which manufacturing capacity has already been offshored, those questions go far beyond the recently announced tiny shift in Chinese currency policy. Simply put, accepting the importance of manufacturing capacity and the possibility that offshoring has had a much more deleterious impact on the US economy than commonly accepted would require a significant paradigm shift in the thinking of US policymakers. If you scream “protectionist fool” in response, then you need to have a viable policy alternative that goes beyond the empty rhetoric of “we need to teach better creative thinking skills in schools.” That answer is simply too little too late.
It's worth reading the entire post to see the data and reasoning that drive him to this conclusion.

---

I'll be away at a (very interesting) conference for the next couple of days and will be slow to respond to comments and emails.

Friday, July 02, 2010

Market Microstructure and Capital Formation

In an earlier post I argued that recent changes in technology have altered the distribution of trading strategies in asset markets, with information extracting strategies becoming more prevalent at the expense of information augmenting strategies. Specifically, there has been a dramatic increase in the market share of strategies based on rapid responses to market data using algorithms and co-location facilities. One consequence is that the data itself becomes less reliable over time, resulting in greater price volatility and occasional severe disruptions. The flash crash of May 6 was a striking example. 
While my focus has been on market stability, this kind of transformation in microstructure probably has a number of other important effects. In recent testimony before the joint CFTC-SEC committee on emerging regulatory issues, David Weild has argued that one of these consequences is on the size distribution of publicly traded companies, and on capital formation more generally:
There has been a computer arms race unleashed on Wall Street by changes in regulation and technology... [This] is displacing fundamental investing with computer‐trading based strategies and has created new forms of systemic risk, a loss of investor confidence, and a disastrous decline in primary (IPO) capital formation and the number of publicly listed companies in the United States.

From 1997 to Year End 2009 there has been a 40% decline in the number of publicly listed (i.e., NYSE, AMEX and NASDAQ) companies in the United States. On a GDP weighted basis, we have seen a more than 55% decline in the number of publicly listed companies. Today’s market structure has lost the ability to support small capitalization companies and initial public offerings (IPOs) on the scale necessary to help drive the US economy. The U.S. now annually delists twice as many companies as it lists and this trend has been going on since the advent of electronic trading... the unemployment crisis in the United States has been partly caused by changes to debt and equity capital market structure and the events of May 6 may give us an opportunity to come to grips with the notion that we have entered into an era where trading interests are eclipsing fundamental investment and economic interests.

Fundamental investing, or so‐called “information increasing” activities, are being displaced by trading, or so‐called “information mining” activities. The growth in indexing and ETFs may be exacerbating this problem.

In addition, stock market structure today is geared for large‐capitalization stocks with typically symmetrical order books but disastrous for the vast majority of small‐capitalization stocks with asymmetrical order books (where there is not naturally an offsetting buy order to match against a sell order and vice versa)... The “Flash Crash” was an example of where even normally liquid securities went to a state of “asymmetry” and price discovery broke down...
[Until] all trades, quotes and other messages in all interrelated markets are tagged and traceable to the trading venue, broker and ultimate investor, and disclosed to the market, markets will not be perceived as fair... With full tagging, tracking and reporting and the application of posttrade analysis and test bed techniques such as Agent‐Based Models, regulators and market participants will... once and for all be in a position to judge the impact of other participants and to regulate and plan accordingly...
It may be time to admit that what works for large, naturally visible companies, is the antithesis of what is needed by small companies and it is these small companies that are essential to grow our markets, reduce unemployment, restore US competitiveness and drive the US economy.
I am not aware of any academic research that links market microstructure to the size distribution of publicly listed companies in the manner suggested here, and I am grateful to David for bringing his testimony and supporting documents to my attention. The issue is clearly of considerable importance and deserving of greater scrutiny.
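To see concretely what an "asymmetrical" order book implies for price impact, here is a toy sketch. All prices and sizes are invented for illustration; the point is simply that the same market sell order moves the price far more when there is little resting interest to offset it.

```python
# Toy illustration of symmetric vs asymmetric order books: the same
# market sell order has very different price impact depending on the
# depth of the bid side. All quantities and prices are invented.

def execute_sell(bids, qty):
    """Walk a market sell order down the bid side of the book and
    return the last execution price. `bids` is a list of
    (price, size) pairs sorted best-first."""
    last_price = bids[0][0]
    for price, size in bids:
        if qty <= 0:
            break
        fill = min(qty, size)
        qty -= fill
        last_price = price
    return last_price

# A large-cap book: deep, roughly symmetrical liquidity near the top.
deep_book = [(100.0, 500), (99.9, 500), (99.8, 500)]
# A small-cap book: thin resting interest, no natural offsetting buyers.
thin_book = [(100.0, 50), (95.0, 50), (80.0, 50)]

print(execute_sell(deep_book, 600))  # 99.9 -- modest price impact
print(execute_sell(thin_book, 600))  # 80.0 -- price discovery breaks down
```

On Weild's account, the flash crash was a moment when even normally deep books came to resemble the second case.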

---

Update (7/2). In an email (posted with permission) David adds:
I did a presentation to the ISEEE (International Stock Exchange Executives Emeriti) at the end of April.  The audience consisted of about 25 mostly former senior stock exchange executives... I was taken aback by the reaction of people from places like the Zurich Stock Exchange, Australian, New Zealand, Bovespa and others who were of the opinion that these electronic market structures (specifically, compressed spread-trading centric electronic continuous auction markets) are hurting primary capital formation in many of their countries as well.

For me, having run strategy for investment banking, research, institutional sales and trading at a major Wall Street firm, it is pretty simple - If one can't make money supporting small cap stocks, one won't support small cap stocks...
This has had two effects:
  1. The investment banks tell issuers that they have to do a much larger ($75 million) IPO; minimum IPO sizes have increased much faster than the rate of inflation.
  2. Aftermarket support for IPOs has withered because issuers lose money providing it (unless the companies are much larger).
It is commonly argued that the rise of algorithmic trading has resulted in increased liquidity, although this claim is by no means universally accepted. David (if I understand him correctly) is arguing that even if liquidity has increased for some classes of securities, it has declined for others, with detrimental net effects on capital formation.

Happiness and the World Cup

Tyler Cowen considers the question of which team's victory in the World Cup would result in the greatest overall happiness, and concludes (based on the number and intensity of fans) that it would be Brazil. As far as the immediate effects of a victory are concerned, this is probably about right. But could there not also be consequences for global economic growth and financial stability? 
Hein Schotsman of ABN AMRO has looked at these broader economic effects and comes to the conclusion that a victory by a large economy currently running a significant trade surplus would be best. This leads him to the one obvious candidate:
According to a detailed analysis of the 32 countries in this year’s tournament, Mr. Schotsman is convinced that a win by the Germans would boost the global economy. Here’s how: Germany is among the world’s biggest economies and has a large trade surplus. A win by the Germans would boost domestic confidence and spending, thus increasing imports from other countries.

“A German victory will result in a relatively big dent in the German trade surplus, which is best for the stability of the world economy. This is just what is badly needed after the credit crisis,” Mr. Schotsman said in a report released Tuesday called Soccernomics 2010.
Maybe so. But as far as my own happiness is concerned, I would like to see Argentina prevail against Germany tomorrow. Lionel Messi has been the player of the tournament so far and I would hate to see his team eliminated.
---
I thank Ingela Alger for alerting me to this story and sending me references. For those not fully fluent in Dutch, Schotsman's paper may be uploaded to Google Translate for a reasonably comprehensible rendering.

Tuesday, June 29, 2010

On Blogs and Economic Discourse

I was making my way back from a conference yesterday and completely missed the uproar over Kartik Athreya's provocative essay on economics blogs. Athreya argued, in effect, that most such blogging is done by ill-informed hacks who ought to be ignored while properly trained experts (such as himself) are left in peace to do the difficult work of making progress in the field. The original post has been taken down but (as a telling reminder that no public statement can subsequently be made private in this day and age) a copy may be viewed here.

The response from the accused was swift and brutal (see Thoma, DeLong, Sumner, Rowe, Cowen, Kling, Avent, Yglesias and Wilkinson for a sample). I don't want to pile on, and there's little I can add to what others have already said. But I'd like to take this opportunity to reiterate and expand upon a couple of points that I have made in previous posts about the rapidly changing role of blogs in economic discourse.

My view of the matter is almost diametrically opposed to that of Athreya: I consider these changes to be both irreversible and potentially very healthy. In a post commemorating the birthdays of two excellent economics blogs, I made this point as follows (see also Andrew Gelman's follow-up):
The community of academic economists is increasingly coming to be judged not simply by peer reviewers at journals or by carefully screened and selected cohorts of students, but by a global audience of curious individuals spanning multiple disciplines and specializations. Voices that have long been silenced in mainstream journals now insist on being heard on an equal footing. Arguments on blogs seem to be judged largely on their merits, independently of the professional stature of those making them. This has allowed economists in far-flung places with heavy teaching loads, or those who pursued non-academic career paths, to join debates. Even anonymous writers and autodidacts can wield considerable influence in this environment, and a number of genuinely interdisciplinary blogs have emerged...
This has got to be a healthy development. One might persuade a referee or seminar audience that a particular assumption is justified simply because there is a large literature that builds on it, or that tractability concerns preclude reasonable alternatives. But this broader audience is not so easy to convince. Persuading a multitude of informed, thoughtful, intelligent readers of the relevance and validity of one's arguments using words rather than formal models is a far more challenging task than persuading one's own students or peers. If one can separate the wheat from the chaff, the reasoned argument from the noise, this process should result in a more dynamic and robust discipline in the long run.
In fact, the refereeing process for blog posts is in some respects more rigorous than that for journal articles. Reports are numerous, non-anonymous, public, rapidly and efficiently produced, and collaboratively constructed. It is not obvious to me that this process of evaluation is any less legitimate than that for journal submissions, which rely on feedback from two or three anonymous referees who are themselves invested in the same techniques and research agenda as the author.

I suspect that within a decade, blogs will be a cornerstone of research in economics. Many original and creative contributions to the discipline will first be communicated to the profession (and the world at large) in the form of blog posts, since the medium allows for material of arbitrary length, depth and complexity. Ideas first expressed in this form will make their way (with suitable attribution) into reading lists, doctoral dissertations and more conventionally refereed academic publications. And blogs will come to play a central role in the process of recruitment, promotion and reward at major research universities. This genie is not going back into its bottle.

---

Update (6/30). Andrew Gelman follows up with a long and thoughtful post on the role of blogs in academic research across different fields:
Sethi points out that, compared to journal articles, blog entries can be subject to more effective criticism. Beyond his point (about a more diverse range of reviewers), blogging also has the benefit that the discussion can go back and forth. In contrast, the journal reviewing process is very slow, and once an article is published, it typically just sits there...

Can/should the blogosphere replace the journal-sphere in statistics? I dunno. At times I've been able to publish effective statistical reactions in blog form... or to use the blog as a sort of mini-journal to collect different viewpoints... And when it comes to pure ridicule... maybe blogging is actually more appropriate than formally writing a letter to the editor of a journal.

But I don't know if blogs are the best place for technical discussions. This is true in economics as much as in statistics, but the difference is that many people have argued (perhaps correctly) that econ is already too technical, hence the prominence of blog-based arguments is maybe a move in the right direction...

Statistics, though, is different... even the applied stuff that I do is pretty technical--algebra, calculus, differential equations, infinite series, and the like... Can this sort of highly-technical material be blogged? Maybe so. Igor Carron does it, and so does Cosma Shalizi--and both of them, in their technical discussions, clearly link the statistical material to larger conceptual questions in scientific inference and applied questions about the world. But this sort of blogging is really hard--much harder, I think, than whatever it takes for an economics professor with time on his or her hands to regularly churn out readable and informative blogs at varying lengths commenting on current events, economic policy, the theories of micro- and macro-economics, and all the rest...

On the other hand, the current system of scientific journals is, in many ways, a complete joke. The demand for referee reports of submitted articles is out of control, and I don't see Arxiv as a solution, as it has its own cultural biases. I agree with Sethi that some sort of online system has to be better, but I'm guessing that blogs will play more of a role in facilitating informal discussions than in replacing the repositories of formal research. I could well be wrong here, though: all I have are my own experiences, I don't have any good general way of thinking about this sort of sociology-of-science issue.
One minor point of clarification: I did not say (or mean to imply) that blogs would replace journals as the primary repositories of academic research. My point was simply that blogs are fast becoming an integral part of the research infrastructure and that, looking ahead, many innovative ideas will find initial expression in this format before being subject to further development along more traditional lines.

Tuesday, June 22, 2010

Gamesmanship and Collective Reputation

I've often wondered why diving is so prevalent in football. Even if one manages to fool a referee occasionally, the act is captured on video for all to see and inevitably hurts the reputation of the player and his team. Quite apart from the resulting ridicule, there are also long term costs on the field. Referees are more likely to be suspicious when they see players with tarnished reputations tumbling like bowling pins with little apparent contact. Some legitimate fouls may not be called as a result, and there's always the possibility that a player may be cautioned or sent off for unsportsmanlike conduct. So the whole culture of diving, and the fact that it has been embraced so thoroughly by certain teams while being avoided and frowned upon by others, has always been a bit of a puzzle to me.
In a fascinating article, Andrea Tallarita provides some rationalization for this behavior. He explains that diving is a part of a broad range of calculated tactics that are used to get into an opponent's head, inducing frustration, loss of concentration and overreaction. Zidane's costly headbutt of Materazzi in the 2006 World Cup final is the most famous of many examples. Here's how Tallarita explains the approach: 
Perhaps nothing has been more influential in determining the popular perception of the Italian game than furbizia, the art of guile... The word ‘furbizia’ itself means guile, cunning or astuteness. It refers to a method which is often (and admittedly) rather sly, a not particularly by-the-book approach to the performative, tactical and psychological part of the game. Core to furbizia is that it is executed by means of stratagems which are available to all players on the pitch, not only to one team. What are these stratagems? Here are a few: tactical fouls, taking free kicks before the goalkeeper has finished positioning himself, time-wasting, physical or verbal provocation and all related psychological games, arguably even diving... Anyone can provoke an adversary, but it takes real guile (real furbizia) to find the weakest links in the other team’s psychology, then wear them out and bite them until something or someone gives in - all without ever breaking a single rule in the book of football. 
Viewed in this light, the prevalence of diving starts to make a bit more sense. Even if one doesn't win the immediate foul or penalty, the practice can unsettle an opponent and induce errors. And a reputation for diving can cause an opponent to avoid even minimal, routine contact. This is gamesmanship, pure and simple.
But if gamesmanship is so rewarding, why are some teams reluctant to embrace it? Why do the Spanish play such a clean version of the game and consider these tactics to be beneath them, while their closest neighbors, the Italians and Portuguese, have no such qualms? Here is Tallarita's explanation:
Ultimately, these differences come from two irreconcilable visions of the game. The Spanish style understands football as something like a fencing match, a rapid and meticulous art of noble origins where honour is the brand of valour. To the Italians, football is more like an ancient battle, a primal and inclement bronze-age scenario where survival rules over honour.
But this only pushes the question back a step: why are the visions of the game so different in nations that are geographically and culturally so close? I think the answer (or at least part of it) lies in the fact that once a collective reputation has been established, it becomes individually rational for new entrants to the group to act in ways that preserve it. This mechanism was explored in a very interesting 1996 paper by Jean Tirole, in which he explains why "new members of an organization may suffer from an original sin of their elders long after the latter are gone."
The reason why the past behavior of the group affects the incentives of current and future members is that past behavior is not perfectly observable at the level of the individual. Groups consist of overlapping cohorts, with older members mixed in with newer ones. Those older members who have behaved "badly" in the past and thus ruined their reputations have no incentive to behave "well" currently. But suspicion also falls on the newer members, who cannot be perfectly distinguished from the older ones. This suspicion alters incentives in such a manner as to make it self-fulfilling. Even if the entire group would benefit from a change in reputation, this may be impossible to accomplish. Lifting the reputation of the group would require several cohorts to behave well despite being presumed to behave badly, and this is a sacrifice that does not serve their individual interests.
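The self-fulfilling character of this trap can be illustrated with a toy simulation (my own stylized sketch, not Tirole's formal model; the payoff parameters and the linear reputation dynamic are invented for illustration). Each new member behaves well only if the group is already trusted enough for good behavior to pay off, and the group's reputation adjusts slowly toward current behavior:

```python
# Toy illustration of a self-fulfilling collective reputation.
# r is the group's reputation: the share of members outsiders believe
# behave well. A new member behaves well only when the benefit of being
# trusted, b * r, exceeds the personal cost of good behavior, c.
def reputation_path(r0, b=1.0, c=0.5, decay=0.1, periods=50):
    r = r0
    path = [r]
    for _ in range(periods):
        behaves_well = 1.0 if b * r > c else 0.0   # individual best response
        r = (1 - decay) * r + decay * behaves_well  # slow-moving reputation
        path.append(r)
    return path

good = reputation_path(0.9)  # a trusted group stays trusted
bad = reputation_path(0.2)   # a group with a ruined reputation stays stuck
print(round(good[-1], 2), round(bad[-1], 2))  # → 1.0 0.0
```

The two starting points converge to opposite steady states: no single cohort in the low-reputation group gains from behaving well while it is still presumed to behave badly, which is precisely why the original sin persists.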
While I have used Tirole's model here to account for variations across teams in their levels of gamesmanship, his own motivation is much broader: he is interested in understanding variations across societies in levels of corruption and differences among firms in their reputation for product quality. And one can think of numerous other examples in which history has saddled a group with a reputation that is hard to shake because doing so requires significant and sustained collective sacrifices from current and future members.

---

Update (6/25). An excellent comment (as usual) by Andrew Oh-Willeke:
The notion that cultural founder effects have great institutional legacies also has strong implications for bankruptcy policy and for policy related to government bureaucracies.

It suggests that completely shutting down one organization, even if it will be replaced by a new organization doing the same thing with the same technology, should often be preferred to trying to reorganize existing organizations, because the failure of the troubled firm or bureaucratic unit may be a problem with organizational culture that would otherwise persist, rather than with more "objective" factors.

This might also suggest that seemingly absurd economic development strategies, like Atatürk's law mandating that all men wear bowler hats, may have more merit than they seem to at first glance. The example Malcolm Gladwell used of this phenomenon was the improved safety record observed at Korean Airlines when flight crews started to use English rather than Korean.
I hope to say more about this in a subsequent post.

An alternative (and perhaps complementary) perspective on heterogeneity in behavior across teams comes from Cyril Hedoin at Rationalité Limitée, who argues that there are major differences across national leagues in gamesmanship norms, sustained by the sanctioning of those who fail to conform to local expectations.

I'm in Istanbul for a conference at the moment and will be slow to respond to emails and comments for a few days.

Sunday, June 20, 2010

The Diving Champions of the (Football) World

Aside from early losses by Germany and Spain, the biggest surprise of the World Cup so far is probably the inability of Italy (the reigning champions) to win either of their first two games. First they drew with Paraguay, ranked 31st in the world, and then again today against 78th-ranked New Zealand.
In both cases the Italians came back from a goal behind, and in the latter game did so on the basis of a dubious penalty. De Rossi's spectacular dive after getting his shirt gently tugged by Smith was a wonder to behold, revealing yet again that the Italians are undisputed masters of the simulated foul. Even the Wikipedia entry on the art of diving acknowledges this:
Diving (or simulation - the term used by FIFA) in the context of association football is an attempt by a player to gain an unfair advantage by diving to the ground and possibly feigning an injury, to appear as if a foul has been committed. Dives are often used to exaggerate the amount of contact present in a challenge. Deciding on whether a player has dived is very subjective, and one of the most controversial aspects of football discussion. Players do this so they can receive free kicks or penalty kicks, which can provide scoring opportunities, or so the opposing player receives a yellow or red card, giving their own team an advantage. The Italian national football team have been well known to use this tactic... In fact, their victory at the 2006 FIFA World Cup has been overshadowed by the sheer volume of controversial dives.
While the anecdotal (and video) evidence against Italy is strong, it would be useful to have a statistical measure of diving on the basis of which international comparisons could be made. One possibility is to use data on fouls suffered. For instance, in the latest game, Italy was fouled 23 times while New Zealand suffered just 10 fouls. Either New Zealand is an unusually aggressive (or clumsy) team, or a number of the "fouls" suffered by Italy were simulated.
Since data on fouls committed and suffered is readily available for all World Cup games, it should be possible to sort all this out statistically. Suppose that in any game, the total number of fouls suffered by a team depends on three factors: its propensity to dive (without detection), the opponent's propensity to foul, and idiosyncratic factors independent of the identity of the teams. Then, with a rich enough data set, it should be possible to identify the diving propensity of each team. There are subtleties that could confound the analysis, but a good forensic statistician should be able to handle these. Perhaps Nate Silver will take up the challenge?
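As a sketch of the identification idea (synthetic data only: the team labels and all propensities below are invented, and a real analysis would need actual foul counts from match reports), one could recover diving propensities by least squares with separate indicators for the fouled team and the fouling team:

```python
import numpy as np

rng = np.random.default_rng(0)
teams = ["ITA", "NZL", "PAR", "ESP", "POR", "NED"]  # labels for illustration only
n = len(teams)

# Invented "true" propensities used to generate synthetic match data.
dive = rng.normal(0.0, 2.0, n)   # extra fouls a team "wins" per game by diving
foul = rng.normal(0.0, 2.0, n)   # fouls a team genuinely commits per game
base = 12.0                      # average fouls suffered per team per game

# As in the text: fouls suffered by team i against opponent j are additive
# in i's diving propensity, j's fouling propensity, and idiosyncratic noise.
rows, y = [], []
for _ in range(20):                      # 20 rounds of a full round-robin
    for i in range(n):
        for j in range(n):
            if i != j:
                x = np.zeros(2 * n)
                x[i] = 1.0               # i is the team suffering the fouls
                x[n + j] = 1.0           # j is the team committing them
                rows.append(x)
                y.append(base + dive[i] + foul[j] + rng.normal(0.0, 1.0))

X, y = np.array(rows), np.array(y)
# The two sets of effects are identified only up to an additive constant,
# so center the estimates before comparing teams.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
dive_hat = beta[:n] - beta[:n].mean()

# Rank teams by estimated diving propensity, most prone first.
for k in np.argsort(-dive_hat):
    print(teams[k], round(dive_hat[k], 2))
```

With enough games the estimated ranking tracks the true one closely. Applying this to real World Cup data would require additional controls (referee effects, match stakes, style of play), which is where the confounding subtleties mentioned above would bite.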
In the meantime, for a lesson on how not to dive, enjoy this legendary "posthumous" effort by Gilardino in a 2007 game between AC Milan and Celtic: