Sunday, June 20, 2010

The Diving Champions of the (Football) World

Aside from early losses by Germany and Spain, the biggest surprise of the World Cup so far is probably the inability of Italy (the reigning champions) to win either of their first two games. First they drew with Paraguay, ranked 31st in the world, and then drew again today with 78th-ranked New Zealand.
In both cases the Italians came back from a goal behind, and in the latter game did so on the basis of a dubious penalty. De Rossi's spectacular dive after getting his shirt gently tugged by Smith was a wonder to behold, revealing yet again that the Italians are undisputed masters of the simulated foul. Even the Wikipedia entry on the art of diving acknowledges this:
Diving (or simulation - the term used by FIFA) in the context of association football is an attempt by a player to gain an unfair advantage by diving to the ground and possibly feigning an injury, to appear as if a foul has been committed. Dives are often used to exaggerate the amount of contact present in a challenge. Deciding on whether a player has dived is very subjective, and one of the most controversial aspects of football discussion. Players do this so they can receive free kicks or penalty kicks, which can provide scoring opportunities, or so the opposing player receives a yellow or red card, giving their own team an advantage. The Italian national football team have been well known to use this tactic... In fact, their victory at the 2006 FIFA World Cup has been overshadowed by the sheer volume of controversial dives.
While the anecdotal (and video) evidence against Italy is strong, it would be useful to have a statistical measure of diving on the basis of which international comparisons could be made. One possibility is to use data on fouls suffered. For instance, in the latest game, Italy was fouled 23 times while New Zealand suffered just 10 fouls. Either New Zealand is an unusually aggressive (or clumsy) team, or a number of the "fouls" suffered by Italy were simulated.
Since data on fouls committed and suffered is readily available for all World Cup games, it should be possible to sort all this out statistically. Suppose that in any game, the total number of fouls suffered by a team depends on three factors: its propensity to dive (without detection), the opponent's propensity to foul, and idiosyncratic factors independent of the identity of the teams. Then, with a rich enough data set, it should be possible to identify the diving propensity of each team. There are subtleties that could confound the analysis, but a good forensic statistician should be able to handle these. Perhaps Nate Silver will take up the challenge?
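To make the idea concrete, here is a minimal sketch of such a decomposition, assuming a game-level dataset of fouls suffered: each team gets a "diving" effect and a "fouling" effect, recovered jointly by least squares. The team codes and foul counts below are purely illustrative, the effects are identified only up to a normalization, and a team's estimated effect conflates simulated fouls with genuinely being fouled often, which is precisely the kind of subtlety a forensic statistician would need to untangle.

```python
# A minimal sketch of the proposed decomposition: fouls suffered by team i
# against opponent j are modeled as dive[i] + foul[j] + noise. The team codes
# and foul counts are illustrative, not actual World Cup data.
import numpy as np

# Each record: (team, opponent, fouls suffered by team in that game).
games = [
    ("ITA", "PAR", 18), ("PAR", "ITA", 12),
    ("ITA", "NZL", 23), ("NZL", "ITA", 10),
    ("PAR", "NZL", 14), ("NZL", "PAR", 11),
]

teams = sorted({t for t, _, _ in games})
idx = {t: k for k, t in enumerate(teams)}
n = len(teams)

# One column per team's "diving" effect and one per team's "fouling" effect;
# one fouling column is dropped as a normalization to avoid collinearity.
X = np.zeros((len(games), 2 * n - 1))
y = np.zeros(len(games))
for row, (team, opp, fouls) in enumerate(games):
    X[row, idx[team]] = 1.0          # diving effect of the team that was "fouled"
    if idx[opp] < n - 1:
        X[row, n + idx[opp]] = 1.0   # fouling effect of the opponent
    y[row] = fouls

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
for t in teams:
    print(f"{t}: estimated diving effect {coef[idx[t]]:+.1f}")
```

With a full tournament's worth of games (and ideally several tournaments), the same regression would deliver reasonably precise team-level estimates.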
In the meantime, for a lesson on how not to dive, enjoy this legendary "posthumous" effort by Gilardino in a 2007 game between AC Milan and Celtic:

Saturday, June 19, 2010

On Tail Risk and the Winner's Curse

Richard Thaler used to write a wonderful column on anomalies in the Journal of Economic Perspectives. Here's an extract from a 1988 entry on the winner's curse:
The winner's curse is a concept that was first discussed in the literature by three Atlantic Richfield engineers, Capen, Clapp, and Campbell (1971). The idea is simple. Suppose many oil companies are interested in purchasing the drilling rights to a particular parcel of land. Let's assume that the rights are worth the same amount to all bidders, that is, the auction is what is called a common value auction. Further, suppose that each bidding firm obtains an estimate of the value of the rights from its experts. Assume that the estimates are unbiased, so the mean of the estimates is equal to the common value of the tract. What is likely to happen in the auction? Given the difficulty of estimating the amount of oil in a given location, the estimates of the experts will vary substantially, some far too high and some too low. Even if companies bid somewhat less than the estimate their expert provided, the firms whose experts provided high estimates will tend to bid more than the firms whose experts guessed lower... If this happens, the winner of the auction is likely to be a loser.
Thaler goes on to point out that the winner's curse would not arise if all bidders were rational, for they would take into account when bidding that conditional on winning the auction, the valuation of their experts is likely to have been inflated. But he also presents evidence (from laboratory experiments as well as field data on offshore oil and gas leases and corporate takeovers) that bidders are not rational to this degree, and that the winner's curse is therefore an empirically relevant phenomenon. Many observers of the free agent market in baseball would agree.
In Thaler's description, the winner's curse arises despite the fact that bidder estimates are unbiased: their valuations are correct on average, even though the winning bid happens to come from someone with excessively optimistic expectations. Someone familiar with this phenomenon would therefore never conclude that all bidders are excessively optimistic simply by observing the fact that winning bidders tend to wish that they had lost.
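The selection effect is easy to verify with a small Monte Carlo experiment. In the sketch below (all parameter values are illustrative), every bidder receives an unbiased estimate of a common value and naively shades its bid by a fixed amount; the winner nonetheless loses money on average, because winning selects for the most optimistic estimate.

```python
# A Monte Carlo illustration of the winner's curse: unbiased estimates, bids
# shaded by a fixed amount, and a winner who loses money on average.
import random

def average_winner_profit(n_bidders=10, value=100.0, noise_sd=20.0,
                          shave=5.0, trials=100_000):
    total = 0.0
    for _ in range(trials):
        estimates = [random.gauss(value, noise_sd) for _ in range(n_bidders)]
        winning_bid = max(e - shave for e in estimates)  # highest shaded bid wins
        total += value - winning_bid                     # winner's realized profit
    return total / trials

print(f"Average winner's profit: {average_winner_profit():.2f}")  # reliably negative
```

A rational bidder would shade not by a fixed amount but by enough to offset the expected optimism of the highest estimate, which is exactly the adjustment Thaler's evidence suggests real bidders fail to make.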
By the same token, when firms like BP and AIG are revealed to have underestimated the extent to which their actions exposed them (and numerous others) to tail risk, one ought not to presume that they were acting under the influence of a psychological propensity to which we are all vulnerable. Those who had more realistic (or excessively pessimistic) expectations regarding such risks simply avoided them, and by doing so also avoided coming to our attention.
And yet, here is the very same Richard Thaler arguing that a behavioral propensity to accept "risks that are erroneously thought to be vanishingly small" was responsible for both the financial crisis and the oil spill:
The story of the oil crisis is still being written, but it seems clear that BP underestimated the risk of an accident. Tony Hayward, its C.E.O., called this kind of event a “one-in-a-million chance.” And while there is no way to know for sure, of course, whether BP was just extraordinarily unlucky, there is much evidence that people in general are not good at estimating the true chances of rare events, especially when human error may be involved.
There is certainly a grain of truth in this characterization, but I feel that it misses the real story. As the analysis underlying the winner's curse teaches us, those with the most optimistic expectations will take the greatest risks and suffer the most severe losses when the low probability events that they have disregarded eventually come to pass. But tail risks are unlike auctions in one important respect: there can be a significant time lag between the acceptance of the risk and the realization of a catastrophic event. In the interim, those who embrace the risk will generate unusually high profits and place their less sanguine competitors in the difficult position of either following their lead or accepting a progressively diminishing market share. The result is herd behavior with entire industries acting as if they share the expectations of the most optimistic among them. It is competitive pressure rather than human psychology that causes firms to act in this way, and their actions are often taken against their own better judgment. 
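The point can be made with a stylized simulation. In the sketch below (all parameters are illustrative assumptions), a firm that collects a steady premium for quietly bearing tail risk has a negative expected per-period return (a premium of 3 against an expected loss of 0.02 × 400 = 8), yet it outruns a cautious competitor for long stretches until the rare event arrives.

```python
# A stylized illustration: the tail-risk seller looks like the better firm
# for long stretches, then takes a catastrophic loss. Parameters are
# illustrative, not calibrated to any real firm.
import random

random.seed(7)
p_disaster = 0.02    # per-period probability of the tail event
premium = 3.0        # income earned each period for bearing the risk
loss = 400.0         # loss when the tail event occurs
safe_return = 1.0    # steady per-period return of the cautious firm

risky = safe = 0.0
for t in range(1, 201):
    safe += safe_return
    risky += premium - (loss if random.random() < p_disaster else 0.0)
    if t % 40 == 0:
        print(f"t={t:3d}  risky={risky:8.1f}  safe={safe:6.1f}")
```

In any interim stretch without a disaster, the risky firm's cumulative profit grows three times as fast; it is precisely this visible outperformance that pressures rivals to imitate the strategy.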
This ecological perspective lies at the heart of Hyman Minsky's analysis of financial instability, and it can be applied more generally to tail risks of all kinds. As an account of the (environmental and financial) catastrophes with which we continue to grapple, I find it more compelling and complete than the psychological story. And it has the virtue of not depending for its validity on systematic, persistent, and largely unexplained cognitive biases among professionals in high stakes situations.
Both James Kwak and Maxine Udall have also taken issue with Thaler's characterization (though on somewhat different grounds). James also had this to say about behavioral economics more generally:
Don’t get me wrong: I like behavioral economics as much as the next guy. It’s quite clear that people are irrational in ways that the neoclassical model assumes away... But I don’t think cognitive fallacies are the answer to everything, and I don’t think you can explain away the myriad crises of our time as the result of them.
I agree completely. As I said in an earlier post, I can't help thinking that too much is being asked of behavioral economics at this time, much more than it has the capacity to deliver.

---

Update (6/20). In a response to this post, Brad DeLong makes two points. First, he observes that those who underestimate tail risk can make unusually high profits not just in the interim period before a catastrophic event occurs, but also if one averages across good and bad realizations:
To the extent that the optimism of noise traders leads them to hold larger average positions in assets that possess systemic risk, their average returns will be higher in a risk-averse world--not just in those states of the world in which the catastrophe has not happened yet, but quite possibly averaged over all states of the world including catastrophic states.
This is logically correct, for reasons that were discussed at length in Brad's 1990 JPE paper with Shleifer, Summers and Waldmann. But (as I noted in my comment on his post) I don't think the argument applies to the risks taken by BP and AIG, which could easily have proved fatal to the firms. One could try to make the case that even with bankruptcy, the cumulative dividend payouts would have resulted in higher returns than less exposed competitors, but the claim seems empirically dubious to me.

Brad's second point is that my distinction between the ecological and psychological approaches is unwarranted, and that the two are in fact complementary. Here he quotes Charles Kindleberger:
Overestimation of profits comes from euphoria, affects firms engaged in the production and distributive processes, and requires no explanation. Excessive gearing arises from cash requirements that are low relative both to the prevailing price of a good or asset and to possible changes in its price. It means buying on margin, or by installments, under circumstances in which one can sell the asset and transfer with it the obligation to make future payments. As firms or households see others making profits from speculative purchases and resales, they tend to follow: "Monkey see, monkey do." In my talks about financial crisis over the last decades, I have polished one line that always gets a nervous laugh: "There is nothing so disturbing to one’s well-being and judgment as to see a friend get rich."
The Kindleberger quote is wonderful, but the claim is about interdependent preferences, not cognitive limitations. I don't doubt that cognitive limitations matter (I started my post with the winner's curse after all) but I was trying to shift the focus to interactions and away from psychology. In general I think that the Minsky story can be told with very modest departures from rationality, which to me is one of the strengths of the approach.

Tuesday, June 15, 2010

An Extreme Version of a Routine Event

The flash crash of May 6 has generally been viewed as a pathological event, unprecedented in history and unlikely to be repeated in the foreseeable future. The initial response was to lay blame on an external source for the instability: fat fingers, computer glitches, market manipulation, and even sabotage were all contemplated. But once it became apparent that this was a fully endogenous event, arising from interactions among trading strategies, it was time to drag out the perennial metaphor of the perfect storm. Consider, for instance, the response from Barclays Capital:
Thursday’s market action, in our opinion, did not begin and end with trading errors and/or exchange technology failures. Nor, as some commentators are suggesting, were quantitative trading strategies primarily responsible for the events that unfolded. All of these forces may have contributed to the voracious sell-off, but our analysis suggests that last Thursday’s events were more a function of a “perfect storm,” to borrow a cliché phrase.
Resorting to this tired analogy is both intellectually lazy and dangerously misleading. It lulls one into a false complacency and suggests that there is little one can (or needs to) do to prevent a recurrence. And since the correction was quick, and trades at the most extreme prices were canceled, it could even be argued that little damage was done. Might this not reflect the resilience of markets rather than their vulnerability?
But consider, for a moment, the possibility that far from being a pathological event, the flash crash was simply a very extreme version of a relatively routine occurrence. It was extreme with respect to the scale of departures of prices from fundamentals, and the speed with which they arose and were corrected. But it was routine in the sense that such departures do arise from time to time, building cumulatively rather than suddenly, and lasting for months or years rather than minutes, with corrections that can be rapid or prolonged but almost impossible to time.
Viewed in this manner, the flash crash can provide us with insights into the more general dynamics of prices in speculative asset markets, in much the same manner as high speed photography can reveal intricate details about the flight of an insect. The crash revealed with incredible clarity how (as James Tobin observed a long time ago) markets can satisfy information arbitrage efficiency while failing to satisfy fundamental valuation efficiency. The collapse and recovery of prices could not have been predicted based on an analysis of any publicly available market data, at least not with respect to timing and scale. And yet prices reached levels (both high and low) that were staggering departures from fundamental values.
So what can we learn from the crash? The SEC report on the event contains two pieces of information that are revealing: the vast majority of trades against stub quotes of five cents or less were short sales, and there were major departures of prices from fundamentals in both directions, with a number of trades executed at ten million dollars per round lot. It is very unlikely that these orders came from retail investors; they were almost certainly generated by algorithms implementing strategies that involve directional bets for short holding periods in response to incoming market data.
While the algorithmic implementation of such strategies is a relatively recent development, the strategies themselves have been around for as long as securities markets have existed. They can be very effective when sufficiently rare, but become increasingly vulnerable to major losses as they become more widespread. Their success in stable markets leads to their proliferation, which in turn causes the information in market data to become progressively more garbled. These strategies can then become mutually amplifying, resulting in major departures of prices from fundamentals. When the inevitable correction arrives, some of them are wiped out, and market stability is restored for a while. This process of endogenous regime switching finds empirical expression in the clustering of volatility.
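A toy model of this process might let the share of market wealth run by trend-following strategies grow while those strategies are profitable, destabilizing prices, and then shrink when the reversal wipes part of them out. Everything below is an illustrative assumption rather than a calibrated model, but it captures the alternation between quiet and turbulent regimes.

```python
# Endogenous regime switching, sketched: trend-followers proliferate in calm
# markets, destabilize prices, lose wealth in the correction, and calm returns.
import random

random.seed(1)
fundamental = price = 100.0
prev_ret = 0.0
share = 0.10   # fraction of market wealth run by trend-following strategies

for t in range(1, 301):
    fundamental += random.gauss(0, 0.5)
    momentum = share * 1.2 * prev_ret                      # chase the last move
    reversion = (1 - share) * 0.2 * (fundamental - price)  # pull toward value
    new_price = price + momentum + reversion + random.gauss(0, 0.2)
    ret = new_price - price
    # The strategy proliferates while moves persist, and loses wealth
    # (its share shrinks) when the market turns against it.
    share += 0.02 if ret * prev_ret > 0 else -0.02
    share = min(0.90, max(0.05, share))
    price, prev_ret = new_price, ret
    if t % 50 == 0:
        print(f"t={t:3d}  price={price:7.2f}  trend share={share:.2f}  |return|={abs(ret):.2f}")
```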
Departures of prices from fundamentals were corrected so quickly during the flash crash because the discrepancies were so obvious. It was common knowledge among market participants that a penny per share for Accenture or a hundred thousand dollars for Sotheby's were not real prices (to use Jim Cramer's memorable expression) and therefore presented significant and immediate profit opportunities. Traders pounced and sanity was restored.
But when departures of prices from fundamentals arise on a more modest scale, a coordinated response is more difficult to accomplish. This is especially the case when securities become overvalued. Bubbles can continue to expand even as awareness of overvaluation spreads because short selling carries enormous downside risk and maintaining short positions in a rising market requires increasing amounts of capital to meet margin requirements. Many very sophisticated fund managers suffered heavy losses while attempting to time the collapse in technology stocks a decade ago. And many of those who recently used credit derivatives to bet on a collapse in housing prices might well have met the same fate were it not for the taxpayer funded rescue of a major counterparty.
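A quick, purely illustrative calculation shows why holding a short position in a rising market is so punishing. Under an assumed 50% initial margin and 30% maintenance margin, each upward move in the price both erodes the short seller's equity and raises the collateral required to keep the position open:

```python
# Margin arithmetic for a short position in a rising market. The margin
# percentages and prices are illustrative assumptions.
short_shares = 10_000
entry_price = 50.0
maintenance_margin = 0.30              # assumed minimum equity / position value

proceeds = short_shares * entry_price  # cash received when the shares are sold
equity_posted = 0.5 * proceeds         # assumed 50% initial margin

for price in (50, 60, 75, 90):
    position_value = short_shares * price
    # Equity = collateral posted + sale proceeds - cost to buy the shares back.
    equity = equity_posted + proceeds - position_value
    required = maintenance_margin * position_value
    call = max(0.0, required - equity)
    print(f"price {price}: equity {equity:>9,.0f}  required {required:>9,.0f}"
          f"  extra capital needed {call:>9,.0f}")
```

A short seller who is eventually proved right can still be ruined along the way if the capital demanded in the interim exceeds what the fund can raise.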
Aside from scale and speed, one major difference between the flash crash and its more routine predecessors was the unprecedented cancellation of trades. As I have argued before, this was a mistake: losses from trading provide the only mechanism that currently keeps the proliferation of destabilizing strategies in check. The decision is not one that can now be reversed, but the SEC should at least make public the list of beneficiaries and the amounts by which their accounts were credited. Dissemination of these simple facts would help to identify the kinds of trading strategies that were implicated. And it is information to which the public is surely entitled.

Tuesday, May 25, 2010

An Outsider's View of Modern Macroeconomics

Following up on a testy exchange with David Andolfatto, Mark Thoma has written a thoughtful post in which he discusses the state of modern macroeconomic theory, the appropriateness of appeals to professional authority, the shortcomings of some canonical models, and the way forward. I posted a brief comment in response, with a few constructive suggestions for mainstream macroeconomists from the perspective of an outsider. I have made these points on various occasions before, and reproduce them here (slightly edited and expanded with links to earlier posts):
  1. Rational expectations is not a behavioral hypothesis, it's an equilibrium assumption and therefore much more restrictive than "forward-looking behavior". It might be justified if equilibrium paths were robustly stable under plausible specifications of disequilibrium dynamics, but this needs to be explored explicitly instead of simply being assumed. 
  2. Think about whether a theory of economic fluctuations should be shock-dependent (in the Frisch-Slutsky tradition) or shock-independent (in the Goodwin tradition). Go back and look at Goodwin's 1951 Econometrica paper to appreciate the importance of the distinction.
  3. Build models in which leverage, collateral, and default play a central role. The work of John Geanakoplos on this is an excellent starting point. He uses equilibrium theory but allows for heterogeneous priors (so differences in beliefs can persist even if they are common knowledge.) More broadly, take a close look at Hyman Minsky's integrated analysis of real and financial activity. 
  4. Do not assume that flexible wages and prices imply labor market clearing. They do in equilibrium (by definition) but wage and price flexibility in disequilibrium can make matters worse. Keynes recognized this, and Tobin explored these mechanisms formally. Arbitrary assumptions of "sticky prices" are not necessary to account for persistent unemployment or under-utilization of capacity.
  5. Finally, show some humility. There are anonymous bloggers out there, some self-taught in economics, who may know more about the functioning of a modern economy than you do.
The last point is directed at David Andolfatto, whose arrogant appeal to professional authority jolted the normally polite Mark Thoma to respond with (justifiable) belligerence. Andolfatto's entire post was dripping with condescension, but I found the following passage particularly disturbing:
DeLong tells us that we can learn a lot of economics from Krugman. You will be forgiven for wondering whether DeLong can even tell whether he is learning economics or not. DeLong is, as far as I can tell, an historian.
As I said on Mark's blog, it could be argued that economic historians (and historians of thought) have had more useful things to say about recent events than the highest of high priests in macroeconomics. Andolfatto seems to be confusing an understanding of modern macroeconomic theory with an understanding of the modern macroeconomy. The two are not the same, and the former is neither necessary nor sufficient for the latter.
Contrast the tone of Andolfatto's post with the following passage from a recent essay by Narayana Kocherlakota, president of the Minneapolis Fed:
I believe that during the last financial crisis, macroeconomists (and I include myself among them) failed the country, and indeed the world. In September 2008, central bankers were in desperate need of a playbook that offered a systematic plan of attack to deal with fast-evolving circumstances. Macroeconomics should have been able to provide that playbook. It could not. Of course, from a longer view, macroeconomists let policymakers down much earlier, because they did not provide policymakers with rules to avoid the circumstances that led to the global financial meltdown.

Because of this failure, macroeconomics and its practitioners have received a great deal of pointed criticism both during and after the crisis. Some of this criticism has come from policymakers and the media, but much has come from other economists. Of course, macroeconomists have responded with considerable vigor, but the overall debate inevitably leads the general public to wonder: What is the value and applicability of macroeconomics as currently practiced?
Kocherlakota goes on to defend the many advances made in macroeconomic research over the past four decades, while openly acknowledging the enormous challenges that remain. He adds:
The seventh floor of the Federal Reserve Bank of Minneapolis is one of the most exciting macro research environments in the country. As president, I plan to learn from our staff, consultants, and visitors.
I hope that some of those visitors (real or virtual) will be voices of dissent from beyond the inner circle of research macroeconomics. It is in this spirit of openness that my comments are offered.

---

Update (5/26). One item that I'd like to add to the list above is methodological pluralism. For instance, there is interesting work in macroeconomics using agent-based computational methods; see, for instance, the 2008 book on Emergent Macroeconomics by Delli Gatti, Gaffeo, Gallegati, Giulioni, and Palestrini. As I have said before, such models can provide microfoundations for macroeconomics in a manner that is both more plausible and more authentic than is the case with highly aggregative representative agent models.

---

Update (5/26). Some useful perspective from Malaise Precis:
The dustup between Mark Thoma and David Andolfatto... is perhaps more symptomatic of the divide between - at extreme risk of too much simplification - "new" macroeconomists and "old" macroeconomists. The macroeconomists of my generation were taught DSGE models. Facts were "stylized" facts, i.e. first and second moments of "key" economic variables such as GNP, investment and consumption. During my entire 6 years at graduate school things like institutional details and historical events that may have affected the economy were laid aside or treated as not being "relevant" to the model. Economies were frictionless and markets always cleared. Sure, some frictions were eventually introduced but perhaps the biggest elephant in the room was that the curriculum cultivated us with a certain attitude that:
  1. There are those who can build DSGE models and there are those who can't.
  2. All partial equilibrium models can be dismissed off hand.
  3. All structural equation models are completely irrelevant especially those not based on DSGE models. (IS-LM or Keynesian "cross" models are definitely in this category.)
  4. Any paper that does not present a model can be dismissed - this included narratives as well as historical papers.
Perhaps the economists who fail to understand history will be doomed to repeat [it]?

---

Update (5/27). David Andolfatto has posted an uncommonly gracious follow-up to his earlier remarks. As Mark points out in response, "it is possible to find shrill, over the top attacks on all sides of the debate on macroeconomic policy." What bothered me about David's earlier post was not the harshness of the language but the idea that some people are simply not qualified to speak out on certain issues. I believe that we economists need (and should welcome) voices from outside our narrow areas of specialization, and indeed outside our discipline.

Sunday, May 23, 2010

Blame the Instructions, Not the Machines

Following the dramatic flash crash on May 6, there has been a lot of attention paid to the mechanics of trading (automation, frequency, scale and speed) but not enough to the kinds of strategies that are being implemented using these mechanisms. Trading algorithms do whatever they are instructed to do, and market movements result from the distribution of instructions and not the technology used to implement them. Technology certainly matters, but in an indirect way. Just as changes in climate can alter the distribution of species in an ecosystem, driving some to extinction and allowing others to proliferate, new technologies can alter the distribution of strategies among the population of traders. Major changes of this kind can affect systemic stability, in the case of markets and ecosystems alike. 

The variety of trading strategies in use is vast, but I find it useful to partition them into two broad categories: those that are information augmenting and those that are information extracting. The first group of strategies is based on some form of fundamental analysis: examination of balance sheets, growth potential, and risk, for instance, with trading based on departures of prices from estimated valuations. Such strategies require the investment of resources in information gathering, and end up feeding information to the market. The second class of strategies uses market data itself to direct trades. These could be non-directional and arbitrage-based, or directional strategies based on factors such as momentum. This latter class uses volume, price, and other market data as a basis for entering and exiting positions.

A market dominated by information augmenting strategies will tend to be stable and to track information as it arises in the economy. But information extracting strategies can be very profitable in stable markets as long as they react quickly and forcefully to new market data. Changes in technology have made rapid responses to market data feasible on a large scale, resulting in an increase in total market wealth that is invested on the basis of such strategies. The problem is that if too many people are using such strategies, there isn't enough information getting into prices systematically, and certain technical strategies can start generating mutually amplifying responses to noise.
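One way to see the problem is with a rough sketch in which fundamental traders push prices toward value while information extractors chase the last price move. As the extracting share of market wealth rises, prices track fundamentals less and less closely. The model and its parameters below are illustrative assumptions, not an empirical claim.

```python
# Price informativeness as a function of the share of wealth trading on
# market data alone. All coefficients are illustrative.
import random

def avg_mispricing(extract_share, periods=5000, seed=0):
    rng = random.Random(seed)
    fundamental = price = prev_move = 0.0
    total_gap = 0.0
    for _ in range(periods):
        fundamental += rng.gauss(0, 1)                             # news arrives
        move = ((1 - extract_share) * 0.3 * (fundamental - price)  # value traders
                + extract_share * 0.9 * prev_move)                 # trend chasers
        price += move
        prev_move = move
        total_gap += abs(fundamental - price)
    return total_gap / periods

for s in (0.0, 0.3, 0.6, 0.9):
    print(f"extracting share {s:.1f}: avg |price - fundamental| = {avg_mispricing(s):.2f}")
```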

The SEC-CFTC preliminary report on the crash contains a wealth of information and some interesting clues about the kinds of strategies that may have been implicated. First, "approximately 200 securities traded, at their lows, almost 100% below their previous day’s values." These trades "occurred at extraordinarily low prices – five cents or less – which indicates an execution against a “stub” quote of a market maker." The overwhelming majority of these trades, it turns out, were short sales:
During the period of peak market volatility, 2:45 p.m. to 2:55 p.m., the broken trades executed at five cents or less were primarily short sales. Short sales account for approximately 70.1% of executions against “stub” quotes between 2:45 p.m. and 2:50 p.m., and approximately 90.1% of executions against “stub” quotes between 2:50 p.m. and 2:55 p.m.
In other words, the trades at the most extreme prices were not generated by retail investors whose stop loss orders were converted to market sell orders as prices fell: they were generated by short selling in a falling market.
Also interesting is the case of securities that displayed "aberrant behavior" on the upside:
Sotheby’s (BID) is actively traded and has a narrow bid-ask spread from 2:44 p.m. through 2:49 p.m. after which volume is low but bid and ask quotes remain stable. However, after about 2:57 p.m. volume spikes dramatically and trades are executed at a high (presumably stub) quote of approximately $100,000... BID trades through the national best offer multiple times between 2:57:05 p.m. and 2:57:12 p.m. This includes trades at approximately $100,000 which is presumably a top-end stub quote.
A single round lot of shares in Sotheby's would have cost ten million dollars at this price. Given that the orders were executed, it seems inconceivable to me that they came from retail investors.
What kinds of strategies could have been responsible for these trades? In January of this year the SEC published a Concept Release on Equity Market Structure that explicitly discussed the destabilizing consequences of certain strategies used by proprietary trading firms. Of special concern were strategies based on order anticipation and momentum ignition:
One example of an order anticipation strategy is when a proprietary firm seeks to ascertain the existence of one or more large buyers (sellers) in the market and to buy (sell) ahead of the large orders with the goal of capturing a price movement in the direction of the large trading interest... The type of order anticipation strategy referred to in this release involves any means to ascertain the existence of a large buyer (seller) that does not involve violation of a duty, misappropriation of information, or other misconduct. Examples include the employment of sophisticated pattern recognition software to ascertain from publicly available information the existence of a large buyer (seller), or the sophisticated use of orders to “ping” different market centers in an attempt to locate and trade in front of large buyers and sellers... An important issue for purposes of this release is whether the current market structure and the availability of sophisticated, high-speed trading tools enable proprietary firms to engage in order anticipation strategies on a greater scale than in the past.
A very different type of potentially destabilizing strategy seeks to engineer and exploit momentum in prices:
Another type of directional strategy that may raise concerns in the current market structure is momentum ignition. With this strategy, the proprietary firm may initiate a series of orders and trades... in an attempt to ignite a rapid price move either up or down. For example, the trader may intend that the rapid submission and cancellation of many orders, along with the execution of some trades, will “spoof” the algorithms of other traders into action and cause them to buy (sell) more aggressively. Or the trader may intend to trigger standing stop loss orders that would help cause a price decline. By establishing a position early, the proprietary firm will attempt to profit by subsequently liquidating the position if successful in igniting a price movement.
Order anticipation and momentum ignition are just extreme cases of a broad range of directional strategies that are either information extracting or seek to trigger information extracting algorithms. If too great a share of total market activity is driven by such strategies, major departures of prices from fundamentals will arise sooner or later. It is important, therefore, to allow such strategies to take heavy losses when they do eventually misfire. Macroeconomic Resilience has an excellent analytical post on the crash that makes a similar point:
Policy measures that aim to stabilise the system by countering the impact of positive feedback processes select against and weed out negative feedback processes – Stabilisation reduces system resilience. The decision to cancel errant trades is an example of such a measure. It is critical that all market participants who implement positive feedback strategies... suffer losses and those who step in to buy in times of chaos i.e. the negative-feedback providers are not denied of the profits that would accrue to them if markets recover. This is the real damage done by policy paradigms such as the “Greenspan/Bernanke Put” that implicitly protect asset markets. They leave us with a fragile market prone to collapse even with a “normal storm”, unless there is further intervention as we saw from the EU/ECB. Of course, every subsequent intervention that aims to stabilise the system only further reduces its resilience.
By canceling trades, the exchanges reversed a redistribution of wealth that would have altered the composition of strategies in the trading population. I'm sure that many retail investors whose stop loss orders were executed at prices far below anticipated levels were relieved. But the preponderance of short sales among trades at the lowest prices and the fact that aberrant price behavior also occurred on the upside suggests to me that the largest beneficiaries of the cancellation were proprietary trading firms making directional bets based on rapid responses to incoming market data. The widespread cancellation of trades following the crash served as an implicit subsidy to such strategies and, from the perspective of market stability, is likely to prove counter-productive.

Saturday, May 15, 2010

James Tobin's Hirsch Lecture

James Tobin's Fred Hirsch Memorial Lecture "On the Efficiency of the Financial System" was originally published in a 1984 issue of the Lloyds Bank Review, and republished three years later in a collection of his writings. Willem Buiter discussed the essay at some length about a year ago in a provocative post dealing with the regulation of derivatives. Both the original essay and Buiter's discussion of it remain well worth reading today as guides to the broad principles that ought to underlie financial market reform.

In his essay, Tobin considers four distinct conceptions of financial market efficiency:
Efficiency has several different meanings: first, a market is 'efficient' if it is on average impossible to gain from trading on the basis of generally available public information... Efficiency in this meaning I call information arbitrage efficiency.

A second and deeper meaning is the following: a market in a financial asset is efficient if its valuations reflect accurately the future payments to which the asset gives title... I call this concept fundamental valuation efficiency.

Third, a system of financial markets is efficient if it enables economic agents to insure for themselves deliveries of goods and services in all future contingencies, either by surrendering some of their own resources now or by contracting to deliver them in specified future contingencies... I call efficiency in this Arrow-Debreu sense full insurance efficiency.

The fourth concept relates more concretely to the economic functions of the financial industries... These include: the pooling of risks and their allocation to those most able and willing to bear them... the facilitation of transactions by providing mechanisms and networks of payments; the mobilization of saving for investments in physical and human capital... and the allocation of saving to their more socially productive uses. I call efficiency in these respects functional efficiency.
The first two criteria correspond, respectively, to weak and strong versions of the efficient markets hypothesis. Tobin argues that the weak form is generally satisfied on the grounds that "actively managed portfolios, allowance made for transactions costs, do not beat the market." He notes, however, that efficiency in the second (strong form) sense is "by no means implied" by this, and that "market speculation multiplies several fold the underlying fundamental variability of dividends and earnings."
My own view of the matter (expressed in an earlier post) is that such a neat separation of these two concepts of efficiency is too limiting: endogenous variations in the composition of trading strategies result in alternating periods of high and low volatility. Nevertheless, as an approximate view of market efficiency over long horizons, I feel that Tobin's characterization is about right. 
Full insurance efficiency requires complete markets in state contingent claims. This is a theoretical ideal that is impossible to attain in practice for a variety of reasons: the real resource costs of contracting, the thinness of potential markets for exotic contingent claims, and the difficulty of dispute resolution. Nevertheless, Tobin argues for the introduction of new assets that insure against major contingencies such as inflation, and securities of this kind have indeed been introduced since his essay was published.
Finally, Tobin turns to functional efficiency, and this is where he expresses greatest concern:
What is clear is that very little of the work done by the securities industry, as gauged by the volume of market activity, has to do with the financing of real investment in any very direct way. Likewise, those markets have very little to do, in aggregate, with the translation of the saving of households into corporate business investment. That process occurs mainly outside the market, as retention of earnings gradually and irregularly augments the value of equity shares...

I confess to an uneasy Physiocratic suspicion, perhaps unbecoming in an academic, that we are throwing more and more of our resources, including the cream of our youth, into financial activities remote from the production of goods and services, into activities that generate high private rewards disproportionate to their social productivity. I suspect that the immense power of the computer is being harnessed to this 'paper economy', not to do the same transactions more economically but to balloon the quantity and variety of financial exchanges. For this reason perhaps, high technology has so far yielded disappointing results in economy-wide productivity. I fear that, as Keynes saw even in his day, the advantages of the liquidity and negotiability of financial instruments come at the cost of facilitating nth-degree speculation which is short sighted and inefficient...
Arrow and Debreu did not have continuous sequential trading in mind; when that occurs, as Keynes noted, it attracts short-horizon speculators and middlemen, and distorts or dilutes the influence of fundamentals on prices. I suspect that Keynes was right to suggest that we should provide greater deterrents to transient holdings of financial instruments and larger rewards for long-term investors.
Recall that these passages were published in 1984; the financial sector has since been transformed beyond recognition. Buiter argues that Tobin's concerns about functional efficiency are more valid today than they have ever been, and is particularly concerned with derivatives contracts involving directional bets by both parties to the transaction:
[Since] derivatives trading is not costless, scarce skilled resources are diverted to what are not even games of pure redistribution.  Instead these resources are diverted towards games involving the redistribution of a social pie that shrinks as more players enter the game.

The inefficient redistribution of risk that can be the by-product of the creation of new derivatives markets and their inadequate regulation can also affect the real economy through an increase in the scope and severity of defaults. Defaults, insolvency and bankruptcy are key components of a market economy based on property rights. They involve more than a redistribution of property rights (both income and control rights). They also destroy real resources. The zero-sum redistribution characteristic of derivatives contracts in a frictionless world becomes a negative-sum redistribution when default and insolvency is involved. There is a fundamental asymmetry in the market game between winners and losers: there is no such thing as super-solvency for winners. But there is such a thing as insolvency for losers, if the losses are large enough.
The easiest solution to this churning problem would be to restrict derivatives trading to insurance, pure and simple.  The party purchasing the insurance should be able to demonstrate an insurable interest.  [Credit Default Swaps] could only be bought and sold in combination with a matching amount of the underlying security. 
The debate over naked credit default swaps is contentious and continues to rage. While market liquidity and stability have been central themes in this debate to date, it might be useful also to view the issue through the lens of functional efficiency. More generally, we ought to be asking whether Tobin was right to be concerned about the size of the financial sector in his day, and whether its dramatic growth over the quarter century since then has been functional or dysfunctional on balance.

Monday, May 10, 2010

Reflections on the Flash Crash

Index Universe observes that much of the unusual trading activity last Thursday involved exchange traded funds and notes:
Nasdaq has released a list of 281 securities that saw unusual activity during yesterday’s “flash crash” on the market... In all, 193 of the 281 securities (68.7 percent) on the NASDAQ list were exchange-traded funds or exchange-traded notes... The New York Stock Exchange has published a similar list, detailing 173 different securities whose trades will be cancelled. In all, 111 of those securities (64.2 percent) were ETFs or ETNs...
It was not immediately clear why ETFs dominate the lists.
Izabella Kaminska follows up on FT Alphaville:
ETF and ETN trading is closely related to high-frequency trading... Constant market-making and arbitrage opportunities are provided to authorised participants (often high frequency trading firms) by the ETF model’s dependence on converging to the net asset value on a daily basis. A typical fund has about five authorised participants.

The so-called creation and redemption mechanism allows authorised participants to lock-in profits when the shares of ETFs over-price or under-price the NAV, since only they are allowed to redeem or create shares at the official NAV price of the funds.
Dynamic hedging is needed to protect the arbitrage until the moment the creation or redemption process can take place... A significant change in any constituent stock in the interim can hence fuel frantic fine-tuning of positions ahead of NAV publication time.
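For readers unfamiliar with the mechanism Kaminska describes, here is a minimal sketch of the authorized participant's trade: create shares when the fund is rich relative to its basket, redeem when it is cheap, with an assumed all-in transaction cost defining the band within which no trade is profitable. The prices and cost are illustrative.

```python
# The creation/redemption arbitrage, sketched. Prices, NAV, and the assumed
# transaction cost are illustrative numbers.
def arbitrage_per_share(etf_price: float, nav: float, cost: float = 0.02) -> float:
    """Profit per ETF share from the appropriate create/redeem trade,
    net of an assumed all-in cost; zero inside the no-trade band."""
    if etf_price > nav + cost:    # ETF rich: buy basket, create shares, sell ETF
        return etf_price - nav - cost
    if etf_price < nav - cost:    # ETF cheap: buy ETF, redeem, sell the basket
        return nav - etf_price - cost
    return 0.0

for p in (100.10, 100.01, 99.85):
    print(f"ETF at {p:.2f} vs NAV 100.00 -> profit {arbitrage_per_share(p, 100.0):.2f}")
```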
Can the algorithmic strategies used by authorized participants making markets in exchange traded funds help account for the crash? Not really. Index arbitrage of this kind simply brings the prices of exchange traded funds in line with the prices of their constituent securities, and is non-directional. This activity could explain a spike in volume as a result of sharp movements in prices, but this is a symptom rather than a cause of the crash. Something else caused the prices of the funds and/or the constituent securities to drop, and index arbitrage activity picked up as a result. What was this cause?

Some have pointed to the fact that liquidity vanished from the market during the crash, or that stop loss orders were triggered as prices fell. While these effects certainly accelerated and amplified the decline, there must have been an independent source of massive selling pressure that ran through the available bids, triggered stop orders, and caused electronic market makers to shut down. Again, where did this overwhelming selling pressure come from?

The best explanation that I have seen is contained in a message by an anonymous analyst that Yves Smith posted earlier today. The hypothesis is that the initial trigger came from algorithms implementing volume-sensitive technical strategies:
Volume was gigantic yesterday before we really went into freefall. As of 2 p.m., some 40 minutes before Armageddon, we were tracking for a massive 15.6 billion share day (we ended up doing 19.3 billion – the second largest day ever after the October 10th, 2008 whitewash). Half an hour later, at 2:30 p.m. – still ten minutes before the bottom fell out – volume had surged and we were tracking for a 17.2 billion share day. The period between 2 p.m. and 2:40 p.m. saw immense selling pressure in both the cash market and the futures market, and that occurred with the E-minis still north of 1120...

In other words, it was not a sudden, random surge of volume from a fat finger that overwhelmed the market. It was a steady onslaught of selling that pressured the market lower in order to catch up with the carnage taking place in the credit markets and the currency markets...
So what happened here? Three things:
  1. Sellers probably had orders in algorithms – percentage-of-volume strategies most likely, maybe VWAP – and could not cancel, could not “get an out.” These sellers could be really “quanty” types, or high freqs, or they could be vanilla buy side accounts. It really doesn’t matter. The issue here is that the trader did not anticipate such a sharp price move and did not put a limit on the order...
  2. Sell stop orders were triggered which forced market sell orders into an already well offered market. 
  3. While the market was well offered, it was not well bid. Liquidity disappeared... Bids disappeared, spreads blew out, and no one was trading except a handful of orphaned algo orders, stop sell orders, and maybe a few opportunists who had loaded up the order book with low ball bids (“just in case”). High frequency accounts and electronic market makers were, by all accounts, nowhere to be found.
It boils down to this: this episode exposed structural flaws in how a trade is implemented (think orphaned algo orders) and it exposed the danger of leaving market making up to a network of entities with no mandate to ensure the smooth and orderly functioning of the market (think of the electronic market makers and high freqs who can pull bids instantaneously as opposed to a specialist on the floor who has a clearly defined mandate to provide liquidity).
This rings true to me. Accounting for the crash requires us to go beyond the mechanics of the trading process (automation, scale, speed) and to examine the kinds of strategies that were being implemented by the algorithms. A market dominated by technical analysis is always going to be vulnerable to this kind of instability. The fact that the prices of some securities and funds crashed to absurd levels that were clearly out of line with fundamentals made this obvious and resulted in a quick recovery. But what if the trading strategies had given rise to upward rather than downward instability? It would have been more difficult to establish conclusively that assets were overpriced, and accordingly more risky to enter positions to bring them back in line with fundamentals. This, presumably, is how asset price bubbles get started.
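Finally, to make the quoted analyst's first point concrete: a percentage-of-volume seller with no limit price keeps hitting whatever bids exist, however absurd. The toy order book below is invented for illustration; once the genuine bids are exhausted, the same instruction cheerfully executes against a five-cent stub quote.

```python
# A toy percentage-of-volume sell order with no limit price.
def pov_sell(book, target_pct, market_volume):
    """Sell target_pct of observed market volume against the best bids.
    'book' is a list of (price, size) bids, best first."""
    to_sell = target_pct * market_volume
    proceeds = 0.0
    while to_sell > 0 and book:
        price, size = book[0]
        take = min(size, to_sell)
        proceeds += take * price
        to_sell -= take
        if take == size:
            book.pop(0)        # bid exhausted; walk down to the next level
        else:
            book[0] = (price, size - take)
    return proceeds

# Normal depth: the order fills near the top of the book.
print(pov_sell([(40.00, 500), (39.95, 500)], 0.1, 5000))    # 500 shares at 40.00

# Liquidity withdrawn: the same order walks down to a stub quote.
print(pov_sell([(39.00, 100), (0.05, 10_000)], 0.1, 5000))  # 100 at 39.00, 400 at 0.05
```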