Tuesday, May 25, 2010

An Outsider's View of Modern Macroeconomics

Following up on a testy exchange with David Andolfatto, Mark Thoma has written a thoughtful post in which he discusses the state of modern macroeconomic theory, the appropriateness of appeals to professional authority, the shortcomings of some canonical models, and the way forward. I posted a brief comment in response, with a few constructive suggestions for mainstream macroeconomists from the perspective of an outsider. I have made these points on various occasions before, and reproduce them here (slightly edited and expanded with links to earlier posts):
  1. Rational expectations is not a behavioral hypothesis, it's an equilibrium assumption and therefore much more restrictive than "forward-looking behavior". It might be justified if equilibrium paths were robustly stable under plausible specifications of disequilibrium dynamics, but this needs to be explored explicitly instead of simply being assumed. 
  2. Think about whether a theory of economic fluctuations should be shock-dependent (in the Frisch-Slutsky tradition) or shock-independent (in the Goodwin tradition). Go back and look at Goodwin's 1951 Econometrica paper to appreciate the importance of the distinction.
  3. Build models in which leverage, collateral, and default play a central role. The work of John Geanakoplos on this is an excellent starting point. He uses equilibrium theory but allows for heterogeneous priors (so differences in beliefs can persist even if they are common knowledge). More broadly, take a close look at Hyman Minsky's integrated analysis of real and financial activity.
  4. Do not assume that flexible wages and prices imply labor market clearing. They do in equilibrium (by definition) but wage and price flexibility in disequilibrium can make matters worse. Keynes recognized this, and Tobin explored these mechanisms formally. Arbitrary assumptions of "sticky prices" are not necessary to account for persistent unemployment or under-utilization of capacity.
  5. Finally, show some humility. There are anonymous bloggers out there, some self-taught in economics, who may know more about the functioning of a modern economy than you do.
The last point is directed at David Andolfatto, whose arrogant appeal to professional authority jolted the normally polite Mark Thoma to respond with (justifiable) belligerence. Andolfatto's entire post was dripping with condescension, but I found the following passage particularly disturbing:
DeLong tells us that we can learn a lot of economics from Krugman. You will be forgiven for wondering whether DeLong can even tell whether he is learning economics or not. DeLong is, as far as I can tell, an historian.
As I said on Mark's blog, it could be argued that economic historians (and historians of thought) have had more useful things to say about recent events than the highest of high priests in macroeconomics. Andolfatto seems to be confusing an understanding of modern macroeconomic theory with an understanding of the modern macroeconomy. The two are not the same, and the former is neither necessary nor sufficient for the latter.
Contrast the tone of Andolfatto's post with the following passage from a recent essay by Narayana Kocherlakota, president of the Minneapolis Fed:
I believe that during the last financial crisis, macroeconomists (and I include myself among them) failed the country, and indeed the world. In September 2008, central bankers were in desperate need of a playbook that offered a systematic plan of attack to deal with fast-evolving circumstances. Macroeconomics should have been able to provide that playbook. It could not. Of course, from a longer view, macroeconomists let policymakers down much earlier, because they did not provide policymakers with rules to avoid the circumstances that led to the global financial meltdown.

Because of this failure, macroeconomics and its practitioners have received a great deal of pointed criticism both during and after the crisis. Some of this criticism has come from policymakers and the media, but much has come from other economists. Of course, macroeconomists have responded with considerable vigor, but the overall debate inevitably leads the general public to wonder: What is the value and applicability of macroeconomics as currently practiced?
Kocherlakota goes on to defend the many advances made in macroeconomic research over the past four decades, but openly acknowledges the enormous challenges that remain. He goes on to say:
The seventh floor of the Federal Reserve Bank of Minneapolis is one of the most exciting macro research environments in the country. As president, I plan to learn from our staff, consultants, and visitors.
I hope that some of those visitors (real or virtual) will be voices of dissent from beyond the inner circle of research macroeconomics. It is in this spirit of openness that my comments are offered.

---

Update (5/26). One item that I'd like to add to the list above is methodological pluralism. For instance, there is interesting work in macroeconomics using agent-based computational methods; see, for instance, the 2008 book on Emergent Macroeconomics by Delli Gatti, Gaffeo, Gallegati, Giulioni, and Palestrini. As I have said before, such models can provide microfoundations for macroeconomics in a manner that is both more plausible and more authentic than is the case with highly aggregative representative agent models.

---

Update (5/26). Some useful perspective from Malaise Precis:
The dustup between Mark Thoma and David Andolfatto... is perhaps more symptomatic of the divide between - at extreme risk of too much simplification - "new" macroeconomists and "old" macroeconomists. The macroeconomists of my generation were taught DSGE models. Facts were "stylized" facts, i.e. first and second moments of "key" economic variables such as GNP, investment and consumption. During my entire 6 years at graduate school things like institutional details and historical events that may have affected the economy were laid aside or treated as not being "relevant" to the model. Economies were frictionless and markets always cleared. Sure, some frictions were eventually introduced but perhaps the biggest elephant in the room was that the curriculum cultivated us with a certain attitude that:
  1. There are those who can build DSGE models and there are those who can't.
  2. All partial equilibrium models can be dismissed off hand.
  3. All structural equation models are completely irrelevant especially those not based on DSGE models. (IS-LM or Keynesian "cross" models are definitely in this category.)
  4. Any paper that does not present a model can be dismissed - this included narratives as well as historical papers.
Perhaps the economists who fail to understand history will be doomed to repeat [it]?

---

Update (5/27). David Andolfatto has posted an uncommonly gracious follow-up to his earlier remarks. As Mark points out in response, "it is possible to find shrill, over the top attacks on all sides of the debate on macroeconomic policy." What bothered me about David's earlier post was not the harshness of the language but the idea that some people are simply not qualified to speak out on certain issues. I believe that we economists need (and should welcome) voices from outside our narrow areas of specialization, and indeed outside our discipline.

Sunday, May 23, 2010

Blame the Instructions, Not the Machines

Following the dramatic flash crash on May 6, there has been a lot of attention paid to the mechanics of trading (automation, frequency, scale and speed) but not enough to the kinds of strategies that are being implemented using these mechanisms. Trading algorithms do whatever they are instructed to do, and market movements result from the distribution of instructions and not the technology used to implement them. Technology certainly matters, but in an indirect way. Just as changes in climate can alter the distribution of species in an ecosystem, driving some to extinction and allowing others to proliferate, new technologies can alter the distribution of strategies among the population of traders. Major changes of this kind can affect systemic stability, in the case of markets and ecosystems alike. 

The variety of trading strategies in use is vast, but I find it useful to partition them into two broad categories: those that are information augmenting and those that are information extracting. The first group of strategies is based on some form of fundamental analysis: examination of balance sheets, growth potential, and risk, for instance, and trading based on departures of prices from estimated valuations. Such strategies require the investment of resources in information gathering, and end up feeding information to the market. The other class of strategies uses market data itself to direct trades. These could be non-directional and arbitrage-based, or directional strategies based on such factors as momentum. This latter class of strategies uses volume, price, and other market data as a basis for entering and exiting positions.

A market dominated by information augmenting strategies will tend to be stable and to track information as it arises in the economy. But information extracting strategies can be very profitable in stable markets as long as they react quickly and forcefully to new market data. Changes in technology have made rapid responses to market data feasible on a large scale, resulting in an increase in total market wealth that is invested on the basis of such strategies. The problem is that if too many people are using such strategies, there isn't enough information getting into prices systematically, and certain technical strategies can start generating mutually amplifying responses to noise.
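The tipping point described here can be made concrete with a toy simulation (every coefficient in it is hypothetical, chosen only for illustration). Fundamental traders push the price toward an estimate of value; momentum traders chase the last price move; once the momentum share is large enough, small amounts of noise are amplified into large, persistent swings:

```python
import random

def simulate(momentum_share, steps=2000, seed=0):
    """Toy price process with two trader types (all weights hypothetical).

    Fundamental traders push the price toward a fixed value of 100
    (information augmenting); momentum traders chase the last price
    move (information extracting)."""
    rng = random.Random(seed)
    price, prev = 100.0, 100.0
    total = 0.0
    for _ in range(steps):
        noise = rng.gauss(0, 0.2)
        fundamental = 0.05 * (100.0 - price)   # pull toward value
        momentum = price - prev                # chase the last move
        demand = ((1 - momentum_share) * fundamental
                  + momentum_share * momentum + noise)
        prev, price = price, price + demand
        total += abs(price - 100.0)
    return total / steps   # mean absolute deviation from fundamental value

low = simulate(momentum_share=0.2)   # fundamentalists dominate
high = simulate(momentum_share=0.8)  # trend-chasers dominate
print(low, high)
```

With fundamentalists dominant the price hugs its fundamental value; raise the momentum share past a threshold and the very same noise process generates the kind of self-reinforcing moves described above.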

The SEC-CFTC preliminary report on the crash contains a wealth of information and some interesting clues about the kinds of strategies that may have been implicated. First, "approximately 200 securities traded, at their lows, almost 100% below their previous day’s values." These trades "occurred at extraordinarily low prices – five cents or less – which indicates an execution against a “stub” quote of a market maker." The overwhelming majority of these trades, it turns out, were short sales:
During the period of peak market volatility, 2:45 p.m. to 2:55 p.m., the broken trades executed at five cents or less were primarily short sales. Short sales account for approximately 70.1% of executions against “stub” quotes between 2:45 p.m. and 2:50 p.m., and approximately 90.1% of executions against “stub” quotes between 2:50 p.m. and 2:55 p.m.
In other words, the trades at the most extreme prices were not generated by retail investors whose stop loss orders were converted to market sell orders as prices fell: they were generated by short selling in a falling market.
Also interesting is the case of securities that displayed "aberrant behavior" on the upside:
Sotheby’s (BID) is actively traded and has a narrow bid-ask spread from 2:44 p.m. through 2:49 p.m. after which volume is low but bid and ask quotes remain stable. However, after about 2:57 p.m. volume spikes dramatically and trades are executed at a high (presumably stub) quote of approximately $100,000... BID trades through the national best offer multiple times between 2:57:05 p.m. and 2:57:12 p.m. This includes trades at approximately $100,000 which is presumably a top-end stub quote.
A single round lot of shares in Sotheby's would have cost ten million dollars at this price. Given that the orders were executed, it seems inconceivable to me that they came from retail investors.
What kinds of strategies could have been responsible for these trades? In January of this year the SEC published a Concept Release on Equity Market Structure that explicitly discussed the destabilizing consequences of certain strategies used by proprietary trading firms. Of special concern were strategies based on order anticipation and momentum ignition:
One example of an order anticipation strategy is when a proprietary firm seeks to ascertain the existence of one or more large buyers (sellers) in the market and to buy (sell) ahead of the large orders with the goal of capturing a price movement in the direction of the large trading interest... The type of order anticipation strategy referred to in this release involves any means to ascertain the existence of a large buyer (seller) that does not involve violation of a duty, misappropriation of information, or other misconduct. Examples include the employment of sophisticated pattern recognition software to ascertain from publicly available information the existence of a large buyer (seller), or the sophisticated use of orders to “ping” different market centers in an attempt to locate and trade in front of large buyers and sellers... An important issue for purposes of this release is whether the current market structure and the availability of sophisticated, high-speed trading tools enable proprietary firms to engage in order anticipation strategies on a greater scale than in the past.
A very different type of potentially destabilizing strategy seeks to engineer and exploit momentum in prices:
Another type of directional strategy that may raise concerns in the current market structure is momentum ignition. With this strategy, the proprietary firm may initiate a series of orders and trades... in an attempt to ignite a rapid price move either up or down. For example, the trader may intend that the rapid submission and cancellation of many orders, along with the execution of some trades, will “spoof” the algorithms of other traders into action and cause them to buy (sell) more aggressively. Or the trader may intend to trigger standing stop loss orders that would help cause a price decline. By establishing a position early, the proprietary firm will attempt to profit by subsequently liquidating the position if successful in igniting a price movement.
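The stop-loss channel mentioned in this passage is mechanical enough to sketch in a few lines. In the toy order book below (all prices, sizes, and stop levels are invented), a burst of market selling walks down the book; each print at or below a resting stop converts that stop into a new market sell, and the cascade runs all the way to a five-cent stub quote:

```python
# Hypothetical book: one-share bids every 10 cents from $39.90 down to
# $35.00, then nothing until a market maker's 5-cent "stub" bid.
bids = [round(39.90 - 0.10 * i, 2) for i in range(50)] + [0.05]
# Hypothetical stop-loss levels held by other traders, one share each.
stops = sorted([39.50, 39.00, 38.50, 38.00, 37.00, 36.00, 35.50], reverse=True)

def cascade(initial_sells):
    """Fill market sells against the book; each print at or below a
    resting stop converts that stop into one additional market sell."""
    pending, fills = initial_sells, []
    while pending > 0 and bids:
        px = bids.pop(0)           # hit the best remaining bid
        fills.append(px)
        pending -= 1
        while stops and px <= stops[0]:
            stops.pop(0)           # stop triggered -> new market sell
            pending += 1
    return fills

fills = cascade(initial_sells=44)
print(min(fills))
```

In this contrived book, 44 initiating sells become 51 executions, and the last of them prints at five cents: the same signature the report found among the broken trades.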
Order anticipation and momentum ignition are just extreme cases of a broad range of directional strategies that are either information extracting or seek to trigger information extracting algorithms. If too great a share of total market activity is driven by such strategies, major departures of prices from fundamentals will arise sooner or later. It is important, therefore, to allow such strategies to take heavy losses when they do eventually misfire. Macroeconomic Resilience has an excellent analytical post on the crash that makes a similar point:
Policy measures that aim to stabilise the system by countering the impact of positive feedback processes select against and weed out negative feedback processes – Stabilisation reduces system resilience. The decision to cancel errant trades is an example of such a measure. It is critical that all market participants who implement positive feedback strategies... suffer losses and those who step in to buy in times of chaos i.e. the negative-feedback providers are not denied of the profits that would accrue to them if markets recover. This is the real damage done by policy paradigms such as the “Greenspan/Bernanke Put” that implicitly protect asset markets. They leave us with a fragile market prone to collapse even with a “normal storm”, unless there is further intervention as we saw from the EU/ECB. Of course, every subsequent intervention that aims to stabilise the system only further reduces its resilience.
By canceling trades, the exchanges reversed a redistribution of wealth that would have altered the composition of strategies in the trading population. I'm sure that many retail investors whose stop loss orders were executed at prices far below anticipated levels were relieved. But the preponderance of short sales among trades at the lowest prices and the fact that aberrant price behavior also occurred on the upside suggests to me that the largest beneficiaries of the cancellation were proprietary trading firms making directional bets based on rapid responses to incoming market data. The widespread cancellation of trades following the crash served as an implicit subsidy to such strategies and, from the perspective of market stability, is likely to prove counter-productive.

Saturday, May 15, 2010

James Tobin's Hirsch Lecture

James Tobin's Fred Hirsch Memorial Lecture "On the Efficiency of the Financial System" was originally published in a 1984 issue of the Lloyds Bank Review, and republished three years later in a collection of his writings. Willem Buiter discussed the essay at some length about a year ago in a provocative post dealing with the regulation of derivatives. Both the original essay and Buiter's discussion of it remain well worth reading today as guides to the broad principles that ought to underlie financial market reform.
In his essay, Tobin considers four distinct conceptions of financial market efficiency:
Efficiency has several different meanings: first, a market is 'efficient' if it is on average impossible to gain from trading on the basis of generally available public information... Efficiency in this meaning I call information arbitrage efficiency.

A second and deeper meaning is the following: a market in a financial asset is efficient if its valuations reflect accurately the future payments to which the asset gives title... I call this concept fundamental valuation efficiency.

Third, a system of financial markets is efficient if it enables economic agents to insure for themselves deliveries of goods and services in all future contingencies, either by surrendering some of their own resources now or by contracting to deliver them in specified future contingencies... I call efficiency in this Arrow-Debreu sense full insurance efficiency.

The fourth concept relates more concretely to the economic functions of the financial industries... These include: the pooling of risks and their allocation to those most able and willing to bear them... the facilitation of transactions by providing mechanisms and networks of payments; the mobilization of saving for investments in physical and human capital... and the allocation of saving to their more socially productive uses. I call efficiency in these respects functional efficiency.
The first two criteria correspond, respectively, to weak and strong versions of the efficient markets hypothesis. Tobin argues that the weak form is generally satisfied on the grounds that "actively managed portfolios, allowance made for transactions costs, do not beat the market." He notes, however, that efficiency in the second (strong form) sense is "by no means implied" by this, and that "market speculation multiplies several fold the underlying fundamental variability of dividends and earnings."
My own view of the matter (expressed in an earlier post) is that such a neat separation of these two concepts of efficiency is too limiting: endogenous variations in the composition of trading strategies result in alternating periods of high and low volatility. Nevertheless, as an approximate view of market efficiency over long horizons, I feel that Tobin's characterization is about right. 
Full insurance efficiency requires complete markets in state contingent claims. This is a theoretical ideal that is impossible to attain in practice for a variety of reasons: the real resource costs of contracting, the thinness of potential markets for exotic contingent claims, and the difficulty of dispute resolution. Nevertheless, Tobin argues for the introduction of new assets that insure against major contingencies such as inflation, and securities of this kind have indeed been introduced since his essay was published.
Finally, Tobin turns to functional efficiency, and this is where he expresses greatest concern:
What is clear is that very little of the work done by the securities industry, as gauged by the volume of market activity, has to do with the financing of real investment in any very direct way. Likewise, those markets have very little to do, in aggregate, with the translation of the saving of households into corporate business investment. That process occurs mainly outside the market, as retention of earnings gradually and irregularly augments the value of equity shares...

I confess to an uneasy Physiocratic suspicion, perhaps unbecoming in an academic, that we are throwing more and more of our resources, including the cream of our youth, into financial activities remote from the production of goods and services, into activities that generate high private rewards disproportionate to their social productivity. I suspect that the immense power of the computer is being harnessed to this 'paper economy', not to do the same transactions more economically but to balloon the quantity and variety of financial exchanges. For this reason perhaps, high technology has so far yielded disappointing results in economy-wide productivity. I fear that, as Keynes saw even in his day, the advantages of the liquidity and negotiability of financial instruments come at the cost of facilitating nth-degree speculation which is short sighted and inefficient...
Arrow and Debreu did not have continuous sequential trading in mind; when that occurs, as Keynes noted, it attracts short-horizon speculators and middlemen, and distorts or dilutes the influence of fundamentals on prices. I suspect that Keynes was right to suggest that we should provide greater deterrents to transient holdings of financial instruments and larger rewards for long-term investors.
Recall that these passages were published in 1984; the financial sector has since been transformed beyond recognition. Buiter argues that Tobin's concerns about functional efficiency are more valid today than they have ever been, and is particularly concerned with derivatives contracts involving directional bets by both parties to the transaction:
[Since] derivatives trading is not costless, scarce skilled resources are diverted to what are not even games of pure redistribution.  Instead these resources are diverted towards games involving the redistribution of a social pie that shrinks as more players enter the game.

The inefficient redistribution of risk that can be the by-product of the creation of new derivatives markets and their inadequate regulation can also affect the real economy through an increase in the scope and severity of defaults.  Defaults, insolvency and bankruptcy are key components of a market economy based on property rights.  They involve more than a redistribution of property rights (both income and control rights).  They also destroy real resources.  The zero-sum redistribution characteristic of derivatives contracts in a frictionless world becomes a negative-sum redistribution when default and insolvency is involved.  There is a fundamental asymmetry in the market game between winners and losers: there is no such thing as super-solvency for winners.  But there is such a thing as insolvency for losers, if the losses are large enough.
The easiest solution to this churning problem would be to restrict derivatives trading to insurance, pure and simple.  The party purchasing the insurance should be able to demonstrate an insurable interest.  [Credit Default Swaps] could only be bought and sold in combination with a matching amount of the underlying security. 
The debate over naked credit default swaps is contentious and continues to rage. While market liquidity and stability have been central themes in this debate to date, it might be useful also to view the issue through the lens of functional efficiency. More generally, we ought to be asking whether Tobin was right to be concerned about the size of the financial sector in his day, and whether its dramatic growth over the quarter century since then has been functional or dysfunctional on balance.

Monday, May 10, 2010

Reflections on the Flash Crash

Index Universe observes that much of the unusual trading activity last Thursday involved exchange traded funds and notes:
Nasdaq has released a list of 281 securities that saw unusual activity during yesterday’s “flash crash” on the market... In all, 193 of the 281 securities (68.7 percent) on the NASDAQ list were exchange-traded funds or exchange-traded notes... The New York Stock Exchange has published a similar list, detailing 173 different securities whose trades will be cancelled. In all, 111 of those securities (64.2 percent) were ETFs or ETNs...
It was not immediately clear why ETFs dominate the lists.
Izabella Kaminska follows up on FT Alphaville:
ETF and ETN trading is closely related to high-frequency trading... Constant market-making and arbitrage opportunities are provided to authorised participants (often high frequency trading firms) by the ETF model’s dependence on converging to the net asset value on a daily basis. A typical fund has about five authorised participants.

The so-called creation and redemption mechanism allows authorised participants to lock-in profits when the shares of ETFs over-price or under-price the NAV, since only they are allowed to redeem or create shares at the official NAV price of the funds.
Dynamic hedging is needed to protect the arbitrage until the moment the creation or redemption process can take place... A significant change in any constituent stock in the interim can hence fuel frantic fine-tuning of positions ahead of NAV publication time.
Can the algorithmic strategies used by authorized participants making markets in exchange traded funds help account for the crash? Not really. Index arbitrage of this kind simply brings the prices of exchange traded funds in line with the prices of their constituent securities, and is non-directional. This activity could explain a spike in volume as a result of sharp movements in prices, but this is a symptom rather than a cause of the crash. Something else caused the prices of the funds and/or the constituent securities to drop, and index arbitrage activity picked up as a result. What was this cause?
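For concreteness, here is a minimal sketch of the creation/redemption arbitrage Kaminska describes (the two-stock basket, unit size, and cost figure are all made up). The key property is that the rule trades the gap between the ETF price and NAV, in whichever direction it opens, rather than betting on the level of prices:

```python
# Hypothetical two-stock ETF: each fund share holds one share of A and one of B.
basket = {"A": 30.00, "B": 70.00}    # constituent prices
nav = sum(basket.values())           # net asset value per ETF share

def arbitrage(etf_price, nav, unit=50_000, cost_per_share=0.01):
    """Non-directional create/redeem arbitrage by an authorised participant.

    If the ETF trades rich, sell ETF shares and create new ones at NAV;
    if it trades cheap, buy ETF shares and redeem them for the basket.
    Returns (action, gross profit on one creation unit). All figures
    are illustrative."""
    gap = etf_price - nav
    edge = (abs(gap) - cost_per_share) * unit
    if edge <= 0:
        return ("no-trade", 0.0)
    return ("create" if gap > 0 else "redeem", edge)

print(arbitrage(100.12, nav))   # ETF rich: sell ETF, create at NAV
print(arbitrage(99.95, nav))    # ETF cheap: buy ETF, redeem for basket
```

Note that the rule is symmetric: it sells richness and buys cheapness, which is why activity of this kind tracks mispricing rather than generating directional pressure of its own.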

Some have pointed to the fact that liquidity vanished from the market during the crash, or that stop loss orders were triggered as prices fell. While these effects certainly accelerated and amplified the decline, there must have been an independent source of massive selling pressure that ran through the available bids, triggered stop orders, and caused electronic market makers to shut down. Again, where did this overwhelming selling pressure come from?

The best explanation that I have seen is contained in a message by an anonymous analyst that Yves Smith posted earlier today. The hypothesis is that the initial trigger came from algorithms implementing volume-sensitive technical strategies:
Volume was gigantic yesterday before we really went into freefall. As of 2 p.m., some 40 minutes before Armageddon, we were tracking for a massive 15.6 billion share day (we ended up doing 19.3 billion – the second largest day ever after the October 10th, 2008 whitewash). Half an hour later, at 2:30 p.m. – still ten minutes before the bottom fell out – volume had surged and we were tracking for a 17.2 billion share day. The period between 2 p.m. and 2:40 p.m. saw immense selling pressure in both the cash market and the futures market, and that occurred with the E-minis still north of 1120...

In other words, it was not a sudden, random surge of volume from a fat finger that overwhelmed the market. It was a steady onslaught of selling that pressured the market lower in order to catch up with the carnage taking place in the credit markets and the currency markets...
So what happened here? Three things:
  1. Sellers probably had orders in algorithms – percentage-of-volume strategies most likely, maybe VWAP – and could not cancel, could not “get an out.” These sellers could be really “quanty” types, or high freqs, or they could be vanilla buy side accounts. It really doesn’t matter. The issue here is that the trader did not anticipate such a sharp price move and did not put a limit on the order...
  2. Sell stop orders were triggered which forced market sell orders into an already well offered market. 
  3. While the market was well offered, it was not well bid. Liquidity disappeared... Bids disappeared, spreads blew out, and no one was trading except a handful of orphaned algo orders, stop sell orders, and maybe a few opportunists who had loaded up the order book with low ball bids (“just in case”). High frequency accounts and electronic market makers were, by all accounts, nowhere to be found.
It boils down to this: this episode exposed structural flaws in how a trade is implemented (think orphaned algo orders) and it exposed the danger of leaving market making up to a network of entities with no mandate to ensure the smooth and orderly functioning of the market (think of the electronic market makers and high freqs who can pull bids instantaneously as opposed to a specialist on the floor who has a clearly defined mandate to provide liquidity).
This rings true to me. Accounting for the crash requires us to go beyond the mechanics of the trading process (automation, scale, speed) and to examine the kinds of strategies that were being implemented by the algorithms. A market dominated by technical analysis is always going to be vulnerable to this kind of instability. The fact that the prices of some securities and funds crashed to absurd levels that were clearly out of line with fundamentals made this obvious and resulted in a quick recovery. But what if the trading strategies had given rise to upward rather than downward instability? It would have been more difficult to establish conclusively that assets were overpriced, and accordingly more risky to enter positions to bring them back in line with fundamentals. This, presumably, is how asset price bubbles get started. 

Friday, May 07, 2010

Algorithmic Trading and Price Volatility

Yesterday's dramatic decline and rapid recovery in stock prices may have been triggered by an erroneous trade, but could not have occurred on this scale if it were not for the increasingly widespread use of high frequency algorithmic trading.

Algorithmic trading can be based on a variety of different strategies but they all share one common feature: by using market data as an input, they seek to exploit failures of (weak form) market efficiency. Such strategies are necessarily technical and, for reasons discussed in an earlier post, are most effective when they are rare. But they have become increasingly common recently, and now account for three-fifths of total volume in US equities:
Algorithms have become a common feature of trading, not only in shares but in derivatives such as options and futures. Essentially software programs, they decide when, how and where to trade certain financial instruments without the need for any human intervention... markets have come to be dominated by “high-frequency traders” who rely on the perfect marriage of technology and speed. They use algorithms to trade at ultra-fast speeds, seeking to profit from fleeting opportunities presented by minute price changes in markets. According to Tabb Group, a consultancy, algorithmic and high-frequency trading accounts for more than 60 per cent of activity in US equity markets.
This is a recipe for disaster:
[In] a market dominated by technical analysis, changes in prices and other market data will be less reliable indicators of changes in information regarding underlying asset values. The possibility then arises of market instability, as individuals respond to price changes as if they were informative when in fact they arise from mutually amplifying responses to noise. 
Under such conditions, algorithmic strategies can suffer heavy losses. They do so not because of "computer error" but because of the faithful execution of programs that are responding mechanically to market data. The decision by Nasdaq to "cancel trades of 286 securities that fell or rose more than 60 percent from their prices at 2:40 p.m." might therefore be a mistake: it protects such strategies from their own flaws and allows them to proliferate further. Canceling trades can be justified in response to genuine human or machine error, but not in response to the implementation of flawed algorithms.

I don't know how the losses and gains from yesterday's turmoil were distributed among algorithmic traders and other market participants, but it is conceivable that part of the bounce back was driven by individuals who were alert to fundamental values and recognized a buying opportunity. The following clip of Jim Cramer urging viewers to buy Procter & Gamble just moments before a sharp recovery in its price is suggestive:


I would be very interested to know whether the transfer of wealth that took place yesterday as prices plunged and then recovered resulted in major losses or gains for the funds using algorithmic trading strategies. I expect that those engaged in cross-market or spot-futures arbitrage would have profited handsomely, at the expense of those relying on some form of momentum-based strategy. If so, then the cancellation of trades will simply set the stage for a recurrence of these events sooner rather than later.
---
I thank Charles Davi for alerting me to the Financial Times piece on algorithmic trading, and Jens Kayenburg (a student in my Financial Economics course this semester) for sending me a link to the Cramer clip.

---

Update (5/7). David Merkel is also opposed to the cancellation of trades:
[My] sense of the day is that some algorithmic trading programs went wild, and made trades that no sane human would... NASDAQ should not have canceled the trades. It ruins the incentives of market actors during a panic. Set your programs so that they don’t do stupid things. Don’t give them the idea that if they do something really stupid, there will be a do-over. In the absence of fraud, trades should not be canceled.
And here's Yves Smith's take on the events of yesterday:
The idea that a fat-fingered trade out of Citi was the cause has been denied by the bank. The downdraft did have the look of a monster sell order, but the more credible explanation is that it was either a sudden rise in yen or the euro hitting the magic number 1.225 to the dollar that set off algorithmic traders. And enough of them look to similar indicators and technical levels that it isn’t hard to see this as the son of program trading, mindless computer-driven selling when the right triggers are hit.
But another side effect of today’s equity market gyrations is further distrust in the markets, particularly by retail buyers. I am told that various retail trading platforms were simply not operating during the acute downdraft and rebound. I couldn’t access hoi polloi Bloomberg news or data pages then either. The idea that the pros could trade (even if a lot of those trades are cancelled) while the little guy was shut out reinforces the perception that the markets are treacherous and the odds are stacked in favor of the big players (even though we all understand that, it isn’t supposed to be this blatant).
---

Update (5/8). Here's an important point about liquidity by Paul Kedrosky:
Largely unnoticed... the provision of liquidity has changed immensely in recent years. It is more fickle, less predictable, and more prone to disappearing suddenly, like snow sublimating straight to vapor during a spring heat wave. Why? Because traditional providers of liquidity, market-makers and other participants, are not standing so ready to make the other side of the market. There are fewer traders prepared to make a market for the sake of market health. This is partly because they can, but mostly because of what has happened with high-frequency trading, algorithms, and the like, which increasingly jump into the trading queue in front of and around orders, creating some liquidity, but also peeling pennies for themselves, frustrating market participants and heretofore liquidity providers, but in the course of normal business generally accepted as a price that gets paid to the market's battle bots.
But all of this changes market microstructure in insidiously destabilizing ways. For the first time we have large providers of this shadow liquidity, algorithms and high-frequency sorts, that individually account for large percentages of daily trading activity, and, at the same time, that can be turned off with a switch, or at an algorithmic whim. As a result, in market crises, when liquidity was always hardest to find, it now doesn't just become hard to find, it disappears altogether, like water rushing out of sight via a trapdoor to hell. Old-style market-makers are standing aside as panicky orders pour in, and they look straight at shadow liquidity providers and say, "No thanks. You battle bots take it". And, they don't.
David Murphy, who is always worth reading, thinks that it's time to put some sand in the algorithmic wheels:
It is time to... throw some sand in the cogs of the algos. If every trade executed in the same, say, five second interval got the same price, instability would be greatly reduced, yet ordinary investors would not notice the effect. And if every trade were executed on the NYSE, or at least using the same market conventions, then officials could actually stop everything when things get out of hand.
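Murphy's five-second proposal amounts to a uniform-price batch: collect the trades executed in each interval and give them all a single clearing price. Here is a minimal sketch (my own illustration; the choice of the interval's volume-weighted average as the common price is an assumption on my part, not part of his proposal):

```python
from collections import defaultdict

def batch_prices(trades, interval=5.0):
    """Group trades into fixed time intervals and assign every trade in
    an interval the same price: here, the interval's volume-weighted
    average. Each trade is a (timestamp_seconds, price, size) tuple."""
    buckets = defaultdict(list)
    for trade in trades:
        buckets[int(trade[0] // interval)].append(trade)
    out = []
    for key in sorted(buckets):
        batch = buckets[key]
        volume = sum(size for _, _, size in batch)
        vwap = sum(price * size for _, price, size in batch) / volume
        out.extend((ts, vwap, size) for ts, _, size in batch)
    return out

# Two trades in the first five seconds get one common price;
# the trade at t=6.7s falls in the next interval.
trades = [(0.3, 41.0, 100), (2.1, 39.5, 200), (6.7, 40.2, 50)]
print(batch_prices(trades))
```

A scheme like this blunts the speed advantage of reacting within an interval, since any trade inside the window executes at the same price, while leaving investors with horizons longer than a few seconds essentially unaffected.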
I spoke with Zack Goldfarb of the Washington Post yesterday for an interesting article that ran today. The point I was trying to make is this: the problem lies not so much with the method of trading (algorithmic or otherwise) but with the underlying strategies that are being implemented. Algorithmic trading allows technical strategies to profit and proliferate, and markets dominated by technical analysis will tend to be unstable. If destabilizing strategies are prevented from taking losses when they misfire, the result will be more frequent and significant departures of prices from fundamentals. Hence my concern over the cancellation of trades.

---

Update (5/9). CBS Evening News had a report on this yesterday, including a couple of clips from a conversation I had with Tony Guida earlier in the day. As with most media reports on the topic, the focus is on automation, scale and speed rather than the kinds of trading strategies that these methods allow speculators to implement. We did cover this ground in the interview but (understandably, I suppose) it didn't make it into the broadcast. 

Sunday, May 02, 2010

Reputational Capital and Incentives in Organizations

The following passage, jarring in light of recent revelations, appears in the opening pages of Akerlof and Kranton's recently published book on Identity Economics:
On Wall Street, reputedly, the name of the game is making money. Charles Ellis' history of Goldman Sachs shows that, paradoxically, the partnership's success comes from subordinating that goal, at least in the short run. Rather, the company's financial success has stemmed from an ideal remarkably like that of the U.S. Air Force: "Service before Self." Employees believe, above all, that they are to serve the firm. As a managing director recently told us: "At Goldman we run to the fire." Goldman Sachs' Business Principles, fourteen of them, were composed in the 1970s by the firm's co-chairman, John Whitehead, who feared that the firm might lose its core values as it grew. The first Principle is "Our clients' interests always come first. Our experience shows that if we serve our clients well, our own success will follow." The principles also mandate dedication to teamwork, innovation, and strict adherence to rules and standards. The final principle is "Integrity and honesty are at the heart of our business. We expect our people to maintain high ethical standards in everything they do, both in their work for the firm and in their personal lives."
If the preservation of its reputation for serving the interests of its clients was a major organizational goal for Goldman, then something clearly went terribly wrong. Consider, for example, Chris Nicholson's report on the manner in which the bank managed to shed its holdings of mortgage-backed securities shortly before they collapsed in value, allegedly serving itself "at the expense of its clients." Nicholson reproduces the following email from an employee at the European sales desk to the head of mortgage trading:
Real bad feeling across European sales about some of the trades we did with clients. The damage this has done to our franchise is very significant. Aggregate loss of our clients on just these 5 trades alone is 1bln+. In addition team feels that recognition (sales credits and otherwise) they received for getting this business done was not consistent at all with money it ended making/saving the firm.
Felix Salmon considers this email to be "particularly damning" for the following reasons:
Illiquid things like CDOs are sold as much as they’re bought, and Goldman’s highly-paid sales team was aggressively going out and selling instruments which were at one point on Goldman’s balance sheet and which wound up cratering in value.
The effects were twofold: firstly, the Goldman clients who got stuck with this nuclear waste when the music stopped were understandably none too impressed with Goldman. And secondly, Goldman managed to stick the losses on those instruments to its clients, rather than taking those losses itself, and as a result its profits were billions of dollars higher than they would otherwise have been.
Was the hit to Goldman’s franchise value a hit worth taking, given the billions of dollars it saved? Probably yes, until the SEC and Carl Levin came along.
But the possibility that the SEC and Mr. Levin would eventually come along was always there. This is a form of tail risk that is not unlike that taken by the folks at the AIG financial products division when they sold vast amounts of credit protection in the mistaken belief that they would never be faced with significant collateral calls. Raghuram Rajan, in a remarkably prescient 2005 paper, described this process as follows:
Consider the incentive to take on risk that is not in the [compensation] benchmark and is not observable to investors. A number of insurance companies and pension funds have entered the credit derivatives market to sell guarantees against a company defaulting. Essentially, these investment managers collect premia in ordinary times from people buying the guarantees. With very small probability, however, the company will default, forcing the guarantor to pay out a large amount. The investment managers are, thus, selling disaster insurance or, equivalently, taking on “peso” or tail risks, which produce a positive return most of the time as compensation for a rare very negative return. These strategies have the appearance of producing very high alphas (high returns for low risk), so managers have an incentive to load up on them. Every once in a while, however, they will blow up. Since true performance can be estimated only over a long period, far exceeding the horizon set by the average manager’s incentives, managers will take these risks if they can.
As in the case of tail risks arising from the sale of credit protection, damage to the firm's franchise value does not appear in standard compensation benchmarks. The problem in Goldman's case was not that such damage was "a hit worth taking" but rather that the incentives faced by its employees did not adequately reflect the value of the firm's reputation in the first place. To the extent that employee behavior is responsive to such incentives, the sacrifice of reputation for immediate profit will be made regardless of whether, in the broader scheme of things, the damage to franchise value exceeds the short-term gains.
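Rajan's "peso risk" dynamic is easy to see in a toy Monte Carlo (my own illustration; the premium, default probability, and loss-given-default are made-up numbers). Selling protection yields a steady premium in most years and a rare large payout, so short track records look like alpha even when the long-run expectation is negative:

```python
import random

def protection_seller(years, premium=0.02, p_default=0.01, loss=3.0, seed=7):
    """Cumulative P&L from selling default protection: a steady premium
    in ordinary years, a rare large payout when the default occurs.
    With these illustrative numbers the expected annual P&L is
    0.02 - 0.01 * 3.0 = -0.01, i.e. negative in the long run."""
    random.seed(seed)
    pnl, worst = 0.0, 0.0
    for _ in range(years):
        pnl += premium                    # collect the premium
        if random.random() < p_default:   # rare "peso" event
            pnl -= loss
        worst = min(worst, pnl)
    return pnl, worst

for horizon in (5, 50, 500):
    pnl, worst = protection_seller(horizon)
    print(f"{horizon:>3} years: cumulative P&L {pnl:+.2f}, worst point {worst:+.2f}")
```

Over a five-year horizon the strategy almost always shows nothing but gains, which is precisely why it appears to deliver high returns for low risk; only over horizons far exceeding the average manager's incentive window does the negative expectation reveal itself.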
How, then, might a firm accomplish the subordination of short-term goals to long-term objectives in practice? There are two possibilities: one could hire individuals who are predisposed to behave in a principled manner even in the face of incentives not to do so, or one could design compensation schemes that adequately reward actions that preserve or enhance reputation. Economists, being fervent believers in the power of incentives, tend to favor the latter approach. But in this context, such schemes face two problems. First, the contribution of any given transaction to the reputation of the firm is generally much more difficult to ascertain and quantify than its contribution to the firm's balance sheet. This makes it difficult to assign rewards appropriately. Second, in order to serve as credible commitments to clients and customers, compensation schemes must be easily observable and not subject to renegotiation after the fact. This is seldom the case.
The alternative is to hire individuals who are predisposed to behave in a manner that meets organizational objectives: to place a premium not only on ability but also on character. But would this not create incentives for potential employees to simply misrepresent their values? As Groucho Marx famously said: "The secret of life is honesty and fair dealing... if you can fake that, you've got it made." 
Fortunately, the consistent misrepresentation of personality traits is often infeasible or prohibitively costly. There is an interesting line of research in economics, dating back to Schelling and continuing through Hirshleifer and Frank, that explores the commitment value of traits that are costly to fake. Hirshleifer went so far as to argue that the "absence of self-interest can pay off even measured in terms of material selfish gain, and... the loss of control that makes calculated behavior impossible can be more profitable than calculated optimization... we ought not to prejudge the question as to whether the observed limitations upon the human ability to pursue self-interested rationality are really no more than imperfections -- might not these seeming disabilities actually be functional?" 
One could take this a step further: not only might limitations on the unbridled pursuit of material self-interest be functional for individuals, they may also be functional for the organizations to which those individuals belong. And in the long run, firms that manage to identify and promote such individuals will prosper at the expense of those that are unable or unwilling to do so.

---

Update (5/2). At the end of the post I linked to above, Felix Salmon asks:
Let’s say you work at an investment bank and you’re in charge of a book which includes a $1 billion barrel of toxic nuclear waste. You know that barrel is going to zero sooner or later, and you manage to sell it to some European dupes just in time, for full face value, saving your bank from $1 billion in losses. How much of a bonus, if any, should you get on that deal, and where should the money come from? And should you feel bad about avoiding the losses and sticking them to your clients instead?
In a comment on that post, The Epicurean Dealmaker responds:
The answer depends. If you are a proprietary trading shop your job is to make money (and avoid losses) at all costs. In fact, you have a fiduciary duty to somebody (eg, limited partners or shareholders) to do so. You sell the crap and never look back. Caveat emptor rules.

If you are a traditional investment bank, you find out which morons allowed $1 billion of concentrated toxic risk to accumulate on your balance sheet and you fire their incompetent asses for cause (ie, no bonuses, no golden parachutes). Then you convene the Executive Committee to decide whether it is worth permanently damaging your franchise as a supposedly neutral market maker by offloading the waste before it blows up onto your clients, or whether you should eat the loss as punishment for failure...

This is why trying to run a large proprietary trading operation inside a traditional market-making investment bank introduces a fundamental, highly dangerous conflict of interest. At a small scale, this kind of stuff happens all the time, and should, in a traditional investment bank. However, it should never reach the scale that threatens the short- or long-term future of the bank.
This gets it about right in my opinion. See also the many excellent comments along these lines on Mark Thoma's page. Mark links to a related post by Richard Green, who in turn mentions pieces by James Surowiecki and Yves Smith that are both worth reading. Here is Yves' bottom line:
Legal issues aside, it isn’t merely the great unwashed public that is taking an increasingly dim view of Goldman. What is striking is the change in sentiment among professionals.

Recall Goldman’s reputation: that of being the best managed firm on the Street, and its boasting about its risk management as key to its superior profits... its once-vaunted risk management, which led observers to believe that Goldman was doing a better job of managing exposures, now increasingly looks like the firm was simply more systematic and aggressive than its peers in not just shifting risks onto customers but engaging in further profit-maximizing strategies that look downright predatory...

Goldman is increasingly beleaguered. Its lobbyists are now pariahs. More private lawsuits are coming to the fore. There are rumors it is in settlement talks with the SEC. But the once-storied firm apparently turned its well-oiled machine to ruthless profit-seeking. It is an open question how much damage the firm will sustain from the well-deserved backlash, and whether it can change its conduct.
Indeed.

---

Update (5/4). As usual, there are a number of excellent comments on Mark Thoma's page. Here's Roger Chittum:
Goldman and the other IBs are in a very different business now than they were in the 1970s. Formerly, their business was representing real-economy clients for generous fees in facilitating mergers, acquisitions, spin-offs, IPOs, bridge loans, restructurings, and other balance sheet transactions, in which they sometimes took participations. Their reputations were quite important to relationship building with the few firms that could afford their services and with the investors to whom they returned again and again. In fact, they used to regard proprietary trading as a business that was beneath them. Thus, Salomon Brothers, for example, which early on was very active in trading, especially in government securities, was not in the top tier reputationally.

In recent years, the businesses and cultures of the IBs have been transformed. Something like 75% of their profits now come from proprietary trading, and the top executives have come up through trading instead of the white shoe service businesses. They are essentially now hedge funds with advisory-group appendages. And they are dangerous and unapologetic predators.

It may be possible for commercial banks and investment banks as formerly constituted to have common ethical cultures that are concerned with the longer term and institutional reputations, but I don't see how that can happen in hedge funds and proprietary trading units. Each of the big IBs has a story of internal culture struggles as their businesses changed, and notice the recent reports of culture clash arising out of the dysfunctional forced marriage of BofA and Merrill.

Glass-Steagall put the wall between commercial banking and investment banking. Perhaps commercial banking and fee-based investment banking in the 1970s style could thrive with a common culture, but proprietary trading needs to be hived off. There are irreconcilable differences there.
To which mrrunangun adds:
Another difference is that in Whitehead's time, GS was a partnership and the great bulk of the partners' wealth was their interest in the partnership. Any failure in the partnership had the potential of creating losses for all of the partners. The long-term success of the partnership was crucial to the long-term security of the partners.
In the public company that GS became, none of that discipline remained to restrain the people running the company from exploiting their position in order to maximize the short-term profits on which their claims to enormous annual pay packages were supposedly based. In contemporary practice, boards of directors are generally much more sympathetic to management than to shareholders. Boards put shareholders first only in small companies where the board is made up of the owners, who have a significant stake in the success of the enterprise, and perhaps their attorney and/or auditor. In medium to large company practice, the CEO generally recruits the board, probably always if the CEO serves as board chairman as well. Board members who are not inside directors rarely have a critical portion of their own wealth in the companies on whose boards they serve.
Then there's Bruce Wilder:
On the issue of "aligning incentives" for individuals, I can tell you what the right scheme looks like: it looks like a fixed salary, with modest "options" in the form of (mostly honorary in magnitude) raises and bonuses.
At the very top of a business enterprise, it makes sense to make top executives buy a substantial interest in the firm -- to essentially become "partners". The top executives don't have to own a large part of the firm, but their ownership interest has to be a large part of their personal wealth. And, it should not be a gift -- no free options; make them buy it, and make it hard to sell: a large part of their ownership interest should be tied up in trusts.
The top executives of large corporations really shouldn't be paid in "performance" contingent options. Shocking I know, but executive leadership has little to do with the piece-work of a sweat-shop or a cucumber field.
And, they shouldn't be paid in magnitudes that could make them independently wealthy in a single calendar year. 
Magnitude matters, and it can easily overwhelm any contingent "alignment" of incentives... If you promise to pay someone a vast amount, realizable in the short-term, contingent on some abstract score-keeping scheme, you are incentivizing (horrible word) them to corrupt the score-keeping. You are asking them to lie.
Rajiv Sethi comes around to this issue, via personal character and the difficulty most of the non-psychopaths among us have, in lying.
He might also consider that Goldman is a vast, hierarchical organization, embedded in a market-exchange network, all of which -- hierarchy and network -- consists entirely of generating and reporting numbers, as part of a complex scheme of compound control.
This system cannot function, if the participants have too much of an incentive to corrupt the reported numbers. If people at the top of the hierarchy, or the clients, or the counter-parties, or on-lookers in the same and related markets, get the "wrong" numbers, things go terribly wrong.
 Bruce also points to the following from Mike Konzal of Rortybomb:
To keep this in economic terms, this crisis has shown a wave of agency problems embedded inside financial institutions. The best phrase for why people were motivated to do the things they do is simple: IBGYBG–I’ll be gone, you’ll be gone.
The other obvious market failure is that in a ruthlessly competitive arena that is judged primarily on quantitative measures, any ability to juke your statistics forces others to participate in that as well. We see this in a variety of ways I can describe if people are interested, but if your competition is repo 105ing their balance sheet to make it look like they are getting better returns with less leverage, they are going to get better deals on customers and capital than you are going to get and put you out of business. So you better do that as well. The notion of Milton Friedman-ite self-regulation through reputation effects has been a complete failure.
This echoes Richard Serlin's comments below. And then there's this from an anonymous source:
My thought reading Sethi's post and also TED's latest was this:
Decades ago the investment banks stopped hiring the scions of wealthy families expecting a sinecure, and started looking for people who were smart and "hungry" (as Grisham has put it). They asked applicants "What would you do if you won the lottery?" And if you made it clear that you didn't really care about the money, you weren't the right person for them.
And as TED makes clear the people who make it to the top at the investment banks are the hungriest sharks. The people who aren't really hungry (and I think there are plenty of them), make enough and leave, or walk when they feel they're asked to do something that compromises their integrity. So there's a huge endogeneity problem here.
Finally, paine is skeptical of the idea that "we can create eisenhowerish org men and then blend em with fresh new corporate norms" on the grounds that competitive pressure transforms character. It's a fair point.