Wednesday, November 17, 2010

Herbert Scarf's 1964 Lectures: An Eyewitness Account

In the fourth volume of The Makers of Modern Economics is a fascinating autobiographical essay by Duncan Foley that traces the arc of his career as an economist and reflects upon developments in the discipline over the past four decades. Duncan describes his first exposure to economics at Swarthmore, his interactions with Tobin as a graduate student at Yale, the introduction in his doctoral dissertation of a concept of equity (now called envy-freeness) that does not depend on interpersonal comparisons of utility, his enormously fruitful collaboration with Miguel Sidrauski at MIT on the microfoundations of macroeconomics, his disillusionment with the rational expectations revolution, and his growing interest in heterodox economics at Stanford and subsequently at Barnard and Columbia.

There's enough material there for several interesting posts, but here I'll confine myself to reproducing Duncan's vivid recollection of a two-semester course in mathematical economics taught by Herbert Scarf in 1964 (links added):
After the free pursuit of individual learning fostered by the Swarthmore Honors program, I found the return to traditional classroom teaching at Yale a difficult transition... I was frustrated in these courses not just by the tedium and inefficiency of the class lecture style, but by the tendency for instructors who knew a great deal about the substance and practice of their subjects to waste time rehearsing mathematical and theoretical topics they did not understand very well and often misconstrued...

The great exception to this pattern of misdirected pedagogy was Herbert Scarf's year-long course in Mathematical Economics. Scarf knew this material as well as anyone in the world, and had the gifts of patience, clarity of exposition, and personal charisma to convey it brilliantly and effectively. Scarf's teaching was a revelation to me of what could be accomplished in the classroom, with the appropriate attention to systematic organization, consistently careful preparation, and a judicious balance of lecture and discussion to maintain contact with the level of students' understanding. My notes from this course comprise a better and more complete reference for the topics than any book that has since been published.

The passage of time has revealed that the content of Scarf's course was just as remarkable in its depth and insight as the presentation. Remaining mostly within the realm of finite-dimensional spaces, and emphasizing duality and practical algorithms for the construction of solutions, Scarf gave a thorough tutorial on the mathematics of optimization, starting with linear programming via the simplex method and continuing through Kuhn-Tucker theory, dynamic programming, turnpike theory through Roy Radner's algorithmic approach, and integer programming. Since a huge proportion of economic models boil down to an optimization problem, this survey effectively unified and clarified an immense range of economics for the student. When Peter Diamond was working with James Mirrlees on the problem of optimal taxation (Diamond and Mirrlees, 1971a,b), for example, Scarf's approach helped me to grasp the relation between the complexity of their comparative statics results and the nonconvex structure of the constraint set (the intersection of the set of allocations that are resource and technology-feasible and those that can be supported by distorting taxes) in this problem. The study of these formal problems also convinced me that most economic theory depends on strong assumptions of convexity to assure the tractability of the resulting optimization problem, and that in situations where convexity is inherently absent or implausible it is very difficult to make much progress by traditional methods.

Scarf's course continued with a systematic review of general equilibrium theory, starting from the separating hyperplane approach to the Second Welfare Theorem, and including Gérard Debreu's proof (1959) of existence of a competitive equilibrium, the first presentation of Scarf's algorithmic approach to the calculation of competitive equilibria (1973), the theory of the core and its asymptotic equivalence to competitive equilibrium, and Scarf's own crucial counterexamples to the stability of competitive equilibrium under tâtonnement dynamics with more than two commodities (1960). The critical lesson Scarf emphasized in this discussion was the fact that the competitive equilibrium cannot, except in special cases such as representative agent economies, be represented as the solution of a mathematical programming problem. In other words, the Walrasian system does not generally admit a potential function. As a corollary to this observation we see that the comparative statics of competitive general equilibrium theory inherently lacks the organizing structure of convex programming, so that, for example, equilibrium prices are not in general monotonic functions of endowments. These observations planted the seeds in my mind of what grew to be grave doubts about the Walrasian system. These doubts do not focus on the logical consistency of the system, but on its adequacy as a useful representation of real economic relations...

In retrospect we can see that Scarf's course mapped out the whole development of high economic theory for the next twenty or twenty-five years. The theoretical literature of this period has largely been concerned with generalizing the concepts he taught to more sophisticated commodity spaces (such as infinite-dimensional spaces and spaces of stochastic processes), and rediscovering the general properties and limitations of competitive equilibrium theory in these contexts. This has been a source of both wonder and concern to me. I am amazed at how prescient a mind like Scarf's can be about the future development of a field, guided purely by superb mathematical instincts. But what does this imply about the theoretical fertility of economics during this period? If the core theoretical ideas that have dominated the field since were all present in the Yale classroom in 1964, it suggests that economic theory has been in a scholastic, formalistic phase of development during this period, primarily focusing on working out increasingly esoteric implications of well-established concepts.
Duncan tells me that he still has his notes from this course and that Scarf, who recently retired from teaching, remains full of vigor.

In subsequent posts I hope to discuss Duncan's reflections on the microfoundations of macroeconomics, his work with Sidrauski, his concern that the rational expectations revolution was a step backwards in the development of the theory, and his view that "some break with the full Walrasian system along temporary equilibrium lines is necessary as a foundation for a distinct macroeconomics." (The Hicksian concept of temporary equilibrium allows for asset market clearing in the face of heterogeneous beliefs and mutually inconsistent intertemporal plans.) These are themes that I have touched upon in previous posts and would like to revisit soon. In the meantime, let me repeat my plea to the fellows of the Econometric Society to nominate Duncan for election to their ranks.

---

Update (11/18). Glenn Loury writes in to say:
I never had much interaction with Scarf, but his pedagogic virtuosity and mastery of mathematical economics circa 1970 reminds me of... Stanley Reiter, whom I encountered as a raw assistant professor at Northwestern in the 1970s. Stan, a close friend and occasional collaborator with Leo Hurwicz, was director of the Math Center at Northwestern (forerunner of MEDS), and in the late 1970s had a huge impact on young scholars like Paul Milgrom, Bengt Holmstrom, Mark Satterthwaite and Roger Myerson...
I don't think I agree with the claim that much of "high economic theory" since the 60s has been dotting "i's" and crossing "t's". That was true through the mid-seventies, perhaps, but the asymmetric information, mechanism design, incomplete contract theory revolutions (Hurwicz/Myerson/Maskin, eg.) -- and the emergence of deeply insightful applied theory in a variety of fields from labor and I/O to money, finance and trade suggest otherwise to me.
I basically agree with Glenn on this latter point but, in Duncan's defense, the focus of his essay was on the microfoundations of macroeconomics and the futility of simply aggregating the Walrasian system. And on this dimension I think that progress has been limited at best.

---

Update (11/18). A wonderful comment by Jonathan Conning:
I too sat in Herb Scarf's Yale Micro Theory classroom and still remember the stunned awe that I and my classmates felt at the end of his first lecture with us, which happened to be on the simplex algorithm.
My only regret is that that semester at Yale (1990) we only got a handful of micro lectures from Scarf and so did not get the full "systematic review of general equilibrium theory" that Foley mentions.
I have little to say to improve on Duncan's glowing description of a Scarf lecture except to note that by 1990 the Hillhouse basement classroom had smooth sliding blackboards (which I do not imagine they had in 1964). This meant that there were always three blackboards in use, as he could fill one blackboard full of equations and slide it to conceal or reveal what had been written before. One of the things I recall most vividly is how artfully and efficiently Scarf used those boards, and how rarely he used the eraser. A lecture that might have started with definitions and theory, then taken a detour through an expertly chosen example to reinforce intuition, would in the end always return, with the smoothest glide of a hand to reveal again exactly the right portion of the board to bring the lecture full circle back to the climactic point he wanted. Everything seemed expertly choreographed and timed down to the very last second.
I hope that other former students of Scarf will somehow stumble upon this post.

Monday, October 11, 2010

Glenn Loury on Peter Diamond

Glenn Loury has kindly forwarded me a letter he wrote earlier this year in appreciation of Peter Diamond, one of the co-recipients of this year's Nobel Memorial Prize in Economics. The tribute was written for the occasion of Diamond's retirement, and seems worth publishing today:
April 20, 2010
Prof. James Poterba, Chair
Department of Economics
Massachusetts Institute of Technology

Dear Jim:

It is a pleasure to contribute a brief note of tribute to Peter Diamond, on this occasion of celebration for his work as scholar and teacher.

Peter was an inspiration and role model for me during my student years at MIT. My encounters with him -- in the classroom and in his office -- left an indelible impression. I recall going over to the Dewey Library shortly after arriving in Cambridge, in the summer of 1972, and digging out Peter's doctoral dissertation. This was a mistake! Peter's reputation as a powerful theorist had been noted by my undergraduate teachers at Northwestern. I wanted to see how this reputed superstar had gotten his start. Just how good could it be, I wondered? I had no idea! What I discovered was an elegant, profound and exquisitely argued axiomatic treatment of the general problem of representing consumption preferences over an infinite time horizon, extending results obtained by his undergraduate teacher and the future Nobel Laureate, Tjalling Koopmans.

I prided myself on being a budding mathematician in those years. Yet, Peter's effortless mastery in that dissertation of the relevant techniques from topology and functional analysis, and his successful application of those methods to a problem of fundamental importance in economic theory -- all accomplished by age 23, younger than I was at the moment I held his thesis binder in my hands! -- was simply stunning. This set what seemed to me then, and still seems now, to be an unapproachable standard. I was depressed for weeks thereafter!

Even more depressing was what I discovered as I got to know Peter better over the course of my first two years in the program: that mathematical technique was not even his strongest suit! An unerring sense of what constitute the foundational theoretical questions in economic science, and a rare creative gift of being able to imagine just the right formal framework in the context of which such questions can be posed and answered with generality -- this, I came to understand, is what Peter Diamond was really good at.

And so, I learned from him in those years what turned out to be the most important lesson of my graduate educational experience -- that, in the doing of economic theory and relative to the behavioral significance of the issue under investigation, technique is always a matter of secondary importance -- neither necessary nor sufficient for the production of lasting insights. I learned this from the careful study of Peter's seminal contributions to growth theory, the theories of taxation and social insurance, the theories of choice under uncertainty and the allocation of risk-bearing, the theories of legal rules and institutions, and the theory of unemployment. I also learned this from Peter's elegant and comprehensive lectures on his own work in these areas and that of other scholars. And so I came -- slowly and fitfully, because I was rather attached to the joys of doing mathematics for its own sake -- to see the world the way that Peter Diamond saw it. And, in the process, I became a much better economist.

Peter graciously agreed to be the second reader on my dissertation, even though I was writing outside of his areas of specialization at the time, and my intellectual indebtedness to him only increased over the course of my last two years at MIT. It has by now become rather clear that I shall never be able to discharge that debt.

So, thanks Peter, for your extraordinary generosity as a teacher, and for your unmatched example as a scholar.

Glenn C. Loury
Merton P. Stoltz Professor of the Social Sciences
Professor of Economics and of Public Policy
Brown University
The following passage from the letter is worth repeating:
And so, I learned from him in those years what turned out to be the most important lesson of my graduate educational experience -- that, in the doing of economic theory and relative to the behavioral significance of the issue under investigation, technique is always a matter of secondary importance -- neither necessary nor sufficient for the production of lasting insights.
I have had very little time for blogging recently, thanks to two new courses, but if I can find the time I'd like to write a post on Diamond's classic 1982 paper on search, and the wonderful coconut parable he used in order to illuminate the theory.

Tuesday, October 05, 2010

Hot Potatoes

RT Leuchtkafer follows up on his earlier remarks with a comment in the Financial Times:
After a detailed four-month review of the flash crash, looking at market data streams tick-by-tick and down to the millisecond, the SEC concluded that a single order in the e-mini S&P 500 futures market ignited an inferno of panic selling. It was over in about seven minutes, and $1,000bn was up in smoke.
Within hours of the SEC’s report, the CME Group, owner of the Chicago Mercantile Exchange, issued a statement to point out that the suspect e-mini order was entirely legitimate, that it came from an institutional asset manager (that is, the public), and was little more than 1 per cent of the e-mini’s daily volume and less than 9 per cent of e-mini volume during and immediately after the crash.
How did this small bit of total volume cause such a conflagration?
You do it with computers. Specifically, you do it with unregulated computers. You pay rent so your machines sit inside the exchanges, minimising travel time for your electrons. You pay licence fees so your computers eat their fill of super-fast proprietary data feeds, data containing a shocking amount of information on everyone’s orders, not just on your own.
And when your computers spot trouble, such as a larger than expected sell-off, they dump inventory and they shut down – because they can.
No one knows what a “larger than expected sell-off” might be, but on May 6 a single hedge that added just an extra 9 per cent of selling pressure was enough to cause chaos.
When that happened, the SEC’s report says, high-frequency traders “stopped providing liquidity and began to take liquidity”, starting a frenzied race for anyone willing to buy. The report likened the panic to a downward-spiralling game of “hot potato” where, as HFT firms bought beyond their risk limits, they pulled their own bids and frantically sold to anyone they could, which were often just other HFT firms, who themselves quickly reached their risk limits and tried to sell to anyone they could, and so on – into the abyss. Fratricide ruled the day. Firms then fled the market altogether, accelerating the sell-off.
Punch drunk, markets rebounded when other market participants realised what had just happened and jumped into the market to buy.
Fair enough, some might say. Markets do panic, and sometimes for no reason. But the larger HFT firms register as formal marketmakers, receiving a variety of regulatory advantages, including greater leverage. All of this extends their enormous reach and power. In the past, they fulfilled certain obligations and observed certain restraints as a quid pro quo for those advantages, a quid pro quo intended to keep them in the market when markets were under stress and to prevent them from adding to that stress. Over the past few years, however, decades-long obligations and restraints all but disappeared, while many advantages stayed.
Computing power also opened marketmaking to a field of unregistered, or informal, high-frequency marketmakers, what investor and commentator Paul Kedrosky termed the “shadow liquidity system”. Exchanges will pay you to do it, too, just as they pay formal marketmakers, and require little in return.
The result is a loose confederation of unregulated, or lightly regulated, high-frequency marketmakers. They feed on what many consider confidential order information, play hot potato in volatile markets, and then instantly change the game to hide-and-seek if even a single hedge hits an unseen and unknowable tipping point.
The only quibble I have with this analysis is that too many different classes of algorithmic trading strategies are being bundled together under the HFT banner. In particular I would like to see a distinction made between directional strategies that are based on predicted short term price movements, and arbitrage based strategies that exploit price differentials across assets and markets. Both of these can be implemented with algorithms, rely on rapid responses to incoming market data, and involve very short holding periods. But they have completely different implications for asset price volatility. It is the mix of strategies rather than the method of their implementation that is the key determinant of market stability.

---

Update: Leuchtkafer writes in to say:
I should have been clear in the piece that I was talking specifically about market-making strategies.
I appreciate the clarification, and agree with his characterization of the new market makers.

Friday, October 01, 2010

RT Leuchtkafer on the Flash Crash Report

The long-awaited CFTC-SEC report on the flash crash has finally been released. I'm still working my way through it, and hope to respond in due course. In the meantime, here is an email (posted with permission) from the very interesting RT Leuchtkafer, whose thoughts on recent changes in market microstructure have been discussed at some length previously on this blog:
It's natural for any critic to focus on what he wants in the report, and I'm no different.

From the report, in the futures market: "HFTs stopped providing liquidity and instead began to take liquidity." (report pp 14-15); "...the combined selling pressure from the Sell Algorithm, HFT's and other traders drove the price of the E-Mini down..." (report p 15)

And in the equities market: "In general, however, it appears that the 17 HFT firms traded with the price trend on May 6 and, on both an absolute and net basis, removed significant buy liquidity from the public quoting markets during the downturn..." (report p 48); "Our investigation to date reveals that the largest and most erratic price moves observed on May 6 were caused by withdrawals of liquidity and the subsequent execution of trades at stub quotes." (p 79)

It's also natural - if ungraceful - for a critic to say "I told you so." OK, I'm no ballerina, and I told you so (April 16, 2010):

"When markets are in equilibrium these new participants increase available liquidity and tighten spreads. When markets face liquidity demands these new participants increase spreads and price volatility and savage investor confidence."

"...[HFT] firms are free to trade as aggressively or passively as they like or to disappear from the market altogether."

"...[HFT firms] remove liquidity by pulling their quotes and fire off marketable orders and become liquidity demanders. With no restraint on their behavior they have a significant effect on prices and volatility....they cartwheel from being liquidity suppliers to liquidity demanders as their models rebalance. This sometimes rapid rebalancing sent volatility to unprecedented highs during the financial crisis and contributed to the chaos of the last two years. By definition this kind of trading causes volatility when markets are under stress."

"Imagine a stock under stress from sellers such was the case in the fall of 2008. There is a sell imbalance unfolding over some period of time. Any HFT market making firm is being hit repeatedly and ends up long the stock and wants to readjust its position. The firm times its entrance into the market as an aggressive seller and then cancels its bid and starts selling its inventory, exacerbating the stock's decline."

"So in exchange for the short-term liquidity HFT firms provide, and provide only when they are in equilibrium (however they define it), the public pays the price of the volatility they create and the illiquidity they cause while they rebalance."

Finally, the report should put paid to the notion that HFT firms are simple liquidity providers and that they don't withdraw in volatile markets, claims that have been floating around for quite a while.

What happens next?
In a follow-up message, Leuchtkafer adds: 
I'd like to note there were many other critics who got it right, including (most importantly) Senator Kaufman, Themis Trading, David Weild, and others. They all deserve a shout out.
To this list I would add Paul Kedrosky.
Firms that began to "take liquidity" during the crash would have suffered significant losses were it not for the fact that many of their trades were subsequently broken. I have argued repeatedly that this cancellation of trades was a mistake, not simply on fairness grounds but also from the perspective of market stability:
By canceling trades, the exchanges reversed a redistribution of wealth that would have altered the composition of strategies in the trading population. I'm sure that many retail investors whose stop loss orders were executed at prices far below anticipated levels were relieved. But the preponderance of short sales among trades at the lowest prices and the fact that aberrant price behavior also occurred on the upside suggests to me that the largest beneficiaries of the cancellation were proprietary trading firms making directional bets based on rapid responses to incoming market data. The widespread cancellation of trades following the crash served as an implicit subsidy to such strategies and, from the perspective of market stability, is likely to prove counter-productive. 
The report does appear to confirm that some of the major beneficiaries of the decision to cancel trades were algorithmic trading outfits. But I need to read it more closely before offering further comment. 

Saturday, September 04, 2010

Economic Consequences of Speculative Side Bets

The following column was written jointly with Yeon-Koo Che and is crossposted from Vox EU with minor edits and links to references.
---
There is arguably no class of financial transactions that has attracted more impassioned commentary over the past couple of years than naked credit default swaps. Robert Waldmann has equated such contracts with financial arson, Wolfgang Münchau with bank robberies, and Yves Smith with casino gambling. George Soros argues that they facilitate bear raids, as does Richard Portes who wants them banned altogether, and Willem Buiter considers them to be a prime example of harmful finance. In sharp contrast, John Carney believes that any attempt to prohibit such contracts would crush credit markets, Felix Salmon thinks that they benefit distressed debtors, and Sam Jones argues that they smooth out the cost of borrowing over time, thus reducing interest rate volatility.
One reason for the continuing controversy is that arguments for and against such contracts have been expressed informally, without the benefit of a common analytical framework within which the economic consequences of their use can be carefully examined. Since naked credit default swaps necessarily have a long and a short side and the aggregate payoff nets to zero, it is not immediately apparent why their existence should have any effect at all on the availability and terms of financing or the likelihood of default. And even if such effects do exist, it is not clear what form and direction they take, or the implications they have for the allocation of a society's productive resources.
In a recent paper we have attempted to develop a framework within which such questions can be addressed, and to provide some preliminary answers. We argue that the existence of naked credit default swaps has significant effects on the terms of financing, the likelihood of default, and the size and composition of investment expenditures. And we identify three mechanisms through which these broader consequences of speculative side bets arise: collateral effects, rollover risk, and project choice.
A fundamental (and somewhat unorthodox) assumption underlying our analysis is that the heterogeneity of investor beliefs about the future revenues of a borrower is due not simply to differences in information, but also to differences in the interpretation of information. Individuals receiving the same information can come to different judgments about the meaning of the data. They can therefore agree to disagree about the likelihood of default, interpreting such disagreement as arising from different models rather than different information. As in prior work by John Geanakoplos on the leverage cycle, this allows us to speak of a range of optimism among investors, where the most optimistic do not interpret the pessimism of others as being particularly informative. We believe that this kind of disagreement is a fundamental driver of speculation in the real world.
When credit default swaps are unavailable, the investors with the most optimistic beliefs about the future revenues of a borrower are natural lenders: they are the ones who will part with their funds on terms most favorable to the borrower. The interest rate then depends on the beliefs of the threshold investor, who in turn is determined by the size of the borrowing requirement. The larger the borrowing requirement, the more pessimistic this threshold investor will be (since the size of the group of lenders has to be larger in order for the borrowing requirement to be met). Those more optimistic than this investor will lend, while the rest find other uses for their cash.
Now consider the effects of allowing for naked credit default swaps. Those who are most pessimistic about the future prospects of the borrower will be inclined to buy naked protection, while those most optimistic will be willing to sell it. However, pessimists also need to worry about counterparty risk - if the optimists write too many contracts they may be unable to meet their obligations in the event that a default does occur, an event that the pessimists consider to be likely. Hence the optimists have to support their positions with collateral, which they do by diverting funds that would have gone to borrowers in the absence of derivatives. The borrowing requirement must then be met by appealing to a different class of investors, who are neither so optimistic that they wish to sell protection, nor so pessimistic that they wish to buy it. The threshold investor is now clearly more pessimistic than in the absence of derivatives, and the terms of financing are accordingly shifted against the borrower. As a result, for any given borrowing requirement, the bond issue is larger and the price of bonds accordingly lower when investors are permitted to purchase naked credit default swaps.
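The threshold-investor logic can be conveyed with a deliberately crude numerical sketch; this is an illustration only, not the model in our paper. It assumes a unit mass of risk-neutral investors with one dollar each, beliefs about repayment spread uniformly over (0, 1), and an arbitrary assumption that the most optimistic fifth of investors write protection when naked credit default swaps are allowed.

# Crude sketch (not the model in the paper): a bond pays 1 on repayment and 0 on
# default, so a risk-neutral investor with belief p values it at p, and the bond
# price is roughly the marginal buyer's belief.
N = 100_000          # discretized investor population
B = 0.30             # cash the borrower must raise, as a share of total investor cash

def threshold_belief(excluded_top_mass=0.0):
    """Belief of the marginal bond buyer when the borrowing requirement is met by
    the most optimistic investors whose cash is still available for lending.
    `excluded_top_mass` is the share of top optimists whose dollars are tied up
    as collateral behind naked CDS positions (an assumption for illustration)."""
    start = int(excluded_top_mass * N)     # optimists writing protection instead of lending
    needed = round(B * N)                  # number of one-dollar lenders required
    return 1.0 - (start + needed) / N      # belief of the least optimistic lender

p_no_cds = threshold_belief()                        # no credit derivatives
p_naked  = threshold_belief(excluded_top_mass=0.20)  # top 20% of optimists write protection

print(f"marginal buyer's belief without CDS   : {p_no_cds:.2f}")
print(f"marginal buyer's belief with naked CDS: {p_naked:.2f}")
# When optimists divert cash into collateral, the marginal buyer is more
# pessimistic, the bond price is lower, and the issue needed to raise B is larger.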
This effect does not arise if credit default swaps can only be purchased by holders of the underlying security. In fact, it can be shown that allowing for only “covered” credit default swaps has much the same consequences as allowing optimists to buy debt on margin: it leads to higher bond prices, a smaller issue size for any given borrowing requirement, and a lower likelihood of eventual default. While optimists take a long position in the debt by selling such contracts, they facilitate the purchase of bonds by more pessimistic investors by absorbing much of the credit risk. In contrast with the case of naked credit default swaps, therefore, the terms of lending are shifted in favor of the borrower. The difference arises because pessimists can enter directional positions on default in one case but not the other.
While this simple model sheds some light on the manner in which the terms of financing can be affected by the availability of credit derivatives, it does not deal with one of the major objections to such contracts: the possibility of self-fulfilling bear raids. To address this issue it is necessary to allow for a mismatch between the maturity of debt and the life of the borrower. This raises the possibility that a borrower who is unable to meet contractual obligations because of a revenue shortfall can roll over the residual debt, thereby deferring payment into the future.
As many economists have previously observed, multiple self-fulfilling paths arise naturally in this setting (see, for instance, Calvo, Cole and Kehoe, and Cohen and Portes). If investors are confident that debt can be rolled over in the future they will accept lower rates of interest on current lending, which in turn implies reduced future obligations and allows the debt to be rolled over with greater ease. But if investors suspect that refinancing may not be available in certain states, they demand higher interest rates on current debt, resulting in larger future obligations and an inability to refinance if the revenue shortfall is large.
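A bare-bones two-date example, with made-up numbers and the stark simplifying assumption that lenders recover nothing in default, shows how both belief configurations can confirm themselves.

# Illustration only (not the model in the paper): a borrower must roll over a
# shortfall S at date 1 by issuing new debt due at date 2, when revenue will be
# R_high with probability q and R_low otherwise.  Risk-neutral lenders set the
# face value F so that expected repayment equals S, and recover nothing in default.
S, R_low, R_high, q = 50.0, 60.0, 200.0, 0.7

# Optimistic beliefs: repayment expected in both states, so F = S suffices.
F_opt = S
print(f"optimistic : F = {F_opt:5.1f}; repaid even in the low state? {F_opt <= R_low}")

# Pessimistic beliefs: repayment expected only in the high state, so lenders
# demand F = S / q to break even.
F_pes = S / q
print(f"pessimistic: F = {F_pes:5.1f}; default in the low state? {F_pes > R_low}")
# Both belief configurations confirm themselves: cheap rollover keeps the new
# obligation small enough to honor in every state, while fear of default raises
# the face value to the point where default really does occur in the low state.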
A key question then is the following: how does the availability of naked credit default swaps affect the range of borrowing requirements for which pessimistic paths (with significant rollover risk) exist? And conditional on the selection of such a path, how are the terms of borrowing affected by the presence of these credit derivatives?
For reasons that are already clear from the baseline model, we find that pessimistic paths involve more punitive terms for the borrower when naked credit default swaps are present than when they are not. More interestingly, we find that there is a range of borrowing requirements for which a pessimistic path exists if and only if such contracts are allowed. That is, there exist conditions under which fears about the ability of the borrower to repay debt can be self-fulfilling only in the presence of credit derivatives. It is in this precise sense that the possibility of self-fulfilling bear raids can be said to arise when the use of such derivatives is unrestricted.
The finding that borrowers can more easily raise funds and obtain better terms when the use of credit derivatives is restricted does not necessarily imply that such restrictions are desirable from a policy perspective. A shift in terms against borrowers will generally reduce the number of projects that are funded, but some of these ought not to have been funded in the first place. Hence the efficiency effects of a ban are ambiguous. However, such a shift in terms against borrowers can also have a more subtle effect with respect to project choice: it can tilt managerial incentives towards the selection of riskier projects with lower expected returns. This happens because a larger debt obligation makes projects with greater upside potential more attractive to the firm, as more of the downside risk is absorbed by creditors.
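This is just the familiar risk-shifting logic, which a two-project example with made-up payoffs makes concrete.

# Risk-shifting in miniature (made-up numbers): equity holders choose between a
# safe project and a riskier one with a lower expected payoff.  Creditors hold
# debt with face value D; equity receives max(payoff - D, 0) in each state.
def equity_value(payoffs_and_probs, D):
    return sum(prob * max(payoff - D, 0.0) for payoff, prob in payoffs_and_probs)

safe  = [(100.0, 1.0)]                 # pays 100 for sure (expected value 100)
risky = [(160.0, 0.5), (20.0, 0.5)]    # expected value only 90

for D in (20.0, 50.0):
    print(f"debt face value {D:>4}: "
          f"safe equity = {equity_value(safe, D):5.1f}, "
          f"risky equity = {equity_value(risky, D):5.1f}")
# With light debt (D = 20) the safe project is worth more to shareholders; with
# heavier debt (D = 50) the risky, lower-expected-value project wins, because
# creditors absorb most of the downside.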
The central message of our work is that the existence of zero-sum side bets on default has major economic repercussions. These contracts induce investors who are optimistic about the future revenues of borrowers, and would therefore be natural purchasers of debt, to sell credit protection instead. This diverts their capital away from potential borrowers and channels it into collateral to support speculative positions. As a consequence, the marginal bond buyer is less optimistic about the borrower's prospects, and demands a higher interest rate in order to lend. This can result in an increased likelihood of default, and the emergence of self-fulfilling paths in which firms are unable to roll over their debt, even when such trajectories would not arise in the absence of credit derivatives. And it can influence the project choices of firms, leading not only to lower levels of investment overall but also in some cases to the selection of riskier ventures with lower expected returns.
James Tobin (1984) once observed that the advantages of greater “liquidity and negotiability of financial instruments” come at the cost of facilitating speculation, and that greater market completeness under such conditions could reduce the functional efficiency of the financial system, namely its ability to facilitate “the mobilization of saving for investments in physical and human capital... and the allocation of saving to their more socially productive uses.” Based on our analysis, one could make the case that naked credit default swaps are a case in point.
This conclusion, however, is subject to the caveat that there exist conditions under which the presence of such contracts can prevent the funding of inefficient projects. Furthermore, an outright ban may be infeasible in practice due to the emergence of close substitutes through financial engineering. Even so, it is important to recognize that the proliferation of speculative side bets can have significant effects on economic fundamentals such as the terms of financing, the patterns of project selection, and the incidence of corporate and sovereign default.

Saturday, August 28, 2010

Lessons from the Kocherlakota Controversy

In a speech last week the President of the Minneapolis Fed, Narayana Kocherlakota, made the following rather startling claim:
Long-run monetary neutrality is an uncontroversial, simple, but nonetheless profound proposition. In particular, it implies that if the FOMC maintains the fed funds rate at its current level of 0-25 basis points for too long, both anticipated and actual inflation have to become negative. Why? It’s simple arithmetic. Let’s say that the real rate of return on safe investments is 1 percent and we need to add an amount of anticipated inflation that will result in a fed funds rate of 0.25 percent. The only way to get that is to add a negative number—in this case, –0.75 percent.

To sum up, over the long run, a low fed funds rate must lead to consistent—but low—levels of deflation.
The proposition that a commitment by the Fed to maintain a low nominal interest rate indefinitely must lead to deflation (rather than accelerating inflation) defies common sense, economic intuition, and the monetarist models of an earlier generation. This was pointed out forcefully and in short order by Andy Harless, Nick Rowe, Robert Waldmann, Scott Sumner, Mark Thoma, Ryan Avent, Brad DeLong, Karl Smith, Paul Krugman, and many other notables.

But Kocherlakota was not without his defenders. Stephen Williamson and Jesus Fernandez-Villaverde both argued that his claim was innocuous and completely consistent with modern monetary economics. And indeed it is, in the following sense: the modern theory is based on equilibrium analysis, and the only equilibrium consistent with a persistently low nominal interest rate is one in which there is a stable and low level of deflation. If one accepts the equilibrium methodology as being descriptively valid in this context, one is led quite naturally to Kocherlakota's corner.

But while Williamson and Fernandez-Villaverde interpret the consistency of Kocherlakota's claim with the modern theory as a vindication of the claim, others might be tempted to view it as an indictment of the theory. Specifically, one could argue that equilibrium analysis unsupported by a serious exploration of disequilibrium dynamics could lead to some very peculiar and misleading conclusions. I have made this point in a couple of earlier posts, but the argument is by no means original. In fact, as David Andolfatto helpfully pointed out in a comment on Williamson's blog, the same point was made very elegantly and persuasively in a 1992 paper by Peter Howitt.

Howitt's paper is concerned with the inflationary consequences of a pegged nominal interest rate, which is precisely the subject of Kocherlakota's thought experiment. He begins with an old-fashioned monetarist model in which output depends positively on expected inflation (via the expected real rate of interest), realized inflation depends on deviations of output from some "natural" level, and expectations adjust adaptively. In this setting it is immediately clear that there is a "rational expectations equilibrium with a constant, finite rate of inflation that depends positively on the nominal rate of interest" chosen by the central bank. This is the equilibrium relationship that Kocherlakota has in mind: lower interest rates correspond to lower inflation rates and a sufficiently low value for the former is associated with steady deflation.

The problem arises when one examines the stability of this equilibrium. Any attempt by the bank to shift to a lower nominal interest rate leads not to a new equilibrium with lower inflation, but to accelerating inflation instead. The remainder of Howitt's paper is dedicated to showing that this instability, which is easily seen in the simple old-fashioned model with adaptive expectations, is in fact a robust insight and holds even if one moves to a "microfounded" model with intertemporal optimization and flexible prices, and even if one allows for a broad range of learning dynamics. The only circumstance in which a lower nominal rate results in lower inflation is if individuals are assumed to be "capable of forming rational expectations ab ovo".
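
The instability is easy to reproduce in a stripped-down simulation in the spirit of the old-fashioned model Howitt starts from; the equations and parameters below are purely illustrative, not his.

# Illustrative only: output rises when the expected real rate falls below the
# natural rate, inflation responds to the output gap, and expectations adapt.
r_star = 0.01             # natural real rate (1%)
i_peg  = 0.0025           # pegged nominal rate (0.25%)
a, b, g = 1.0, 0.5, 0.3   # interest sensitivity, Phillips slope, adaptation speed

pi_e, path = 0.0, []      # start with zero expected inflation
for _ in range(40):
    gap  = -a * (i_peg - pi_e - r_star)   # output gap: low real rate -> boom
    pi   = pi_e + b * gap                 # realized inflation
    pi_e = pi_e + g * (pi - pi_e)         # adaptive expectations
    path.append(pi)

print(f"rational expectations steady state: {i_peg - r_star:+.4f} (i.e. 0.75% deflation)")
print("inflation in the first three periods:", [f"{x:+.4f}" for x in path[:3]])
print("inflation in the last three periods :", [f"{x:+.2f}" for x in path[-3:]])
# The deflationary steady state Kocherlakota points to exists, but starting from
# zero expected inflation the path accelerates away from it: the peg is unstable
# under adaptive learning.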

Howitt places this finding in historical context as follows (emphasis added):
In his 1968 presidential address to the American Economic Association, Milton Friedman argued, among other things, that controlling interest rates tightly was not a feasible monetary policy. His argument was a variation on Knut Wicksell's cumulative process. Start in full employment with no actual or expected inflation. Let the monetary authority peg the nominal interest rate below the natural rate. This will require monetary expansion, which will eventually cause inflation. When expected inflation rises in response to actual inflation, the Fisher effect will put upward pressure on the interest rate. More monetary expansion will be required to maintain the peg. This will make inflation accelerate until the policy is abandoned. Likewise, if the interest rate is pegged above the natural rate, deflation will accelerate until the policy is abandoned. Since no one knows the natural rate, the policy is doomed one way or another.

This argument, which was once quite uncontroversial, at least among monetarists, has lost its currency. One reason is that the argument invokes adaptive expectations, and there appears to be no way of reformulating it under rational expectations... in conventional rational expectations models, monetary policy can peg the nominal rate... without producing runaway inflation or deflation... Furthermore... pegging the nominal rate at a lower value will produce a lower average rate of inflation, not the ever-higher inflation predicted by Friedman...

Thus the rational expectations revolution has almost driven the cumulative process from the literature. Modern textbooks treat it as a relic of pre-rational expectations thought... contrary to these rational expectations arguments, the cumulative process is not only possible but inevitable, not just in a conventional Keynesian macro model but also in a flexible-price, micro-based, finance constraint model, whenever the interest rate is pegged... the essence of the cumulative process lies not in an economy's rational expectations equilibria but in the disequilibrium adjustment process by which people try to acquire rational expectations... under a wide set of assumptions, the process cannot converge if the monetary authority keeps interest rates pegged... the cumulative process is a manifestation of this nonconvergence. 
Thus the cumulative process should be regarded not as a relic but as an implication of real-time belief formation of the sort studied in the literature on convergence (or nonconvergence) to rational expectations equilibrium... Perhaps the most important lesson of the analysis is that the assumption of rational expectations can be misleading, even when used to analyze the consequences of a fixed monetary regime. If the regime is not conducive to expectational stability, then the consequences can be quite different from those predicted under rational expectations... in general, any rational expectations analysis of monetary policy should be supplemented with a stability analysis... to determine whether or not the rational expectations equilibrium could ever be observed. 
To this I would add only that a stability analysis is a necessary supplement to equilibrium reasoning not just in the case of monetary policy debates, but in all areas of economics. For as Richard Goodwin said a long time ago, an "equilibrium state that is unstable is of purely theoretical interest, since it is the one place the system will never remain."

---

Update (8/29). From a comment by Robert Waldmann:
I think that it is important that in monetary models there are typically two equilibria -- a monetary equilibrium and a non-monetary equilibrium.

The assumption that the economy will end up in a rational expectations equilibrium does not imply that a low nominal interest rate leads to an equilibrium with deflation. It might lead to an equilibrium in which dollars are worthless.

I'd say the experiment has been performed. From 1918 through (most of) 1923 the Reichsbank kept the discount rate low (3.5% IIRC) and met demand for money at that rate.

The result was not deflation. By October 1923 the Reichsmark was no longer used as a medium of exchange.
In fact, the only stable steady state under a nominal interest rate peg in the Howitt model is the non-monetary one.

Thursday, August 19, 2010

On Broken Trades and Bailouts

Back in 1980, Avraham Beja and Barry Goldman published a theoretical paper in the Journal of Finance that explored the manner in which the composition of trading strategies in an asset market affects the volatility of prices. Their main insight was that if the prevalence of momentum based strategies was too large relative to that of strategies based on fundamental analysis, then the dynamics of asset prices would be locally unstable: departures of prices from fundamentals would be amplified rather than corrected over time. More importantly, they argued that the relationship between the composition of strategies and market stability was discontinuous: there was a threshold (bifurcation) value of this population mixture that separated the stable from the unstable regime, and an imperceptible change in composition that took the market across the threshold could result in dramatic increases in volatility.

The Beja/Goldman analysis can be taken a step further: not only does market stability depend on the composition of trading strategies, but the profitability of different trading strategies, and hence changes in their relative population shares over time, depend very much on whether one is in a stable or an unstable regime. In a stable regime prices track fundamentals reasonably well, which makes it possible for technical strategies to extract information from incoming market data without going through the trouble and expense of fundamental research. Such strategies can therefore prosper and proliferate, provided that they remain sufficiently rare. But if they become too common, markets are destabilized, asset price bubbles can form, and the value of fundamental information rises. When a major correction arrives, it is the fundamental strategies that prosper, the composition of trading strategies is shifted accordingly, and market stability is restored for a time. This process of endogenous regime switching provides one possible interpretation of the empirical phenomenon known as volatility clustering.
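
The flavor of the argument can be conveyed with a toy linear price-adjustment rule (not the Beja-Goldman model itself, and with arbitrary weights) in which fundamentalists push the price toward its fundamental value and momentum traders extrapolate the last price change.

# Toy illustration in the spirit of Beja and Goldman (1980); all weights made up.
def simulate(momentum_weight, fundamental_weight=0.2, value=100.0, periods=100):
    p_prev = p = value + 1.0          # start one dollar above fundamental value
    path = [p]
    for _ in range(periods):
        change = fundamental_weight * (value - p) + momentum_weight * (p - p_prev)
        p_prev, p = p, p + change
        path.append(p)
    return path

for w in (0.8, 1.1):                  # below vs above the stability threshold
    path = simulate(momentum_weight=w)
    worst = max(abs(x - 100.0) for x in path)
    print(f"momentum weight {w}: largest deviation from fundamental value = {worst:.2f}")
# Below the threshold the initial one-dollar mispricing dies out; above it, the
# same strategies amplify that mispricing into ever larger swings.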

From this perspective, it is critically important that technical trading strategies be allowed to suffer losses when market instability arises. The cancellation of trades in almost 300 securities after the flash crash of May 6 did exactly the opposite, by providing an implicit subsidy to destabilizing strategies. The excuse that this was done to protect retail investors whose stop orders were executed as prices fell to insane levels is unconvincing. According to the SEC's own report on the crash, most trades against stub quotes of five cents or less were short sales, and there was also considerable upward instability, with prices rising well beyond the reach of ordinary retail investors. (Shares in Sotheby's, for instance, changed hands at ten million dollars per round lot.) The cancellation of trades was therefore a bailout of some funds (heavily reliant on algorithmic trading) at the expense of others, and this prevented a stabilizing shift in the market composition of trading strategies.

A similar argument could be made about the effects of the Troubled Asset Relief Program. It has recently been claimed, for instance by Alan Blinder and Mark Zandi, that TARP has been a "substantial success" because it averted a second Great Depression at a cost to taxpayers that is turning out to be much lower than originally feared:
The Troubled Asset Relief Program was controversial from its inception. Both the program’s $700 billion headline price tag and its goal of “bailing out” financial institutions—including some of the same institutions that triggered the panic in the first place—were hard for citizens and legislators to swallow. To this day, many believe the TARP was a costly failure. In fact, TARP has been a substantial success, helping to restore stability to the financial system and to end the freefall in  housing and auto markets. Its ultimate cost to taxpayers will be a small fraction of the headline $700 billion figure: A number below $100 billion seems more likely to us, with the bank bailout component probably turning a profit.
Yves Smith is unpersuaded by such figures, which she attributes to "back door, less visible bailouts, super cheap interest rates, [and] regulatory forbearance." But even if one were to take at face value the Blinder-Zandi estimates of the revenue consequences of TARP, there remain potentially harmful effects on the size composition of firms and the distribution of financial practices. The institutions that were bailed out made directional bets that either failed directly, or were with counterparties that would have failed in the absence of government support. Smaller institutions making such mistakes were allowed to go under, while larger ones were bailed out. Quite apart from the unfairness of this, the policy could be severely damaging to the stability of the system over the medium run.

This point was made a couple of months ago in a speech by Richard Fisher of the Dallas Fed (and expanded upon by Tyler Durden and Ashwin Parameswaran shortly thereafter):
Big banks that took on high risks and generated unsustainable losses received a public benefit... As a result, more conservative banks were denied the market share that would have been theirs if mismanaged big banks had been allowed to go out of business. In essence, conservative banks faced publicly backed competition...
The system has become slanted not only toward bigness but also high risk... Clearly, if the central bank and regulators view any losses to big bank creditors as systemically disruptive, big bank debt will effectively reign on high in the capital structure. Big banks would love leverage even more, making regulatory attempts to mandate lower leverage in boom times all the more difficult. In this manner, high risk taking by big banks has been rewarded, and conservatism at smaller institutions has been penalized...

It is not difficult to see where this dynamic leads—to more pronounced financial cycles and repeated crises.
Fisher goes on to argue for strict limits on the size of individual financial institutions relative to that of the industry. So does Nouriel Roubini:
Greed has to be controlled by fear of loss, which derives from knowledge that the reckless institutions and agents will not be bailed out. The systematic bailouts of the latest crisis – however necessary to avoid a global meltdown – worsened this moral-hazard problem. Not only were “too big to fail” financial institutions bailed out, but the distortion has become worse as these institutions have become – via financial-sector consolidation – even bigger. If an institution is too big to fail, it is too big and should be broken up.
But were the bailouts really necessary to avoid a global meltdown? Blinder and Zandi argue that the alternative would have been completely catastrophic:
The financial policy responses were especially important. In the scenario without them, but including the fiscal stimulus, the recession would only now be winding down, a full year after the downturn’s actual end... The differences between the baseline and the scenario based on no financial policy responses... represent our estimates of the combined effects of the various policy efforts to stabilize the financial system — and they are very large. By 2011, real GDP is almost $800 billion (6%) higher because of the policies, and the unemployment rate is almost 3 percentage points lower. By the second quarter of 2011 — when the difference between the baseline and this scenario is at its largest — the financial-rescue policies are credited with saving almost 5 million jobs.
Here the baseline is the set of policies actually pursued (including fiscal and financial policies) and it is being compared to the case of "no financial policy responses." However, as Yves Smith and Barry Ritholtz have pointed out, this is an absurd counterfactual. Barry argues that  the proper point of comparison ought to be what should have been done, which in his view is the following:
One by one, we should have put each insolvent bank into receivership, cleaned up the balance [sheet], sold off the bad debts for 15-50 cents on the dollar, fired the management, wiped out the shareholders, and spun out the proceeds, with the bondholders taking the haircut, and the taxpayers on the hook for precisely zero dollars. Citi, Bank of America, Wamu, Wachovia, Countrywide, Lehman, Merrill, Morgan, etc. all of them should have been handled this way.

The net result of this would have been more turmoil, lower stock prices, and a sharper, but much shorter economic contraction. It would have been painful and disruptive — like emergency surgery is — but it's better than an exploded appendix.

And today, we would have a much healthier economy.
Whether or not one agrees with this assessment, Yves and Barry are surely correct in arguing that counterfactuals other than the hands-off policy ought to be considered before one accepts the emerging conventional wisdom that the authorities handled the crisis well.

What the broken trades of May 6 and the bailouts of 2008 have in common is that they were both impulsive decisions, designed to deal with immediate concerns, and executed with little regard for their long-term consequences. As I said in an earlier post, these decisions were made under enormous pressure with little time for reflection, and mistakes made in such circumstances would ordinarily be forgivable. But to insist that the best available course of action was taken, and that any alternative would have had devastating economic costs, is neither credible nor wise.

---

Update (8/20). The comments on this post by Andy Harless, David Merkel and Economics of Contempt are worth reading. Andy thinks that I am attacking a straw man and that the Ritholtz proposal was not even feasible, let alone optimal. David questions the use by Blinder and Zandi of a forecasting model to generate counterfactuals, given the appalling performance of such models in predicting the crisis in the first place. And here's Economics of Contempt:
"Smaller institutions making such mistakes were allowed to go under, while larger ones were bailed out."

I have to take issue with that statement. Yes, large banks were bailed out, but hundreds upon hundreds of small banks were bailed out too! Fully 836 financial institutions were bailed out using TARP money, the vast majority of which were small banks. While it's true that most of the bank failures have been small banks, there were large banks that were allowed to fail too -- e.g., Lehman, WaMu.

As for Barry Ritholtz's alternative scenario, there are too many basic factual errors to take it seriously. For one thing, receivership wasn't available to non-commercial banks. It was also legally impossible to separate AIGFP from AIG, since AIG had unconditionally guaranteed all of AIGFP's liabilities, and all their trades included cross-default provisions. A lot of the actions Barry proposes were literally impossible to do. It's simply not a credible list, and I'm surprised that you would fall for it.

Finally, I think it's unfair to say that the bailouts created bad precedents without also mentioning that we now have a resolution authority for non-bank financial institutions. How are decisions that were made without the availability of a resolution authority proper precedents for decisions that will be made with a resolution authority? You would never say that decisions made in pre-FDIC bank failures are proper precedents for post-FDIC bank failures, would you?
These are all good points. I probably should have been a bit more skeptical when discussing the Ritholtz scenario. I did not intend to endorse his proposal, only to suggest that we need to think through a broad range of counterfactuals in evaluating the response to the crisis. But of course these counterfactuals must be feasible given the tools available at the time, and his point about the resolution authority is well taken.

What bothered me most about Geithner's congressional testimony was his claim that "the government’s strategy regarding AIG was essential to our success in confronting the worst financial crisis in generations." That is, in averting an economic calamity, there was no alternative to the government making massive payouts on privately negotiated speculative bets. This is a bold claim with very serious consequences and ought not to be made lightly. In particular, the consequences of alternative scenarios have to be traced out with some seriousness.