Sunday, November 29, 2009

Maturity Transformation and Liquidity Crises

William Dudley's keynote address at a recent CEPS symposium on the financial system is worth reading in full. What I found especially interesting were the following remarks on structural sources of instability:
The risks of liquidity crises are also exacerbated by some structural sources of instability in the financial system. Some of these sources are endemic to the nature of the financial intermediation process and banking. Others are more specific to the idiosyncratic features of our particular system. Both types deserve attention because they tend to amplify the pressures that lead to liquidity runs.

Turning first to the more inherent sources of instability, there are at least two that are worthy of mention. The first instability stems from the fact that most financial firms engage in maturity transformation — the maturity of their assets is longer than the maturity of their liabilities. The need for maturity transformation arises from the fact that the preferred habitat of borrowers tends toward longer-term maturities used to finance long-lived assets such as a house or a manufacturing plant, compared with the preferred habitat of investors, who generally have a preference to be able to access their funds quickly. Financial intermediaries act to span these preferences, earning profits by engaging in maturity transformation — borrowing shorter-term in order to finance longer-term lending.
If a firm engages in maturity transformation so that its assets mature more slowly than its liabilities, it does not have the option of simply allowing its assets to mature when funding dries up. If the liabilities cannot be rolled over, liquidity buffers will soon be weakened. Maturity transformation means that if funding is not forthcoming, the firm will have to sell assets. Although this is easy if the assets are high-quality and liquid, it is hard if the assets are lower quality. In that case, the forced asset sales are likely to lead to losses, which deplete capital and raise concerns about insolvency.
The second inherent source of instability stems from the fact that firms are typically worth much more as going concerns than in liquidation. This loss of value in liquidation helps to explain why liquidity crises can happen so suddenly. Initially, no one is worried about liquidation. The firm is well understood to be solvent... But once counterparties start to worry about liquidation, the probability distribution can shift very quickly toward the insolvency line... because the liquidation value is lower than the firm’s value as a going concern...
These sources of instability create the risk of a cascade... Once the firm’s viability is in question and it does not have access to an insured deposit funding base, the next stop is often a full-scale liquidity crisis that often cannot be stopped without massive government intervention.
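Dudley's fire-sale arithmetic can be made concrete with a stylized balance sheet (all numbers below are hypothetical, chosen only for illustration): to raise cash C by selling assets at a fractional fire-sale discount h, a firm must part with face value C/(1-h), booking a loss of C·h/(1-h) against its equity.

```python
def fire_sale_equity(equity, cash_needed, haircut):
    """Equity remaining after raising `cash_needed` through asset sales
    at a fractional fire-sale discount `haircut` (0 <= haircut < 1).

    Selling face value F fetches (1 - haircut) * F in cash, so raising
    C requires F = C / (1 - haircut), for a loss of C * h / (1 - h).
    """
    loss = cash_needed * haircut / (1.0 - haircut)
    return equity - loss

# A hypothetical bank with 8 of equity that must raise 20 of cash
# when its funding dries up: liquid assets cost nothing to sell, but
# a 30 percent haircut on lower-quality assets wipes out the equity.
for h in (0.0, 0.10, 0.30):
    print(f"haircut {h:.0%}: equity after sales = {fire_sale_equity(8.0, 20.0, h):.2f}")
```

The point of the sketch is the nonlinearity Dudley describes: solvency concerns arise not from the funding gap itself but from the discount at which assets must be sold to close it.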
As Dudley notes, maturity transformation is "endemic to the nature of the financial intermediation process and banking." But non-financial firms (and the United States Treasury) can also engage in maturity transformation by borrowing short relative to their expected revenue streams. This is what Hyman Minsky called speculative (as opposed to hedge) financing. One of Minsky's key insights was that over a period of stable growth with relatively tranquil financial markets, there is a progressive shift away from hedge and towards speculative financing:
The natural starting place for analyzing the relation between debt and income is to take an economy with a cyclical past that is now doing well. The inherited debt reflects the history of the economy, which includes a period in the not too distant past in which the economy did not do well. Acceptable liability structures are based upon some margin of safety, so that expected cash flows, even in periods when the economy is not doing well, will cover contractual debt payments. As the period over which the economy does well lengthens, two things become evident in board rooms. Existing debts are easily validated and units that were heavily in debt prospered; it paid to lever. After the event it becomes apparent that the margins of safety built into debt structures were too great. As a result, over a period in which the economy does well, views about acceptable debt structure change. In the deal making that goes on between banks, investment bankers, and businessmen, the acceptable amount of debt to use in financing various types of activity and positions increases. (Minsky 1982, p.65)
Short-term financing of long-lived capital assets is lucrative as long as debts can be rolled over easily at relatively stable interest rates. But this induces more firms to engage in speculative rather than hedge financing, making the demand for refinancing increasingly inelastic. The eventual result is a crisis of liquidity and a shift back towards hedge financing.
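Minsky's taxonomy can be stated as a simple cash-flow test (a sketch with hypothetical numbers, not Minsky's own formalism): a hedge unit's operating cash flow covers both interest and maturing principal; a speculative unit covers interest but must refinance principal; a Ponzi unit cannot even cover interest.

```python
def minsky_classify(cash_flow, interest_due, principal_due):
    """Classify a borrowing unit by Minsky's cash-flow criterion."""
    if cash_flow >= interest_due + principal_due:
        return "hedge"        # services debt fully out of income
    if cash_flow >= interest_due:
        return "speculative"  # must roll over maturing principal
    return "Ponzi"            # must borrow even to pay interest

# Shortening maturities raises the principal falling due each period,
# pushing a unit from hedge to speculative with no change in income.
print(minsky_classify(10.0, 3.0, 5.0))   # hedge: long maturities
print(minsky_classify(10.0, 3.0, 12.0))  # speculative: same income, short debt
```

This makes the mechanism in the paragraph above explicit: the shift toward speculative financing happens on the liability side alone, so the system's dependence on smooth refinancing grows even when incomes do not deteriorate.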
Many economists (myself included) have tried to construct formal models of the process described by Minsky, but with limited success to date. This may be a good time to give it another shot. 

Thursday, November 26, 2009

On Prediction Markets for Climate Change

There's an interesting debate in progress between Nate Silver and Matt Yglesias on the merits of introducing prediction markets for climate change. Nate is enthusiastic about Robin Hanson's proposal that such markets be developed, Matt is concerned about manipulation of prices by coal and oil interests, and Nate thinks that these concerns are a bit overblown and could be overcome by creating markets that have broad participation and high levels of liquidity.

Nate's argument is roughly as follows: the broader the participation and the greater the volume of trade, the more expensive it will be for an individual or organization to consistently manipulate prices over a period of months or years. If this argument is correct, then markets with limited participation and low volume (such as the Iowa Electronic Markets) should be less efficient at aggregating information than markets with relatively broad participation and much higher volume (such as Intrade). The logic of this argument is so compelling that I was once certain it must be true. But after watching these two markets closely during the 2008 election season, I became convinced that it was IEM rather than Intrade that was sending the more reliable signals, and for some very interesting and subtle reasons.

First of all, let's think for a minute about how one might determine which of two markets is aggregating information more efficiently. We can't just look at events that occurred and examine which of the two markets assigned such events greater probability, because low probability events do indeed sometimes occur. If we had a very large number of events (as in weather forecasting) then one could construct calibration curves to compare markets, but the number of contracts on IEM is very small and this option is not available. So what do we do?

Fortunately, there is a reliable method for comparing the efficiency of the two markets, by looking for and exploiting cross-market arbitrage opportunities. Here's how it works. Open an account in each market with the same level of initial investment. There is a limit of $500 on initial account balances at IEM, so let's take this as our initial investment also at Intrade. Next, look for arbitrage opportunities: differences in prices for the same asset across markets that are large enough for you to make a certain profit, net of trading fees (these are zero on IEM but not on Intrade). Such opportunities do arise, and sometimes last for hours or even days: here's an example. Act on these opportunities, by selling where the price is high and buying where it is low. When prices in the two markets converge, reverse these trades: buy where you initially sold and sell where you initially bought. You will not make much money doing this, since the price differences in general will be small. But what you will do is transfer funds across accounts without making a loss.

How does this help in answering the question of which market is more efficient? After a few weeks or months have passed, your overall balance will have grown slightly, but will now be unevenly distributed across markets. The market in which you have made more money is the one that is less efficient. This is because on average, prices in the less efficient market will move towards those in the more efficient one, and when you reverse your arbitrage position, the profit you will make will be concentrated in the market in which the price has moved most.
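The accounting behind this test can be sketched with hypothetical prices. Suppose the same contract trades at 0.55 in one market and 0.60 in the other; buy in the cheaper market, sell in the dearer one, and unwind when the prices converge. The round-trip profit lands entirely in whichever account held the price that moved:

```python
def arbitrage_pnl(buy_price, sell_price, converged_price):
    """Per-account profit from a buy-low/sell-high arbitrage pair,
    unwound when both markets quote `converged_price`. Trading fees
    are ignored for simplicity.

    Returns (profit in the buy-side account, profit in the sell-side account).
    """
    buy_side = converged_price - buy_price    # bought low, sold at convergence
    sell_side = sell_price - converged_price  # sold high, bought back
    return buy_side, sell_side

# If the cheap market converges up to the dear one, the cheap-market
# account captures the whole spread -- flagging it as less efficient.
p = arbitrage_pnl(0.55, 0.60, 0.60)
print(round(p[0], 2), round(p[1], 2))  # 0.05 0.0
# If instead the dear market converges down, the profit lands there.
q = arbitrage_pnl(0.55, 0.60, 0.55)
print(round(q[0], 2), round(q[1], 2))  # 0.0 0.05
```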

Let me state for the record that I did not, in fact, carry out this experiment although I think it would be a good (and probably publishable) research project. But I did try to see informally which market was better at predicting future prices in the other, and came to the conclusion that it was IEM. This surprised me, and I started to wonder about the reasons why a small, illiquid market with severe restrictions on participation and account balances could be more efficient.

There are two possible reasons. First, Intrade was highly visible in the news media, and changes in prices were regularly reported on blogs across the political spectrum. A fall in the price of a contract could signal weakness in a campaign, generate pessimism about its viability, and result in a collapse in fundraising. Propping up the price during a difficult period therefore made a lot of sense, and could pay for itself several times over with its impact on donations. Dollar for dollar, it was probably a much better investment than television advertising in prime time. I'm not suggesting that the campaigns themselves did this or encouraged it, but it does seem likely that some well-financed supporters took it upon themselves to help out in this way.

The second reason is more interesting. The extent of participation and the volume of trade in a market are not determined simply by the market design; they also depend on the availability of profit opportunities, which itself depends in part on the extent of attempted manipulation. There is an active users' forum on Intrade, and it was clear at the time that a small, smart group of traders was on the lookout for mispriced assets, well aware that such mispricing could arise out of political enthusiasm (as in the nominee contract for Ron Paul) or through active manipulation (as in the Obama and McCain contracts discussed by Nate here).

In other words, the breadth of participation and the volume of trade will be higher when market manipulation is suspected than when it is not. If the climate change futures market is assumed to be efficient, it will probably attract fewer traders and lower volumes of investment. So Nate's solution - the design of a market with high participation and liquidity in order to generate efficiency - contains at its heart a paradox. It is inefficiency that will generate high participation and liquidity should such a market come into existence.

I do believe that the introduction of prediction markets for climate change is a good idea. But I would like to see similar contracts offered across multiple markets, including at least one like the IEM in which participation is limited with respect to both membership and initial balance. This will allow us to carry out an ongoing evaluation of the reliability of market signals, as well as the effectiveness of different market designs.


Update (12/12): Thanks to Paul Hewitt for an extended discussion of this post, and to Chris Masse for linking both here and to Paul's commentary.

Tuesday, November 24, 2009

On Buiter, Goodwin, and Nonlinear Dynamics

A few months ago, Willem Buiter published a scathing attack on modern macroeconomics in the Financial Times. While a lot of attention has been paid to the column's sharp tone and rhetorical flourishes, it also contains some specific and quite constructive comments about economic theory that deserve a close reading. One of these has to do with the limitations of linearity assumptions in models of economic dynamics:
When you linearize a model, and shock it with additive random disturbances, an unfortunate by-product is that the resulting linearised model behaves either in a very strongly stabilising fashion or in a relentlessly explosive manner.  There is no ‘bounded instability’ in such models.  The dynamic stochastic general equilibrium (DSGE) crowd saw that the economy had not exploded without bound in the past, and concluded from this that it made sense to rule out, in the linearized model, the explosive solution trajectories.  What they were left with was something that, following an exogenous  random disturbance, would return to the deterministic steady state pretty smartly.  No L-shaped recessions.  No processes of cumulative causation and bounded but persistent decline or expansion.  Just nice V-shaped recessions.
Buiter is objecting here to a vision of the economy as a stable, self-correcting system in which fluctuations arise only in response to exogenous shocks or impulses. This has come to be called the Frisch-Slutsky approach to business cycles, and its intellectual origins date back to a memorable metaphor introduced by Knut Wicksell more than a century ago: "If you hit a wooden rocking horse with a club, the movement of the horse will be very different to that of the club" (translated and quoted in Frisch 1933). The key idea here is that irregular, erratic impulses can be transformed into fairly regular oscillations by the structure of the economy. This insight can be captured using linear models, but only if the oscillations are damped - in the absence of further shocks, there is convergence to a stable steady state. This is true no matter how large the initial impulse happens to be, because local and global stability are equivalent in linear models.
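Buiter's dichotomy can be verified in a few lines using a generic second-order difference equation (an illustration, not any particular DSGE model): for x(t) = a1·x(t-1) + a2·x(t-2), complex roots inside the unit circle give damped oscillations, roots outside give explosions, and only the knife-edge case of modulus exactly one sustains a cycle without repeated shocks.

```python
import numpy as np

def impulse_response(a1, a2, periods=200):
    """Response of x_t = a1*x_{t-1} + a2*x_{t-2} to a unit impulse at t=0."""
    x = np.zeros(periods)
    x[0] = 1.0
    x[1] = a1 * x[0]
    for t in range(2, periods):
        x[t] = a1 * x[t - 1] + a2 * x[t - 2]
    return x

def root_modulus(a1, a2):
    """Largest modulus among roots of z^2 - a1*z - a2 = 0."""
    return max(abs(r) for r in np.roots([1.0, -a1, -a2]))

# Damped case: the impulse produces oscillations that die away.
print(root_modulus(1.6, -0.9))               # below 1: stable
print(abs(impulse_response(1.6, -0.9)[-1]))  # essentially zero after 200 periods

# Explosive case: the same structure with roots outside the unit circle.
print(root_modulus(1.6, -1.05))              # above 1: explosive
```

There is no parameter choice here, short of the measure-zero unit-modulus case, that yields Buiter's "bounded instability": persistent fluctuations that neither vanish nor explode.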

A very different approach to business cycles views fluctuations as being caused by the local instability of steady states, which leads initially to cumulative divergence away from balanced growth. Nonlinearities are then required to ensure that trajectories remain bounded. Shocks to the economy can make trajectories more erratic and unpredictable, but are not required to account for persistent fluctuations. An energetic and lifelong proponent of this approach to business cycles was Richard Goodwin, who also produced one of the earliest such models in economics (Econometrica, 1951). Most of the literature in this vein has used aggregate investment functions and would not be considered properly microfounded by contemporary standards (see, for instance, Chang and Smyth 1971, Varian 1979, or Foley 1987). But endogenous bounded fluctuations can also arise in neoclassical models with overlapping generations (Benhabib and Day 1982, Grandmont 1985).
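The flavor of these models can be conveyed with the simplest nonlinear example, the logistic map (used here purely as an illustration, not as any of the models cited above): its steady state is locally unstable, yet every trajectory remains bounded, so fluctuations persist indefinitely with no shocks at all.

```python
def logistic_orbit(r, x0, periods=1000):
    """Iterate the logistic map x_{t+1} = r * x_t * (1 - x_t)."""
    xs = [x0]
    for _ in range(periods):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

r = 3.9
xs = logistic_orbit(r, 0.4)

# The nonzero steady state x* = 1 - 1/r is locally unstable: |f'(x*)| > 1.
steady_state = 1.0 - 1.0 / r
print(abs(r * (1.0 - 2.0 * steady_state)) > 1.0)  # True

# Yet the orbit never leaves the unit interval: bounded, persistent fluctuations.
print(all(0.0 < x < 1.0 for x in xs))             # True
```

Local instability pushes trajectories away from the steady state; the nonlinearity folds them back. This is exactly the "bounded instability" that linear models cannot produce.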

The advantage of a nonlinear approach is that it can accommodate a very broad range of phenomena. Locally stable steady states need not be globally stable, so an economy that is self-correcting in the face of small shocks may experience instability and crisis when hit by a large shock. This is Axel Leijonhufvud's corridor hypothesis, which its author has discussed in a recent column. Nonlinear models are also required to capture Hyman Minsky's financial instability hypothesis, which argues that periods of stable growth give rise to underlying behavioral changes that eventually destabilize the system. Such hypotheses cannot possibly be explored formally using linear models.

This, I think, is the point that Buiter was trying to make. It is the same point made by Goodwin in his 1951 Econometrica paper, which begins as follows:
Almost without exception economists have entertained the hypothesis of linear structural relations as a basis for cycle theory. As such it is an oversimplified special case and, for this reason, is the easiest to handle, the most readily available. Yet it is not well adapted for directing attention to the basic elements in oscillations - for these we must turn to nonlinear types. With them we are enabled to analyze a much wider range of phenomena, and in a manner at once more advanced and more elementary. 
By dropping the highly restrictive assumptions of linearity we neatly escape the rather embarrassing special conclusions which follow. Thus, whether we are dealing with difference or differential equations, so long as they are linear, they either explode or die away with the consequent disappearance of the cycle or the society. One may hope to avoid this unpleasant dilemma by choosing that case (as with the frictionless pendulum) just in between. Such a way out is helpful in the classroom, but it is nothing more than a mathematical abstraction. Therefore, economists will be led, as natural scientists have been led, to seek in nonlinearities an explanation of the maintenance of oscillation. Advice to this effect, given by Professor Le Corbeiller in one of the earliest issues of this journal, has gone largely unheeded.
And sixty years later, it remains largely unheeded.

Update (11/27): Thanks to Mark Thoma for reposting this.
Update (11/28): Mark has an interesting follow up post on Varian (1979).
Update (11/29): Barkley Rosser continues the conversation.

Monday, November 23, 2009

A Further Comment on the Term Structure of Interest Rates

In my last post, I raised some questions about Paul Krugman's view that the government should not be deterred from implementing job creation policies by the fear of raising long term interest rates:
What Krugman seems to be advocating is the following: if long term rates should start to rise, the Treasury should finance the deficit by issuing more short-term (and less long-term) debt, thereby flattening the yield curve and holding long term rates low. This would prevent capital losses for carry traders (although it would lower the continuing profitability of the carry trade if short rates rise).
In effect, Krugman is arguing that the Treasury should itself act like a carry trader: rolling over short term debt to finance a long-term structural deficit. But why is this not being done already? Take a look at the current Treasury yield curve... What is currently preventing the Treasury from borrowing at much more attractive short rates to finance the deficit? Is it a fear of driving up short rates? And if so, won't the same concerns be in place if long term rates start to rise?
From today's New York Times comes a partial answer:
Treasury officials now face a trifecta of headaches: a mountain of new debt, a balloon of short-term borrowings that come due in the months ahead, and interest rates that are sure to climb back to normal as soon as the Federal Reserve decides that the emergency has passed.
Even as Treasury officials are racing to lock in today’s low rates by exchanging short-term borrowings for long-term bonds, the government faces a payment shock similar to those that sent legions of overstretched homeowners into default on their mortgages.
So the Treasury is currently swapping short term obligations for long term ones. Given their reasons for doing this, I don't see that the solution proposed by Krugman - that "the government issue more short-term debt" if long term rates start to rise - is going to be feasible. On the other hand, his suggestion that the Fed buy more long term bonds may still be an option.
Update: Both Dean Baker and Brad DeLong are unhappy with the Times column I linked to above, and probably for good reason. But as long as it accurately describes the current behavior of the Treasury, my argument still stands. Further increases in deficit spending may be a good idea, but they have to be financed in some way, and it's worth thinking about the implications of different maturity dates for new issues.

Saturday, November 21, 2009

On Carry Traders and Long Term Interest Rates

Tyler Cowen thinks that this post by Paul Krugman on long term interest rates and a follow up by Brad DeLong are critically important and "two of the best recent economics blog posts, in some time".
Krugman's post deals with the question of why some economists in the administration are concerned that further increases in deficit financing could cause long term rates to spike:
Well, what I hear is that officials don’t trust the demand for long-term government debt, because they see it as driven by a “carry trade”: financial players borrowing cheap money short-term, and using it to buy long-term bonds. They fear that the whole thing could evaporate if long-term rates start to rise, imposing capital losses on the people doing the carry trade; this could, they believe, drive rates way up, even though this possibility doesn’t seem to be priced in by the market.

What’s wrong with this picture?

First of all, what would things look like if the debt situation were perfectly OK? The answer, it seems to me, is that it would look just like what we’re seeing.

Bear in mind that the whole problem right now is that the private sector is hurting, it’s spooked, and it’s looking for safety. So it’s piling into “cash”, which really means short-term debt. (Treasury bill rates briefly went negative yesterday). Meanwhile, the public sector is sustaining demand with deficit spending, financed by long-term debt. So someone has to be bridging the gap between the short-term assets the public wants to hold and the long-term debt the government wants to issue; call it a carry trade if you like, but it’s a normal and necessary thing.

Now, you could and should be worried if this thing looked like a great bubble — if long-term rates looked unreasonably low given the fundamentals. But do they? Long rates fluctuated between 4.5 and 5 percent in the mid-2000s, when the economy was driven by an unsustainable housing boom. Now we face the prospect of a prolonged period of near-zero short-term rates — I don’t see any reason for the Fed funds rate to rise for at least a year, and probably two — which should mean substantially lower long rates even if you expect yields eventually to rise back to 2005 levels. And if we’re facing a Japanese-type lost decade, which seems all too possible, long rates are in fact still unreasonably high.

Still, what about the possibility of a squeeze, in which rising rates for whatever reason produce a vicious circle of collapsing balance sheets among the carry traders, higher rates, and so on? Well, we’ve seen enough of that sort of thing not to dismiss the possibility. But if it does happen, it’s a financial system problem — not a deficit problem. It would basically be saying not that the government is borrowing too much, but that the people conveying funds from savers, who want short-term assets, to the government, which borrows long, are undercapitalized.
And the remedy should be financial, not fiscal. Have the Fed buy more long-term debt; or let the government issue more short-term debt. Whatever you do, don’t undermine recovery by calling off jobs creation.
What Krugman seems to be advocating is the following: if long term rates should start to rise, the Treasury should finance the deficit by issuing more short-term (and less long-term) debt, thereby flattening the yield curve and holding long term rates low. This would prevent capital losses for carry traders (although it would lower the continuing profitability of the carry trade if short rates rise).
In effect, Krugman is arguing that the Treasury should itself act like a carry trader: rolling over short term debt to finance a long-term structural deficit. But why is this not being done already? Take a look at the current Treasury yield curve:

What is currently preventing the Treasury from borrowing at much more attractive short rates to finance the deficit? Is it a fear of driving up short rates? And if so, won't the same concerns be in place if long term rates start to rise?
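Krugman's fundamentals argument in the quoted passage is, at bottom, expectations-hypothesis arithmetic: a long rate is roughly the average of expected short rates over its life, ignoring term premia. The numbers below are hypothetical, chosen only to match the ranges he mentions.

```python
def eh_long_rate(expected_short_rates):
    """Long rate under the pure expectations hypothesis: the average
    of the expected one-period short rates, ignoring term premia."""
    return sum(expected_short_rates) / len(expected_short_rates)

# Two years of near-zero short rates, then a return to a hypothetical
# 4.75 percent, averaged over a ten-year horizon:
path = [0.0, 0.0] + [4.75] * 8
print(eh_long_rate(path))  # 3.8 -- below the mid-2000s range of 4.5 to 5 percent
```

On this arithmetic, long rates below their mid-2000s levels need not signal a bubble: they are what the expected path of short rates implies.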

Friday, November 20, 2009

Econometric Society Fellows: A Tale of Two Duncans

The Fellows of the Econometric Society are an elite group of economists, numbering fewer than 500, nominated and elected by their peers:
To be eligible for nomination as a Fellow, a person must have published original contributions to economic theory or to such statistical, mathematical, or accounting analyses as have a definite bearing on problems in economic theory... Candidates elected to Fellowship are those with a total number of check marks at least equal to 30 percent of the number of mail ballots submitted by Fellows. Over the past decade about 15 candidates per year have been elected as new Fellows.
Among the most recently elected fellows is 84-year-old R. Duncan Luce, who by most accounts should have been elected decades ago. Jeff Ely (himself a newly elected fellow) explains why it took so long:
The problem is that there are many economists and its costly to investigate each one to see if they pass the bar. So you pick a shortlist of candidates who are contenders and you investigate those.  Some pass, some don’t.  Now, the next problem is that there are many fellows and many non-fellows and its hard to keep track of exactly who is in and who is out.  And again it’s costly to go and check every vita to find out who has not been admitted yet.
So when you pick your shortlist, you are including only economists who you think are not already fellows.  Someone like Duncan Luce, who certainly should have been elected 30 years ago most likely was elected 30 years ago so you would never consider putting him on your shortlist.
Indeed, the simple rule of thumb you would use is to focus on young people for your shortlist.  Younger economists are more likely to be both good enough and not already fellows.
This makes sense. But the proliferation of blogs makes the costs of identifying individuals who have been unfairly overlooked much lower, because the task can be decentralized. Anyone anywhere in the world can make a case and hope that some existing fellows take notice.
In this spirit, let me make a case for Duncan Foley. While still a graduate student in the 1960s, Foley introduced what is now a standard concept of fairness into general equilibrium theory. Here's what Andrew Postlewaite wrote in 1988 about this innovation:
Nearly 20 years ago Duncan Foley introduced a notion of fairness which was completely consistent with standard economic models. This notion was that of envy, or more precisely, lack of envy. An economic outcome was said to be envy-free if no one preferred another's final bundle of goods and services to his or her own bundle. The concept is both compelling and easily accommodated in standard economic models. It is attractive on several grounds. First, it is ordinal - it does not depend upon the particular utility function representing one's preferences, and thus avoids all the problems associated with interpersonal comparison of utilities. Second, the concept relies on precisely the same economic data necessary to determine the efficiency or nonefficiency of the outcomes associated with a particular policy. After Foley introduced this concept into modern economics a number of economists, including Pazner, Schmeidler, Varian, Vind, and Yaari, analyzed and extended the concept.
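Foley's criterion is easy to state computationally. The following is a toy two-agent, two-good example with hypothetical utility functions, not anything from Foley's paper: an allocation is envy-free if no agent would rather have another agent's bundle, evaluated by her own utility.

```python
def is_envy_free(bundles, utilities):
    """True if no agent i prefers agent j's bundle to her own.

    `bundles[i]` is agent i's bundle; `utilities[i]` is agent i's
    utility function. The test is ordinal: any monotone transformation
    of a utility function leaves the answer unchanged.
    """
    return all(
        utilities[i](bundles[i]) >= utilities[i](bundles[j])
        for i in range(len(bundles))
        for j in range(len(bundles))
    )

# Two goods, two agents with opposite tastes: each gets her favorite good.
u = [lambda b: 2 * b[0] + b[1],   # agent 0 values good 0 more
     lambda b: b[0] + 2 * b[1]]   # agent 1 values good 1 more
print(is_envy_free([(1, 0), (0, 1)], u))  # True
print(is_envy_free([(0, 1), (1, 0)], u))  # False: each envies the other
```

Note that the check uses only the data Postlewaite mentions: each agent's preferences and the proposed allocation, with no interpersonal comparison of utilities.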
It is now more than 40 years since Foley developed these ideas. For the concept of envy-free allocations alone he deserves to be elected, but this is just one of several notable contributions. His papers on the core of an economy with public goods (Econometrica 1970), equilibrium with costly marketing (Journal of Economic Theory 1970), portfolio choice and growth (American Economic Review 1970), asset management with trading uncertainty (Review of Economic Studies 1975), and asset equilibrium in macroeconomic models (Journal of Political Economy 1975) all continue to be widely cited. He has made influential contributions to Marxian economics (Journal of Economic Theory 1982) and used ideas from classical thermodynamics to understand price dispersion (Journal of Economic Theory 1994). And he has written several books, including a pioneering effort in 1971 with Miguel Sidrauski on monetary and fiscal policy in a growing economy.

It is my hope that someday soon he will be nominated and elected a new fellow of the Econometric Society, an honor he so richly deserves.

Thursday, November 19, 2009

On Rational Expectations and Equilibrium Paths

Via Mark Thoma, I recently came across this post by Paul De Grauwe:
There is a general perception today that the financial crisis came about as a result of inefficiencies in the financial markets and economic actors’ poor understanding of the nature of risks. Yet mainstream macroeconomic models, as exemplified by the dynamic stochastic general equilibrium (DSGE) models, are populated by agents who are maximising their utilities in an intertemporal framework using all available information including the structure of the model... In other words, agents in these models have incredible cognitive abilities. They are able to understand the complexities of the world, and they can figure out the probability distributions of all the shocks that can hit the economy. These are extraordinary assumptions that leave the outside world perplexed about what macroeconomists have been doing during the last decades.
De Grauwe goes on to argue that rational expectations models are "intellectual heirs of central planning" and makes a case for a "bottom-up" or agent-based approach to macroeconomics.
The rational expectations hypothesis is actually even more demanding than De Grauwe's post suggests, since it is an equilibrium assumption rather than just a behavioral hypothesis. It therefore requires not only that agents have "incredible cognitive abilities" but also that this fact is common knowledge among them, and that they are able to coordinate their behavior in order to jointly traverse an equilibrium path. This point has been made many times; for a particularly clear statement of it see the chapter by Mario Henrique Simonsen in The Economy as an Evolving Complex System.

Equilibrium analysis can be very useful in economics provided that the conclusions derived from it are robust to minor changes in specification. In order for this to be the case, it is important that equilibrium paths are stable with respect to plausible disequilibrium dynamics. As Richard Goodwin once said, an unstable equilibrium is "the one place the system will never be found." But while equilibrium dynamics are commonplace in economics now, the stability of equilibrium paths with respect to disequilibrium dynamics is seldom considered worth exploring. 

Wednesday, November 18, 2009

On Efficient Markets and Practical Purposes

Eugene Fama continues to believe that the efficient markets hypothesis "provides a good view of the world for almost all practical purposes" and Robert Lucas seems to agree:
One thing we are not going to have, now or ever, is a set of models that forecasts sudden falls in the value of financial assets, like the declines that followed the failure of Lehman Brothers in September. This is nothing new. It has been known for more than 40 years and is one of the main implications of Eugene Fama’s “efficient-market hypothesis” (EMH), which states that the price of a financial asset reflects all relevant, generally available information. If an economist had a formula that could reliably forecast crises a week in advance, say, then that formula would become part of generally available information and prices would fall a week earlier.
It is surely true that if a crash could reliably be predicted to occur a week from today, then it would occur at once. But what if it were widely believed that stock prices were well above fundamental values, and that barring any major changes in fundamentals, a crash could reliably be predicted to occur at some point over the next couple of years? Since the timing of the crash remains uncertain, any fund manager who attacks the bubble too soon stands to lose a substantial sum. For instance, many major market players entered large short positions in technology stocks in 1999 but were unable or unwilling to meet margin calls as the Nasdaq continued to rise. Some were wiped out entirely, while others survived but took heavy losses because they called an end to the bubble too soon:
Quantum, the flagship fund of the world's biggest hedge fund investment group, is suffering its worst ever year after a wrong call that the "internet bubble" was about to burst... Quantum bet heavily that shares in internet companies would fall. Instead, companies such as Amazon.com, the online retailer, and Yahoo, the website search group, rose to all-time highs in April. Although these shares have fallen recently, it was too late for Quantum, which was down by almost 20%, or $1.5bn (£937m), before making up some ground in the past month. Shawn Pattison, a group spokesman, said yesterday: "We called the bursting of the internet bubble too early."
Note that this was written in August 1999, several months before the Nasdaq peaked above 5000, and therefore cannot be said to reflect what Kenneth French might call the false precision of hindsight.
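The arithmetic of attacking a bubble too early is worth making concrete. The following is a stylized sketch, with entirely hypothetical numbers: a fund shorts an overvalued stock but can only absorb a limited adverse move before margin calls force it out.

```python
# Stylized illustration (all numbers hypothetical): a fund shorts an
# overvalued stock at 100 but can only tolerate a 30% adverse move
# before being forced to cover. The bubble inflates further before
# finally bursting.
short_price = 100.0
margin_limit = 130.0   # forced to buy back if the price reaches this level
path = [100, 115, 130, 160, 200, 40]  # price path: bubble inflates, then bursts

position_open = True
pnl = 0.0
for p in path[1:]:
    if position_open and p >= margin_limit:
        pnl = short_price - p   # forced to cover mid-rally, locking in the loss
        position_open = False
if position_open:
    pnl = short_price - path[-1]

print(pnl)  # -30.0: the fund is stopped out before the crash vindicates it
```

Had the fund been able to hold the position through the peak, the eventual crash would have paid 60 per share; because the timing of the crash was uncertain, being right about the bubble was not enough.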

Along similar lines, a 1986 paper by Frankel and Froot contained survey evidence on expectations suggesting that investors believed both that the dollar was overvalued at the time, and that it would appreciate further in the short term. They were unwilling, therefore, to short the dollar despite believing that it would decline substantially sooner or later.
A crash will occur when there is coordinated selling by many investors making independent decentralized decisions, and a bubble may continue to grow until such coordination arises endogenously. In his response to Lucas, Markus Brunnermeier sums up this view as follows:
Of course, as Bob Lucas points out, when it is commonly known among all investors that a bubble will burst next week, then they will prick it already today. However, in practice each individual investor does not know when other investors will start trading against the bubble. This uncertainty makes each individual investor nervous about whether he can be out of (or short) the market sufficiently long until the bubble finally bursts. Consequently, each investor is reluctant to lean against the wind. Indeed, investors may in fact prefer to ride a bubble for a long time such that price corrections only occur after a long delay, and often abruptly. Empirical research on stock price predictability supports this view. Furthermore, since funding frictions limit arbitrage activity, the fact that you can’t make money does not imply that the “price is right”.
This way of thinking suggests a radically different approach for the future financial architecture. Central banks and financial regulators have to be vigilant and look out for bubbles, and should help investors to synchronize their effort to lean against asset price bubbles. As the current episode has shown, it is not sufficient to clean up after the bubble bursts, but essential to lean against the formation of the bubble in the first place.
This argument is made with a great deal of care and technical detail in a 2003 Econometrica paper by Abreu and Brunnermeier. If true, then clearly there are some terribly important practical purposes for which the EMH does not provide a good view of the world.

Tuesday, November 17, 2009

Eric Maskin's Reading Recommendations

Thanks to Tomas Sjöström, I recently came across an interview with Eric Maskin in which he states:
I don’t accept the criticism that economic theory failed to provide a framework for understanding this crisis... I think most of the pieces for understanding the current financial mess were in place well before the crisis occurred.
Maskin identifies five contributions that he considers to be particularly useful: Diamond and Dybvig on bank runs, Holmstrom and Tirole on moral hazard and liquidity crises in bank lending, Dewatripont and Tirole on the regulation of bank capitalization, Kiyotaki and Moore on the amplification and spread of declines in collateral values, and Fostel and Geanakoplos on leverage cycles.
What's striking to me about this set of readings is that they skew heavily towards microeconomic theory, and are essentially independent of canonical models in contemporary macroeconomics. At some point perhaps graduate textbooks in macroeconomics will feature a fully integrated analysis of goods, labor and financial markets in which collateral and leverage are linked to output, employment, and prices in a serious way. In the meantime, there are two recent (post-crisis) papers that I would add to Maskin's list: Adrian and Shin and Brunnermeier and Pedersen.

By the way, if you follow the link to the complete interview, the photograph at the top of the page depicts anxious depositors outside a branch office of Northern Rock, the first British financial institution since 1866 to experience a classic bank run. Hyun Shin's paper on the failure of Northern Rock is also well worth reading.

Saturday, November 14, 2009

A Puzzling Takedown Notice

In my post on the Gates arrest I linked to a video clip of Anderson Cooper's interview with Leon Lashley. This has now been taken down by YouTube, presumably in response to (or anticipation of) a copyright infringement claim by CNN.
I find this behavior puzzling and appalling. Puzzling because the clip is not available anywhere, not even on CNN's own site where it could generate traffic and advertising revenue. Appalling because the clip has educational value, for reasons discussed in my earlier post.
Screen shots of the interview are available at YouTomb, along with the precise date and time of removal. In case you're wondering:
YouTomb is a research project of MIT Free Culture. The purpose of the project is to investigate what kind of videos are subject to takedown notices due to allegations of copyright infringement with particular emphasis on those for which the takedown may be mistaken.
I'd say this takedown was mistaken...

Thursday, November 12, 2009

Leon Lashley and the Gates Arrest

Over the past few years I have been working with Dan O'Flaherty on the manner in which racial stereotypes condition behavior in interactions between strangers. Our focus has been on interactions involving criminal offenders, victims, witnesses and law enforcement officials. I wrote down the following thoughts in July of this year, after watching a short television segment on the Gates arrest. 

In a now famous photograph depicting Henry Louis Gates in handcuffs outside his Cambridge residence, there is a black police officer standing prominently in the foreground. The officer, Sergeant Leon Lashley, recently defended the actions of his colleague James Crowley in an interview with CNN’s Anderson Cooper, maintaining that the arrest of Gates was warranted under the circumstances. But Lashley also made the following conjecture: “Would it have been different if I had shown up first? I think it probably would have been different.” When asked to elaborate, he said simply “black man to black man, it probably would have been different.”

I suspect that most of us would agree with Lashley that the event would have played out differently had he been the first officer on the scene, although we might disagree about the reasons for this. Some might argue that Lashley would have been quicker to recognize that Gates was an educated professional in his own home rather than a legitimate burglary suspect. Accordingly, he may have shown him greater courtesy and respect, quickly verified his identification, and left the scene without a fuss.

But even if Lashley had acted in every respect exactly as Crowley did, events would probably have developed quite differently, because there would have been less uncertainty in the mind of Gates regarding the officer’s motives. Just as Crowley could not immediately know whether Gates was the homeowner or a burglar, Gates could not know whether or not Crowley’s behavior was racially motivated. He may have been mistaken in his belief that he was dealing with a racist cop, but the suspicion itself was not without empirical foundation. As long as there are white officers who take particular satisfaction in intimidating and arresting black suspects, such uncertainties will remain widespread.

This problem is pervasive when communication between strangers occurs across racial lines in America. If a white store clerk or parking attendant is rude to a white customer, the latter is likely to attribute it to an abrasive personality or a bad mood. If the customer is black, however, there is the additional suspicion that the behavior is motivated by racial animosity. The same action can be given different interpretations and meanings that depend crucially on the racial identities of the transacting parties.

As a result, equal treatment need not result in equal outcomes. Even if a white police officer behaves in exactly the same way towards all suspects, regardless of race, he will be viewed and treated in a manner that is not similarly neutral. Black men who suspect his motives may react with an abundance of caution, taking elaborate steps to avoid being seen as provocative. Or they may react, as Gates did, with anger and outrage. In either case, the reaction will be race-contingent, even if the officer’s behavior is not.

There is a lesson to be learned here as far as the training of police is concerned. Striving towards the goal of equal treatment may not be adequate under current conditions because the same cues can have different, race-contingent interpretations. Officers should be alert to this possibility, and perhaps respond by being especially courteous in interactions that cross racial lines. The costs of doing so would appear to be small relative to the potential benefits.

Sunday, November 08, 2009

More on Ostrom

A few years ago I wrote a review of Polycentric Games and Institutions, a wide-ranging collection of papers written by affiliates of the Workshop in Political Theory and Policy Analysis at Indiana University. What unites the various chapters in the book is a shared commitment to the analytical vision of Elinor Ostrom, a co-recipient of this year's Nobel Prize in Economics. I thought I would post a couple of extracts from the review as an addendum to my earlier post in appreciation of Ostrom's work (the complete review is here):
Although several distinguished scholars have been affiliated with the workshop over the years, Ostrom remains its leading light and creative force. It is fitting, therefore, that the book concludes with her 1988 Presidential Address to the American Political Science Association. In this chapter, she identifies serious shortcomings in prevailing theories of collective action. Approaches based on the hypothesis of unbounded rationality and material self-interest often predict a “tragedy of the commons” and prescribe either privatization of common property or its appropriation by the state. Policies based on such theories, in her view, “have been subject to major failure and have exacerbated the very problems they were intended to ameliorate”. What is required, instead, is an approach to collective action that places reciprocity, reputation and trust at its core. Any such theory must take into account our evolved capacity to learn norms of reciprocity, and must incorporate a theory of boundedly rational and moral behavior. It is only in such terms that the effects of communication on behavior can be understood. Communication is effective in fostering cooperation, in Ostrom’s view, because it allows subjects to build trust, form group identities, reinforce reciprocity norms, and establish mutual commitment. The daunting task of building rigorous models of economic and political choice in which reciprocity and trust play a meaningful role is only just beginning.
The review ends with the following paragraph:
The key conclusions drawn by the contributors are nuanced and carefully qualified, but certain policy implications do emerge from the analysis. The most important of these is that local communities can often find autonomous and effective solutions to collective-action problems when markets and states fail to do so. Such institutions of self-governance are fragile: large-scale interventions, even when well-intentioned, can disrupt and damage local governance structures, often resulting in unanticipated welfare losses. When a history of successful community resource management is in evidence, significant interventions should be made with caution. Once destroyed, evolved institutions are every bit as difficult to reconstruct as natural ecosystems, and a strong case can be made for conserving those that achieve acceptable levels of efficiency and equity. By ignoring the possibility of self-governance, one puts too much faith in the benevolence of a national government that is too large for local problems and too small for global ones. Moreover, as Ostrom points out in the concluding chapter, by teaching successive generations that the solution to collective-action problems lies either in the market or in the state, “we may be creating the very conditions that undermine our democratic way of life”. The stakes could not be higher.
Vernon Smith and Paul Romer have also written very nice tributes to Ostrom, which may be found here and here respectively.

Saturday, November 07, 2009

Brad DeLong on Modern Macroeconomic Theory

A couple of months ago Brad DeLong wrote a post on the state of modern macroeconomic theory that every graduate student in economics really ought to read. He finds the current situation profoundly disturbing:
There is, after all, no place for economic theory of any flavor to come from than from economic history. Someone observes some instructive case or some anecdotal or empirical regularity, says “this is interesting; let's build a model of this,” and economic theory is off and running. Theory is crystalized history—it can be nothing more. After the initial crystalization it does develop on its own according to its own intellectual imperatives and processes, true, but the seed is still there. What happened to the seed?
This situation is personally and professionally dismaying. I do not say that the macroeconomic model-building of the past generation has been pointless. I don’t think that it has been pointless. But I do think that the assembled modern macroeconomists need to be rounded up, on pain of loss of tenure, and sent to a year-long boot camp with the assembled monetary historians of the world as their drill sergeants. They need to listen to and learn from Dick Sylla about Cornelius Buller’s bank rescue of 1825 and Charlie Calomiris about the Overend, Gurney crisis and Michael Bordo about the first bankruptcy of Baring brothers and Barry Eichengreen and Christy Romer and Ben Bernanke about the Great Depression.
If modern macroeconomics does not reconnect—if they do not realize just what their theories are crystallized out of, and what the point of the enterprise is—then they will indeed wither and die.
What else should one be reading to make sense of the recent past? If I had to choose one book, it would be Hyman Minsky's Stabilizing an Unstable Economy, which has recently been republished. It is a work of macroeconomic theory of the highest order. And for a flavor of the arguments contained in it, take a look at The Plankton Theory Meets Minsky by Paul McCulley of PIMCO.

Friday, November 06, 2009

On Elinor Ostrom

The choice of Elinor Ostrom as a co-recipient of the 2009 Nobel Prize in Economics has taken many, if not most, economists by surprise.  The annual betting pool at Harvard did not receive a single entry for Ostrom, so half the winnings were shared by those who predicted that there would be no correct guess.  On his widely read blog, Steve Levitt reacted as follows:
If you had done a poll of academic economists yesterday and asked who Elinor Ostrom was, or what she worked on, I doubt that more than one in five economists could have given you an answer. I personally would have failed the test… the economics profession is going to hate the prize going to Ostrom even more than Republicans hated the Peace prize going to Obama.
I, for one, am thrilled at the choice. Ostrom’s extensive research on local governance has shattered the myth of inevitability surrounding the “tragedy of the commons” and curtailed the uncritical application of the free-rider hypothesis to collective action problems. Prior to her work it was widely believed that scarce natural resources such as forests and fisheries would be wastefully used and degraded or exhausted under common ownership, and therefore had to be either state owned or held as private property in order to be efficiently managed. Ostrom demonstrated that self-governance was possible when a group of users had collective rights to the resource, including the right to exclude outsiders, and the capacity to enforce rules and norms through a system of decentralized monitoring and sanctions. This is clearly a finding of considerable practical significance.
As importantly, the award recognized an approach to research that is practically extinct in contemporary economics. Ostrom developed her ideas by reading and generalizing from a vast number of case studies of forests, fisheries, groundwater basins, irrigation systems, and pastures.  Her work is rich in institutional detail and interdisciplinary to the core. She used game theoretic models and laboratory experiments to refine her ideas, but historical and institutional analysis was central to this effort.  She deviated from standard economic assumptions about rationality and self-interest when she felt that such assumptions were at variance with observed behavior, and did so long before behavioral economics was in fashion.
The decision by the Nobel Committee to recognize and reward work that is so methodologically eclectic and interdisciplinary might be viewed as a signal to the profession that it is insufficiently tolerant of heterogeneity and dissent. This is a particularly salient criticism in light of the recent financial crisis and the severity of the accompanying economic contraction. Could it not be argued that economists would have been less surprised by the events of the past couple of years, and better able to contain the damage, had there been more methodological pluralism and less reliance on canonical models in the training of economists at our leading universities?
It may be countered that economics has indeed imported many methods and ideas from other disciplines. Behavioral finance and experimental economics are both areas of intensely active research, and work in these fields is routinely published in top journals.  However, even such interdisciplinary cross-fertilization has something of a faddish character to it, with excessively high expectations of what it can accomplish. Behavioral economics, for instance, has been very successful in identifying the value of commitment devices in household savings decisions, and accounting for certain anomalies in asset price behavior. But regularities identified in controlled laboratory experiments with standard subject pools have limited application to environments in which the distribution of behavioral propensities is both endogenous and psychologically atypical. This is the case in financial markets, which are subject to selection at a number of levels. Those who enter the profession are unlikely to be psychologically typical, and market conditions determine which behavioral propensities survive and thrive at any point in historical time.
Hence it could be argued that the recognition of Ostrom’s work this year was not just appropriate but also wise. There is no doubt that her research has dramatically transformed our thinking about the feasibility and efficiency of common property regimes. In addition, it serves as a reminder that her eclectic and interdisciplinary approach to social science can be enormously fruitful. In making this selection at this time, it is conceivable that the Nobel Committee is sending a message that methodological pluralism is something our discipline would do well to restore, preserve and foster.

Thursday, November 05, 2009

On Michał Kalecki

Brad DeLong posted the following earlier today, referring to Michal (not Michel) Kalecki:
Back in the 1930s there was a Polish Marxist economist, Michel Kalecki, who argued that recessions were functional for the ruling class and for capitalism because they created excess supply of labor, forced workers to work harder to keep their jobs, and so produced a rise in the rate of relative surplus-value.
For thirty years, ever since I got into this business, I have been mocking Michel Kalecki. I have been pointing out that recessions see a much sharper fall in profits than in wages. I have been saying that the pace of work slows in recessions--that employers are more concerned with keeping valuable employees in their value chains than using a temporary high level of unemployment to squeeze greater work effort out of their workers.
I don't think that I can mock Michel Kalecki any more, ever again.
Few economists have been more unjustly neglected by the profession over the past century than Kalecki, so I was happy to see him mentioned on DeLong's widely read blog. But I feel obliged to point out that Kalecki was well aware that "recessions see a much sharper fall in profits than in wages". His two-sector model predicts precisely this. Kalecki assumes that all wages (and no profits) are spent on consumption goods, which implies that total sales in the consumption good sector are equal to the sum of wages in both (consumption and investment good) sectors. Hence total investment expenditures equal total profits in the economy. Since total investment expenditure is determined by the aggregate (uncoordinated) decisions of firms, Kalecki concluded that while workers spend what they get, capitalists get what they spend. When private investment collapses (the hallmark of a recession), so do profits.
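The accounting behind this conclusion can be written out in a few lines (the two-sector notation here is mine, not Kalecki's own). Let $W_C, P_C$ and $W_I, P_I$ denote wages and profits in the consumption and investment good sectors respectively. If all wages, and no profits, are spent on consumption goods, then

\begin{align*}
\text{Sales}_C &= W_C + W_I, \\
P_C &= \text{Sales}_C - W_C = W_I, \\
P_C + P_I &= W_I + P_I = I,
\end{align*}

where the last step uses the fact that investment expenditure $I$ equals investment-sector revenue $W_I + P_I$. Total profits equal total investment as a matter of accounting, so a collapse in private investment is, line for line, a collapse in profits.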

Kalecki's work anticipated significant parts of Keynes' General Theory, and it's a pity he doesn't get more credit for this.