Monday, February 22, 2010

Some Readings on Liquidity, Leverage and Crisis

In an earlier post I mentioned an interview with Eric Maskin in which he claimed that "most of the pieces for understanding the current financial mess were in place well before the crisis occurred," and identified five contributions that in his view were particularly insightful. 
Along similar lines, Yeon-Koo Che has assembled a weekly reading group consisting of faculty and graduate students in the Columbia community to discuss articles that might be helpful in shedding light on recent events. Included among these are a paper by John Geanakoplos that I have surveyed previously on this blog, and several others that I hope to discuss in the future. Ten of the contributions we hope to tackle over the coming weeks are the following:
  1. Financial Intermediation, Loanable Funds and the Real Sector by Holmstrom and Tirole
  2. The Limits of Arbitrage by Shleifer and Vishny
  3. Understanding Financial Crises by Allen and Gale
  4. Credit-Worthiness Tests and Interbank Competition by Broecker
  5. Credit Cycles by Kiyotaki and Moore
  6. The Leverage Cycle by Geanakoplos
  7. Collective Moral Hazard, Maturity Mismatch and Systemic Bailouts by Farhi and Tirole
  8. Liquidity and Leverage by Adrian and Shin
  9. Market Liquidity and Funding Liquidity by Brunnermeier and Pedersen
  10. Outside and Inside Liquidity by Bolton, Santos and Scheinkman
I would welcome any comments on these, or suggestions for others that we may have overlooked.

Sunday, February 14, 2010

The Invincible Markets Hypothesis

There has been a lot of impassioned debate over the efficient markets hypothesis recently, but some of the disagreement has been semantic rather than substantive, based on a failure to distinguish clearly between informational efficiency and allocative efficiency. Roughly speaking, informational efficiency states that active management strategies that seek to identify mispriced securities cannot succeed systematically, and that individuals should therefore adopt passive strategies such as investments in index funds. Allocative efficiency requires more than this, and is satisfied when the price of an asset accurately reflects the (appropriately discounted) stream of earnings that it is expected to yield over the course of its existence. If markets fail to satisfy this latter condition, then resource allocation decisions (such as residential construction or even career choices) that are based on price signals can result in significant economic inefficiencies.
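To fix ideas, the intrinsic value at stake in allocative efficiency is simply the discounted sum of the earnings an asset is expected to generate. The sketch below makes this benchmark concrete; the cash flows and the 8 percent discount rate are arbitrary illustrative choices on my part, not estimates of anything.

```python
# Fundamental value as the discounted sum of expected future earnings.
# The cash flows and the 8% discount rate below are illustrative assumptions.

def fundamental_value(expected_earnings, discount_rate):
    """Present value of a stream of expected earnings."""
    return sum(e / (1 + discount_rate) ** t
               for t, e in enumerate(expected_earnings, start=1))

expected_earnings = [5.0, 5.5, 6.0, 6.5, 7.0]   # hypothetical earnings per share
print(round(fundamental_value(expected_earnings, 0.08), 2))
# Allocative efficiency holds when the market price stays close to this number;
# informational efficiency only requires that deviations from it be hard to predict.
```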
Some of the earliest and most influential work on market efficiency was based on the (often implicit) assumption that informational efficiency implied allocative efficiency. Consider, for instance, the following passage from Eugene Fama's 1965 paper on random walks in stock market prices (emphasis added):
The assumption of the fundamental analysis approach is that at any point in time an individual security has an intrinsic value... which depends on the earning potential of the security. The earning potential of the security depends in turn on such fundamental factors as quality of management, outlook for the industry and the economy, etc...

In an efficient market, competition among the many intelligent participants leads to a situation where, at any point in time, actual prices of individual securities already reflect the effects of information based both on events that have already occurred and on events which, as of now, the market expects to take place in the future. In other words, in an efficient market at any point in time the actual price of a security will be a good estimate of its intrinsic value.
Or consider the opening paragraph of his enormously influential 1970 review of the theory and evidence for market efficiency:
The primary role of the capital market is allocation of ownership of the economy's capital stock. In general terms, the ideal is a market in which prices provide accurate signals for resource allocation: that is, a market in which firms can make production-investment decisions, and investors can choose among the securities that represent ownership of firms' activities under the assumption that security prices at any time “fully reflect” all available information. A market in which prices always “fully reflect” available information is called “efficient.”
The above passage is quoted by Justin Fox, who argues that proponents of the hypothesis have recently been defining efficiency down:
That leaves us with an efficient market hypothesis that merely claims, as John Cochrane puts it, that "nobody can tell where markets are going." This is an okay theory, and one that has held up reasonably well—although there are well-documented exceptions such as the value and momentum effects.
The most effective recent criticisms of the efficient markets hypothesis have not focused on these exceptions or anomalies, which for the most part are quite minor and impermanent. The critics concede that informational efficiency is a reasonable approximation, at least with respect to short-term price forecasts, but deny that prices consistently provide "accurate signals for resource allocation." This is the position taken by Richard Thaler in his recent interview with John Cassidy (h/t Mark Thoma):
I always stress that there are two components to the theory. One, the market price is always right. Two, there is no free lunch: you can’t beat the market without taking on more risk. The no-free-lunch component is still sturdy, and it was in no way shaken by recent events: in fact, it may have been strengthened. Some people thought that they could make a lot of money without taking more risk, and actually they couldn’t. So either you can’t beat the market, or beating the market is very difficult—everybody agrees with that...
The question of whether asset prices get things right is where there is a lot of dispute. Gene [Fama] doesn’t like to talk about that much, but it’s crucial from a policy point of view. We had two enormous bubbles in the last decade, with massive consequences for the allocation of resources.
The same point is made somewhat more tersely by The Economist:
Markets are efficient in the sense that it's hard to make an easy buck off of them, particularly when they're rushing maniacally up the skin of an inflating bubble. But are they efficient in the sense that prices are right? Tens of thousands of empty homes say no.
And again, by Jason Zweig, building on the ideas of Benjamin Graham:
Mr. Graham proposed that the price of every stock consists of two elements. One, "investment value," measures the worth of all the cash a company will generate now and in the future. The other, the "speculative element," is driven by sentiment and emotion: hope and greed and thrill-seeking in bull markets, fear and regret and revulsion in bear markets.

The market is quite efficient at processing the information that determines investment value. But predicting the shifting emotions of tens of millions of people is no easy task. So the speculative element in pricing is prone to huge and rapid swings that can swamp investment value.

Thus, it's important not to draw the wrong conclusions from the market's inefficiency... even after the crazy swings of the past decade, index funds still make the most sense for most investors. The market may be inefficient, but it remains close to invincible.
This passage illustrates very clearly the limited value of informational efficiency when allocative efficiency fails to hold. Prices may indeed contain "all relevant information," but this includes not just beliefs about earnings and discount rates, but also beliefs about "sentiment and emotion." These latter beliefs can change capriciously, and are notoriously difficult to track and predict. Prices therefore send messages that can be terribly garbled, and resource allocation decisions based on these prices can give rise to enormous (and avoidable) waste. Provided that major departures of prices from intrinsic values can be reliably identified, a case could be made for government intervention to affect either the prices themselves, or at least the responses to the signals they send.
Under these conditions it makes little sense to say that markets are efficient, even if they are essentially unpredictable in the short run. Lorenzo at Thinking Out Aloud suggests a different name:
...like other things in economics, such as rational expectations, EMH needs a better name. It is really something like the "all-information-is-incorporated hypothesis" just as rational expectations is really consistent expectations. If they had more descriptive names, people would not misconstrue them so easily and there would be less argument about them.
But a name that emphasizes informational efficiency is also misleading, because it does not adequately capture the range of non-fundamental information on market psychology that prices reflect. My own preference (following Jason Zweig) would be to simply call it the invincible markets hypothesis.

---

Update (2/16). Mark Thoma has more on the subject, as does Cyril Hédoin. Brad DeLong and Robert Waldmann have also linked here, which gives me an excuse to mention two papers of theirs (both written with Shleifer and Summers, and both published in 1990) that were among the first to try and grapple with the question of how rational arbitrageurs would adjust their behavior in response to the presence of noise traders. In a related article that I have discussed previously on this blog (here and here, for instance), Abreu and Brunnermeier have shown how the difficulty of coordinated attack can result in prolonged departures of prices from fundamentals.

---

Update (2/17). My purpose here was to characterize a hypothesis and not to endorse it. In a comment on the post (and also here), Rob Bennett makes the claim that market timing based on aggregate P/E ratios can be a far more effective strategy than passive investing over long horizons (ten years or more). I am not in a position to evaluate this claim empirically, but it is consistent with Shiller's analysis and I can see how it could be true. Over short horizons, however, attempts at market timing can be utterly disastrous, as I have discussed previously. This is what makes bubbles possible. In fact, I believe that market timing over short horizons is much riskier than it would be if markets satisfied allocative efficiency and the only risk came from changes in fundamentals and one's own valuation errors.

---

Update (2/20). Scott Sumner jumps into the fray, but grossly mischaracterizes my position:
Then there is talk (here and here) of a new type of inefficient markets; Rajiv Sethi calls it the invincible market hypothesis.  I don’t buy it, nor do I think the more famous anti-EMH types would either. The claim is that markets are efficient, but they are also so irrational that there is no way for investors to take advantage of that fact.  This implies that the gap between actual price and fundamental value doesn’t tend to close over time, but rather follows a sort of random walk, drifting off toward infinity.
This is obviously not what I claimed. There is absolutely no chance of the gap between prices and fundamentals "drifting off toward infinity." All bubbles are followed by crashes or bear markets, and prices do track fundamentals pretty well over long horizons. The problem lies in the fact that attempts to time the market over short horizons can be utterly disastrous, as I have discussed at length in a previous post; this is what makes asset bubbles possible in the first place. I used the term "invincible markets hypothesis" not as a "new type of inefficient markets hypothesis" but rather as a description of the claim that markets satisfy informational (but not allocative) efficiency. And I did not endorse this claim, except as a "reasonable approximation... with respect to short-term price forecasts."

Sumner continues as follows:
Sethi argues Shiller might be right in the long run, but may be wrong in the short run. I don’t buy that distinction. If Shiller’s right then the anti-EMH position has useful investment implications, even for short term investors...
This too is false. I believe that Shiller is right in the short run (since he argues that prices can depart significantly from fundamental values) and also right in the long run (since he believes that in the long run prices track fundamentals quite well). This does not have useful investment implications for short term investors because short run price movements are so unpredictable and taking short positions during a bubble is so risky. Over long horizons, however, Shiller's analysis does suggest that risk-adjusted returns will be greater if the P/E ratio is lower at the time of the initial investment. One could rationalize this with suitable assumptions about time-varying discount rates, but as Thaler points out, such rational choice models are incredibly flexible and lacking in discipline. Everyone acknowledges, however, that for most investors passive investing is far superior to short-term attempts at market timing. The disagreement is about whether prices can deviate significantly from fundamentals from time to time, resulting in severe economic dislocations and inefficiencies.

---

Update (2/24). For a sober assessment of why passive investing remains the best strategy for most investors despite modest violations of informational efficiency, see this post at Pop Economics.

Robert Waldmann's comments at Angry Bear are excellent, but need to be read with some care. His main point is this: there exist certain (standard but restrictive) general equilibrium models in which informational efficiency does imply allocative efficiency. But minor deviations from informational efficiency do not imply that deviations from allocative efficiency will also be minor.
Anomalies in risk adjusted returns on the order of 1% per year can't be detected. We can't be sure of exactly how to adjust for risk. However, they can make the difference between allocative efficiency and gross inefficiency.

For policy makers there is a huge huge difference between "markets are approximately informational efficient" and "markets are informational efficient." The second claim (plus standard false assumptions) implies that markets are allocatively efficient. The [first] implies nothing about allocative efficiency.
In other words, the link between informational efficiency and allocative efficiency is not robust. This is why the market can be hard to beat, and yet generate significant departures of prices from fundamentals from time to time. 

Saturday, February 06, 2010

A Case for Agent-Based Models in Economics

In a recent essay in Nature, Doyne Farmer and Duncan Foley have made a strong case for the use of agent-based models in economics. These are computational models in which a large number of interacting agents (individuals, households, firms, and regulators, for example) are endowed with behavioral rules that map environmental cues onto actions. Such models are capable of generating complex dynamics even with simple behavioral rules because the interaction structure can give rise to emergent properties that could not possibly be deduced by examining the rules themselves. As such, they are capable of providing microfoundations for macroeconomics in a manner that is both more plausible and more authentic than is the case with highly aggregative representative agent models.

Among the most famous (and spectacular) agent-based models is John Conway's Game of Life (if you've never seen a simulation of this you really must). In economics, the earliest such models were developed by Thomas Schelling in the 1960s, and included his celebrated checkerboard model of residential segregation. But with the exception of a few individuals (some of whom are mentioned below) there has been limited interest among economists in the further development of such approaches.
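For readers who have never seen the checkerboard model in action, here is a minimal sketch of its logic. The grid size, vacancy rate, and 30 percent tolerance threshold are my own illustrative choices rather than Schelling's original parameters, but the qualitative result is the same: pronounced segregation emerges even though every individual agent is quite tolerant.

```python
import random

# A minimal Schelling-style checkerboard sketch (parameters are my own
# illustrative choices, not Schelling's): two agent types on a wrapping grid
# with vacant cells; an agent moves to a random vacant cell whenever fewer
# than 30% of its occupied neighbours share its type.

random.seed(1)
N, THRESHOLD, STEPS = 20, 0.3, 30
grid = [[random.choice([0, 1, 1, 2, 2]) for _ in range(N)] for _ in range(N)]  # 0 = vacant

def like_share(i, j):
    """Fraction of occupied neighbours sharing the type of the agent at (i, j)."""
    nbrs = [grid[(i + di) % N][(j + dj) % N]
            for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    occ = [n for n in nbrs if n != 0]
    return sum(n == grid[i][j] for n in occ) / len(occ) if occ else 1.0

for _ in range(STEPS):
    movers = [(i, j) for i in range(N) for j in range(N)
              if grid[i][j] and like_share(i, j) < THRESHOLD]
    vacant = [(i, j) for i in range(N) for j in range(N) if grid[i][j] == 0]
    for (i, j) in movers:
        if not vacant:
            break
        vi, vj = vacant.pop(random.randrange(len(vacant)))
        grid[vi][vj], grid[i][j] = grid[i][j], 0
        vacant.append((i, j))

# Emergent segregation: the average like-neighbour share ends up far above the
# modest 30% that any individual agent demands.
occupied = [(i, j) for i in range(N) for j in range(N) if grid[i][j]]
print(round(sum(like_share(i, j) for i, j in occupied) / len(occupied), 2))
```

Simple as it is, the example already displays the emergent, rule-driven character that the agent-based approach is designed to capture.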

Farmer and Foley hope to change this. They begin their piece with a critical look at contemporary modeling practices:
In today's high-tech age, one naturally assumes that US President Barack Obama's economic team and its international counterparts are using sophisticated quantitative computer models to guide us out of the current economic crisis. They are not. 
The best models they have are of two types, both with fatal flaws. Type one is econometric: empirical statistical models that are fitted to past data. These successfully forecast a few quarters ahead as long as things stay more or less the same, but fail in the face of great change. Type two goes by the name of 'dynamic stochastic general equilibrium'. These models... by their very nature rule out crises of the type we are experiencing now. 
As a result, economic policy-makers are basing their decisions on common sense, and on anecdotal analogies to previous crises such as Japan's 'lost decade' or the Great Depression... The leaders of the world are flying the economy by the seat of their pants.
This is hard for most non-economists to believe. Aren't people on Wall Street using fancy mathematical models? Yes, but for a completely different purpose: modelling the potential profit and risk of individual trades. There is no attempt to assemble the pieces and understand the behaviour of the whole economic system.
The authors suggest a shift in orientation:
There is a better way: agent-based models. An agent-based model is a computerized simulation of a number of decision-makers (agents) and institutions, which interact through prescribed rules. The agents can be as diverse as needed — from consumers to policy-makers and Wall Street professionals — and the institutional structure can include everything from banks to the government. Such models do not rely on the assumption that the economy will move towards a predetermined equilibrium state, as other models do. Instead, at any given time, each agent acts according to its current situation, the state of the world around it and the rules governing its behaviour. An individual consumer, for example, might decide whether to save or spend based on the rate of inflation, his or her current optimism about the future, and behavioural rules deduced from psychology experiments. The computer keeps track of the many agent interactions, to see what happens over time. Agent-based simulations can handle a far wider range of nonlinear behaviour than conventional equilibrium models. Policy-makers can thus simulate an artificial economy under different policy scenarios and quantitatively explore their consequences.
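As a toy illustration of this recipe (and nothing more), consider the following sketch of the consumer example in the passage above. The spending rule, the optimism dynamics, and all parameter values are invented for illustration; they are not taken from Farmer and Foley or from any psychology experiment.

```python
import random

# Toy illustration of the agent-based recipe described above: households follow
# a simple behavioural rule (spend more when optimistic, save more when not),
# optimism drifts with last period's aggregate demand, and the computer just
# tracks the interactions over time. The rule and the parameters are invented
# for illustration; they are not the Farmer-Foley specification.

random.seed(0)
households = [{"wealth": 100.0, "optimism": random.uniform(0.3, 0.7)} for _ in range(500)]
income_per_household = 10.0
demand_history = []

for t in range(40):
    demand = 0.0
    for h in households:
        h["wealth"] += income_per_household
        spending = h["optimism"] * 0.2 * h["wealth"]   # behavioural rule, not an optimum
        h["wealth"] -= spending
        demand += spending
    demand_history.append(demand)
    # Optimism responds to the aggregate outcome the agents jointly produced.
    boom = len(demand_history) < 2 or demand >= demand_history[-2]
    for h in households:
        h["optimism"] = min(1.0, max(0.0, h["optimism"] + (0.02 if boom else -0.05)))

print([round(d) for d in demand_history[::10]])   # aggregate demand every tenth period
```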
Such methods are unfamiliar (or unappealing) to most theorists in the leading research departments and rarely published in the top professional journals. Farmer and Foley attribute this in part to the failure of a particular set of macroeconomic policies, and the resulting ascendancy of the rational expectations hypothesis:
Why is this type of modelling not well-developed in economics? Because of historical choices made to address the complexity of the economy and the importance of human reasoning and adaptability.
The notion that financial economies are complex systems can be traced at least as far back as Adam Smith in the late 1700s. More recently John Maynard Keynes and his followers attempted to describe and quantify this complexity based on historical patterns. Keynesian economics enjoyed a heyday in the decades after the Second World War, but was forced out of the mainstream after failing a crucial test during the mid-seventies. The Keynesian predictions suggested that inflation could pull society out of a recession; that, as rising prices had historically stimulated supply, producers would respond to the rising prices seen under inflation by increasing production and hiring more workers. But when US policy-makers increased the money supply in an attempt to stimulate employment, it didn't work — they ended up with both high inflation and high unemployment, a miserable state called 'stagflation'. Robert Lucas and others argued in 1976 that Keynesian models had failed because they neglected the power of human learning and adaptation. Firms and workers learned that inflation is just inflation, and is not the same as a real rise in prices relative to wages...

The cure for macroeconomic theory, however, may have been worse than the disease. During the last quarter of the twentieth century, 'rational expectations' emerged as the dominant paradigm in economics... Even if rational expectations are a reasonable model of human behaviour, the mathematical machinery is cumbersome and requires drastic simplifications to get tractable results. The equilibrium models that were developed, such as those used by the US Federal Reserve, by necessity stripped away most of the structure of a real economy. There are no banks or derivatives, much less sub-prime mortgages or credit default swaps — these introduce too much nonlinearity and complexity for equilibrium methods to handle...
Agent-based models potentially present a way to model the financial economy as a complex system, as Keynes attempted to do, while taking human adaptation and learning into account, as Lucas advocated. Such models allow for the creation of a kind of virtual universe, in which many players can act in complex — and realistic — ways. In some other areas of science, such as epidemiology or traffic control, agent-based models already help policy-making.
One problem that must be addressed if agent-based models are to gain widespread acceptance in economics is that of quality control. For methodologies that are currently in common use, there exist well-understood (though imperfect) standards for assessing the value of any given contribution. Empirical researchers are concerned with identification and external validity, for instance, and theorists with robustness. But how is one to judge the robustness of a set of simulation results?
The major challenge lies in specifying how the agents behave and, in particular, in choosing the rules they use to make decisions. In many cases this is still done by common sense and guesswork, which is only sometimes sufficient to mimic real behaviour. An attempt to model all the details of a realistic problem can rapidly lead to a complicated simulation where it is difficult to determine what causes what. To make agent-based modelling useful we must proceed systematically, avoiding arbitrary assumptions, carefully grounding and testing each piece of the model against reality and introducing additional complexity only when it is needed. Done right, the agent-based method can provide an unprecedented understanding of the emergent properties of interacting parts in complex circumstances where intuition fails.
This recognizes the problem of quality control, but does not offer much in the way of guidance for editors or referees in evaluating submissions. Presumably such standards will emerge over time, perhaps through the development of a few contributions that are commonly agreed to be outstanding and can serve as templates for future work.

There do exist a number of researchers using agent-based methodologies in economics, and Farmer and Foley specifically mention Blake LeBaron, Rob Axtell, Mauro Gallegati, Robert Clower and Peter Howitt. To this list I would add Joshua Epstein, Marco Janssen, Peter Albin, and especially Leigh Tesfatsion, whose ACE (agent-based computational economics) website provides a wonderful overview of what such methods are designed to achieve. (Tesfatsion also mentions not just Smith but also Hayek as a key figure in exploring the "self-organizing capabilities of decentralized market economies.")

A recent example of an agent-based model that deals specifically with the financial crisis may be found in a paper by Thurner, Farmer, and Geanakoplos. Farmer and Foley provide an overview:
Leverage, the investment of borrowed funds, is measured as the ratio of total assets owned to the wealth of the borrower; if a house is bought with a 20% down-payment the leverage is five. There are four types of agents in this model. 'Noise traders', who trade more or less at random, but are slightly biased toward driving prices towards a fundamental value; hedge funds, which hold a stock when it is under-priced and otherwise hold cash; investors who decide whether to invest in a hedge fund; and a bank that can lend money to the hedge funds, allowing them to buy more stock. Normally, the presence of the hedge funds damps volatility, pushing the stock price towards its fundamental value. But, to contain their risk, the banks cap leverage at a predetermined maximum value. If the price of the stock drops while a fund is fully leveraged, the fund's wealth plummets and its leverage increases; thus the fund has to sell stock to pay off part of its loan and keep within its leverage limit, selling into a falling market.
This agent-based model shows how the behaviour of the hedge funds amplifies price fluctuations, and in extreme cases causes crashes. The price statistics from this model look very much like reality. It shows that the standard ways banks attempt to reduce their own risk can create more risk for the whole system.
Previous models of leverage based on equilibrium theory showed qualitatively how leverage can lead to crashes, but they gave no quantitative information about how this affects the statistical properties of prices. The agent approach simulates complex and nonlinear behaviour that is so far intractable in equilibrium models. It could be made more realistic by adding more detailed information about the behaviour of real banks and funds, and this could shed light on many important questions. For example, does spreading risk across many financial institutions stabilize the financial system, or does it increase financial fragility? Better data on lending between banks and hedge funds would make it possible to model this accurately. What if the banks themselves borrow money and use leverage too, a process that played a key role in the current crisis? The model could be used to see how these banks might behave in an alternative regulatory environment.
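The forced-selling mechanism at the heart of this story can be sketched in a few lines. What follows is not the Thurner, Farmer and Geanakoplos model itself (which includes noise traders, multiple funds, and investor flows), just a stylized illustration of how a leverage cap converts a modest price decline into forced sales by a fully leveraged fund; all numbers are invented.

```python
# Stylized sketch of the forced-selling mechanism described above, not the
# actual Thurner-Farmer-Geanakoplos model: a single fund holds a stock bought
# partly with borrowed money, and a bank enforces a maximum leverage ratio
# (assets / equity). When the price falls, equity shrinks faster than assets,
# leverage rises, and the fund must sell into the falling market.

MAX_LEVERAGE = 10.0

price = 100.0
shares = 50.0
debt = 4500.0            # equity = 50*100 - 4500 = 500, so leverage starts at the cap

for shock in [-0.02, -0.02, -0.03]:       # a sequence of modest price declines
    price *= 1 + shock
    assets = shares * price
    equity = assets - debt
    leverage = assets / equity
    if leverage > MAX_LEVERAGE:
        # Sell just enough stock (using the proceeds to repay debt) to restore the cap.
        target_assets = MAX_LEVERAGE * equity
        sale = (assets - target_assets) / price
        shares -= sale
        debt -= sale * price
        print(f"price {price:6.2f}  equity {equity:7.2f}  forced sale of {sale:5.1f} shares")
    else:
        print(f"price {price:6.2f}  equity {equity:7.2f}  no forced sale")
```

Each small decline pushes the fund above its leverage limit, and the sale required to restore the limit is itself a source of further downward pressure on the price.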
I have discussed Geanakoplos' more methodologically orthodox papers on leverage cycles in an earlier post. That work uses standard methods in general equilibrium theory to address related questions, suggesting that the two approaches are potentially quite complementary. In fact, the nature of agent-based modeling is such that it is best conducted in interdisciplinary teams, and is therefore unlikely to ever become the dominant methodology in use:
Creating a carefully crafted agent-based model of the whole economy is, like climate modelling, a huge undertaking. It requires close feedback between simulation, testing, data collection and the development of theory. This demands serious computing power and multi-disciplinary collaboration among economists, computer scientists, psychologists, biologists and physical scientists with experience in large-scale modelling. A few million dollars — much less than 0.001% of the US financial stimulus package against the recession — would allow a serious start on such an effort.
Given the enormity of the stakes, such an approach is well worth trying.
I agree. This kind of effort is currently being undertaken at the Santa Fe Institute, where Farmer and Foley are both on the faculty. And for graduate students interested in exploring these ideas and methods, John Miller and Scott Page hold a Workshop on Computational Economic Modeling in Santa Fe each summer (the program announcement for the 2010 workshop is here). Their book on Complex Adaptive Systems provides a nice introduction to the subject, as does Epstein and Axtell's Growing Artificial Societies. But it is Thomas Schelling's Micromotives and Macrobehavior, first published in 1978, that in my view reveals most clearly the logic and potential of the agent-based approach.

---

Update (2/7). Cyril Hédoin at Rationalité Limitée points to a paper by Axtell, Axelrod, Epstein and Cohen that explicitly discusses the important issues of replication and comparative model evaluation for agent-based simulations. (He also mentions a nineteenth-century debate on research methodology between Carl Menger and Gustav von Schmoller that seems relevant; I'd like to take a closer look at this if I can ever find the time.)

Also, in a pair of comments on this post, Barkley Rosser recommends a 2008 book on Emergent Macroeconomics by Delli Gatti, Gaffeo, Gallegati, Giulioni, and Palestrini, and provides an extensive review of agent-based computational models in regional science and urban economics.

Wednesday, February 03, 2010

Two Blog Birthdays and the Democratization of Discourse

Two notable economics blogs - Cheap Talk and The Money Illusion - celebrated their first birthdays yesterday. Each marked the occasion with highly readable (but very different) posts that got me thinking about the origins and purpose of my own blog, and the extraordinary democratization of economic discourse that the technology of blogging has set in motion.
Jeff Ely's birthday post at Cheap Talk describes how his collaboration with Sandeep Baliga finally got off the ground after a sequence of mislaid and misinterpreted emails, and how they eventually settled on a name:
And so we started thinking of a name.  Sandeep had a lot of bad ideas for names
  1. hodgepodge hedgehog
  2. platypus
  3. bacon is a vegetable
  4. release the gecko
  5. coordination failure
  6. reaction function
and he is too much of a philistine to appreciate my ideas for names:
  1. banana seeds
  2. vapor mill
  3. el emenopi’
so we were at an impasse.  Somehow we hit upon the name Cheap Talk. Sandeep ran it by some folks at a party and it seemed like a hit.  (That name was taken by a then-defunct blog and wordpress does not recycle url’s so we had to morph it into cheeptalk.wordpress.com.)
I'm surprised they didn't consider babbling equilibrium. Here's the birthday message I left for them:
Jeff and Sandeep, congratulations! Reaction function would have been a good name but too modest for what you guys are doing. You have a mix of analytical clarity and offbeat humor that really appeals to my taste. I have to say, though, that your rational choice approach to torture made me a bit uneasy (not to mention queasy).

I know a bit about blogging stamina (or lack thereof). I started my blog in 2002 and had a total of 13 posts over the first seven years. Then I wrote a piece on the Gates arrest that the New York Times declined to publish, so I decided to bring the blog back to life. Most of us have more ideas than we could possibly turn into research papers – might as well make them available to everyone else.
It's not easy to find first-rate economic theorists with an abundance of style and wit, but the creators of Cheap Talk both qualify. I'm very glad they got this project going.

Meanwhile over at The Money Illusion, Scott Sumner's birthday post (which I reached via Tyler Cowen) was very different in content and tone. In part it was an attempt to justify his reasoning and policy recommendations over the past year, but it was much more than that: a serious and moving reflection on the blogging experience, the state of macroeconomic methodology, and the role of the public intellectual. Here are a few extracts from a long post that is worth reading in its entirety:
Be careful what you wish for.  Last February 2nd I started this blog with very low expectations... I knew I wasn’t a good writer, years ago I got a referee report back from an anonymous referee (named McCloskey) who said “if the author had used no commas at all, his use of commas would have been more nearly correct.”  Ouch!  But it was true, others said similar things.  And I was also pretty sure that the content was not of much interest to anyone.

Now my biggest problem is time—I spend 6 to 10 hours a day on the blog, seven days a week.  Several hours are spent responding to reader comments and the rest is spent writing long-winded posts and checking other economics blogs.  And I still miss many blogs that I feel I should be reading [...]

As you may know, I don’t think much of the official methodology in macroeconomics.  Many of my fellow economists seem to have a Popperian view of the social sciences.  You develop a model.  You go out and get some data.  And then you try to refute the model with some sort of regression analysis.  If you can’t refute it, then the model is assumed to be supported by the data, although papers usually end by noting “further research is necessary,” as models can never really be proved, only refuted.

My problem with this view is that it doesn’t reflect the way macro and finance actually work.  Instead the models are often data-driven.  Journals want to publish positive results, not negative.  So thousands of macroeconomists keep running tests until they find a “statistically significant” VAR model, or a statistically significant “anomaly” in the EMH.  Unfortunately, because the statistical testing is often used to generate the models, and determine which get published, the tests of statistical significance are meaningless.

I’m not trying to be a nihilist here, or a Luddite who wants to go back to the era before computers.  I do regressions in my research, and find them very useful.  But I don’t consider the results of a statistical regression to be a test of a model, rather they represent a piece of descriptive statistics, like a graph, which may or may not usefully supplement a more complex argument that relies on many different methods, not a single “Official Method.” [...]

I like Rorty’s pragmatism; his view that scientific models don’t literally correspond to reality, or mirror reality.  Rorty says that one should look for models that are “coherent,” that help us to make sense of a wide variety of facts.  I want people who read my blog to be saying to themselves “aha, now I understand why the economy continues to drag along despite low interest rates,” as they recall that low rates are not an indication of monetary stimulus... It’s all about persuasion.  And people are persuaded by coherent models [...] 
So that’s the goal of my blog, to constantly use theoretical arguments, empirical data, clever metaphors, and historical analogies that make people see the current situation in a new way.  Whatever works, as long as it is not dishonest [...]   
Regrets?  I’m pretty fatalistic about things.  I suppose it wasn’t a smart career move to spend so much time on the blog.  If I had ignored my commenters I could have had my manuscript revised by now.  But I think everything happens for a reason...The commenters played an important role in the blog.  By constantly having to defend myself against their criticism, I further refined my arguments.  In addition, I got a better idea of how other people look at monetary economics.  I don’t have any major regrets...
Happiness isn’t based on anything you achieve, but rather the anticipation of future happiness.  As sports fans know the most fun position to be in is the underdog challenging the evil empire... whether I in some sense “win” in the long run isn’t really that important to me.  I’ve already got most of what I wanted, which is for people I respect to find my arguments intriguing...
I used to think I had just a few ideas, and once I used those up I’d have nothing more to say.  As you’ve noticed (sometimes painfully) that is not my problem.  I suppose it came from being a loner for several decades... If you’d told me last year “write 1000 pages on monetary policy,” I would have recoiled in horror.  I figured I’d do a couple dozen posts, run out of ideas, and then merely comment on current events.  I had no idea that writing is thinking.  But now here I am a year later, and my blog is 1000 pages of sprawling essays.  Yes, there’s plenty of repetition, but even if you sliced out all the filler, I bet you could find a 200 page book in there somewhere.

Still, at the current pace my blog is gradually swallowing my life.  Soon I won’t be able to get anything else done.  And I really don’t get any support from Bentley, as far as I know the higher ups don’t even know I have a blog.  So I just did 2500 hours of uncompensated labor.  I hope someone got some value out of it.  Right now I just want my life back.

But I suppose I could do one more post.

And after that, maybe one more final post wouldn’t seem so difficult.

But please don’t ask me to become a blogger.  It’d be like asking me whether I ever considered becoming a heroin addict.  Just one more post.  One day at a time. . . .
This entry (not surprisingly) attracted a number of supportive and encouraging comments, among which was my own:
This is a wonderful, heartfelt post. You’re a far better writer than you give yourself credit for. Congratulations on the first birthday of your blog; I hope that there will be many more to come.
I really meant that. There was much in Sumner's post that struck a chord with me. I also see my own blog as a sequence of short interlocking essays that present what I hope is a coherent vision. And I too have been fortunate enough to have been visited by a number of thoughtful readers with whom I have had long, wide-ranging, and generally civil exchanges.
The community of academic economists is increasingly coming to be judged not simply by peer reviewers at journals or by carefully screened and selected cohorts of students, but by a global audience of curious individuals spanning multiple disciplines and specializations. Voices that have long been silenced in mainstream journals now insist on being heard on an equal footing. Arguments on blogs seem to be judged largely on their merits, independently of the professional stature of those making them. This has allowed economists in far-flung places with heavy teaching loads, or those who pursued non-academic career paths, to join debates. Even anonymous writers and autodidacts can wield considerable influence in this environment, and a number of genuinely interdisciplinary blogs have emerged (see, for instance, this fascinating post from one of my favorites.)
This has got to be a healthy development. One might persuade a referee or seminar audience that a particular assumption is justified simply because there is a large literature that builds on it, or that tractability concerns preclude reasonable alternatives. But this broader audience is not so easy to convince. Persuading a multitude of informed, thoughtful, intelligent readers of the relevance and validity of one's arguments using words rather than formal models is a far more challenging task than persuading one's own students or peers. If one can separate the wheat from the chaff, the reasoned argument from the noise, this process should result in a more dynamic and robust discipline in the long run.

Thursday, January 28, 2010

Identifying Bubbles

In response to the barrage of criticism that has been aimed at the efficient markets hypothesis recently, Robin Hanson makes a plea:
Look, everyone, this game should have rules. EMH (at least the interesting version) says prices are our best estimates, so to deny EMH is to assert that prices are predictably wrong. And for EMH violations to be relevant for regulatory policy, price errors must be so systematic as to allow a government agency to follow some bureaucratic process to identify when prices are too high, vs. too low, and act on that info.
The efficient markets hypothesis makes a stronger claim than just price unpredictability; it identifies prices with fundamental values. So one can indeed question the hypothesis without asserting that "prices are predictably wrong." But Hanson's broader point is surely correct: if the Federal Reserve is charged with reacting to asset price bubbles, then bubbles must be identifiable not just on the basis of hindsight, but in real time, as they occur. Can this be done?
For reasons discussed at length in a previous post, a belief that an asset is overpriced relative to fundamentals is consistent with a broad range of trading strategies, each of which carries significant risks. One cannot therefore deduce an individual's beliefs about the existence of a bubble simply by observing their trades or holdings of the asset in question. However, it might be possible to obtain information about the prevalence of beliefs about an asset bubble by looking at the prices of options.
Specifically, anyone who thinks that they have identified a bubble must also believe that the likelihood of a major correction (such as a crash or bear market) is higher than would normally be the case. They may also believe that the likelihood of significant short-term increases in price is higher than normal. If so, they are predicting greater volatility in the asset price than would arise in the absence of a bubble. And if such expectations are widely held, they should be reflected in the price of options strategies that are especially profitable in the face of major price movements.
In the case of a bubble involving a large class of securities (such as technology stocks) a widespread belief that prices exceed fundamental values should be reflected in higher prices for index straddles: a combination of put and call options with the same expiration date and strike price, written on a market index. The Chicago Board Options Exchange specifically recommends this strategy for investors who are convinced that "a particular index will make a major directional move" and those who anticipate "increased volatility." One possible approach to determining whether bubbles are identifiable as they occur is therefore to ask whether the price of an index straddle is a leading indicator of a crash or bear market.
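For readers unfamiliar with the instrument, the payoff of a straddle at expiration is simply the distance of the index from the strike, so its price rises with the size of the move the market expects. A quick illustration (the strike, premium, and index levels below are made up):

```python
# Payoff of a long straddle at expiration: one call plus one put, same strike K.
# The numbers below are purely illustrative.

def straddle_payoff(index_level, strike):
    call = max(index_level - strike, 0.0)
    put = max(strike - index_level, 0.0)
    return call + put          # equals abs(index_level - strike)

strike, premium = 1000.0, 80.0   # hypothetical strike and combined option premium
for level in [800, 950, 1000, 1050, 1200]:
    profit = straddle_payoff(level, strike) - premium
    print(f"index at {level}: profit {profit:+.0f}")
# The position profits only if the index moves by more than the premium in
# either direction, so a high straddle price signals expectations of a large move.
```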
This basic idea has been used previously in a number of event studies. David Bates, for instance, found that "out-of-the-money puts, which provide crash insurance, were unusually expensive relative to out-of-the-money calls" during the year preceding the 1987 stock market crash. He interprets this as reflecting "a strong perception of downside risk" over this period. Joseph Fung found that implied volatility deduced from the prices of index options rose sharply in May and June of 1997, predicting the Hong Kong stock market crash of October 1997. He concludes that "option implied volatility could be incorporated into an early warning system intended to indicate large market movements or crisis events." There were no index options traded at the time of the 1929 crash, but Rappoport and White used data on brokers' loans collateralized by stock (which they interpret as an option-like contract) to ask whether the crash was predicted. They found that:
During the stock-market boom, the key attributes of brokers' loan contracts (the interest rate and the initial required margin) rose significantly, suggesting that lenders felt a need for protection from a sharp decline in the value of their collateral... The rise in the margin required and the interest rate charged suggest that those who lent money for investment in the stock market (bankers and brokers) radically revised their opinion of the risks inherent in making brokers' loans as the market climbed and once again when it collapsed.
Event studies such as these are not quite enough to address Hanson's concern, since they do not consider false alarms: situations in which the prices of options signaled an increase in volatility that did not eventually materialize. But it seems that the same approach could be used to determine whether or not bubbles are indeed identifiable: one simply needs to examine a long, uninterrupted time series to see if implied volatility (as reflected in the prices of options) is predictive of major market declines.
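One crude way to implement such a test, while keeping track of false alarms, would be to treat unusually high implied volatility as an alarm and ask how often an alarm is followed by a large decline within some fixed window. The sketch below assumes that aligned series of implied volatility and prices are already in hand; the 30 percent threshold, twelve-period window, and 20 percent decline cutoff are arbitrary choices made purely for illustration.

```python
# Sketch of the proposed test: does high implied volatility predict large
# declines, once false alarms are counted? Assumes `implied_vol` and `prices`
# are aligned monthly series supplied by the user; the 30% vol threshold,
# 12-month window, and 20% decline cutoff are arbitrary illustrative choices.

def alarm_performance(implied_vol, prices, vol_threshold=0.30,
                      window=12, decline=0.20):
    hits = false_alarms = 0
    for t in range(len(prices) - window):
        if implied_vol[t] > vol_threshold:
            future = prices[t + 1 : t + 1 + window]
            crashed = min(future) <= prices[t] * (1 - decline)
            hits += crashed
            false_alarms += not crashed
    total = hits + false_alarms
    return {"alarms": total, "hits": hits, "false_alarms": false_alarms,
            "share_followed_by_decline": hits / total if total else None}

# Example with toy data (a calm stretch, then a volatility spike and a crash):
vol = [0.15] * 24 + [0.35] * 6 + [0.25] * 12
px = [100 + t for t in range(24)] + [124] * 6 + [90] * 12
print(alarm_performance(vol, px))
```

Run on a long, uninterrupted options series, a tabulation like this would show whether implied volatility is in fact predictive of major declines.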
If it is, then perhaps the Federal Reserve should respond not only to the inflation rate, output gap, and system-wide leverage, but also to the implied volatility in index options. It is at least conceivable that such a policy might reduce the incidence and severity of asset price bubbles.

---

Update (1/29). Even if it were possible to reliably identify bubbles, it is not obvious that the Fed should respond in any systematic way. Bernanke and Gertler (2001) argued firmly that the costs of doing so would outweigh any benefits:
even if the central bank is certain that a bubble is driving the market, once policy performance is averaged over all possible realizations of the bubble process, by any reasonable metric there is no consequential advantage of responding to stock prices.
It would be interesting to know whether Bernanke has softened his position on this. An intriguing possibility is that the willingness of the central bank to intervene could influence asset market behavior in such a manner as to make actual interventions largely unnecessary. As Lucas observed in a hugely influential paper, one cannot assume that structural patterns in the data will persist if policy responses to such patterns are altered.

---

Update (1/31). In a comment on this post, Barkley Rosser points out that the clearest examples of bubbles may be found in closed-end funds that are trading at a significant premium over net asset value:
there is one category of assets where the fundamental is very well defined: closed-end funds, although one must account for the ability to buy and sell the underlying assets and must account for management fees and tax effect. Thus, most closed-end funds run single-digit discounts. But if one sees a closed-end fund with a soaring premium of the price over the net asset value, one can be about as sure as one can be that one is observing a bubble.
This is absolutely correct: a closed-end fund selling at a premium is overpriced by definition relative to the value of the underlying assets, and the premium can only be sustained if the overpricing is expected to become even larger at some point.  But how often do such bubbles arise in practice? Barkley directs us to some evidence (links added):
There is an existing [literature] on this that arose in response to the "misspecified fundamentals" arguments about bubbles put forward by Garber and others about 20 years ago. One of those was [by DeLong and Shleifer] in the Journal of Economic History. They noted the 100% premia that appeared on closed-end funds in the US in 1929, arguing that one might not be able to prove that there was a bubble on the stock market, but there most definitely was one on the closed-end funds at that time.

Ahmed, Koppl, Rosser, and White document the bubble on closed-end country funds that hit in 1989-90 (100% premia on the Germany and Spain funds before the crash in Feb. 1990) in "Complex bubble persistence in closed-end country funds" in the Jan. 1997 issue of JEBO.

Friday, January 22, 2010

On Efficient Markets and Cognitive Illusions

It takes a certain amount of audacity to appeal to cognitive illusions in defense of a hypothesis that denies any role for human psychology in the determination of asset prices. But this is precisely what Scott Sumner has done:
Now let’s ask why people have this mistaken notion that bubbles are easy to spot, and that Fama is deluded.  I believe it is a cognitive illusion.  People think they see lots of bubbles.  Future price changes seem to confirm their views.  This reinforces their perception that they were right all along.  Sometimes they were right, as when The Economist predicted the NASDAQ bubble, or the US housing bubble. But far more often people are wrong, but think they were right. 
He then goes on to argue that anyone with the ability to identify bubbles should be able to make significant sums of money:
So let’s say The Economist magazine really knows the fundamental value of assets in the various countries it covers.  It does cover a lot of countries, and probably knows more about those countries than almost any other magazine.  Also suppose The Economist started a mutual fund that invested based on its ability to spot fundamental values and deviations from those values.  That mutual fund should outperform other funds.  And not just by a little bit, but massively outperform them.
There are really two separate questions here: can bubbles be reliably identified in real time, while they are in the process of inflating, and if so, does this present opportunities for making abnormally high risk-adjusted returns? It is possible to answer the first question in the affirmative but not the second, for the simple reason that the eventual size of the bubble and the timing of the crash are unpredictable. Selling short too soon can result in huge losses if one is unable to continue meeting margin calls as the bubble expands. Trying to ride the bubble for a while can be disastrous if one doesn't get out of the market soon enough. And avoiding the market altogether can also be risky, if one's returns as a fund manager are compared with those of one's peers.
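The first of these risks can be made concrete with a stylized simulation of a short seller who is right about the overpricing but cannot survive the margin calls that arrive while the bubble keeps inflating. The price path, position size, and margin requirement below are invented for illustration.

```python
# Stylized illustration of the first risk above: a short seller who is "right"
# that the asset is overpriced, but is forced out by margin calls before the
# crash arrives. The price path and margin rules are invented for illustration.

initial_price = 100.0
shares_short = 1_000
capital = 40_000.0                      # cash posted alongside the short-sale proceeds
maintenance_margin = 0.25               # equity must stay above 25% of position value

proceeds = shares_short * initial_price
price_path = [110, 125, 150, 180, 60]   # the bubble keeps inflating, then finally crashes

for price in price_path:
    position_value = shares_short * price
    equity = capital + proceeds - position_value
    if equity < maintenance_margin * position_value:
        loss = position_value - proceeds
        print(f"margin call at price {price}: forced to cover, loss = {loss:,.0f}")
        break
    print(f"price {price}: equity {equity:,.0f}, position survives")
else:
    print("short survives to the crash and profits")
```

In this simulation the short seller is forced to cover at a substantial loss well before the crash finally arrives.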
Each of these risks may be illustrated with some vivid examples from the bubble in technology stocks that eventually burst in April 2000. Many of those who sold these assets short could not profit from the decline because they were forced to liquidate their positions too soon:
Pity the short-sellers. Practically driven to extinction by a bull market run, they should be reveling now that many of the stocks they long considered overvalued have fallen sharply. But many of them have been left out of this market move, too.
The meteoric rise of technology stocks over the last few years forced many short-sellers to abandon positions, shut their operations or liquidate their portfolios and go into cash. So when the technology sector and the market over all finally had a bracing retreat in April, some of the investment funds that have specialized in selling stocks short, or betting that stock prices will drop, were not positioned to profit...
Many highflying stocks, including PMC-Sierra, MicroStrategy and Echelon, soared in February and March only to plummet in April. As the stocks soared, short-sellers began unwinding their positions because of mounting losses. By the time the stock of MicroStrategy, a Virginia software company, fell precipitously on reports of accounting problems, the shorts had largely given up. Its short interest declined to 724,630 shares in mid-March from 3.8 million shares in mid-December. The stock peaked in March and then fell 94 percent to its April trough.
A particularly interesting case is that of the Quantum fund, which suffered significant losses from short positions in 1999:
Quantum, the flagship fund of the world's biggest hedge fund investment group, is suffering its worst ever year after a wrong call that the "internet bubble" was about to burst... Quantum bet heavily that shares in internet companies would fall. Instead, companies such as Amazon.com, the online retailer, and Yahoo, the website search group, rose to all-time highs in April. Although these shares have fallen recently, it was too late for Quantum, which was down by almost 20%, or $1.5bn (£937m), before making up some ground in the past month. Shawn Pattison, a group spokesman, said yesterday: "We called the bursting of the internet bubble too early."
This caused the fund managers to reverse course and buy technology stocks, resulting in a rebound in late 1999. But they held on to these positions too long:
Stanley Druckenmiller knew technology stocks were overvalued, but he didn't think the party was going to end so rapidly.
''We thought it was the eighth inning, and it was the ninth,'' he said, explaining how the $8.2 billion Quantum Fund, which he managed for Soros Fund Management, wound up down 22 percent this year before he announced yesterday that he was calling it quits after a phenomenal record at Soros over the last 12 years. ''I overplayed my hand.''
Given the risks involved in taking positions on either side of the market during a bubble, one might be tempted to simply avoid the affected assets altogether. But this carries a different kind of risk:
After Julian Robertson, Mr. Druckenmiller is the second legendary hedge fund manager to walk away from the business in the last month after suffering reverses. Mr. Robertson's fund had performed poorly because he thought technology stocks were way overvalued, and he refused to play.

''The moral of this story is that irrational markets can kill you,'' said one Wall Street analyst who has dealt with both men. ''Julian said, 'This is irrational and I won't play,' and they carried him out feet first. Druckenmiller said, 'This is irrational and I will play,' and they carried him out feet first.''
The last two examples are mentioned by Abreu and Brunnermeier in their 2003 Econometrica paper on bubbles and crashes. One of the key points made in that paper is that even sophisticated, forward looking investors face a dilemma when they become aware of a bubble, because they know that it will continue to expand unless there is coordinated selling by enough of them. And such coordination is not easily achieved, resulting in the possibility of prolonged departures of prices from fundamental values.
As a result, identifying bubbles as they occur is a lot easier than cashing in on this knowledge. Free Exchange (via Brad DeLong) sums up this position neatly in a direct response to Sumner:
Markets are efficient in the sense that it's hard to make an easy buck off of them, particularly when they're rushing maniacally up the skin of an inflating bubble. But are they efficient in the sense that prices are right? Tens of thousands of empty homes say no. And despite the great extent to which markets depart from the theoretician's ideal, people did manage to put together models predicting the fall, bet on those models, and make a great deal of money off of those bets.
The same point is made by Richard Thaler in his recent interview with John Cassidy (via Mark Thoma). Here's Thaler's response to a question about what remains of the efficient markets hypothesis:
I always stress that there are two components to the theory. One, the market price is always right. Two, there is no free lunch: you can’t beat the market without taking on more risk. The no-free-lunch component is still sturdy, and it was in no way shaken by recent events: in fact, it may have been strengthened. Some people thought that they could make a lot of money without taking more risk, and actually they couldn’t. So either you can’t beat the market, or beating the market is very difficult—everybody agrees with that. My own view is that you can [beat the market] but it is difficult.
The question of whether asset prices get things right is where there is a lot of dispute. Gene [Fama] doesn’t like to talk about that much, but it’s crucial from a policy point of view. We had two enormous bubbles in the last decade, with massive consequences for the allocation of resources.
This is why the separation of the prediction question from the profitability question is so important. If the Federal Reserve is to adopt policies that respond to asset price bubbles, it is necessary only that such phenomena be reliably diagnosed, not that the identification of bubbles be hugely profitable for private investors. And those who deny the possibility of predicting bubbles really ought to provide some direct evidence for this view, independently of the fact that the market is hard to beat. Consider, for instance, this excerpt from Cassidy's interview with Fama:
I guess most people would define a bubble as an extended period during which asset prices depart quite significantly from economic fundamentals.
 
That’s what I would think it is, but that means that somebody must have made a lot of money betting on that, if you could identify it. It’s easy to say prices went down, it must have been a bubble, after the fact. I think most bubbles are twenty-twenty hindsight. Now after the fact you always find people who said before the fact that prices are too high. People are always saying that prices are too high. When they turn out to be right, we anoint them. When they turn out to be wrong, we ignore them. They are typically right and wrong about half the time.
Like Sumner, Fama here is alleging that those who take bubbles seriously are suffering from a cognitive illusion. But it's the very last sentence that I find most troubling. How do we know that such individuals are typically right and wrong about half the time? This is an empirical question, and needs to be addressed with data. And studies showing that it is difficult if not impossible to beat the market are not helpful in answering it.

Monday, January 18, 2010

John Geanakoplos on the Leverage Cycle

In a series of papers starting with Promises Promises in 1997, John Geanakoplos has been developing general equilibrium models of asset pricing in which collateral, leverage and default play a central role. This work has attracted a fair amount of media attention since the onset of the financial crisis. While the public visibility will surely pass, I believe that the work itself is foundational, and will give rise to an important literature with implications for both theory and policy.
The latest paper in the sequence is The Leverage Cycle, to be published later this year in the NBER Macroeconomics Annual. Among the many insights contained there is the following: the price of an asset at any point in time is determined not simply by the stream of revenues it is expected to yield, but also by the manner in which wealth is distributed across individuals with varying beliefs, and the extent to which these individuals have access to leverage. As a result, a relatively modest decline in expectations about future revenues can result in a crash in asset prices because of two amplifying mechanisms: changes in the degree of equilibrium leverage, and the bankruptcy of those who hold the most optimistic beliefs.
This has some rather significant policy implications:
In the absence of intervention, leverage becomes too high in boom times, and too low in bad times. As a result, in boom times asset prices are too high, and in crisis times they are too low. This is the leverage cycle.

Leverage dramatically increased in the United States and globally from 1999 to 2006. A bank that in 2006 wanted to buy a AAA-rated mortgage security could borrow 98.4% of the purchase price, using the security as collateral, and pay only 1.6% in cash. The leverage was thus 100 to 1.6, or about 60 to 1. The average leverage in 2006 across all of the US$2.5 trillion of so-called ‘toxic’ mortgage securities was about 16 to 1, meaning that the buyers paid down only $150 billion and borrowed the other $2.35 trillion. Home buyers could get a mortgage leveraged 20 to 1, a 5% down payment. Security and house prices soared.
Today leverage has been drastically curtailed by nervous lenders wanting more collateral for every dollar loaned. Those toxic mortgage securities are now leveraged on average only about 1.2 to 1. Home buyers can now only leverage themselves 5 to 1 if they can get a government loan, and less if they need a private loan. De-leveraging is the main reason the prices of both securities and homes are still falling.
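The arithmetic behind these leverage figures is simple: leverage is just the purchase price divided by the cash the buyer puts down. A quick check of the numbers quoted above (the figures are Geanakoplos's; the code is merely illustrative):

    # Leverage = purchase price / cash down payment, using the figures quoted above.
    price, cash_down = 100.0, 1.6           # AAA mortgage security bought with 1.6% cash
    print(price / cash_down)                # ~62.5, i.e. "about 60 to 1"

    toxic_total = 2.5e12                    # $2.5 trillion of 'toxic' mortgage securities
    avg_leverage = 16.0
    cash = toxic_total / avg_leverage       # ~$156 billion paid down ("only $150 billion")
    print(cash, toxic_total - cash)         # the remainder (~$2.35 trillion) was borrowed

    house_price, down_payment = 100.0, 5.0  # a 5% down payment on a home
    print(house_price / down_payment)       # 20-to-1 leverage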
Geanakoplos concludes that the Fed should actively "manage system wide leverage, curtailing leverage in normal or ebullient times, and propping up leverage in anxious times." This seems consistent with Paul Volcker's views (as expressed in his 1978 Moskowitz lecture) and with Hyman Minsky's financial instability hypothesis. But it is inconsistent with the adoption of any monetary policy rule (such as the Taylor rule) that is responsive only to inflation and the output gap.
It is worth examining in some detail the theoretical analysis on which these conclusions rest. Start with a simple model with a single asset, two periods, and two future states in which the asset value will be either high or low. Beliefs about the relative likelihood of the two states vary across individuals. These belief differences are primitives of the model, and are not based on differences in information (technically, individuals have heterogeneous priors). Suppose initially that there is no borrowing. Then the price of the asset will adjust until the value of the holdings that the relatively pessimistic wish to sell equals the cash that the relatively optimistic can collectively spend. The price thus partitions the public into two groups, with those more pessimistic about the asset's future value selling to those who are more optimistic.
Now allow for borrowing, with the asset itself as collateral (as in mortgage contracts). Suppose, for the moment, that the amount of lending is constrained by the lowest possible future value of the collateral, so lenders are fully protected against loss. Even in this case, the asset price will be higher than it would be without borrowing: the most optimistic individuals will buy the asset on margin, while the remainder sell their holdings and lend money to the buyers. Already we see something interesting: despite the fact that there has been no change in beliefs about the future value of the asset, the price is higher when margin purchases can take place:
The lesson here is that the looser the collateral requirement, the higher will be the prices of assets... This has not been properly understood by economists. The conventional view is that the lower is the interest rate, then the higher will asset prices be, because their cash flows will be discounted less. But in the example I just described... fundamentals do not change, but because of a change in lending standards, asset prices rise. Clearly there is something wrong with conventional asset pricing formulas. The problem is that to compute fundamental value, one has to use probabilities. But whose probabilities?

The recent run up in asset prices has been attributed to irrational exuberance because conventional pricing formulas based on fundamental values failed to explain it. But the explanation I propose is that collateral requirements got looser and looser.
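To make this concrete, here is a minimal numerical sketch of the mechanism. The numbers are illustrative (an asset paying 1 in the good state and 0.2 in the bad, beliefs about the good state uniformly distributed across agents, each agent endowed with one unit of cash and one unit of the asset), and the code is mine rather than anything in the paper, but it follows the marginal-buyer logic described above:

    # Toy two-period example: agents indexed by h in [0, 1], where agent h believes
    # the good state occurs with probability h (heterogeneous priors). The asset pays
    # 1 in the good state and 0.2 in the bad state; each agent starts with 1 unit of
    # cash and 1 unit of the asset. Numbers are illustrative, not the paper's.
    import math

    GOOD, BAD = 1.0, 0.2

    def expected_value(h):
        """Expected payoff of the asset under agent h's beliefs."""
        return h * GOOD + (1 - h) * BAD

    def equilibrium_price(loan_per_unit):
        """Marginal buyer b and price p when buyers can borrow `loan_per_unit`
        against each unit of the asset they hold (0.2 = largest riskless loan).

        Conditions: (i) the marginal buyer is indifferent, p = expected_value(b);
        (ii) buyers' cash (1 - b) plus their borrowing pays for the b units sold
        by the pessimists: (1 - b) + loan_per_unit = p * b.
        Substituting (i) into (ii) gives a quadratic in b.
        """
        a = GOOD - BAD
        b_coef = 1 + BAD
        c = -(1 + loan_per_unit)
        b = (-b_coef + math.sqrt(b_coef**2 - 4 * a * c)) / (2 * a)
        return b, expected_value(b)

    for m in (0.0, 0.2):
        b, p = equilibrium_price(m)
        print(f"loan per unit = {m:.1f}: marginal buyer = {b:.3f}, price = {p:.3f}")
    # loan per unit = 0.0: price ~ 0.68 (no borrowing)
    # loan per unit = 0.2: price ~ 0.75 (maximum riskless leverage)

Nothing about beliefs or payoffs differs between the two cases; only the ability to borrow against the asset does, and the price rises from roughly 0.68 to roughly 0.75.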
So far, the extent of leverage has been assumed to be fixed (either at zero or at the level at which the lender is certain to be repaid even in the worst-case outcome). But endogenous leverage is an important part of the story, and the extent of leverage must be determined jointly with the interest rate in the market for loans. To accomplish this, one has to recognize that loan contracts can vary independently along two dimensions: the amount promised and the collateral backing the promise:
It is not surprising that economists have had trouble modeling equilibrium haircuts or leverage. We have been taught that the only equilibrating variables are prices. It seems impossible that the demand equals supply equation for loans could determine two variables.

The key is to think of many loans, not one loan. Irving Fisher and then Ken Arrow taught us to index commodities by their location, or their time period, or by the state of nature, so that the same quality apple in different places or different periods might have different prices. So we must index each promise by its collateral...

Conceptually we must replace the notion of contracts as promises with the notion of contracts as ordered pairs of promises and collateral.
Even though the universe of possible contracts is large, only a small subset of these contracts will actually be traded in equilibrium. In the simple version of the model considered here, equilibrium leverage is uniquely determined (given the distribution of beliefs about future asset values).
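The reformulation is easy to express as a data structure: a contract is an ordered pair of a promise and the collateral backing it, and what it delivers in any state is the promise capped by the value of that collateral in that state. In the two-state example, the contract that ends up being traded turns out, roughly speaking, to be the one whose promise equals the worst-case value of its collateral, so that default never actually occurs in equilibrium. Here is a minimal sketch (my notation, not the paper's):

    # Contracts as ordered pairs (promise, collateral), following the quoted passage.
    # Names and numbers are illustrative.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Contract:
        promise: float      # amount the borrower promises to repay
        collateral: float   # units of the asset backing the promise

        def delivery(self, asset_value: float) -> float:
            """What the lender actually receives when the asset is worth
            `asset_value`: the promise, capped by the collateral's value
            (the borrower can always walk away and surrender the collateral)."""
            return min(self.promise, self.collateral * asset_value)

    # The same promise backed by different collateral is a different contract and can
    # trade at a different price -- "index each promise by its collateral".
    safe = Contract(promise=0.2, collateral=1.0)   # promises the worst-case asset value
    risky = Contract(promise=0.5, collateral=1.0)  # promises more than the bad-state value

    for state, value in [("good", 1.0), ("bad", 0.2)]:
        print(state, safe.delivery(value), risky.delivery(value))
    # The safe contract delivers 0.2 in both states; the risky one delivers 0.5 or 0.2.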
To derive the amplifying mechanisms which give rise to the leverage cycle, the model must be extended to allow for three periods. In each period after the initial one the news can be good or bad, so there are now four possible paths through the tree of uncertainty. As before, suppose that at the end of the final period the asset value can be either high or low, and that it will be low only if bad news arrives in both periods. Short term borrowing (with repayment after one period) is possible, and the degree of leverage in each period is determined in equilibrium. It turns out that in the first period the equilibrium margin is just enough to protect lenders from loss even if the initial news is bad. The most optimistic individuals borrow and buy the asset; the remainder sell what they hold and lend.
Now suppose that the initial news is indeed bad. Geanakoplos shows that the asset price will fall dramatically, much more than the change in expectations about its eventual value could possibly warrant. This happens for two reasons. First, the most optimistic individuals have been wiped out and can no longer afford to purchase the asset at any price. And second, the amount of equilibrium leverage itself falls sharply. The result is less borrowing by less optimistic individuals, and hence a much lower price than would arise if those who had borrowed in the initial period had not lost their collateral.
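The full three-period calculation is in the paper, but the two amplifying mechanisms can be illustrated crudely with the same toy setup used above: re-solve the static problem after wiping out the wealth of the most optimistic agents and shutting off borrowing. This is only a rough illustration (my own numbers, not the paper's computation), but the pattern is the one Geanakoplos describes:

    # Same toy setup as before: the asset pays 1 (good) or 0.2 (bad); agent h believes
    # the good state has probability h. Now only agents with h <= H have any wealth
    # (the most optimistic have been wiped out), the single unit of the asset is spread
    # over those remaining agents, and buyers may borrow m per unit of the asset held.
    # Purely illustrative -- not the three-period computation in the paper.
    import math

    GOOD, BAD = 1.0, 0.2

    def price(H=1.0, m=0.0):
        """Marginal buyer b solves: (i) indifference, p = BAD + (GOOD - BAD) * b, and
        (ii) market clearing, (H - b) + m = p * b / H, i.e. buyers' cash plus their
        borrowing pays for the b/H units offered for sale by the pessimists."""
        a = GOOD - BAD
        b_coef = BAD + H
        c = -H * (H + m)
        b = (-b_coef + math.sqrt(b_coef**2 - 4 * a * c)) / (2 * a)
        return BAD + (GOOD - BAD) * b

    print(price(H=1.0, m=0.2))   # ~0.75: full leverage, all optimists solvent
    print(price(H=1.0, m=0.0))   # ~0.68: leverage withdrawn, optimists still solvent
    print(price(H=0.9, m=0.0))   # ~0.62: leverage withdrawn AND top optimists wiped out

The price falls from roughly 0.75 to roughly 0.62 even though the worst-case payoff is unchanged and the beliefs of the remaining agents are untouched; the drop comes entirely from the loss of optimistic buying power and the withdrawal of leverage.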
There is much more in the paper than I have been able to describe, but these simple examples should suffice to illuminate some of the key ideas. As I said at the start of this post, I suspect that a lot of research over the next few years will build on these foundations. There is still a large gap between the rigorous and tightly focused analysis of Geanakoplos on the one hand, and the expansive but informal theories of Minsky on the other. An attempt to bridge this gap seems like it would be a worthwhile endeavor.

---

Update (1/19). Mark Thoma has more on the topic, including an excerpt from an interview with Eric Maskin in which a related paper by Fostel and Geanakoplos is discussed. This is one of five contributions recommended by Maskin, all of which are worth reading.