Tuesday, November 24, 2009

On Buiter, Goodwin, and Nonlinear Dynamics

A few months ago, Willem Buiter published a scathing attack on modern macroeconomics in the Financial Times. While a lot of attention has been paid to the column's sharp tone and rhetorical flourishes, it also contains some specific and quite constructive comments about economic theory that deserve a close reading. One of these has to do with the limitations of linearity assumptions in models of economic dynamics:
When you linearize a model, and shock it with additive random disturbances, an unfortunate by-product is that the resulting linearised model behaves either in a very strongly stabilising fashion or in a relentlessly explosive manner. There is no ‘bounded instability’ in such models. The dynamic stochastic general equilibrium (DSGE) crowd saw that the economy had not exploded without bound in the past, and concluded from this that it made sense to rule out, in the linearized model, the explosive solution trajectories. What they were left with was something that, following an exogenous random disturbance, would return to the deterministic steady state pretty smartly. No L-shaped recessions. No processes of cumulative causation and bounded but persistent decline or expansion. Just nice V-shaped recessions.
Buiter is objecting here to a vision of the economy as a stable, self-correcting system in which fluctuations arise only in response to exogenous shocks or impulses. This has come to be called the Frisch-Slutsky approach to business cycles, and its intellectual origins date back to a memorable metaphor introduced by Knut Wicksell more than a century ago: "If you hit a wooden rocking horse with a club, the movement of the horse will be very different to that of the club" (translated and quoted in Frisch 1933). The key idea here is that irregular, erratic impulses can be transformed into fairly regular oscillations by the structure of the economy. This insight can be captured using linear models, but only if the oscillations are damped - in the absence of further shocks, there is convergence to a stable steady state. This is true no matter how large the initial impulse happens to be, because local and global stability are equivalent in linear models.
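To see the propagation mechanism at work, here is a minimal simulation of the Frisch-Slutsky idea (the coefficients are purely illustrative, not taken from any cited model): a damped second-order linear difference equation converts erratic shocks into fairly regular oscillations, but in the absence of continuing shocks the cycle simply dies away.

```python
# Frisch-Slutsky propagation: a damped linear difference equation driven
# by random impulses (illustrative coefficients, not from any cited model).
import numpy as np

rng = np.random.default_rng(0)
T = 200
a1, a2 = 1.6, -0.8  # complex characteristic roots, modulus ~0.89: damped cycles

def simulate(shocks, y1=0.0):
    """y[t] = a1*y[t-1] + a2*y[t-2] + shocks[t], starting from rest except y[1]."""
    y = np.zeros(T)
    y[1] = y1
    for t in range(2, T):
        y[t] = a1 * y[t - 1] + a2 * y[t - 2] + shocks[t]
    return y

driven = simulate(rng.normal(0.0, 1.0, T))  # ongoing shocks: persistent cycles
impulse = simulate(np.zeros(T), y1=1.0)     # a single displacement: cycle dies out

print("peak |y|, last 50 periods, driven: ", np.abs(driven[-50:]).max())
print("peak |y|, last 50 periods, impulse:", np.abs(impulse[-50:]).max())  # ~0
```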

A very different approach to business cycles views fluctuations as being caused by the local instability of steady states, which leads initially to cumulative divergence away from balanced growth. Nonlinearities are then required to ensure that trajectories remain bounded. Shocks to the economy can make trajectories more erratic and unpredictable, but are not required to account for persistent fluctuations. An energetic and life-long proponent of this approach to business cycles was Richard Goodwin, who also produced one of the earliest such models in economics (Econometrica, 1951). Most of the literature in this vein has used aggregate investment functions and would not be considered properly microfounded by contemporary standards (see, for instance, Chang and Smyth 1971, Varian 1979, or Foley 1987). But endogenous bounded fluctuations can also arise in neoclassical models with overlapping generations (Benhabib and Day 1982, Grandmont 1985).
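The contrast with the nonlinear view can be made equally concrete. The sketch below uses a van der Pol oscillator as a stand-in for the models cited above (a hypothetical functional form, chosen only because it is about the simplest system with a locally unstable steady state and bounded trajectories): an arbitrarily small displacement from the steady state grows into a persistent, bounded oscillation with no shocks at all.

```python
# Endogenous bounded fluctuations: a locally unstable steady state plus a
# nonlinearity that keeps trajectories bounded (van der Pol oscillator,
# used here purely as an illustration).
import numpy as np

mu, dt = 1.0, 0.001
steps = 200_000                    # 200 units of model time, by Euler steps
x, v = 0.01, 0.0                   # start arbitrarily close to the steady state
path = np.empty(steps)
for t in range(steps):
    dx, dv = v, mu * (1.0 - x * x) * v - x  # x'' - mu*(1 - x^2)*x' + x = 0
    x, v = x + dt * dx, v + dt * dv
    path[t] = x

print("max |x| early in the run:", np.abs(path[:1000]).max())   # still tiny
print("max |x| late in the run: ", np.abs(path[-1000:]).max())  # settles near 2
```

Run the two sketches side by side and the difference is exactly the one Buiter describes: the linear model needs a steady stream of shocks to keep its cycle alive, while the nonlinear model fluctuates forever on its own.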

The advantage of a nonlinear approach is that it can accommodate a very broad range of phenomena. Locally stable steady states need not be globally stable, so an economy that is self-correcting in the face of small shocks may experience instability and crisis when hit by a large shock. This is Axel Leijonhufvud's corridor hypothesis, which its author has discussed in a recent column. Nonlinear models are also required to capture Hyman Minsky's financial instability hypothesis, which argues that periods of stable growth give rise to underlying behavioral changes that eventually destabilize the system. Such hypotheses cannot possibly be explored formally using linear models.

This, I think, is the point that Buiter was trying to make. It is the same point made by Goodwin in his 1951 Econometrica paper, which begins as follows:
Almost without exception economists have entertained the hypothesis of linear structural relations as a basis for cycle theory. As such it is an oversimplified special case and, for this reason, is the easiest to handle, the most readily available. Yet it is not well adapted for directing attention to the basic elements in oscillations - for these we must turn to nonlinear types. With them we are enabled to analyze a much wider range of phenomena, and in a manner at once more advanced and more elementary. 
By dropping the highly restrictive assumptions of linearity we neatly escape the rather embarrassing special conclusions which follow. Thus, whether we are dealing with difference or differential equations, so long as they are linear, they either explode or die away with the consequent disappearance of the cycle or the society. One may hope to avoid this unpleasant dilemma by choosing that case (as with the frictionless pendulum) just in between. Such a way out is helpful in the classroom, but it is nothing more than a mathematical abstraction. Therefore, economists will be led, as natural scientists have been led, to seek in nonlinearities an explanation of the maintenance of oscillation. Advice to this effect, given by Professor Le Corbeiller in one of the earliest issues of this journal, has gone largely unheeded.
And sixty years later, it remains largely unheeded.

---
Update (11/27): Thanks to Mark Thoma for reposting this.
Update (11/28): Mark has an interesting follow-up post on Varian (1979).
Update (11/29): Barkley Rosser continues the conversation.

Monday, November 23, 2009

A Further Comment on the Term Structure of Interest Rates

In my last post, I raised some questions about Paul Krugman's view that the government should not be deterred from implementing job creation policies by the fear of raising long term interest rates:
What Krugman seems to be advocating is the following: if long term rates should start to rise, the Treasury should finance the deficit by issuing more short-term (and less long-term) debt, thereby flattening the yield curve and holding long term rates low. This would prevent capital losses for carry traders (although it would lower the continuing profitability of the carry trade if short rates rise).
In effect, Krugman is arguing that the Treasury should itself act like a carry trader: rolling over short term debt to finance a long-term structural deficit. But why is this not being done already? Take a look at the current Treasury yield curve... What is currently preventing the Treasury from borrowing at much more attractive short rates to finance the deficit? Is it a fear of driving up short rates? And if so, won't the same concerns be in place if long term rates start to rise?
From today's New York Times comes a partial answer:
Treasury officials now face a trifecta of headaches: a mountain of new debt, a balloon of short-term borrowings that come due in the months ahead, and interest rates that are sure to climb back to normal as soon as the Federal Reserve decides that the emergency has passed.
Even as Treasury officials are racing to lock in today’s low rates by exchanging short-term borrowings for long-term bonds, the government faces a payment shock similar to those that sent legions of overstretched homeowners into default on their mortgages.
So the Treasury is currently swapping short term obligations for long term ones. Given their reasons for doing this, I don't see that the solution proposed by Krugman - that "the government issue more short-term debt" if long term rates start to rise - is going to be feasible. On the other hand, his suggestion that the Fed buy more long term bonds may still be an option.
Update: Both Dean Baker and Brad DeLong are unhappy with the Times column I linked to above, and probably for good reason. But as long as it accurately describes the current behavior of the Treasury, my argument still stands. Further increases in deficit spending may be a good idea, but they have to be financed in some way, and it's worth thinking about the implications of different maturity dates for new issues.

Saturday, November 21, 2009

On Carry Traders and Long Term Interest Rates

Tyler Cowen thinks that this post by Paul Krugman on long term interest rates and a follow-up by Brad DeLong are critically important and "two of the best recent economics blog posts, in some time".
Krugman's post deals with the question of why some economists in the administration are concerned that further increases in deficit financing could cause long term rates to spike:
Well, what I hear is that officials don’t trust the demand for long-term government debt, because they see it as driven by a “carry trade”: financial players borrowing cheap money short-term, and using it to buy long-term bonds. They fear that the whole thing could evaporate if long-term rates start to rise, imposing capital losses on the people doing the carry trade; this could, they believe, drive rates way up, even though this possibility doesn’t seem to be priced in by the market.

What’s wrong with this picture?

First of all, what would things look like if the debt situation were perfectly OK? The answer, it seems to me, is that it would look just like what we’re seeing.

Bear in mind that the whole problem right now is that the private sector is hurting, it’s spooked, and it’s looking for safety. So it’s piling into “cash”, which really means short-term debt. (Treasury bill rates briefly went negative yesterday). Meanwhile, the public sector is sustaining demand with deficit spending, financed by long-term debt. So someone has to be bridging the gap between the short-term assets the public wants to hold and the long-term debt the government wants to issue; call it a carry trade if you like, but it’s a normal and necessary thing.

Now, you could and should be worried if this thing looked like a great bubble — if long-term rates looked unreasonably low given the fundamentals. But do they? Long rates fluctuated between 4.5 and 5 percent in the mid-2000s, when the economy was driven by an unsustainable housing boom. Now we face the prospect of a prolonged period of near-zero short-term rates — I don’t see any reason for the Fed funds rate to rise for at least a year, and probably two — which should mean substantially lower long rates even if you expect yields eventually to rise back to 2005 levels. And if we’re facing a Japanese-type lost decade, which seems all too possible, long rates are in fact still unreasonably high.

Still, what about the possibility of a squeeze, in which rising rates for whatever reason produce a vicious circle of collapsing balance sheets among the carry traders, higher rates, and so on? Well, we’ve seen enough of that sort of thing not to dismiss the possibility. But if it does happen, it’s a financial system problem — not a deficit problem. It would basically be saying not that the government is borrowing too much, but that the people conveying funds from savers, who want short-term assets, to the government, which borrows long, are undercapitalized.
And the remedy should be financial, not fiscal. Have the Fed buy more long-term debt; or let the government issue more short-term debt. Whatever you do, don’t undermine recovery by calling off jobs creation.
What Krugman seems to be advocating is the following: if long term rates should start to rise, the Treasury should finance the deficit by issuing more short-term (and less long-term) debt, thereby flattening the yield curve and holding long term rates low. This would prevent capital losses for carry traders (although it would lower the continuing profitability of the carry trade if short rates rise).
In effect, Krugman is arguing that the Treasury should itself act like a carry trader: rolling over short term debt to finance a long-term structural deficit. But why is this not being done already? Take a look at the current Treasury yield curve:
[Chart: the current Treasury yield curve]
What is currently preventing the Treasury from borrowing at much more attractive short rates to finance the deficit? Is it a fear of driving up short rates? And if so, won't the same concerns be in place if long term rates start to rise?
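To make the capital-loss mechanism concrete, here is a minimal sketch of the carry trade under discussion, with all rates and maturities chosen as hypothetical round numbers: fund a ten-year bond at the short rate, then let long yields rise by one percentage point.

```python
# Carry trade P&L sketch: borrow short, buy a long bond, and see how a
# rise in long yields turns positive carry into a net loss.
# All rates and maturities below are hypothetical round numbers.
def bond_price(coupon_rate, ytm, maturity, face=100.0):
    """Price an annual-coupon bond by discounting its cash flows."""
    coupon = coupon_rate * face
    return sum(coupon / (1 + ytm) ** t for t in range(1, maturity + 1)) \
        + face / (1 + ytm) ** maturity

short_rate = 0.005                 # cost of rolling short-term funding
long_yield = 0.035                 # yield on a 10-year note at purchase
p0 = bond_price(0.035, long_yield, 10)        # bought at par (coupon = yield)
carry = (long_yield - short_rate) * p0        # one year of positive carry: +3.00

p1 = bond_price(0.035, long_yield + 0.01, 9)  # a year on, long yields up 100bp
print(f"carry earned:   {carry:+.2f}")
print(f"capital change: {p1 - p0:+.2f}")      # about -7.27
print(f"net P&L:        {carry + p1 - p0:+.2f}")
```

A single percentage-point rise in long yields wipes out more than two years of carry, which is precisely why the officials Krugman describes worry that the trade could unwind abruptly.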

Friday, November 20, 2009

Econometric Society Fellows: A Tale of Two Duncans

The Fellows of the Econometric Society are an elite group of economists, numbering fewer than 500, nominated and elected by their peers:
To be eligible for nomination as a Fellow, a person must have published original contributions to economic theory or to such statistical, mathematical, or accounting analyses as have a definite bearing on problems in economic theory... Candidates elected to Fellowship are those with a total number of check marks at least equal to 30 percent of the number of mail ballots submitted by Fellows. Over the past decade about 15 candidates per year have been elected as new Fellows.
Among the most recently elected fellows is 84-year-old R. Duncan Luce, who by most accounts should have been elected decades ago. Jeff Ely (himself a newly elected fellow) explains why it took so long:
The problem is that there are many economists and it's costly to investigate each one to see if they pass the bar. So you pick a shortlist of candidates who are contenders and you investigate those. Some pass, some don't. Now, the next problem is that there are many fellows and many non-fellows and it's hard to keep track of exactly who is in and who is out. And again it's costly to go and check every vita to find out who has not been admitted yet.
So when you pick your shortlist, you are including only economists who you think are not already fellows.  Someone like Duncan Luce, who certainly should have been elected 30 years ago most likely was elected 30 years ago so you would never consider putting him on your shortlist.
Indeed, the simple rule of thumb you would use is to focus on young people for your shortlist.  Younger economists are more likely to be both good enough and not already fellows.
This makes sense. But the proliferation of blogs makes the costs of identifying individuals who have been unfairly overlooked much lower, because the task can be decentralized. Anyone anywhere in the world can make a case and hope that some existing fellows take notice.
In this spirit, let me make a case for Duncan Foley. While still a graduate student in the 1960s, Foley introduced what is now a standard concept of fairness into general equilibrium theory. Here's what Andrew Postlewaite wrote in 1988 about this innovation:
Nearly 20 years ago Duncan Foley introduced a notion of fairness which was completely consistent with standard economic models. This notion was that of envy, or more precisely, lack of envy. An economic outcome was said to be envy-free if no one preferred another's final bundle of goods and services to his or her own bundle. The concept is both compelling and easily accommodated in standard economic models. It is attractive on several grounds. First, it is ordinal - it does not depend upon the particular utility function representing one's preferences, and thus avoids all the problems associated with interpersonal comparison of utilities. Second, the concept relies on precisely the same economic data necessary to determine the efficiency or nonefficiency of the outcomes associated with a particular policy. After Foley introduced this concept into modern economics a number of economists, including Pazner, Schmeidler, Varian, Vind, and Yaari, analyzed and extended the concept.
It is now more than 40 years since Foley developed these ideas. For the concept of envy-free allocations alone he deserves to be elected, but this is just one of several notable contributions. His papers on the core of an economy with public goods (Econometrica 1970), equilibrium with costly marketing (Journal of Economic Theory 1970), portfolio choice and growth (American Economic Review 1970), asset management with trading uncertainty (Review of Economic Studies 1975), and asset equilibrium in macroeconomic models (Journal of Political Economy 1975) all continue to be widely cited. He has made influential contributions to Marxian economics (Journal of Economic Theory 1982) and used ideas from classical thermodynamics to understand price dispersion (Journal of Economic Theory 1994). And he has written several books, including a pioneering effort in 1971 with Miguel Sidrauski on monetary and fiscal policy in a growing economy.
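Foley's envy-freeness condition is, incidentally, simple enough to state directly in code. The sketch below (with made-up utility functions and bundles) checks whether an allocation is envy-free; note that the test is purely ordinal, since each agent's utility function is only ever compared with itself.

```python
# Envy-freeness check: no agent prefers another agent's bundle to his or
# her own. The utility functions and bundles are made-up examples.
def is_envy_free(allocation, utilities):
    """allocation: one bundle per agent; utilities: one function per agent."""
    n = len(allocation)
    return all(
        utilities[i](allocation[i]) >= utilities[i](allocation[j])
        for i in range(n) for j in range(n)
    )

# Two goods, two agents with different preferences.
u = [lambda b: b[0] * b[1],        # agent 0: values the goods as complements
     lambda b: b[0] + 2 * b[1]]    # agent 1: linear, weights good 1 more heavily

print(is_envy_free([(3, 3), (3, 3)], u))  # True: an equal split is envy-free
print(is_envy_free([(4, 1), (2, 5)], u))  # False: agent 0 prefers (2, 5)
```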

It is my hope that someday soon he will be nominated and elected a new fellow of the Econometric Society, an honor he so richly deserves.

Thursday, November 19, 2009

On Rational Expectations and Equilibrium Paths

Via Mark Thoma, I recently came across this post by Paul De Grauwe:
There is a general perception today that the financial crisis came about as a result of inefficiencies in the financial markets and economic actors’ poor understanding of the nature of risks. Yet mainstream macroeconomic models, as exemplified by the dynamic stochastic general equilibrium (DSGE) models, are populated by agents who are maximising their utilities in an intertemporal framework using all available information including the structure of the model... In other words, agents in these models have incredible cognitive abilities. They are able to understand the complexities of the world, and they can figure out the probability distributions of all the shocks that can hit the economy. These are extraordinary assumptions that leave the outside world perplexed about what macroeconomists have been doing during the last decades.
De Grauwe goes on to argue that rational expectations models are "intellectual heirs of central planning" and makes a case for a "bottom-up" or agent-based approach to macroeconomics.
The rational expectations hypothesis is actually even more demanding than De Grauwe's post suggests, since it is an equilibrium assumption rather than just a behavioral hypothesis. It therefore requires not only that agents have "incredible cognitive abilities" but also that this fact is common knowledge among them, and that they are able to coordinate their behavior in order to jointly traverse an equilibrium path. This point has been made many times; for a particularly clear statement of it see the chapter by Mario Henrique Simonsen in The Economy as an Evolving Complex System.

Equilibrium analysis can be very useful in economics provided that the conclusions derived from it are robust to minor changes in specification. In order for this to be the case, it is important that equilibrium paths are stable with respect to plausible disequilibrium dynamics. As Richard Goodwin once said, an unstable equilibrium is "the one place the system will never be found." But while equilibrium dynamics are commonplace in economics now, the stability of equilibrium paths with respect to disequilibrium dynamics is seldom considered worth exploring. 
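Goodwin's remark is easy to illustrate. The cobweb model below (with hypothetical demand and supply parameters) has a perfectly well-defined equilibrium price, but under the natural disequilibrium dynamic - supply responding to last period's price - any deviation from that equilibrium, however small, is amplified.

```python
# An unstable equilibrium under disequilibrium dynamics: a linear cobweb
# market with hypothetical parameters, chosen so that supply is more
# price-responsive than demand (d > b), the classic unstable case.
a, b = 10.0, 1.0   # demand:  q = a - b * p(t)
c, d = 1.0, 1.2    # supply:  q = c + d * p(t-1)
p_star = (a - c) / (b + d)         # the fixed point of the dynamic

p = p_star + 0.01                  # start one cent away from equilibrium
path = [p]
for t in range(25):
    p = (a - c - d * p) / b        # price that clears the market given lagged supply
    path.append(p)

print(f"equilibrium price: {p_star:.4f}")
print("deviation from equilibrium, every 5 periods:",
      [round(abs(q - p_star), 3) for q in path[::5]])
# each period multiplies the deviation by d/b = 1.2, so the system spirals
# away from the one point at which it could, in principle, rest
```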

Wednesday, November 18, 2009

On Efficient Markets and Practical Purposes

Eugene Fama continues to believe that the efficient markets hypothesis "provides a good view of the world for almost all practical purposes" and Robert Lucas seems to agree:
One thing we are not going to have, now or ever, is a set of models that forecasts sudden falls in the value of financial assets, like the declines that followed the failure of Lehman Brothers in September. This is nothing new. It has been known for more than 40 years and is one of the main implications of Eugene Fama’s “efficient-market hypothesis” (EMH), which states that the price of a financial asset reflects all relevant, generally available information. If an economist had a formula that could reliably forecast crises a week in advance, say, then that formula would become part of generally available information and prices would fall a week earlier.
It is surely true that if a crash could reliably be predicted to occur a week from today, then it would occur at once. But what if it were widely believed that stock prices were well above fundamental values, and that barring any major changes in fundamentals, a crash could reliably be predicted to occur at some point over the next couple of years? Since the timing of the crash remains uncertain, any fund manager who attacks the bubble too soon stands to lose a substantial sum. For instance, many major market players entered large short positions in technology stocks in 1999 but were unable or unwilling to meet margin calls as the Nasdaq continued to rise. Some were wiped out entirely, while others survived but took heavy losses because they called an end to the bubble too soon:
Quantum, the flagship fund of the world's biggest hedge fund investment group, is suffering its worst ever year after a wrong call that the "internet bubble" was about to burst... Quantum bet heavily that shares in internet companies would fall. Instead, companies such as Amazon.com, the online retailer, and Yahoo, the website search group, rose to all-time highs in April. Although these shares have fallen recently, it was too late for Quantum, which was down by almost 20%, or $1.5bn (£937m), before making up some ground in the past month. Shawn Pattison, a group spokesman, said yesterday: "We called the bursting of the internet bubble too early."
Note that this was written in August 1999, several months before the Nasdaq peaked above 5,000, and therefore cannot be said to reflect what Kenneth French might call the false precision of hindsight.

Along similar lines, a 1986 paper by Frankel and Froot contained survey evidence on expectations suggesting that investors believed both that the dollar was overvalued at the time, and that it would appreciate further in the short term. They were unwilling, therefore, to short the dollar despite believing that it would decline substantially sooner or later.
A crash will occur when there is coordinated selling by many investors making independent decentralized decisions, and a bubble may continue to grow until such coordination arises endogenously. In his response to Lucas, Markus Brunnermeier sums up this view as follows:
Of course, as Bob Lucas points out, when it is commonly known among all investors that a bubble will burst next week, then they will prick it already today. However, in practice each individual investor does not know when other investors will start trading against the bubble. This uncertainty makes each individual investor nervous about whether he can be out of (or short) the market sufficiently long until the bubble finally bursts. Consequently, each investor is reluctant to lean against the wind. Indeed, investors may in fact prefer to ride a bubble for a long time such that price corrections only occur after a long delay, and often abruptly. Empirical research on stock price predictability supports this view. Furthermore, since funding frictions limit arbitrage activity, the fact that you can’t make money does not imply that the “price is right”.
This way of thinking suggests a radically different approach for the future financial architecture. Central banks and financial regulators have to be vigilant and look out for bubbles, and should help investors to synchronize their effort to lean against asset price bubbles. As the current episode has shown, it is not sufficient to clean up after the bubble bursts, but essential to lean against the formation of the bubble in the first place.
This argument is made with a great deal of care and technical detail in a 2003 Econometrica paper by Abreu and Brunnermeier. If true, then clearly there are some terribly important practical purposes for which the EMH does not provide a good view of the world.
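A toy version of the Abreu-Brunnermeier synchronization problem can be sketched in a few lines, with all numbers invented for illustration: arbitrageurs become aware of the bubble at dispersed times, but the price does not collapse until a critical mass of them has turned seller, so the bubble long outlives the moment it is first recognized.

```python
# Synchronization and delayed crashes, loosely in the spirit of Abreu and
# Brunnermeier (2003). All parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_traders = 1000
threshold = 0.5                     # fraction of sellers needed to burst the bubble
awareness = rng.uniform(0, 50, n_traders)  # period at which each trader spots it

# Each trader sells once aware; the bubble bursts when the selling pressure
# of aware traders first reaches the threshold fraction of the market.
first_recognized = awareness.min()
burst_time = np.quantile(awareness, threshold)

print(f"bubble first recognized at t = {first_recognized:6.2f}")
print(f"bubble finally bursts at   t = {burst_time:6.2f}")
# no trader can profitably attack alone, so the mispricing persists long
# after it is first detected: "you can't make money" != "the price is right"
```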

Tuesday, November 17, 2009

Eric Maskin's Reading Recommendations

Thanks to Tomas Sjöström, I recently came across an interview with Eric Maskin in which he states:
I don’t accept the criticism that economic theory failed to provide a framework for understanding this crisis... I think most of the pieces for understanding the current financial mess were in place well before the crisis occurred.
Maskin identifies five contributions that he considers to be particularly useful: Diamond and Dybvig on bank runs, Holmstrom and Tirole on moral hazard and liquidity crises in bank lending, Dewatripont and Tirole on the regulation of bank capitalization, Kiyotaki and Moore on the amplification and spread of declines in collateral values, and Fostel and Geanakoplos on leverage cycles.
What's striking to me about this set of readings is that they skew heavily towards microeconomic theory, and are essentially independent of canonical models in contemporary macroeconomics. At some point perhaps graduate textbooks in macroeconomics will feature a fully integrated analysis of goods, labor and financial markets in which collateral and leverage are linked to output, employment, and prices in a serious way. In the meantime, there are two recent (post-crisis) papers that I would add to Maskin's list: Adrian and Shin and Brunnermeier and Pedersen.

By the way, if you follow the link to the complete interview, the photograph at the top of the page depicts anxious depositors outside a branch office of Northern Rock, the first British financial institution since 1866 to experience a classic bank run. Hyun Shin's paper on the failure of Northern Rock is also well worth reading.