Thursday, January 14, 2010

Paul Volcker's Moskowitz Lecture

Back in 1978, Paul Volcker delivered the annual Charles C. Moskowitz Memorial Lecture at New York University's College of Business and Public Administration. The lecture was published (along with the remarks of two discussants) under the title The Rediscovery of the Business Cycle. Ten years later the College had been renamed after Leonard Stern, and the book was out of print. When I looked for it about a month ago, the Columbia library came up empty and I couldn't find a single copy available for purchase online. I finally managed to get one on inter-library loan from NYU.
It's been fifteen years since I last looked at this book and it was well worth reading again. In it, Volcker develops a theory of economic and financial crisis focused not on routine short-term fluctuations, but rather on serious disruptions arising after a prolonged period of relatively low volatility. His analysis is based on changes over time in financial practices, and the macroeconomic implications of such changes:
Mood is too intangible to be accurately measured directly. However a gradual increase in confidence and increasing willingness to take risks does seem a natural consequence of a period of general prosperity. Conversely, the experience of a major recession is a chastening experience. Households, businesses, and other economic units have witnessed bankruptcies, unemployment and loss of income. Earlier plans are disrupted. Those taking the largest risks and without financial reserves tend to be hit the hardest. So, at first, caution prevails, even as recovery unfolds. But if the recovery is sustained and downturns are minor, the new surprises are likely to be favorable: productivity typically rises rapidly as capacity is more fully utilized; profits exceed expectations; jobs are easier to find; and real incomes rise. The aggressive risk-taker profits handsomely; the rewards of caution seem less evident as memories of hard times recede.

As confidence increases, that in itself gives further thrust to the expansion. Business embarks more freely on modernization and expansion, and it finds more lenders ready to underwrite its plans and also finds willing equity investors. More buoyant prices may, for a time at least, help encourage aggressive inventory or capital spending. On the consumer side, as job opportunities expand, future income seems more secure. As stock market and home prices go up, the consumer's estimate of his current and future wealth may rise.

Financial markets and financial institutions will share in the altered mood. Equity is more highly leveraged, more borrowing may be done at shorter terms, and banks and other lenders draw down their liquidity and other financial reserves. Almost imperceptibly -- until they only seem lax in retrospect -- traditional credit standards may be eased precisely because the new economic environment seems more secure. And so long as the forward thrust of the economy is maintained, losses are small.

Even the professional economists may be caught up in the euphoria. They may even be inclined to agree that we have finally licked the business cycle and thus help reinforce the climate of confidence!

But in the end the process is self-limiting. There are limits to economic growth over the short term: to employment, to productivity, to the need for capital goods or inventory, and to risk and leverage. When manpower is fully occupied, the economy cannot continue to improve as fast as before, and financial reserves can be exhausted. And sooner or later some exogenous force may provide a rude shock that forces a reappraisal of risks.

The result is disappointment. Reality falls short of anticipation. With past excesses suddenly exposed, a recession can quite suddenly turn severe. Risks that were blithely discounted earlier now loom large. The income stream no longer seems so certain. Jobs are harder to get and capital values may fall. Households and business firms alike try to cut their spending and rebuild liquidity. Risk premiums increase. And the new caution inhibits recovery.
Volcker does not stop at this general characterization of economic fluctuations; he goes on to provide evidence for the theory based on changes in a broad range of variables during the post-war period. For non-financial firms, these include the debt-asset ratio and the ratio of liquid assets to short term liabilities. For commercial banks he examines the ratio of loans to bank credit and the ratio of capital to risk assets. For the stock market he looks at the price-earnings ratio and the dividend yield. In all cases he finds evidence of declining margins of safety.
Regardless of whether one agrees with Volcker's interpretation of the data, it would be difficult to make a case that such changes in financial structure should be ignored in the formulation of monetary policy. In light of this, I find it puzzling that the Taylor rule, which responds only to the inflation rate and the output gap, plays such a prominent role in the evaluation of Federal Reserve actions. For instance, Ben Bernanke recently appealed to a modified version of the Taylor rule (based on expected rather than realized inflation) in justifying the Fed's interest rate policies over the 2002-2006 period. In response, John Taylor argued that the Fed's inflation forecasts were in fact too low, and that there is no evidence to suggest that the modified rule used by Bernanke would result in better central bank performance.
To an outsider, it seems odd that this debate is about different specifications of a rule that disregards key determinants of financial fragility, such as the measures of leverage and exposure examined in Volcker's lecture. Is it really possible to evaluate the tightness or ease of monetary policy while neglecting such factors entirely?
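To make the narrowness of the rule concrete, here is the original Taylor (1993) specification, which sets the policy rate from just two inputs. The coefficients (0.5, 0.5), the 2% inflation target, and the 2% equilibrium real rate are Taylor's original choices; nothing in the rule responds to leverage, credit standards, or any other measure of financial fragility.

```python
# The original Taylor (1993) rule: the recommended nominal policy rate
# depends only on the inflation rate and the output gap. All quantities
# are in percent.

def taylor_rate(inflation, output_gap, target=2.0, r_star=2.0,
                a_pi=0.5, a_y=0.5):
    """Nominal rate = equilibrium real rate + inflation
    + 0.5 * (inflation gap) + 0.5 * (output gap)."""
    return r_star + inflation + a_pi * (inflation - target) + a_y * output_gap

# Example: 3% inflation with a 1% positive output gap.
print(taylor_rate(3.0, 1.0))  # -> 6.0
```

Any debate over "modified" versions of the rule, such as substituting expected for realized inflation, amounts to changing the arguments of this function; the absence of financial variables remains.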

---

Update (1/16). Thanks to Mark Thoma and Yves Smith for linking here. First time visitors might find my earlier post on Hyman Minsky to be of some interest. I'm sure I'm not the first to have noticed a striking resemblance between Volcker's approach to financial fragility and that of Minsky.

Sunday, January 10, 2010

Paul Samuelson on Nonlinear Dynamics

There have been a number of tributes to Paul Samuelson over the past couple of weeks applauding both his intellectual contributions and his character. In his appreciation, Paul Krugman identifies eight distinct seminal ideas, "each of which gave rise to a vast and continuing research literature." An even more comprehensive list of accomplishments spanning six decades may be found in Avinash Dixit's moving eulogy.
One of the articles mentioned in passing by Dixit is a 1939 paper that was published in the Review of Economics and Statistics when Samuelson was just 24 years old. Dixit describes it as the "first workhorse model of business cycles" but that is a bit too generous: earlier contributions by Frisch, Slutsky, and Kalecki each have a stronger claim. Furthermore, the model in this paper is linear and therefore generates oscillations that are either damped or explosive.
A far more interesting paper by Samuelson appeared a few months later in the Journal of Political Economy. By coincidence, Barkley Rosser mentioned this work in an intriguing comment on Mark Thoma's page just two weeks before Samuelson's death. I recently took another look at the paper and it does indeed contain one of the earliest models capable of generating persistent oscillations without exogenous shocks, thus anticipating the seminal work of Richard Goodwin by more than a decade.
Samuelson took the linear multiplier-accelerator model of his earlier paper and extended it in two ways. First, he allowed for a nonlinear consumption function with the property that the marginal propensity to consume decreased with income, "approaching zero in the limit." Second, he observed that "net investment can only be negative to the extent of deferred replacement or consumption," which necessarily implies a nonlinear investment function. If the steady state is locally unstable, this model generates fluctuations that are bounded and persistent even in the absence of exogenous shocks.
Samuelson recognized the possibility that in his two-dimensional difference equation system "successive cycles need not be similar in timing or amplitude." We now know that highly irregular trajectories are possible even in one-dimensional discrete time models (though at least three dimensions are required in continuous time). Furthermore, in footnote 7 of the paper, Samuelson made the following cryptic comment:
There remains one interesting problem still to be explored. Mathematical analysis of the nonlinear case may reveal that for certain equilibrium values of α and β a periodic motion of definite amplitude will always be approached regardless of initial conditions. Such a relation can never result from systems of difference equations with constant coefficients, involving assumptions of linearity. This illustrates the inadequacy of such assumptions except for the analysis of small oscillations.
Here Samuelson is not only conjecturing the possibility of a stable limit cycle, but also arguing that the existence of such a cycle may be proved mathematically. In a continuous time model this would be possible using the Poincaré-Bendixson Theorem, but this result has no counterpart in discrete time systems. Hence the existence of a limit cycle in Samuelson's model would have to be demonstrated numerically rather than analytically.
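Such a numerical demonstration is easy to sketch. The simulation below is a nonlinear multiplier-accelerator model in the spirit of Samuelson's paper, but the concave consumption function, the floor on disinvestment, and all parameter values are illustrative choices of mine, not Samuelson's own. With these numbers the steady state is locally unstable, yet the trajectory remains bounded and keeps oscillating with no exogenous shocks.

```python
# A nonlinear multiplier-accelerator sketch: hypothetical functional
# forms and parameters, chosen only to illustrate bounded, persistent
# oscillations of the kind Samuelson conjectured.
import math

def simulate(periods=200, g=1.0, beta=4.0, y0=2.0, y1=2.2):
    def consume(y):
        # Concave consumption: the marginal propensity to consume falls
        # with income, approaching zero in the limit.
        return 3.0 * (1.0 - math.exp(-0.5 * y))
    ys = [y0, y1]
    for _ in range(periods):
        c_now, c_prev = consume(ys[-1]), consume(ys[-2])
        # Accelerator: net investment responds to the change in consumption,
        # but disinvestment is bounded below (deferred replacement).
        invest = max(beta * (c_now - c_prev), -0.5)
        ys.append(g + c_now + invest)
    return ys

path = simulate()
# The path stays within fixed bounds but never settles down: persistent
# endogenous fluctuations, with no stochastic shocks anywhere in the model.
print(min(path), max(path))
```

Whether the attractor here is a clean limit cycle or something more irregular would take closer numerical study, which is precisely the point of Samuelson's footnote.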
Samuelson's model is outdated in many respects, and one could raise objections to a number of his core assumptions. But the paper does offer a perspective on economic dynamics that stands in sharp contrast to the currently dominant Frisch-Slutsky approach, and is worth reading for that reason alone.

---

Update (1/11): Barkley Rosser (via Mark Thoma) has more on the subject. My earlier discussion of Buiter, Goodwin, and nonlinear dynamics may also be of some interest; this is the post to which Barkley was originally responding.

Wednesday, January 06, 2010

On Inference and Coordination in Speculative Markets

In my previous post I argued that the incentive to manipulate prices in prediction markets is strongest when there is a positive feedback between subjective beliefs and objective probabilities. In response, Robin Hanson made the following observation:
The possibility of self-fulfilling or self-defeating prophecies is an issue with any forecasting mechanism where forecasters have any incentives to offer more, vs. less, accurate forecasts. It is not a problem particular to prediction markets.
This is certainly true but (as I said in my reply) the anonymity of participation in prediction markets means that in interpreting the data, we cannot discount the forecasts of those who have the greatest incentives to mislead us. Traders who try to manipulate beliefs will typically lose money, while pollsters and academics who do so will lose reputation and credibility. This is why polling done on behalf of political parties is often discounted and excluded from aggregates, and why house effects play such an important role in the interpretation of polling data.
On the other hand, if attempts at manipulation in prediction markets are too blatant, they can result in strong and rapid push back by other traders. In fact, the possibility of manipulation increases market participation and liquidity because it generates a profit opportunity for those who can quickly detect and exploit it. But how might manipulation be detected in practice?
Put yourself in the position of a trader who notices a significant, unexplained rise in the price of a contract. How should such a movement be interpreted? It could reflect some new information that has not yet filtered into the public sphere, in which case it might be profitable to buy ahead of the news. On the other hand, it might reflect an attempt at price manipulation (or simply irrational exuberance) on the part of some individuals, in which case it might be profitable to sell short before the price returns to more reasonable levels.
In reacting to such price movements, therefore, traders face an inference problem. Identifying the cause of the change in price is important in predicting the direction of subsequent movements, and hence in selecting the positions to enter. But even if one is fairly confident about the cause, trading on the opportunity carries risks unless it is done in concert with others. A single trader will typically not be able to arrest movements in price even if these are shifts away from fundamentals. One could enter a position and wait, but this could tie up margin and result in lost opportunities elsewhere. Even worse, if the waiting period is long, a shift in fundamentals could occur that reverses the expected value of one's position. This gives rise to a coordination problem: traders could all diminish the risk they face if they act against market manipulation in unison.
Both the inference problem and the coordination problem can be solved by effective communication, and there are several examples on the Intrade forum of traders trying to make sense of price movements and coordinate a collective response. One such incident pertains to a suspicious movement in the price of the contract for Bill Richardson in the Democratic vice-presidential nominee market on February 28, 2008.
At the start of the day, and for several days previously, the price of this contract was around 6. (The price is expressed as a percentage of the $10 contract face value, so each contract was selling for around 60 cents.) In the late afternoon, the price suddenly doubled, and kept rising until it reached 20 before falling back down to single digits in a matter of hours. A trader spotted the initial jump in price, and began a thread on the forum that is quite revealing about the manner in which the inference and coordination problems are sometimes tackled. Let's pick up the thread at around 5pm, when "speedo" notices a sharp, unexplained rise in price:
28/02/2008 17:01:31
richardson.vp just doubled, there is a standing offer to buy 128 at 12. any idea why?
28/02/2008 18:16:52
I can't find any news that would drive up the Richardson VP contract this much (last trade at 15, high bid at 13.2).
28/02/2008 19:30:36
now its trading at 18 - cant see anything either
28/02/2008 19:44:07
Could it be that someone heard a rumor richardson was set to endorse, and is planning to get a small bump from that, then get out?

I really can't find any rumors even online of anything happening today. Seems like only 1-2 people are propping this contract up.
28/02/2008 19:52:01
Yes - seems like is about to endorse one of the candidates.

http://www.upi.com/NewsTrack/Top_News/2008/02/27/bill_richardson_may_endorse_by_friday/2675/

I assume there can be no doubt that it will be Obama? But would that be enough to earn him a spot?
28/02/2008 19:56:36
That news has been out there for a while. So it wouldnt really justify such a big bump.
28/02/2008 21:12:46
I can't see anything either. I've gone ahead and sold some at 18.
28/02/2008 21:37:03
It's up to 20 now, but unfortunately I'm out of margin.
28/02/2008 21:50:58
I'm out of margin too ... I really can't figure this one out. The person who is doing this seems to have taken out a $1-$2k bet on Richardson...
About an hour and half later, the price falls back to a high bid of 11 and keeps sliding:
28/02/2008 22:10:21
Now there are some big orders on the buy side at 11 and 8. The guy buying it up seems to have run out of steam...
28/02/2008 22:13:00
He STONGLY hinted last week on Wolf Blitzer that he would endorse Obama. He has a snowball's chance in hell of getting the VP spot though...
28/02/2008 22:20:02
Yes, he's out of steam, thanks to you, me, and whoever else jumped at the opportunity. We should be able to cover these shorts pretty soon. This is a good thread.
29/02/2008 00:22:02
... Wish I read this thread a couple of hours ago as shorting Richardson @ 18 is TREMENDOUS VALUE.
From now on, I will check this section of the political thread first.
This example suggests to me that if the Intrade forum did not exist, market manipulation would be easier and less costly.
The inference and coordination problems are not confined to prediction markets: they arise in speculative markets more generally. A central finding in a 2003 Econometrica paper by Abreu and Brunnermeier is that even if traders are perfectly able to solve the inference problem, their inability to coordinate their actions can give rise to bubbles and crashes. I consider this to be a robust insight, and have discussed it at some length in an earlier post on market efficiency.

---

Update (1/7): Brad DeLong has an interesting post on the efficient markets hypothesis, with links to recent pieces by Justin Fox and Paul Kedrosky. One of the many unfortunate consequences of the EMH is that it inhibits serious research into the process through which information (and disinformation) comes to be reflected in prices.

---

Update (1/9). Robin Hanson's reply to DeLong and Kedrosky is worth reading. I think he's right to point out that Monday morning quarterbacking is too easy, but disagree with his claim that "to deny EMH is to assert that prices are predictably wrong." The EMH makes a stronger claim than price unpredictability; it identifies prices with fundamental values. For instance, unpredictability is consistent with excess volatility in the sense of Shiller (1981), but the EMH is not. Nevertheless, if one is going to talk about the Federal Reserve identifying and reacting to bubbles in real time, it's important to settle the predictability question.

Friday, January 01, 2010

On Prediction Markets and Self-Fulfilling Prophecies

Over on the Freakonomics blog, Ian Ayres writes:
One of the great unresolved questions of predictive analytics is trying to figure out when prediction markets will produce better predictions than good old-fashion mining of historic data. I think that there is fairly good evidence that either approach tends to beat the statistically unaided predictions of traditional experts.

But what is still unknown is whether prediction markets dominate statistical prediction.
In asking the "which is better" question, it is important to distinguish between two very different types of events for which prediction markets currently exist. Some events have a likelihood of occurrence that can safely be assumed to be independent of market predictions: they do not become more or less likely simply because beliefs about their likelihood change. Whether Justice Stevens will be the next to depart the bench, or whether snowfall in Central Park will exceed twenty inches this season, are examples of such events (contracts on both are currently available on Intrade, and each is estimated to occur with 80% probability according to the price at last trade). Such events may be described as exogenous.
There is an entirely different class of events that may be termed endogenous: their likelihood of occurrence is sensitive to beliefs about this likelihood. Political campaigns, especially for party nominations in major elections, have this character. A candidate who is considered to be a prohibitive favorite will have a major fund-raising advantage, for instance if early donors believe that they will be rewarded with access, perks, or appointments. George W. Bush leveraged an aura of inevitability into a massive financial advantage in the contest for the Republican nomination in 2000, and Hillary Clinton attempted to do the same eight years later.  By the same token, a campaign that is perceived to have little chance of success may never get off the ground at all, regardless of the strengths of the candidate in question. Hence managing expectations about the likelihood of success is often a major campaign priority.
Paradoxically, the very same market characteristics that serve to enhance predictive accuracy in the case of exogenous events could undermine accuracy in forecasting endogenous events. Accurate forecasting of exogenous events requires broad participation and high levels of market visibility and liquidity, so that decentralized information can be effectively aggregated. But in the case of endogenous events, the more reliable a market is perceived by the public to be, the greater the incentives to manipulate prices at the margin. The problem is especially severe when there is a positive feedback loop between subjective beliefs and objective probabilities, as in the case of contested elections. The costs of such manipulation are small when compared with the costs of prime time advertising, and the returns can be enormous if the viability of one's campaign (or that of a competitor) is at stake.
In an earlier post I discussed some of these issues in the context of a proposal by Robin Hanson arguing for the development of prediction markets for climate change (Nate Silver was supportive of the idea, while Matt Yglesias was skeptical). Would such markets be dealing with exogenous or endogenous events? At first glance, it might seem that the events are exogenous, as in the case of this season's snowfall. But when forecasting temperatures several decades into the future there is an important feedback loop to be considered. A credible prediction that temperatures will remain stable will have the effect of stalling efforts to curtail greenhouse gas emissions, and this in turn could affect the future course of climate change. Note, however, that in this case the feedback is negative rather than positive: a decrease in the perceived likelihood of warming will result in less aggressive curtailment of emissions, and hence an increase in the objective probability of warming. As a result, any attempt at market manipulation by those who stand to lose from abatement policies will become progressively more expensive as temperatures rise.
To put it another way, when the feedback between subjective beliefs and objective probabilities is positive, successful manipulation of prices can pay for itself by changing beliefs in a manner that becomes self-fulfilling. But when the feedback is negative, manipulation must eventually undermine its own success, since it results in beliefs that are systematically self-falsifying. For this reason I remain (cautiously) optimistic about the prospects for developing accurate prediction markets for climate change.
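The contrast between the two feedback regimes can be illustrated with a toy belief-adjustment process. Both probability maps below are invented for illustration: under the positive-feedback map, a small manipulated shift in belief near the tipping point is self-fulfilling, while under the negative-feedback map any manipulated belief is pulled back toward a unique fixed point.

```python
# Toy dynamics of a subjective belief p adjusting toward the objective
# probability f(p) that the belief itself induces. Both maps are
# invented; neither is calibrated to any real market.

def iterate(f, p0, steps=200, rate=0.5):
    """Partial adjustment of belief toward the induced probability."""
    p = p0
    for _ in range(steps):
        p += rate * (f(p) - p)
    return p

def positive_feedback(p):
    # Higher belief raises the objective probability (S-shaped map):
    # 0 and 1 are both stable outcomes, and 0.5 is a tipping point.
    return p ** 2 / (p ** 2 + (1.0 - p) ** 2)

def negative_feedback(p):
    # Higher belief lowers the objective probability: a unique stable point.
    return 1.0 - 0.8 * p

print(iterate(positive_feedback, 0.45))  # settles near 0
print(iterate(positive_feedback, 0.55))  # a small push across 0.5 flips it to ~1
print(iterate(negative_feedback, 0.10))  # returns to ~0.556 from any start
```

In the positive case a manipulator only needs to move beliefs across the tipping point once; in the negative case the manipulated belief generates forces that push it back, so the manipulation must be paid for continuously.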

Saturday, December 26, 2009

Maturity Diversification

In an earlier post I linked to a provocative proposal by Andy Harless in which he argues that the Treasury should be shortening the maturity structure of government debt. His reasoning is roughly as follows:
  1. At current interest rates, money and bills are virtually identical assets: holders of bills are requiring no compensation for the additional liquidity or safety that money would provide. This makes conventional monetary policy (exchanging cash for bills) ineffective. On the other hand, an increase in the issue of bills can have expansionary effects, putting the Treasury effectively in charge of monetary policy.
  2. In addition to its expansionary effects, a shift to shorter maturities on government debt should lower the expected value of the costs of debt service, since there is a liquidity premium to be paid on longer term bonds. However, it would also increase the vulnerability of the Treasury to unexpected increases in short term rates (expected increases are already implicit in the yield curve). 
  3. Maintaining long maturities to insure against this risk would be hedging against good news, assuming that an unexpected increase in short term rates would signal a more rapid recovery than is currently forecast. In this case (unexpectedly) higher rates would be accompanied by (unexpectedly) greater federal revenues and lower benefit payments, so the financial position of the Treasury need not be worsened despite the greater costs of debt service.
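The trade-off in the second and third points can be illustrated with a toy Monte Carlo comparison of rolled-over bills against a locked-in long bond. The term premium, rate volatility, and horizon below are all invented numbers, and the short rate is modeled as a simple random walk, with payments floored at zero.

```python
# Toy comparison: finance a fixed debt over ten years with rolled-over
# short-term bills versus one long-term bond issued at a term premium.
# All numbers are illustrative, not calibrated to actual Treasury data.
import random

random.seed(0)

def financing_costs(n_paths=10000, years=10, short_rate=0.002,
                    term_premium=0.015, vol=0.01):
    long_rate = short_rate + term_premium    # locked in at issuance
    long_cost = long_rate * years            # known with certainty
    short_costs = []
    for _ in range(n_paths):
        r, total = short_rate, 0.0
        for _ in range(years):
            total += max(r, 0.0)             # pay this year's rate, floored at 0
            r += random.gauss(0.0, vol)      # short rate as a random walk
        short_costs.append(total)
    return long_cost, short_costs

long_cost, short_costs = financing_costs()
mean_short = sum(short_costs) / len(short_costs)
worst_short = max(short_costs)
print(long_cost, mean_short, worst_short)
```

In this toy setup the short strategy is cheaper on average (it avoids the term premium) but its worst realizations are far more expensive than the long bond, which is exactly the tension Harless's argument turns on.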
That's his argument, if I understand it correctly.  The proposal is similar in some respects to one made recently by Joe Gagnon, in which he argues that the Fed should be buying substantial amounts of long term debt. Both proposals would result in roughly the same mix of short and long term securities in the hands of the public, and would lower long term interest rates. But there are two important differences. First, as Gagnon notes, "it would be better for the Fed to do it because they have the staff and expertise for gauging how much to do and when to stop." Second, there would be greater maturity diversification in Treasury issues, raising the expected value of debt service costs but reducing the vulnerability to unexpected fluctuations in interest rates.
How important is the issue of maturity diversification? A commenter (identified only as JKH) on Harless' blog thinks that it would be irresponsible to ignore it:
As far as the Treasury is concerned, it’s just acting according to prudent interest rate risk management considerations in locking in some interest cost on such a massive prospective debt load. It’s just a matter of judgement on how to diversify maturities, given the “risk” that the Fed may want to start tightening some time. Ignoring the issue of maturity mix is irresponsible. From there it’s judgement on the right mix.
This is fair enough as far as it goes, but what are the principles on the basis of which such judgment ought to be exercised? The trade-off here is between the expected costs of debt service and the risk of facing a situation in which costs are much greater than forecast. The basic problem was expressed very succinctly by Richard Roll (1971) as follows:
For example, consider a government agency borrowing for a specific long term project at currently high rate levels. It might be able to reduce the total expected interest payments (and expected taxes) by financing the project partly with short-term bonds rather than entirely with bonds whose term-to-maturity matches the project's life. On the other hand, even though the agency expects lower rates in the future, it would not feel secure in funding the entire project with short-term bonds that would require a later refinancing. It would prefer to pay the higher expected rate on a portfolio of long- and short-term bonds rather than accept the risk that rates will go higher contrary to expectations.
Given some specification of Treasury preferences and expectations about the future course of interest rates, this is a fairly standard optimization problem. But it is not obvious (at least to me) how the desired maturity structure should vary with changes in uncertainty about future rates. Other things equal, greater uncertainty should lengthen maturities. However, greater uncertainty will also steepen the yield curve and raise the expected costs of long-term (relative to short-term) financing, and this effect should reduce desired maturities. In any case, these effects should be possible to identify, given some specification of Treasury objectives.
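The first of these comparative statics can be checked in a minimal mean-variance version of the problem. Everything here is an assumption of mine: costs are measured relative to expected short-term financing, the term premium is held fixed, and the risk-aversion parameter is invented.

```python
# A minimal mean-variance sketch of the maturity-choice problem: choose
# the share w of long-term debt to minimize expected debt service plus a
# penalty on variance. Illustrative only.

def optimal_long_share(term_premium, vol, risk_aversion):
    """Minimize w * term_premium + risk_aversion * ((1 - w) * vol)**2
    over w in [0, 1].

    The long bond costs the term premium for certain; the short strategy
    costs nothing in expectation but has standard deviation vol.
    """
    if risk_aversion <= 0 or vol == 0:
        return 0.0
    # First-order condition: term_premium = 2 * risk_aversion * (1 - w) * vol**2
    w = 1.0 - term_premium / (2.0 * risk_aversion * vol ** 2)
    return min(1.0, max(0.0, w))

# Holding the term premium fixed, more rate uncertainty lengthens maturities:
print(optimal_long_share(0.01, 0.02, 20))  # lower volatility
print(optimal_long_share(0.01, 0.04, 20))  # higher volatility -> larger long share
```

The second effect discussed above, that greater uncertainty also steepens the yield curve, would show up here as a term premium that rises with vol, pulling the optimal long share back down; the net effect depends on which channel dominates.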
Much more difficult is the question of what those objectives ought to be. How much weight should be placed on risk as opposed to the expected costs of debt service? And should the desired maturity structure of government debt be sensitive to macroeconomic conditions?
I don't have answers to these questions but (as I said in a comment on Mark Thoma's page) it is important to resist the temptation to view the problem solely from a corporate finance perspective. There will be a significant shortfall in government revenues over outlays for many years to come, so short-term Treasury issues will have to be rolled over repeatedly for an indefinite period, or converted to long-term debt at some point. If this were a corporation, such financial practices would be madness (for the company as well as its creditors). The company would be engaged in massive maturity transformation, and be highly vulnerable to changes in short term interest rates and credit availability. It would be unable to meet even interest obligations without borrowing - which is Hyman Minsky's definition of Ponzi finance.  
But the Treasury is not a private corporation, and it faces very different constraints and objectives. Matching maturities to expected net revenues is simply not an option, nor should it necessarily be a goal. This much is straightforward. Much less clear is the set of principles that ought to guide the decisions that ultimately determine the maturity structure of government debt.

---

Update (12/28). In an email to me (posted with permission), Joe Gagnon adds:
Since the government includes the Treasury and the central bank, we could have one objective function and find the optimal policy from first principles. But I suspect the same result would obtain if the Treasury acted as first mover and followed a reasonable, modestly risk-averse, cost-minimizing strategy for maturity of issuance given the Congressionally set taxes and spending. Then the central bank would take Treasury behavior as given when it decides on the optimal maturity structure from the overall social welfare viewpoint, taking into consideration unemployment and inflation as well as risks to government finance and likely future behavior of Congress.

My own sense (buttressed by Table 6 in my paper) is that the risks to government finance from shifting into short maturities is very small compared to the benefits, and much less costly than outright fiscal-spending stimulus.
Also:
The point about the table is that even in the inflation scare scenario, when future short rates rise quite high for a while, the debt burden is lower with monetary stimulus (via maturity transformation) than without it. Of course, one can construct even more extreme scenarios in which that is not true (or in which the debt burden is reduced by high inflation) but I would argue those scenarios are highly unlikely given the state of the economy and the current membership of the Fed’s policy committee.

Tuesday, December 22, 2009

Some Further Comments on Maturity Choice

In an earlier post, I discussed the issue of maturity choice for new Treasury issues, arguing that it affects not only the cost of financing the debt but also the shape of the yield curve, the extent of private sector maturity transformation, and the value of the currency (for instance if foreign lenders have different preferences over maturities relative to domestic lenders.) In many respects, therefore, the Treasury performs actions that are normally considered to be within the purview of the Federal Reserve. But while Fed policy is subject to extensive debate, as is the size of the deficit, there seems to be very little discussion of the manner in which the debt is financed by the Treasury.
Andy Harless (via Mark Thoma) has recently written a long and thoughtful post that deals with related issues. The post is worth reading in full, but here's the gist of his argument:
The inflation rate is now lower than most economists prefer, and the economy remains extremely weak despite the recent upturn in the business cycle. The burning issue is how to find the most cost-effective and politically feasible way to stimulate the US economy, and conventional monetary policy is not an option.
And today Treasury bills are not just more like money than like other assets; from a portfolio point of view, on the margin of new issuance, Treasury bills are exactly like money. Holders of short-term Treasury bills are willing to hold them without receiving interest. Anyone who is willing to hold them is placing no value whatsoever on any liquidity or safety advantage that might be had from holding those assets in the form of money.

Issuing more short-term Treasury bills will have exactly the same effect as issuing more money, since people are indifferent between the two. For practical purposes, as long as their interest rate remains at zero, short-term Treasury bills are part of the money stock. A Treasury bill is a million-dollar bill in the same sense that a Federal Reserve note with Abraham Lincoln on it is a five-dollar bill. Conventional monetary policy, which exchanges money for Treasury bills, is ineffective because it is no policy at all: it simply exchanges one form of money for another.

To put it another way, since the Treasury can issue bills that are exactly like money, it is now the Treasury that is in charge of monetary policy. And whatever one may think of the policy it chooses to follow, we should be holding the Treasury responsible. If you’re worried about “exit strategy” and the possibility of inflation in the near future, then perhaps you should congratulate the Treasury for its policy of financing more of its debt long-term. If you’re worried (as I am) about the persistence of a weak and potentially deflationary economic environment, then you should be critical of the Treasury’s policy. By increasing its maturities the Treasury is essentially following a tight-money policy exactly when a loose-money policy is needed.

The Treasury, of course, has its reasons. Officials expect interest rates to rise over the next several years and would like to lock in today’s low rates, to limit how much it will cost to service the national debt over a longer horizon. I’m skeptical, however, of the assumptions underlying these reasons.

Are interest rates going to rise over the next several years? Perhaps, but if so, then why are people being foolish enough to hold longer-term Treasury securities when they could be holding bills and waiting for a better deal? If it’s just a matter of the future course of interest rates, then it’s a zero-sum game. If the Treasury wins, bondholders lose – and bondholders usually make a point of trying not to lose. Are Treasury officials so much smarter than bondholders?

You might argue that it’s a matter of risk. When the Treasury locks in today’s low interest rates, it may not end up paying less (since it gives up even lower short-term rates), but it makes the payments more predictable. Even if the Treasury is likely to end up paying more, the hedge is worth the price, because the Treasury receives some insurance for the worst case, where rates rise more than expected.

But are rising interest rates really the worst case? Interest rates will rise when and if the economic recovery gains enough speed and traction to give the Fed and bond markets reasonable confidence in its eventual convergence toward our potential growth path. As an ordinary citizen, that’s not an outcome against which I would feel a need to hedge. I don’t want to buy insurance against good news. I’d rather hedge against the opposite outcome, where the recovery peters out and interest rates fall.
I urge you to read the whole thing. Whether or not you accept his prescriptions, the question of maturity choice deserves an airing.
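The zero-sum logic in the passage above can be made concrete with a stylized calculation. Under the pure expectations hypothesis, a long-term yield is (roughly) the average of expected future short-term rates, so the expected cost of locking in a long rate equals the expected cost of rolling over bills; only a term premium tilts the comparison. The sketch below illustrates this with entirely hypothetical rates and a simple-interest approximation; none of the numbers come from Harless's post.

```python
# Stylized illustration of the zero-sum point about Treasury maturity choice.
# All rates are hypothetical; simple (non-compounded) interest for clarity.

expected_short_rates = [0.002, 0.01, 0.02, 0.03, 0.04]  # assumed path, years 1-5
horizon = len(expected_short_rates)
principal = 100.0

# Under the expectations hypothesis, the long yield is (approximately)
# the average of the expected short rates over the same horizon.
eh_long_yield = sum(expected_short_rates) / horizon

# Expected cumulative interest cost of each financing strategy
cost_rolling_bills = principal * sum(expected_short_rates)
cost_locked_in = principal * eh_long_yield * horizon

print(f"Implied 5-year yield: {eh_long_yield:.2%}")
print(f"Expected cost, rolling bills: {cost_rolling_bills:.2f}")
print(f"Expected cost, locked in:     {cost_locked_in:.2f}")
# The two expected costs coincide: if rates follow expectations, neither
# strategy wins, which is the zero-sum point.

# With a positive term premium, the long yield exceeds its expectations-
# hypothesis level, and locking in costs more on average; the premium is
# the price of insurance against rates rising faster than expected.
term_premium = 0.005  # hypothetical
cost_with_premium = principal * (eh_long_yield + term_premium) * horizon
print(f"Expected cost with term premium: {cost_with_premium:.2f}")
```

On these assumed numbers, both strategies have an expected cost of 10.20 per 100 of principal, while the term premium raises the locked-in cost to 12.70, which is one way of framing Harless's question: is that insurance worth buying when the insured-against event is good economic news?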

Harless attributes some of his ideas to Benjamin Friedman, with whom he once studied. This reminded me of a paper that Friedman wrote with (a very young) David Laibson, published in a 1989 issue of the Brookings Papers on Economic Activity. This issue also contains an extended commentary by Hyman Minsky, whose work I have discussed previously. Both the paper and Minsky's comments on it are well worth reading, and I hope to post my thoughts on them in due course.

On Mattresses, Ideologues, and Cheerleaders

I suppose one ought to be grateful for small mercies; Bryan Caplan has learned how to spell mattress. Here he is on December 16:
unless employers are unusually likely to put cash under their matresses...
And here again three days later:
the net effect on AD depends on the marginal propensity to stuff income under one's mattress.
Now it would be a major step forward if he were to discover the existence of bills, bonds, equities, mutual funds of various stripes, and rare stamps and coins, all of which can serve as channels for savings, and none of which automatically creates a demand for current production in equal measure to the cost of acquiring it. But, as Winterspeak notes, that would be too much to ask:
Glibertarian Bryan Caplan reveals why microeconomics is just useless at analyzing the economy at a macro scale. If you cannot understand that spending equals income at an economy wide level, you'll spout a lot of two sentence nonsense.
I don't think that microeconomics is at fault. The blame lies with a particularly crude and naive textbook version of microeconomics and its uncritical application by ideologues and their cheerleaders.