Wednesday, August 15, 2012

On Prices, Narratives, and Market Efficiency

The fourth anniversary of the Lehman bankruptcy has been selected as the release date for a collection of essays edited by Diane Coyle with the provocative title: What's the Use of Economics? The timing is impeccable and the question legitimate.

The book brings together some very thoughtful responses by Andrew Haldane, John Kay, Wendy Carlin, Alan Kirman, Andrew Lo, Roger Farmer, and a host of other luminaries (the publishers were kind enough to send me an advance copy). There's enough material here for several posts, but I'd like to start with the contribution by John Kay.

This one, as it happens, has been published before; I discussed Mike Woodford's reaction to it in a previous post. But reading it again I realized that it contains a perspective on market efficiency and price discovery that is concise, penetrating and worthy of some elaboration. Kay doesn't just provide a critique of the efficient markets hypothesis; he sketches out an alternative approach based on the idea of prices as the "product of a clash between competing narratives" that can form the basis of an entire research agenda.

He begins with a question famously posed by the Queen of England during a visit to the London School of Economics: Why had economists failed to predict the financial crisis? Robert Lucas pointed out in response that the inability to predict a financial crisis was in fact a prediction of economic theory. This is as pure a distillation of the efficient markets hypothesis as one is likely to find, and Kay uses it to evaluate the hypothesis itself:
Lucas’s assertion that ‘no one could have predicted it’ contains an important, though partial, insight. There can be no objective basis for a prediction of the kind ‘Lehman Bros will go into liquidation on September 15’, because if there were, people would act on that expectation and, most likely, Lehman would go into liquidation straight away. The economic world, far more than the physical world, is influenced by our beliefs about it. 
Such thinking leads, as Lucas explains, directly to the efficient market hypothesis – available knowledge is already incorporated in the price of securities. And there is a substantial amount of truth in this – the growth prospects of Apple and Google, the problems of Greece and the Eurozone, are all reflected in the prices of shares, bonds and currencies. The efficient market hypothesis is an illuminating idea, but it is not “Reality As It Is In Itself”. Information is reflected in prices, but not necessarily accurately, or completely. There are wide differences in understanding and belief, and different perceptions of a future that can be at best dimly perceived. 
In his Economist response, Lucas acknowledges that ‘exceptions and anomalies’ to the efficient market hypothesis have been discovered, ‘but for the purposes of macroeconomic analyses and forecasts they are too small to matter’. But how could anyone know, in advance not just of this crisis but also of any future crisis, that exceptions and anomalies to the efficient market hypothesis are ‘too small to matter’?
The literature on anomalies is not, in fact, concerned with macroeconomic analyses and forecasts. It is rather narrowly focused on predictability in asset prices and the possibility of constructing portfolios that can consistently beat the market on a risk-adjusted basis. And indeed, such anomalies are often found to be quite trivial, especially when one considers the costs of implementing the implied strategies. The inability of actively managed funds to beat the market on average, after accounting for costs and adjusting for risk, is often cited as providing empirical support for market efficiency. But Kay believes that these findings have not been properly interpreted:
What Lucas means when he asserts that deviations are ‘too small to matter’ is that attempts to construct general models of deviations from the efficient market hypothesis – by specifying mechanical trading rules or by writing equations to identify bubbles in asset prices – have not met with much success. But this is to miss the point: the expert billiard player plays a nearly perfect game, but it is the imperfections of play between experts that determine the result. There is a – trivial – sense in which the deviations from efficient markets are too small to matter – and a more important sense in which these deviations are the principal thing that matters. 
The claim that most profit opportunities in business or in securities markets have been taken is justified.  But it is the search for the profit opportunities that have not been taken that drives business forward, the belief that profit opportunities that have not been arbitraged away still exist that explains why there is so much trade in securities. Far from being ‘too small to matter’, these deviations from efficient market assumptions, not necessarily large, are the dynamic of the capitalist economy. 
Such anomalies are idiosyncratic and cannot, by their very nature, be derived as logical deductions from an axiomatic system. The distinguishing characteristic of Henry Ford or Steve Jobs, Warren Buffett or George Soros, is that their behaviour cannot be predicted from any prespecified model. If the behaviour of these individuals could be predicted in this way, they would not have been either innovative or rich. But the consequences are plainly not ‘too small to matter’. 
The preposterous claim that deviations from market efficiency were not only irrelevant to the recent crisis but could never be relevant is the product of an environment in which deduction has driven out induction and ideology has taken over from observation. The belief that models are not just useful tools but also are capable of yielding comprehensive and universal descriptions of the world has blinded its proponents to realities that have been staring them in the face. That blindness was an element in our present crisis, and conditions our still ineffectual responses. 
Fair enough, but how should one proceed? Kay suggests the adoption of more "eclectic analysis... not just deductive logic but also an understanding of processes of belief formation, anthropology, psychology and organisational behaviour, and meticulous observation of what people, businesses, and governments actually do."

I have no quarrel with this prescription, but I'd also like to make a case for more creative and versatile deductive logic. One of the key modeling hypotheses in the economics of information is the so-called Harsanyi doctrine (or common prior assumption), which stipulates that all differences in beliefs ought to be modeled as if they arise from differences in information. This hypothesis implies that individuals can only disagree if such disagreement is not itself common knowledge: they cannot agree to disagree. It is not hard to see that such a hypothesis could not possibly allow for pure speculation on asset price movements, and hence cannot account for the large volume of trade in financial markets. In fact, it implies that order books in many markets would be empty, since a posted price would only be met by someone with superior information.

The point is that over-reliance on deductive logic is not the only problem as far as financial modeling is concerned; the core assumptions to which deductive logic has been applied are themselves too restrictive. To my mind, the most interesting part of Kay's essay suggests how one might improve on this:
You can learn a great deal about deviations from the efficient market hypothesis, and the role they played in the recent financial crisis, from journalistic descriptions by people like Michael Lewis and Greg Zuckerman, who describe the activities of some individuals who did predict it. The large volume of such material that has appeared suggests many avenues of understanding that might be explored. You could develop models in which some trading agents have incentives aligned with those of the investors who finance them and others do not. You might describe how prices are the product of a clash between competing narratives about the world. You might appreciate the natural human reactions that made it difficult to hold short positions when they returned losses quarter after quarter.
There is definitely ongoing work in economics that explores many of these directions, some of which I have surveyed in previous posts. But the idea of prices as the product of a clash between competing narratives about the world reminded me of a paper by Harrison and Kreps, which was one of the earliest models in finance to shed the common prior assumption.

For anyone interested in developing models of heterogeneous beliefs in which trading occurs naturally over time, the Harrison-Kreps paper is the perfect place to start. They illustrate their model with an example that is easy to follow: a single asset provides title to a stream of dividend payments that may be either high or low, and investors disagree about the likelihood of transitions from high to low states and vice versa. This means that investors who value the asset most in one state differ from those who value it most in the other. Trading occurs as the asset is transferred across investors in the two different belief classes each time a transition to a different state occurs. The authors show that the price in both states is higher than it would be if investors were forced to hold the asset forever: there is a speculative premium that arises from the knowledge that someone else will, in due course and mistakenly in your opinion, value the asset more than you do. The contrast with the efficient markets hypothesis is striking and clear:
The basic tenet of fundamentalism, which goes back at least to J. B. Williams (1938), is that a stock has an intrinsic value related to the dividends it will pay, since a stock is a share in some enterprise and dividends represent the income that the enterprise gains for its owners. In one sense, we think that our analysis is consistent with the fundamentalist spirit, tempered by a subjectivist view of probability. Beginning with the view that stock prices are created by investors, and recognizing that investors may form different opinions even when they have the same substantive information, we contend that there can be no objective intrinsic value for the stock. Instead, we propose that the relevant notion of intrinsic value is obtained through market aggregation of diverse investor assessments. There are fundamentalist overtones in this position, since it is the market aggregation of investor attitudes and beliefs about future dividends with which we start. Under our assumptions, however, the aggregation process eventually yields prices with some curious characteristics. In particular, investors attach a higher value to ownership of the stock than they do to ownership of the dividend stream that it generates, which is not an immediately palatable conclusion from a fundamentalist point of view.
The idea that prices are "obtained through market aggregation of diverse investor assessments" is not too far from Kay's more rhetorically powerful claim that they are "the product of a clash between competing narratives". What Harrison and Kreps do not consider is how diverse investor assessments change over time, since beliefs about transition probabilities are exogenously given in their analysis. But Kay's formulation suggests how progress on this front might be made. Beliefs change as some narratives gather influence relative to others, either through active persuasion (talking one's book for instance) or through differentials in profits accruing to those with different worldviews. While Kay is surely correct that a rich understanding of this process requires more than deductive reasoning, it is also true that deductive reasoning has not yet been pushed to its limits in facilitating our understanding of market dynamics.
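The speculative premium in the Harrison-Kreps model is easy to verify numerically. The sketch below uses illustrative parameter values in the spirit of their example (two dividend states, two belief classes with different transition matrices); the specific numbers are my own choices, not necessarily those in the paper.

```python
import numpy as np

beta = 0.75                       # common discount factor
d = np.array([0.0, 1.0])          # dividends in the low and high states

# Each belief class has its own transition matrix over the two states
# (rows: current state). Illustrative numbers only.
P_a = np.array([[1/2, 1/2],
                [2/3, 1/3]])      # class a: thinks the high state is fragile
P_b = np.array([[2/3, 1/3],
                [1/4, 3/4]])      # class b: thinks the high state persists

def hold_forever(P):
    # Value of the dividend stream to an investor who can never resell:
    # v = beta * P @ (d + v)  =>  (I - beta*P) v = beta * P @ d
    return np.linalg.solve(np.eye(2) - beta * P, beta * P @ d)

v_a, v_b = hold_forever(P_a), hold_forever(P_b)

# With resale, the asset is held in each state by whichever class values it
# most, anticipating the option to sell to the other class after a transition:
p = np.zeros(2)
for _ in range(500):
    p = beta * np.maximum(P_a @ (d + p), P_b @ (d + p))

print(np.maximum(v_a, v_b))   # best buy-and-hold valuation, state by state
print(p)                      # resale price exceeds it in both states
```

With these parameters the resale price converges to (24/13, 27/13), strictly above every investor's valuation of the dividend stream itself in both states: the asset is worth more than anyone thinks its dividends are worth, which is precisely the speculative premium.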

Monday, August 13, 2012

Building a Better Dow

The following post, written jointly with Debraj Ray, is based on our recent note proposing a change in the method for computing the Dow.

---

With a market capitalization approaching $600 billion, Apple is currently the largest publicly traded company in the world. The previous title-holder, Exxon Mobil, now stands far behind at about $400 billion. But Apple is not a component of the Dow Jones Industrial Average. Nor is Google, with a higher valuation than all but a handful of firms in the index. Meanwhile, firms with less than a tenth of Apple's market capitalization, including Alcoa and Hewlett-Packard, continue to be included.

The exclusion of firms like Apple and Google would appear to undermine the stated purpose of the index, which is "to provide a clear, straightforward view of the stock market and, by extension, the U.S. economy." But there are good reasons for such seemingly arbitrary omissions. The Dow is a price-weighted index, and the average price of its thirty components is currently around $58. Both Apple and Google have share prices in excess of $600, and their inclusion would cause day-to-day changes in the index to be driven largely by the behavior of these two securities. For instance, their combined weight in the Dow would be about 43% if they were to replace Alcoa and Travelers, which are the two current components with the lowest valuations. Furthermore, the index would become considerably more volatile even if the included stocks were individually no more volatile than those they replace. As John Prestbo, chairman of the index oversight committee, has observed, such heavy dependence of the index on one or two stocks would "hamper its ability to accurately reflect the broader market."

Indeed, price-weighting is decidedly an odd methodology. IBM has a smaller market capitalization than Microsoft, but a substantially higher share price. Under current conditions, a 1% change in the price of IBM has an effect on the index that is almost seven times as great as a 1% change in the price of Microsoft. In fact, IBM's weight in the index is above 11%, although its valuation is less than 6% of the total among Dow components.
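The weight arithmetic is simple to reproduce. The prices below are rough mid-2012 assumptions for illustration, not quotes:

```python
# In a price-weighted index, a stock's weight is its share price divided by
# the sum of all component prices. All prices here are assumed round numbers.
dow_prices = {"IBM": 199.0, "MSFT": 30.0}    # two components, for illustration
total_price = 1740.0                          # assumed sum over all 30 components

def weight(ticker):
    return dow_prices[ticker] / total_price

print(f"IBM weight:  {weight('IBM'):.1%}")    # above 11%
print(f"ratio: {weight('IBM') / weight('MSFT'):.1f}x")  # almost 7x MSFT's pull

# What if AAPL and GOOG replaced the two lowest-valuation components?
# (Assumed prices: Alcoa ~$8.50, Travelers ~$63, AAPL ~$620, GOOG ~$640.)
new_total = total_price - 8.5 - 63.0 + 620.0 + 640.0
print(f"AAPL+GOOG combined weight: {(620.0 + 640.0) / new_total:.0%}")
```

Under these assumed prices, a 1% move in IBM pulls the index almost seven times as hard as a 1% move in Microsoft, and the two entrants together would carry roughly 43% of the index, matching the figures discussed above.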

This issue does not arise with value-weighted indexes such as the S&P 500. But as Prestbo and others have pointed out, the Dow provides an uninterrupted picture of stock market movements dating back to 1896. An abrupt switch to value weighting would introduce a methodological discontinuity that would "essentially obliterate this history." Attention has therefore been focused on the desirability of a stock split, which would reduce Apple's share price to a level that could be accommodated by the questionable methodology of the Dow.

But an abrupt switch to value weighting and the flawed artifice of a stock split are not the only available alternatives. In a recent paper we propose a modification that largely preserves the historical integrity of the Dow time series, while allowing for the inclusion of securities regardless of their market price. Our modified index also leads to a smooth and gradual transition, as incumbent stocks are replaced, to a fully value-weighted index in the long run.

The proposed index is composed of two subindices, one price-weighted to respect the internal structure of the Dow, and the other value-weighted to apply to new entrants. The index has two parameters, both of which are adjusted whenever a substitution is made. One of these maintains continuity in the value of the index, while the other ensures that the two subindices are weighted in proportion to their respective market capitalizations. Stock splits require a change in parameters (as in the case of the current Dow divisor) but only if the split occurs for a firm in the price-weighted subindex.

Once all incumbent firms are replaced, the result will be a fully value-weighted index. In practice this could take several decades, as some incumbent firms are likely to remain components far into the future. But firms in the price-weighted component of the index that happen to have weights roughly commensurate with their market capitalization can be transferred with no loss of continuity to the value-weighted component. This procedure, which we call bridging, can accelerate the transition to a value-weighted index with minimal short-term disruption. Currently Coca-Cola and Disney are prime candidates for bridging.
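One way to see how such a two-parameter construction could work is the following sketch: the index is a parameter times the price sum of the incumbent subindex plus a second parameter times the market capitalization of the entrant subindex, with both parameters reset at each substitution so that the index value does not jump and the two parts contribute in proportion to their market capitalizations. This is a plausible reading of the description above, not necessarily the exact formulas in the paper.

```python
# Sketch: index = lam * sum(incumbent prices) + mu * sum(entrant market caps).
# lam and mu are reset at each substitution so that (i) the index value is
# continuous and (ii) each subindex contributes in proportion to its total
# market capitalization. Illustrative reading only, not the paper's formulas.

def reset_parameters(index_before, incumbents, entrants):
    """Each component is a (price, shares_outstanding) pair."""
    price_sum = sum(p for p, _ in incumbents)
    cap_inc = sum(p * s for p, s in incumbents)
    cap_ent = sum(p * s for p, s in entrants)
    cap_total = cap_inc + cap_ent
    lam = index_before * (cap_inc / cap_total) / price_sum
    mu = (index_before * (cap_ent / cap_total) / cap_ent) if cap_ent else 0.0
    return lam, mu

def index_value(lam, mu, incumbents, entrants):
    return (lam * sum(p for p, _ in incumbents)
            + mu * sum(p * s for p, s in entrants))

# Example: a high-priced entrant joins; the index value is unchanged at the
# moment of substitution, after which its moves are cap-weighted.
incumbents = [(58.0, 1.0e9)] * 29        # stylized incumbent components
entrants = [(620.0, 0.94e9)]             # stylized entrant
lam, mu = reset_parameters(13000.0, incumbents, entrants)
print(index_value(lam, mu, incumbents, entrants))  # equals 13000 up to rounding
```

After the reset, incumbent stocks move the index through their prices (as in the current Dow) while entrants move it through their market values, and the entrant subindex grows in importance with each substitution.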

Under our proposed index, Apple would enter with a weight of less than 13% if it were to replace Alcoa. This is scarcely more than the weight currently associated with IBM, a substantially smaller company. Adding Google (in place of HP or Travelers) would further lower the weight of Apple since the total market capitalization of Dow components would rise. This is a relatively modest change that, we believe, would simultaneously serve the desirable goals of methodological continuity and market representativeness.

Friday, July 13, 2012

Market Overreaction: A Case Study

At 7:30pm yesterday the Drudge Report breathlessly broadcast the following:
ROMNEY NARROWS VP CHOICES; CONDI EMERGES AS FRONTRUNNER
Thu Jul 12 2012 19:30:01 ET 
**Exclusive** 
Late Thursday evening, Mitt Romney's presidential campaign launched a new fundraising drive, 'Meet The VP' -- just as Romney himself has narrowed the field of candidates to a handful, sources reveal. 
And a surprise name is now near the top of the list: Former Secretary of State Condoleezza Rice! 
The timing of the announcement is now set for 'coming weeks'.
The reaction on Intrade was immediate. The price of a contract that pays $10 if Rice is selected as Romney's running mate (and nothing otherwise) shot up from about 35 cents to $2, with about 2500 contracts changing hands within twenty minutes of the Drudge announcement. By the sleepy standards of the prediction market this constitutes very heavy volume. Nate Silver responded at 7:49 as follows:
The Condi Rice for VP contract at Intrade possibly the most obvious short since Pets.com
Good advice, as it turned out. By 9:45 pm the price had dropped to 90 cents a contract with about 5000 contracts traded in total since the initial announcement. Here's the price and volume chart:


One of the most interesting aspects of markets such as Intrade is that they offer sets of contracts on a list of exhaustive and mutually exclusive events. For instance, the Republican VP Nominee market contains not just the contract for Rice, but also for 56 other potential candidates, as well as a residual contract that pays off if none of the named contracts do. The sum of the bids for all these contracts cannot exceed $10, otherwise someone could sell the entire set of contracts and make an arbitrage profit. In practice, no individual is going to take the trouble to spot and exploit such opportunities, but it's a trivial matter to write a computer program that can do so as soon as they arise.
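The check such a program performs is indeed trivial. A minimal sketch, with invented bid prices:

```python
PAYOUT = 10.0   # each contract pays $10 if its event occurs, nothing otherwise

def arbitrage_profit(bids):
    """Riskless profit from selling one of each contract at its current bid.

    The events are exhaustive and mutually exclusive, so exactly one contract
    pays out: the seller owes $10 no matter which event occurs. If the bids
    sum to more than $10, the difference is pure profit.
    """
    return max(0.0, sum(bids.values()) - PAYOUT)

# Invented bid prices for an exhaustive set (the many named candidates are
# collapsed into a single "field" contract here for brevity):
bids = {"Rice": 2.00, "Portman": 2.90, "Pawlenty": 2.50,
        "Ryan": 1.40, "field": 1.60}
print(arbitrage_profit(bids))  # bids sum to $10.40: $0.40 riskless per set
```

When the bids sum to $10 or less, the function returns zero and the algorithm simply waits; the moment a jump in one contract pushes the sum above $10, selling the whole set locks in the difference.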

In fact, such algorithms are in widespread use on Intrade, and easy to spot. The sharp rise in the Rice contract caused the arbitrage condition to be momentarily violated and simultaneous sales of the entire set of contracts began to occur. While the price of one contract rose, the prices of the others (Portman, Pawlenty, and Ryan especially) were knocked back as existing bids started to be filled by algorithmic instruction. But as new bidders appeared for these other contracts the Rice contract itself was pushed back in price, resulting in the reversal seen in the above chart. All this in a matter of two or three hours.

Does any of this have relevance for the far more economically significant markets for equity and debt? There's a fair amount of direct evidence that these markets are also characterized by overreaction to news, and such overreaction is consistent with the excess volatility of stock prices relative to dividend flows. But overreactions in stock and bond markets can take months or years to reverse.  Benjamin Graham famously claimed that "the interval required for a substantial undervaluation to correct itself averages approximately 1½ to 2½ years," and DeBondt and Thaler found that "loser" portfolios (composed of stocks that had previously experienced sharp capital losses) continued to outperform "winner" portfolios (composed of those with significant prior capital gains) for up to five years after construction.

One reason why overreaction to news in stock markets takes so long to correct is that there is no arbitrage constraint that forces a decline in other assets when one asset rises sharply in price. In prediction markets, such constraints cause immediate reactions in related contracts as soon as one contract makes a major move. Similar effects arise in derivatives markets more generally: options prices respond instantly to changes in the price of the underlying, futures prices move in lock step with spot prices, and exchange-traded funds trade at prices that closely track those of their component securities. Most of this activity is generated by algorithms designed to sniff out and snap up opportunities for riskless profit. But the primitive assets in our economy, stocks and bonds, are constrained only by beliefs about their future values, and can therefore wander far and wide for long periods before being dragged back by their cash flow anchors.

---

Update (7/13). Mark Thoma and Yves Smith have both reposted this, with interesting preludes. Here's Yves:
I’d like to quibble with the notion that there is such a thing as a correct price for as vague a promise as a stock (by contrast, for derivatives, it is possible to determine a theoretical price in relationship to an actively traded underlying instrument, so even though the underlying may be misvalued, the derivative’s proper value given the current price and other parameters can be ascertained).  
Sethi suggests that stocks have “cash flow anchors”. I have trouble with that notion. A bond is a very specific obligation: to pay interest in specified amounts on specified dates, and to repay principal as of a date certain... By contrast, a stock is a very unsuitable instrument to be traded on an arm’s length, anonymous basis. A stock is a promise to pay dividends if the company makes enough money and the board is in the mood to do so. Yes, you have a vote, but your vote can be diluted at any time. There aren’t firm expectations of future cash flows; it’s all guess work and heuristics.
I chose the term "anchor" with some care, because the rode of an anchor is not always taut. I didn't mean to suggest that there is a single proper value for a stock that can be unambiguously deduced from the available information; heterogeneity in the interpretation of information alone is enough to generate a broad range of valuations. This can allow for drift in various directions as long as the price doesn't become too far detached from earnings projections.

Mark argues that the leak to Drudge was an attempt at distraction:
Rajiv Sethi looks at the reaction to the Romney campaign's attempt to change the subject from Romney's role at Bain to potential picks for vice-president (as far as I can tell, Rice has no chance -- she's "mildly pro-choice" for one -- so this was nothing more than an attempt to divert attention from Bain, an attempt that seems to have worked, at least to some extent).
This view, which seems to be held left and right, was brilliantly summed up by Nate Silver as follows:
drudge (v.): To leak news to displace an unfavorable headline; to muddy up the news cycle.
I was tempted to reply to Nate's tweet with:
twartist (n.): One who is able by virtue of imagination and skill to create written works of aesthetic value in 140 characters or less.
But it seems that the term is already in use.

Saturday, June 30, 2012

Fighting over Claims

This brief segment from a recent speech by Joe Stiglitz sums up very neatly the nature of our current economic predicament (emphasis added):
We should realize that the resources in our economy... today are the same as they were five years ago. We have the same human capital, the same physical capital, the same natural capital, the same knowledge... the same creativity... we have all these strengths, they haven't disappeared. What has happened is, we're having a fight over claims, claims to resources. We've created more liabilities... but these are just paper. Liabilities are claims on these resources. But the resources are there. And the fight over the claims is interfering with our use of the resources.
I think this is a very useful way to think about the potential effectiveness under current conditions of various policy proposals, including conventional fiscal and monetary stabilization policies.

Part of the reason for our anemic and fitful recovery is that contested claims, especially in the housing market, continue to be settled in a chaotic and extremely wasteful manner. Recovery from subprime foreclosures is typically a small fraction of outstanding principal, and properly calibrated principal write-downs can often benefit both borrowers and lenders. Modifications that would occur routinely under the traditional bilateral model of lending are much harder to implement when lenders are holders of complex structured claims on the revenues generated by mortgage payments. Direct contact between lenders and borrowers is neither legal nor practicable in this case, and the power to make modifications lies instead with servicers. But servicer incentives are not properly aligned with those of the lenders on whose behalf they collect and process payments. The result is foreclosure even when modification would be much less destructive of resources.
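A stylized calculation shows why both sides can gain. All numbers below are hypothetical, except that the foreclosure recovery rate is set to the 25 percent that the subprime bond market was then pricing in:

```python
# Stylized comparison of lender recovery under foreclosure versus a principal
# write-down. All parameter values are hypothetical and for illustration only.

def lender_value(principal, recovery_rate, writedown_rate, redefault_prob):
    """Return (foreclosure recovery, expected recovery after a write-down)."""
    foreclose = principal * recovery_rate
    written_down = principal * (1 - writedown_rate)
    # After a write-down the lender collects the reduced principal if the
    # borrower performs, and the foreclosure recovery on the reduced balance
    # if the borrower defaults again anyway.
    modify = ((1 - redefault_prob) * written_down
              + redefault_prob * written_down * recovery_rate)
    return foreclose, modify

# $200,000 loan, 25% recovery in foreclosure, 30% principal write-down,
# 15% chance of re-default even after modification:
foreclose, modify = lender_value(200_000, 0.25, 0.30, 0.15)
print(foreclose, modify)  # roughly 50,000 vs 124,250: the write-down dominates
```

Even a deep write-down leaves the lender far better off than a 25-cents-on-the-dollar foreclosure, while the borrower keeps the house with positive equity. The puzzle, as discussed below, is why such modifications nonetheless fail to happen.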

Despite some indications that home values are starting to rise again, the steady flow of defaults and foreclosures shows no sign of abating. Any policy that stands a chance of getting us back to pre-recession levels of resource utilization has to result in the quick and orderly settlement of these claims, with or without modification of the original contractual terms. And it's not clear to me that the blunt instruments of conventional stabilization policy can accomplish this.

Consider monetary policy for instance. The clamor for more aggressive action by the Fed has recently become deafening, with a long and distinguished line of advocates (see, for instance, recent posts by Miles Kimball, Joseph Gagnon, Ryan Avent, Scott Sumner, Paul Krugman, and Tim Duy). While the various proposals differ with respect to details, the idea seems to be the following: (i) the Fed has the capacity to increase inflation and nominal GDP should it choose to do so, (ii) this can be accomplished by asset purchases on a large enough scale, and (iii) doing this would increase not only inflation and nominal GDP but also output and employment.

It's the third part of this argument with which I have some difficulty, because I don't see how it would help resolve the fight over claims that is crippling our recovery. Higher inflation can certainly reduce the real value of outstanding debt in an accounting sense, but this doesn't mean that distressed borrowers will be able to meet their obligations at the originally contracted terms. In order for them to do so, it is necessary that their nominal income rises, not just nominal income in the aggregate. And monetary policy via asset purchases would seem to put money disproportionately in the pockets of existing asset holders, who are more likely to be creditors than debtors. Put differently, while the Fed has the capacity to raise nominal income, it does not have much control over the manner in which this increment is distributed across the population. And the distribution matters.

Similar issues arise with inflation. Inflation is just the growth rate of an index number, a weighted average of prices for a broad range of goods and services. The Fed can certainly raise the growth rate of this average, but has virtually no control over its individual components. That is, it cannot increase the inflation rate without simultaneously affecting relative prices. For instance, purchases of assets that drive down long term interest rates will lead to portfolio shifts and an increase in the price of commodities, which are now an actively traded asset class. This in turn will raise input costs for some firms more than others, and these cost increases will affect wages and prices to varying degrees depending on competitive conditions. As Dan Alpert has argued, expansionary monetary policy under these conditions could even "collapse economic activity, as limited per capita wages are shunted to oil and food, rather than to more expansionary forms of consumption."

I don't mean to suggest that more aggressive action by the Fed is unwarranted or would necessarily be counterproductive, just that it needs to be supplemented by policies designed to secure the rapid and efficient settlement of conflicting claims.

One of the most interesting proposals of this kind was floated back in October 2008 by John Geanakoplos and Susan Koniak, and a second article a few months later expanded on the original. It's worth examining the idea in detail. First, deadweight losses arising from foreclosure are substantial:
For subprime and other non-prime loans, which account for more than half of all foreclosures, the best thing to do for the homeowners and for the bondholders is to write down principal far enough so that each homeowner will have equity in his house and thus an incentive to pay and not default again down the line... there is room to make generous principal reductions, without hurting bondholders and without spending a dime of taxpayer money, because the bond markets expect so little out of foreclosures. Typically, a homeowner fights off eviction for 18 months, making no mortgage or tax payments and no repairs. Abandoned homes are often stripped and vandalized. Foreclosure and reselling expenses are so high the subprime bond market trades now as if it expects only 25 percent back on a loan when there is a foreclosure.
Second, securitization precludes direct contact between borrowers and lenders:
In the old days, a mortgage loan involved only two parties, a borrower and a bank. If the borrower ran into difficulty, it was in the bank’s interest to ease the homeowner’s burden and adjust the terms of the loan. When housing prices fell drastically, bankers renegotiated, helping to stabilize the market. 
The world of securitization changed that, especially for subprime mortgages. There is no longer any equivalent of “the bank” that has an incentive to rework failing loans. The loans are pooled together, and the pooled mortgage payments are divided up among many securities according to complicated rules. A party called a “master servicer” manages the pools of loans. The security holders are effectively the lenders, but legally they are prohibited from contacting the homeowners.
Third, the incentives of servicers are not aligned with those of lenders:
Why are the master servicers not doing what an old-fashioned banker would do? Because a servicer has very different incentives. Most anything a master servicer does to rework a loan will create big winners but also some big losers among the security holders to whom the servicer holds equal duties... By allowing foreclosures to proceed without much intervention, they avoid potentially huge lawsuits by injured security holders. 
On top of the legal risks, reworking loans can be costly for master servicers. They need to document what new monthly payment a homeowner can afford and assess fluctuating property values to determine whether foreclosing would yield more or less than reworking. It’s costly just to track down the distressed homeowners, who are understandably inclined to ignore calls from master servicers that they sense may be all too eager to foreclose.
And finally, the proposed solution:
To solve this problem, we propose legislation that moves the reworking function from the paralyzed master servicers and transfers it to community-based, government-appointed trustees. These trustees would be given no information about which securities are derived from which mortgages, or how those securities would be affected by the reworking and foreclosure decisions they make. 
Instead of worrying about which securities might be harmed, the blind trustees would consider, loan by loan, whether a reworking would bring in more money than a foreclosure... The trustees would be hired from the ranks of community bankers, and thus have the expertise the judiciary lacks...  
Our plan does not require that the loans be reassembled from the securities in which they are now divided, nor does it require the buying up of any loans or securities. It does require the transfer of the servicers’ duty to rework loans to government trustees. It requires that restrictions in some servicing contracts, like those on how many loans can be reworked in each pool, be eliminated when the duty to rework is transferred to the trustees... Once the trustees have examined the loans — leaving some unchanged, reworking others and recommending foreclosure on the rest — they would pass those decisions to the government clearing house for transmittal back to the appropriate servicers... 
Our plan would keep many more Americans in their homes, and put government money into local communities where it would make a difference. By clarifying the true value of each loan, it would also help clarify the value of securities associated with those mortgages, enabling investors to trade them again. Most important, our plan would help stabilize housing prices.
As with any proposal dealing with a problem of such magnitude and complexity, there are downsides to this. Anticipation of modification could induce borrowers who are underwater but current with their payments to default strategically in order to secure reductions in principal. Such policy-induced default could be mitigated by ensuring that only truly distressed households qualify. But since current financial distress is in part a reflection of past decisions regarding consumption and saving, some are sure to find the distributional effects of the policy galling. Nevertheless, it seems that something along these lines needs to be attempted if we are to get back to pre-recession levels of resource utilization anytime soon. And the urgency of action does seem to be getting renewed attention.

The bottom line, I think, is this: too much faith in the traditional tools of macroeconomic stabilization under current conditions is misplaced. One can conceive of dramatically different approaches to monetary policy, such as direct transfers to households, but these would surely face insurmountable legal and political obstacles. It is essential, therefore, that macroeconomic stabilization be supplemented by policies that are microeconomically detailed and fine-grained, and that directly confront the problem of balance sheet repair. Otherwise this enormously costly fight over claims will continue to impede the use of our resources for many more years to come.

Sunday, June 24, 2012

Reciprocal Fear and the Castle Doctrine Laws

In his timeless classic The Strategy of Conflict, Thomas Schelling began a chapter on the "reciprocal fear of surprise attack" as follows:
If I go downstairs to investigate a noise at night, with a gun in my hand, and find myself face to face with a burglar who has a gun in his hand, there is a danger of an outcome that neither of us desires. Even if he prefers to just leave quietly, and I wish him to, there is danger that he may think I want to shoot, and shoot first. Worse, there is danger that he may think that I think he wants to shoot. Or he may think that I think he thinks I want to shoot. And so on. "Self-Defense" is ambiguous, when one is only trying to preclude being shot in self-defense.
This effect is empirically important, and is part of the reason why homicide rates vary so greatly across otherwise similar locations, and can change so sharply over time at a given location. In our attempt to understand why the Newark homicide rate doubled in just six years from 2000-2006 while the national rate remained essentially constant, Dan O'Flaherty and I found a substantial number of homicides to be the outcome of escalating disputes between strangers or acquaintances often over seemingly trivial matters. High rates of homicide make for a tense and fearful environment within which the preemptive motive for killing starts to loom large, and this itself reinforces the cycle of tension, fear, and continued killing. Incremental reductions in homicide under such circumstances are unlikely to be feasible, but sudden large scale reductions that transform the environment and break the cycle can sometimes be attained. Similar effects arise with international arms races.

In the jargon of economics, homicide is characterized by strategic complementarity: any increase in the willingness of one set of individuals to kill will be amplified by increases in the willingness of others to kill preemptively, and so on, in an expectations driven cascade. Any change in fundamentals can set this process off, such as a breakdown in law enforcement, easier availability of firearms, or increases in the value of a contested resource.
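To make the mechanics concrete, here is a minimal simulation of strategic complementarity (my own illustration, not drawn from our paper): each individual's propensity to act preemptively is an increasing function of everyone else's, and iterating best responses shows how a modest shift in fundamentals can tip the population from a low-violence to a high-violence equilibrium. The logistic functional form and all parameter values are arbitrary choices made purely for illustration.

```python
# Illustrative sketch of best-response dynamics under strategic
# complementarity. Each agent's propensity to act preemptively, x in [0, 1],
# is an increasing function of the average propensity of others.
# All functional forms and parameters here are hypothetical.

import math

def best_response(x, fundamental, complementarity=8.0):
    """Logistic response: own propensity rises with others' propensity."""
    return 1.0 / (1.0 + math.exp(-(fundamental + complementarity * (x - 0.5))))

def equilibrium(fundamental, x0, iterations=200):
    """Iterate best responses from an initial belief until they settle."""
    x = x0
    for _ in range(iterations):
        x = best_response(x, fundamental)
    return x

# Starting from the same initial beliefs, a worsening of fundamentals
# (weaker enforcement, easier access to firearms) tips the population
# from a low-violence to a high-violence equilibrium.
low = equilibrium(fundamental=-2.0, x0=0.1)
high = equilibrium(fundamental=2.0, x0=0.1)
print(round(low, 3), round(high, 3))
```

The cascade shows up in the iteration itself: each agent's increased willingness to act raises everyone else's in the next round, so the final equilibrium is far from the initial shock.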

The logic of strategic complementarity implies that a broadening of the notion of justifiable homicide, in an attempt to benefit potential victims of crime, can have tragic and entirely counterproductive effects. Florida's 2005 stand-your-ground law is an example of this, and more than twenty other states have adopted similar legislation in its wake. These are sometimes called castle doctrine laws, since they extend to other locations the principle that one does not have a duty to retreat in one's own home (or "castle").

Enough time has elapsed since the passage of these laws for an empirical analysis of their effects to be conducted, and a recent paper by Cheng and Hoekstra does exactly this. Determining the causal effects of any change in the legal environment is always a tricky business. The authors tackle the problem by grouping states into those that adopted such laws and those that did not, and comparing within-state changes in outcomes across the two groups of states (the so-called difference-in-differences identification strategy). Their findings are striking:
Results indicate that the prospect of facing additional self-defense does not deter crime. Specifically, we find no evidence of deterrence effects on burglary, robbery, or aggravated assault. Moreover, our estimates are sufficiently precise as to rule out meaningful deterrence effects. 
In contrast, we find significant evidence that the laws increase homicides... the laws increase murder and manslaughter by a statistically significant 7 to 9 percent, which translates into an additional 500 to 700 homicides per year nationally across the states that adopted castle doctrine. Thus, by lowering the expected costs associated with using lethal force, castle doctrine laws induce more of it... murder alone is increased by a statistically significant 6 to 11 percent. This is important because murder excludes non-negligent manslaughter classifications that one might think are used more frequently in self-defense cases. But regardless of how one interprets increases from various classifications, it is clear that the primary effect of strengthening self-defense law is to increase homicide.
These are statistical findings and refer to aggregate effects; no individual homicide can be attributed with certainty to a change in the legal environment, not even the one killing that has brought castle doctrine laws into national focus. Nevertheless, we now have compelling evidence that the adoption of such laws has led directly to several hundred deaths annually nationwide, with negligible deterrence effects on other crimes. While the latter finding may be surprising, the former should have been entirely predictable.
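For readers unfamiliar with the method, the difference-in-differences logic is simple enough to sketch in a few lines (all numbers below are invented, not taken from the paper): the before-after change in non-adopting states serves as an estimate of the common time trend, which is then netted out of the change in adopting states.

```python
# Hypothetical illustration of the difference-in-differences estimator:
# compare the before-after change in adopting states with the before-after
# change in non-adopting states. All numbers below are invented.

# Homicide rates per 100,000, before and after the law change.
adopters     = {"before": 6.0, "after": 6.5}   # states that passed the law
non_adopters = {"before": 5.0, "after": 5.1}   # comparison states

change_adopters     = adopters["after"] - adopters["before"]          # 0.5
change_non_adopters = non_adopters["after"] - non_adopters["before"]  # 0.1

# The estimate nets out the common time trend (0.1) and attributes the
# remainder (0.4) to the law -- valid only under the parallel-trends
# assumption, namely that absent the law, adopters would have followed
# the same trend as non-adopters.
did_estimate = round(change_adopters - change_non_adopters, 2)
print(did_estimate)  # 0.4
```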

Tuesday, June 12, 2012

Elinor Ostrom, 1933-2012

The political scientist Elinor Ostrom, co-recipient of the 2009 Nobel Prize in Economics, died this morning at the age of 78. I met her just once, after a talk she gave at Columbia sometime in the 1990s. It was in a very interesting seminar series organized by Dick Nelson if I recall correctly.

I was a great admirer of Ostrom's research on common pool resources, and tried to interpret some of her insights from an evolutionary perspective in some joint work with E. Somanathan a while ago. I've written about her here on a couple of occasions, and once reviewed a book that was largely a celebration of her vision (she had a hand in no less than six chapters).

Here are some extracts from a post written soon after the Nobel announcement:
Ostrom’s extensive research on local governance has shattered the myth of inevitability surrounding the “tragedy of the commons” and curtailed the uncritical application of the free-rider hypothesis to collective action problems. Prior to her work it was widely believed that scarce natural resources such as forests and fisheries would be wastefully used and degraded or exhausted under common ownership, and therefore had to be either state owned or held as private property in order to be efficiently managed. Ostrom demonstrated that self-governance was possible when a group of users had collective rights to the resource, including the right to exclude outsiders, and the capacity to enforce rules and norms through a system of decentralized monitoring and sanctions. This is clearly a finding of considerable practical significance. 
As importantly, the award recognized an approach to research that is practically extinct in contemporary economics. Ostrom developed her ideas by reading and generalizing from a vast number of case studies of forests, fisheries, groundwater basins, irrigation systems, and pastures. Her work is rich in institutional detail and interdisciplinary to the core. She used game theoretic models and laboratory experiments to refine her ideas, but historical and institutional analysis was central to this effort. She deviated from standard economic assumptions about rationality and self-interest when she felt that such assumptions were at variance with observed behavior, and did so long before behavioral economics was in fashion... 
There is no doubt that her research has dramatically transformed our thinking about the feasibility and efficiency of common property regimes. In addition, it serves as a reminder that her eclectic and interdisciplinary approach to social science can be enormously fruitful. In making this selection at this time, it is conceivable that the Nobel Committee is sending a message that methodological pluralism is something our discipline would do well to restore, preserve and foster.
And from the book review:
Although several distinguished scholars have been affiliated with the workshop over the years, Ostrom remains its leading light and creative force. It is fitting, therefore, that the book concludes with her 1988 Presidential Address to the American Political Science Association. In this chapter, she identifies serious shortcomings in prevailing theories of collective action. Approaches based on the hypothesis of unbounded rationality and material self-interest often predict a “tragedy of the commons” and prescribe either privatization of common property or its appropriation by the state. Policies based on such theories, in her view, “have been subject to major failure and have exacerbated the very problems they were intended to ameliorate”. What is required, instead, is an approach to collective action that places reciprocity, reputation and trust at its core. Any such theory must take into account our evolved capacity to learn norms of reciprocity, and must incorporate a theory of boundedly rational and moral behavior. It is only in such terms that the effects of communication on behavior can be understood. Communication is effective in fostering cooperation, in Ostrom’s view, because it allows subjects to build trust, form group identities, reinforce reciprocity norms, and establish mutual commitment. The daunting task of building rigorous models of economic and political choice in which reciprocity and trust play a meaningful role is only just beginning... 
The key conclusions drawn by the contributors are nuanced and carefully qualified, but certain policy implications do emerge from the analysis. The most important of these is that local communities can often find autonomous and effective solutions to collective-action problems when markets and states fail to do so. Such institutions of self-governance are fragile: large-scale interventions, even when well-intentioned, can disrupt and damage local governance structures, often resulting in unanticipated welfare losses. When a history of successful community resource management is in evidence, significant interventions should be made with caution. Once destroyed, evolved institutions are every bit as difficult to reconstruct as natural ecosystems, and a strong case can be made for conserving those that achieve acceptable levels of efficiency and equity. By ignoring the possibility of self-governance, one puts too much faith in the benevolence of a national government that is too large for local problems and too small for global ones. Moreover, as Ostrom points out in the concluding chapter, by teaching successive generations that the solution to collective-action problems lies either in the market or in the state, “we may be creating the very conditions that undermine our democratic way of life”. The stakes could not be higher.
Earlier tributes to Ostrom from Vernon Smith and Paul Romer are well worth revisiting.

Friday, April 27, 2012

On Equilibrium, Disequilibrium, and Rational Expectations

There's been some animated discussion recently on equilibrium analysis in economics, starting with a provocative post by Noah Smith, vigorous responses by Roger Farmer and JW Mason, and some very lively comment threads (see especially the smart and accurate points made by Keshav on the latter posts). This is a topic of particular interest to me, and the debate gives me a welcome opportunity to resume blogging after an unusually lengthy pause.

As Farmer's post makes clear, equilibrium in an intertemporal model requires not only that individuals make plans that are optimal conditional on their beliefs about the future, but also that these plans are mutually consistent. The subjective probability distributions on the basis of which individuals make decisions are presumed to coincide with the objective distribution to which these decisions collectively give rise. This assumption is somewhat obscured by the representative agent construct, which gives macroeconomics the appearance of a decision-theoretic exercise. But the assumption is there nonetheless, hidden in plain sight as it were. Large scale asset revaluations and financial crises, from this perspective, arise only in response to exogenous shocks and not because many individuals come to realize that they have made plans that cannot possibly all be implemented.

Farmer points out, quite correctly, that rational expectations models with multiple equilibrium paths are capable of explaining a much broader range of phenomena than those possessed of a unique equilibrium. His own work demonstrates the truth of this claim: he has managed to develop models of crisis and depression without deviating from the methodology of rational expectations. The equilibrium approach, used flexibly with allowances for indeterminacy of equilibrium paths, is more versatile than many critics imagine.

Nevertheless, there are many routine economic transactions that cannot be reconciled with the hypothesis that individual plans are mutually consistent. For instance, it is commonly argued that hedging by one party usually requires speculation by another, since mutually offsetting exposures are rare. But speculation by one party does not require hedging by another, and an enormous amount of trading activity in markets for currencies, commodities, stock options and credit derivatives involves speculation by both parties to each contract. The same applies on a smaller scale to positions taken in prediction markets such as Intrade. In such transactions, both parties are trading based on a price view, and these views are inconsistent by definition. If one party is buying low planning to sell high, their counterparty is doing just the opposite. At most one of the parties can have subjective beliefs that are consistent with the objective probability distribution to which their actions (combined with the actions of others) give rise.

If it were not for fundamental belief heterogeneity of this kind, there could be no speculation. This is a consequence of Aumann's agreement theorem, which states that while individuals with different information can disagree, they cannot agree to disagree as long as their beliefs are derived from a common prior. That is, they cannot persist in disagreeing if their posterior beliefs are themselves common knowledge. The intuition for this is quite straightforward: your willingness to trade with me at current prices reveals that you have different information, which should cause me to revise my beliefs and alter my price view, and should cause you to do the same. Our willingness to transact with each other causes us both to shrink from the transaction if our beliefs are derived from a common prior.

Hence accounting for speculation requires that one depart, at a minimum, from the common prior assumption. But allowing for heterogeneous priors immediately implies mutual inconsistency of individual plans, and there can be no identification of subjective with objective probability distributions.
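A toy example may help fix ideas (this is my own construction, not from the literature cited here): two traders value an asset that pays 1 or 0, each observing an independent binary signal of known accuracy. If they share a common prior, then once all signals are communicated their posteriors coincide and no basis for trade remains; with heterogeneous priors, disagreement survives even full communication.

```python
# A toy illustration of why heterogeneous priors are needed for persistent
# disagreement. An asset is worth 1 or 0; each trader sees an independent
# binary signal with known accuracy. Once all signals are shared, traders
# with a COMMON prior reach identical posteriors, so no price view survives;
# traders with different priors still disagree. All numbers are hypothetical.

def posterior(prior, signals, accuracy=0.75):
    """Bayesian posterior probability that the asset is worth 1."""
    like_high, like_low = 1.0, 1.0
    for s in signals:
        like_high *= accuracy if s == 1 else 1 - accuracy
        like_low  *= 1 - accuracy if s == 1 else accuracy
    num = prior * like_high
    return num / (num + (1 - prior) * like_low)

signals = [1, 0]  # trader A saw 1, trader B saw 0; both now know both signals

# Common prior: full communication eliminates disagreement.
a_common = posterior(0.5, signals)
b_common = posterior(0.5, signals)

# Heterogeneous priors: disagreement survives full communication.
a_hetero = posterior(0.7, signals)
b_hetero = posterior(0.3, signals)
print(a_common == b_common, a_hetero != b_hetero)  # True True
```

The full agreement theorem is stronger than this, of course: it rules out disagreement even when posteriors (rather than raw signals) are common knowledge. But the example captures the essential point that differing information alone cannot sustain a trade once each side updates on the other's willingness to transact.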

The development of models that allow for departures from equilibrium expectations is now an active area of research. A conference at Columbia last year (with Farmer in attendance) was devoted entirely to this issue, and Mike Woodford's reply to John Kay on the INET blog is quite explicit about the need for movement in this direction:
The macroeconomics of the future... will have to go beyond conventional late-twentieth-century methodology... by making the formation and revision of expectations an object of analysis in its own right, rather than treating this as something that should already be uniquely determined once the other elements of an economic model (specifications of preferences, technology, market structure, and government policies) have been settled.
There is a growing literature on heterogeneous priors that I think could serve as a starting point for the development of such an alternative. However, it is not enough to simply allow for belief heterogeneity; one must also confront the question of how the distribution of (mutually inconsistent) beliefs changes over time. To a first approximation, I would argue that the belief distribution evolves based on differential profitability: successful beliefs proliferate, regardless of whether those holding them were broadly correct or just extremely fortunate. This has to be combined with the possibility that some individuals will invest considerable time and effort and bear significant risk to profit from large mismatches between the existing belief distribution and the objective distribution to which it gives rise. Such contrarian actions may be spectacular successes or miserable failures, but must be accounted for in any theory of expectations that is rich enough to be worthy of the name.
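The kind of profitability-driven evolution of beliefs I have in mind is similar in spirit to replicator dynamics. Here is a bare-bones sketch (the payoff numbers are arbitrary, and this is my own illustration rather than a model from the literature discussed above):

```python
# A sketch of a belief distribution evolving by differential profitability,
# replicator-style: belief types that earn more grow as a share of the
# population. The payoff figures below are arbitrary.

def replicate(shares, payoffs):
    """One step of discrete replicator dynamics over belief types."""
    avg = sum(s * p for s, p in zip(shares, payoffs))
    return [s * p / avg for s, p in zip(shares, payoffs)]

# Three belief types with (hypothetical) average profitability per period.
shares  = [0.6, 0.3, 0.1]   # initial population shares
payoffs = [1.0, 1.1, 1.3]   # profitability, whether earned or lucky

for _ in range(50):
    shares = replicate(shares, payoffs)

# The most profitable belief comes to dominate, regardless of whether it
# was broadly correct or just extremely fortunate.
print([round(s, 3) for s in shares])
```

Note what the sketch deliberately leaves out: contrarians who bet against the prevailing belief distribution would feed back into the payoffs themselves, which is precisely the endogeneity a serious theory would have to confront.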

 --- 

Some of the issues discussed here are explored at greater length in an essay on market ecology that I presented at a symposium in honor of Duncan Foley last week. Duncan was among the first to see that the rational expectations hypothesis implicitly entailed the assumption of complete futures markets, and would therefore be difficult to "reconcile with the recurring phenomena of financial crisis and asset revaluation that play so large a role in actual capitalist economic life." 

Friday, February 17, 2012

The Countrywide Complaint and the Capitalization of Trust

In December 2011 the Department of Justice filed suit against Countrywide Financial Corporation alleging discrimination on the basis of race and national origin in its mortgage lending operations over the period 2004-2008. The result was a record settlement of $335 million with Bank of America, which had acquired Countrywide in 2008.

The complaint was based on a review of "internal company documents and non-public loan-level data" on more than 2.5 million loans and is worth reading in full. In addition to providing evidence of disparate impact, it describes in detail the set of incentive structures under which loan officers and mortgage brokers were operating. These compensation schemes left considerable room for individual discretion in the setting of fees and rates, and for steering borrowers towards particular loan products. The manner in which this discretion was exercised had significant effects on overall levels of compensation, resulting in strong incentives for brokers and loan officers to act against the interests of borrowers.

But these incentives were formally neutral with respect to race and national origin, which raises the question of why they led to such disparate impact. In the standard economic theory of price discrimination, it is the most affluent customers, or the ones who value the product the most, who pay the highest prices. But in the case of mortgage loans it appears that the highest prices were paid by those who could least afford to do so. One possible reason for this is that this set of borrowers was poorly informed about market rates and alternatives. But this alone is not a satisfactory explanation, because such information can be sought if one considers it to be valuable. It may not be sought, however, if a borrower trusts his broker to be providing the best available terms. I argue below, based on a very interesting paper by Carolina Reid of the San Francisco Fed, that variations across communities in the level of such trust were a key factor in explaining why the incentive structures in place gave rise to such disparate impact.

But first, the complaint:
As a result of Countrywide's policies and practices, more than 200,000 Hispanic and African-American borrowers paid Countrywide higher loan fees and costs for their home mortgages than non-Hispanic White borrowers, not based on their creditworthiness or other objective criteria related to borrower risk, but because of their race or national origin. 
Additionally... Hispanic and African-American borrowers were placed into subprime loans when similarly-qualified non-Hispanic White borrowers received prime loans. Between 2004 and 2007, more than 10,000 Hispanic and African-American wholesale borrowers received subprime loans, with adverse terms and conditions such as high interest rates, excessive fees, prepayment penalties, and unavoidable future payment hikes, rather than prime loans... not based on their creditworthiness or other objective criteria related to borrower risk, but because of their race or national origin.
But what, exactly, were these policies and practices, and how did they give rise to the alleged disparate impact? The complaint focuses on the discretion given to loan officers and mortgage brokers, and the manner in which their compensation was determined. The process for retail loans was as follows:
Countrywide utilized a two-tier decision-making process to set the interest rates and other terms and conditions of retail loans it originated. The first step involved setting the credit risk-based prices on a daily basis... including interest rates, loan origination fees, and discount points. In this step, Countrywide accounted for numerous objective credit-related characteristics of applicants by setting a variety of prices for each of the different loan products that reflected its assessment of individual applicant creditworthiness, as well as the current market rate of interest and the price it could obtain from the sale of such a loan to investors. These prices, referred to as par or base prices, were communicated through rate sheets... Individual loan applicants did not have access to these rate sheets. 
As the second step in determining the final price it would charge an applicant for a loan, Countrywide allowed its retail mortgage loan officers... to increase the loan price charged to borrowers over the rate sheet prices set by Countrywide, up to certain caps; this pricing increase was labeled an overage. Countrywide also allowed these same employees to decrease the loan price charged to borrowers below the stated rate sheet prices; this pricing decrease was labeled a shortage. Countrywide further allowed those employees to alter the standard fees it charged in connection with processing a loan application and the standard allocation of closing costs between Countrywide and the borrower. Employees made these pricing adjustments in a subjective manner, unrelated to factors associated with an individual applicant's credit risk... 
During the time period at issue, Countrywide loan officer compensation was affected by the loan officers' decisions with respect to pricing overages and shortages, as well as other factors, such as volume of loans originated. Loan officers could obtain increased compensation for overages and could have their total compensation potentially decreased for shortages. Countrywide's compensation policy thus provided an incentive for its loan officers in making pricing adjustments to maximize overages and, when offering shortages, to minimize their amount.
Very similar incentives were in place for mortgage brokers who brought loan applications to Countrywide for origination and funding through its wholesale channel. As in the case of retail loans, rate sheets were made available to brokers on a daily basis with prices specified for different loan products based on borrower characteristics. Brokers were not required to "inform a prospective borrower of all available loan products for which he or she qualified, of the lowest interest rates and fees for a specific loan product, or of specific loan products best designed to serve the interests expressed by the applicant." In fact, the manner in which broker compensation was determined created incentives to actively conceal such information, since they were paid "based on the extent to which the interest rate charged on a loan exceeded the base, or par, rate for that loan to a borrower with particular credit risk characteristics fixed by Countrywide and listed on its rate sheets."
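To see how much this kind of discretion can matter to a borrower, consider a worked example using the standard fixed-rate amortization formula (all figures below are hypothetical, not drawn from the complaint):

```python
# A worked example of the two-tier pricing the complaint describes: a
# risk-based par rate from the rate sheet, plus a discretionary "overage"
# that raises both the borrower's rate and the loan officer's compensation.
# All numbers are hypothetical.

def monthly_payment(principal, annual_rate, years=30):
    """Standard fixed-rate mortgage amortization formula."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

principal = 200_000
par_rate = 0.060   # rate-sheet (par) price for this credit profile
overage = 0.005    # discretionary mark-up, invisible to the borrower

base = monthly_payment(principal, par_rate)
marked_up = monthly_payment(principal, par_rate + overage)

# Two borrowers with identical credit risk can face persistently different
# payments, with the gap driven entirely by the subjective overage.
print(round(marked_up - base, 2), "extra per month")
```

Over a thirty-year term the cumulative difference runs to tens of thousands of dollars, which is why subjective pricing adjustments of even half a percentage point are consequential.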

Aside from variation across borrowers in rates and fees for a given product, there was also variation in the types of products towards which borrowers were steered:
It was Countrywide's business practice to allow its mortgage brokers and employees to place a wholesale loan applicant in a subprime loan even when the applicant qualified for a prime loan according to Countrywide's underwriting practices... These underwriting guidelines were intended to be used to determine whether a loan applicant qualified for a prime loan product, an Alt-A loan product, a subprime loan product, or for no Countrywide loan product at all.  Countrywide's compensation policy and practice created a financial incentive for mortgage brokers to submit subprime loans to Countrywide for origination rather than any other type of residential loan product.
The incentives to increase overages, reduce shortages, and steer borrowers towards subprime products even when qualified for prime loans clearly operated against the interests of borrowers. Coupled with the incentives tied to loan volume, this compensation scheme encouraged brokers and loan officers to set terms that varied systematically across borrowers. Applicants who were more sophisticated and knowledgeable, and would walk away from riskier or more expensive products, received better terms than those who were more naive. And those who were suspicious of their brokers and aware of the incentives under which they were operating secured better terms than those who were more trusting.

Hence the disparities in rates and fees identified in the complaint could, in principle, have arisen from differences across social groups in the degree to which they trusted those with whom they were transacting. Carolina Reid's paper provides some evidence for this interpretation. Reid argues that "while financial services have gone global... obtaining a mortgage is still a very local process, embedded in local context and social relations." In order to better understand this process, she interviewed homeowners in Oakland and Stockton, two areas that experienced very high rates of subprime lending prior to the crisis and correspondingly elevated rates of foreclosure subsequently. These were also areas in which a disproportionately large share of originations were mediated by mortgage brokers. Here is what she found:
One of the strong themes that emerged from the interviews was the extent to which respondents of color expressed their desire to work with a broker from their own community or background... In this sense, the interviews support Granovetter’s hypothesis that individuals are “less interested in general reputations than in whether a particular other may be expected to deal honestly with them—mainly a function of whether they or their own contacts have had satisfactory past dealings with the other.” (Granovetter 1985, p. 491) In numerous interviews, borrowers said that they turned to their social networks and relations in the neighborhood to identify a local mortgage broker who would be willing to “work with someone like me.” Part of this was driven by a lack of trust in traditional lenders, and several respondents in Oakland noted a historical distrust of banks in the community... More frequently, however, respondents noted that they didn’t think they could obtain or qualify for a loan without help from someone who was ‘like them’ but who knew the system... 
Respondents listed a wide array of ways that they received recommendations for both real estate agents and mortgage brokers: family, neighbors on the block, the local church, their jobs, the park, and parents at their kids’ school...  
The desire to be served by someone from the community was not lost on mortgage brokers, who during this time period actively created the impression that they were part of the community to help promote their business. Strategies ranged from relying on customer referrals to generate new business, to frequenting local churches, social gatherings, and businesses and by adopting local social conventions... The interviews pointed to how the respondents felt immediately connected to these brokers, “he understood my situation”, “he told me that he understands how difficult the paperwork is, especially when you have lots of jobs,” “I liked his ideas for how to brighten the kitchen,” “she seemed to understand why we wanted to move from SF, buy a house, provide for a yard for the kids, a good school.” 
In theory, mortgage brokers are well‐placed to serve as a “bridging tie” and “trusted advisor”, since they have both experience with the lending process and access to information about mortgage products and prices. Empirical research studies, however, have revealed that during the subprime boom, yield spread premiums coupled with a push for a greater volume of loan originations provided a financial incentive for brokers to work against the interests of the borrower (e.g. Ernst, Bocia and Li 2008). In addition, since there was no statutory employer‐employee relationship between lending institutions and brokers, there were few legal protections to ensure that brokers provide borrowers with fair and balanced information. This aligns with the “trust” that social relations engender... In both Stockton and Oakland, respondents did not seem to be aware of the potential for perverse incentives on the part of brokers, and instead trusted them fully to act in their best interests. 
It is ironic that distrust of traditional lending institutions such as commercial banks led some borrowers to seek out brokers from their own communities whom they felt they could trust. But these brokers were operating under high-powered incentives to inflate rates and fees and guide borrowers towards subprime products even when they were eligible for cheaper alternatives. The trust that was placed in the brokers allowed them greater flexibility to respond to these incentives and left borrowers worse off than they would have been if they had been more suspicious or better aware of the incentive structures in place.

Viewed in this manner, the subprime saga has some broader implications. From the point of view of a company operating in multiple local markets with a diverse customer base, the strategy of giving local employees or contractors the discretion to adjust prices can be very profitable. This is especially so if these employees appear trustworthy to their customers, but are not in fact deserving of such trust. As Groucho Marx is reputed to have said:
The secret of life is honesty and fair-dealing. If you can fake that, you've got it made. 
For products involving frequent repeat purchases by the same customer, reputation effects and competition can limit the degree of price discrimination. But the purchase of a home is an infrequent transaction for most people, and the complexity of the loan product precludes easy comparison with alternatives on offer. Trust then becomes a key determinant of pricing and transaction volume, especially when strong and hidden incentives for the betrayal of trust are in place.

Betrayal also leads to the erosion of trust over time. It could be argued that trust is one of our most valuable public goods, substantially lowering the costs of transacting. In the complete absence of trust, the volume of resources that would need to be devoted to monitoring would be prohibitively large and many organizations and markets would simply not exist. Trust also comes naturally to most of us, based on simple cues such as those revealed in Reid's interviews. High-powered incentives to secure and then betray such trust are therefore costly not just to the immediate victims, but also to the broader community. This may be one of the less visible consequences of the subprime crisis. 

Sunday, January 29, 2012

Returns to Information and Returns to Capital

One of the benefits of maintaining this blog is that it gives me the opportunity to think aloud, expressing half-formed ideas in the hope that the feedback will help me sort through some interesting questions. My last post on double taxation attracted a number of thoughtful (and in some cases skeptical) comments for which I am grateful.

What I was trying to do in that post was to evaluate two incompatible statements: Warren Buffett's declaration that he pays a substantially lower tax rate at 18% than any of his office staff, and Mitt Romney's conflicting claim that his effective tax rate is close to 50%, the sum of the corporate tax rate and the rate on long-term capital gains. I argued that since the corporate tax is capitalized into prices at both the time of purchase and the time of sale, it ought not to be simply added to the capital gains tax to determine an effective rate.

The point may be expressed as follows. Over the past couple of years Romney seems to have paid about 3 million dollars in taxes on income of about 20 million annually, a rate of about 15%. If his effective tax rate is 50% then his "effective" gross income is about twice his current after-tax income, or approximately 34 million. What he is claiming, in effect, is that in the absence of the corporate tax, and with no change in the nature of his economic activities, he would have been able to secure a capital gain of 34 million annually. This does not seem plausible to me. Elimination of the corporate tax would certainly result in a one-time gain to any currently held long positions, but I don't see how it could allow him to generate an extra 14 million, which is 70% greater than his current gross income, on an ongoing basis every year.
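The arithmetic can be made explicit with a quick back-of-the-envelope check, using the approximate figures quoted above:

```python
# Back-of-the-envelope check of the effective-rate claim,
# using the approximate figures quoted in the text.

gross_income = 20e6      # approximate annual income
taxes_paid = 3e6         # approximate annual taxes
claimed_rate = 0.50      # the claimed effective tax rate

observed_rate = taxes_paid / gross_income          # 0.15
after_tax = gross_income - taxes_paid              # 17 million

# If the true effective rate were 50%, after-tax income would be
# half of "effective" gross income:
effective_gross = after_tax / (1 - claimed_rate)   # 34 million

# Implied extra pre-tax gain relative to reported gross income:
extra = effective_gross - gross_income             # 14 million
print(observed_rate)           # 0.15
print(effective_gross / 1e6)   # 34.0
print(extra / gross_income)    # 0.7 (70% of current gross income)
```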

Whatever the merits of this argument, I think that most commenters on my earlier post agree with me on two things:
  1. The adding-up approach to effective tax rates does not work for short sales and related derivative positions, since it would lead to the absurd conclusion that short sellers were paying a negative effective tax rate on capital gains.
  2. Elimination of the corporate tax would result in a sharp rise in equity prices and a windfall gain to current long investors, but would have more modest and uncertain effects on the returns to future investors who enter positions after the lower rate has been capitalized into prices. 
In particular, the following comment from Richard Serlin got me thinking about the nature of capital gains:
With regard to short selling, when the corporate tax first hits (or becomes known to hit), they'll get a windfall, but then their expected returns (of the short sales people actually choose to take) will adjust to the new norm for their risk. It's not like short selling opportunities that pay a fair market risk adjusted return always exist, anyway. When they do, it's largely not a reward for the capital, but for the information that the stock is an overpriced bad deal.
It is certainly true, as Richard points out, that profits to short positions are rewards for information, broadly interpreted to include the processing and analysis of information. They are not returns to capital in any meaningful sense, although one requires capital to enter a short position. But the same is true for at least some portion of the profits to long positions. In fact, the essence of Buffett's investment strategy is to identify underpriced companies in which to take long (and long-term) positions on which capital gains are then realized.

If capital gains are viewed largely as a return to capital, then the double taxation argument makes some sense. But viewed as a return to information and analysis, it is not clear why capital gains should be given preferential tax treatment relative to the income generated, for instance, by doctors or teachers.

I suspect that Warren Buffett views his income as being generated largely by information and judgment, and does not believe that his opportunities for ongoing capital gain would be substantially increased if the corporate tax were eliminated. He does not therefore see the tax as a significant burden, and does not consider his effective gross income to be substantially greater than that which he declares on his tax returns. Whether Romney himself feels the same way is impossible to know, since political expediency currently compels him to take a very different position. 

Saturday, January 28, 2012

Double Taxation

The release of Mitt Romney's tax returns has drawn attention yet again to the disparity between the rates paid on ordinary income and those paid on capital gains. It is being argued in some quarters that the 15% rate on capital gains vastly underestimates the effective tax rate paid by those whose income comes largely from financial investments, on the grounds that corporations pay a rate of 35% on profits. Were it not for this tax, it is argued, dividends and capital gains would be higher, and so would the after-tax receipts of those who derive the bulk of their income from such sources.

Romney himself has made this argument recently, claiming that his effective tax rate is closer to 50%:
One of the reasons why we have a lower tax rate on capital gains is because capital gains are also being taxed at the corporate level. So as businesses earn profits, that's taxed at 35 percent. Then as they distribute those profits in dividends, that's taxed at 15 percent more. So all total, the tax rate is really closer to 45 or 50 percent.
The absurdity of this claim is clearly revealed if one considers capital gains that accrue to short sellers, who pay rather than receive dividends while their positions are open. Following the logic of the argument, one would be forced to conclude that short sellers are taxed at an effective rate of negative 20%, thereby receiving a significant subsidy due to the existence of the corporate tax. The flaw in this reasoning is apparent when one recognizes that asset prices are lower (relative to the zero corporate tax benchmark) not only when a short position is covered, but also when it is entered.

There is no doubt that the presence of the corporate tax depresses the price of equities, but it does so both at the time of purchase and at the time of sale. If there were no corporate tax, dividends and capital gains per share would certainly be higher, but an investor would have paid substantially more per share to acquire his assets in the first place. As a result he would be holding fewer shares for any given initial outlay, and his after-tax income (holding constant the rate paid on capital gains) would not be substantially different.

To see why, it is useful to think about what determines the price of equities. Three factors are especially important: the current earnings of a firm (after payment of interest and taxes), the rate at which these earnings are expected to grow, and the riskiness of the security, which itself is linked to the degree to which the firm's earnings are correlated with broader market movements. Securities that are riskier in this latter sense tend to appreciate faster on average because investors would otherwise avoid them, depressing their prices and raising their expected returns until such returns are viewed as adequate compensation for the greater risk of holding them. This risk is routinely expressed as a market capitalization rate, interpreted as the expected return that investors require in order to hold the security. Airline and automobile stocks, for instance, have higher market capitalization rates than do shares in utilities.

The manner in which these factors interact to influence prices may be illustrated by considering the simplest possible case of a firm with constant expected earnings growth and a fixed dividend payout ratio. In this case, for reasons discussed in any introductory finance textbook, the fundamental value of the security is given by the simple formula D/(k-g), where D is the current dividend forecast (a constant share of the earnings forecast), g is its expected rate of growth, and k is the market capitalization rate. Shares in a debt-free firm that pays 20% of its earnings as dividends, is currently earning $10 per share annually, is expected to grow at 10%, and has a market capitalization rate of 12% would then have a share price of $100. After a year (assuming no change in these parameters) the share price would be $110 and the dividend payout $2. An investor would have made $12 on a $100 investment, a percentage return precisely equal to the market capitalization rate. All this is with no corporate tax.

Now suppose that a 35% corporate tax is in place, so after-tax earnings per share are $6.50 instead, with no change in other specifications. Dividends are then $1.30 per share and the initial share price is $65. After a year this rises to $71.50. Adding dividends and capital gains, an investor makes $7.80 for each share purchased at $65, again earning precisely 12%. Each share results in lower revenues to the investor, but since more shares can be purchased at the outset, aggregate income is no different.
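The two scenarios are easy to verify with the constant-growth formula; here is a minimal sketch, using the parameter values from the example above:

```python
def gordon_price(dividend, k, g):
    """Fundamental value under constant growth: D / (k - g)."""
    return dividend / (k - g)

k, g = 0.12, 0.10     # market capitalization rate, expected growth rate
payout = 0.20         # dividend payout ratio
earnings = 10.0       # pre-tax earnings per share

# No corporate tax: dividend $2 per share, price $100.
d0 = earnings * payout
p0 = gordon_price(d0, k, g)                # 100.0
ret0 = (p0 * (1 + g) - p0 + d0) / p0       # (110 - 100 + 2) / 100 = 0.12

# With a 35% corporate tax: after-tax earnings $6.50,
# dividend $1.30 per share, price $65.
d1 = earnings * (1 - 0.35) * payout
p1 = gordon_price(d1, k, g)                # 65.0
ret1 = (p1 * (1 + g) - p1 + d1) / p1       # (71.5 - 65 + 1.3) / 65 = 0.12
```

In both cases the investor earns exactly the 12% capitalization rate; the tax changes the price per share, not the return on a given outlay.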

None of this should be in the least bit surprising. Note, however, that if the corporate tax were to be eliminated today, there would be a sharp rise in the price of equities and current asset holders would enjoy a windfall gain. Similar issues arise with respect to the mortgage interest deduction: eliminating this would result in an immediate decline in home values, severely punishing those who purchased recently at prices that reflected the anticipated tax savings over the duration of the mortgage.

This does not mean that eliminating the corporate tax while simultaneously raising the rate on capital gains is necessarily a bad idea, or that elimination of the mortgage interest deduction is necessarily bad policy. A case could be made for both initiatives. The corporate tax is not uniformly applied due to the broad range of loopholes and exemptions, and the mortgage deduction is regressive and inhibits both neighborhood integration and labor mobility. But any such changes will have major distributional effects that must be taken into account in any comprehensive evaluation of the policy. Doing so properly requires a clear distinction between stocks and flows, and an analysis that goes a little deeper than simple arithmetic.

---

Update: Follow-up post here.

Sunday, January 01, 2012

Self-Fulfilling Prophecies and the Iowa Caucus

A few days ago Nate Silver made the following intriguing comments on the Iowa Caucus (emphasis added):
There are extremely strong incentives for supporters of Mrs. Bachmann, Mr. Santorum and Mr. Perry to behave tactically, throwing their weight behind whichever one appears to have the best chance of finishing in the top two. What that means is that if any of these candidates appear to have any momentum at all during the final week of the campaign, their support could grow quite quickly as other voters jump on the bandwagon.

This is also a case in which the polling may actually influence voter behavior. In particular, if one of these candidates does well in the highly influential Des Moines Register poll that should be published on New Year’s Eve or thereabouts, that candidate might be a pretty good bet to overperform polling as voters use that as a cue on caucus night to determine which one is most viable...

I’m not sure that this theory actually makes any sense... But it may not matter if the theory is true. If voters are looking for anything to break the logjam between these candidates, mere speculation that one of them has momentum could prove to be a self-fulfilling prophecy.
What's most interesting about this is the possibility that even a methodologically flawed or misleading poll, provided that it is given credence, could coordinate expectations on one of these three candidates and result in a surge of support.

In fact, this seems to be precisely what has happened. A CNN/Time poll covering the period December 21-27 revealed Santorum to be in third place with 16% of the vote. This was an outlier at the time, and was sharply criticized by Tom Jensen of PPP and by Nate himself for surveying only registered Republicans:
What’s wrong with using a list of Republican voters for a Republican caucus poll? The answer is that it’s extremely easy for independent and Democratic voters to register or re-register as Republicans at the caucus site. Historically, a fair number of independent voters do this.

According to entrance polls in Iowa in 2008, for instance, about 15 percent of participants in the Republican caucus identified themselves as independents or Democrats on the way into the caucus site... Most other pollsters are making some attempt to account for these voters. They are anticipating that the fraction of independents and Democrats will be at least as high as it was in 2008 if not a little higher, which would make sense since Republicans do not have a competitive Democratic caucus to compete with this year.

The recent Public Policy Polling survey, for instance, estimated that 24 percent of Iowa caucus participants are currently registered as independents or Democrats and will re-register as Republicans at the caucuses. This month’s Washington Post/ABC News poll put the fraction at 18 percent. There is room to debate what the right number is but it will certainly not be zero, as the CNN poll assumes.
Since few independents and Democrats are inclined to vote for Santorum, the CNN/Time poll very likely exaggerated the level of support he enjoyed at the time. But despite this, it contributed to expectations of a surge which seem to have become self-fulfilling. The Des Moines Register poll released last night confirms this, with Santorum rising sharply from 10% on the 27th all the way to 22% four days later. This survey, conducted by the highly regarded Ann Selzer, has historically been among the most reliable of Iowa polls.

Did a misleading poll based on an unsound sample shift expectations in such a manner as to fulfill its own flawed forecast? Tom Jensen certainly appears to think so:
Selzer had Santorum at 9% Tu-W. We had him at 10% M-Tu. Surge quite possibly generated by CNN poll that was quite possibly wrong... If CNN had shown Perry at 15% and he got all the momentum stories, the buzz in Iowa might be all about him this weekend.
The CNN/Time poll may also have given Romney an expectational boost at the expense of Paul by excluding independents from the survey. As Tom Jensen noted in his response, Romney was ahead of Paul in the restricted sample of the PPP poll, but quite clearly behind overall on December 27. It's an interesting example of how a seemingly innocuous methodological decision on a single primary poll could end up having major ramifications for the direction of the country.

The mechanisms at work here have some broader implications. They reveal the potential value to candidates (or their supporters) of manipulating prices in prediction markets such as Intrade, which have come to be closely monitored indicators of candidate viability. And they appear in all sorts of other contexts, from sovereign debt crises to speculative currency attacks.

In fact, any borrower who has financed long-dated assets with short term liabilities needs to periodically roll over debt, and the willingness of investors to facilitate this depends on their beliefs about whether other investors will continue to facilitate it in the future. These expectations are subject to capricious change, often as a result of small and seemingly unimportant triggers. The Iowa caucus illustrates the phenomenon, and the Eurozone debt crisis demonstrates its broader relevance.
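The rollover logic can be captured in a deliberately stylized coordination game; the payoff numbers below are hypothetical and chosen purely for illustration:

```python
# A stylized (hypothetical) rollover game: the borrower survives only
# if at least a fraction `theta` of creditors roll over their debt.
# Each creditor rolls over when doing so beats the outside option.

def best_response(expected_rollover_share, theta=0.5,
                  payoff_if_survives=1.10,    # hypothetical gross return
                  payoff_if_defaults=0.40,    # hypothetical recovery value
                  outside_option=1.00):       # exit and hold cash
    """Return True if rolling over is optimal given beliefs about others."""
    survives = expected_rollover_share >= theta
    payoff = payoff_if_survives if survives else payoff_if_defaults
    return payoff > outside_option

# If each creditor expects the others to roll over, rolling over is
# the best response -- and the borrower indeed survives:
print(best_response(1.0))   # True

# If each creditor expects a run, exiting is the best response -- and
# the borrower indeed defaults:
print(best_response(0.0))   # False
```

Both beliefs are self-fulfilling, which is why a small and seemingly unimportant trigger can shift the market from one equilibrium to the other.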

Thursday, December 01, 2011

Price Coherence on Intrade

A couple of days ago, Richard Thaler tweeted this:
Intrade prices seem incoherent. How can Newt nomination price soar but Obama win stay at 50%?
Here's what Thaler is talking about. Over the past couple of weeks, the price of a contract that pays $10 in the event that Gingrich is nominated has risen sharply from about a dollar to above $3.50:


Over the same period a contract that pays $10 if Obama is reelected has remained within a narrow window, trading within a ten cent band a shade above $5:


Thaler considers this pattern to be incoherent because Gingrich is widely believed to be a weaker general election candidate than Romney. For instance, in head-to-head poll averages Obama currently leads Gingrich by 5.7%, but leads Romney by the much smaller margin of 1.5%.

But even if Gingrich really is the weaker candidate against Obama under any set of conditions that might prevail on election day, it does not follow (as a point of logic) that a rise in the Gingrich nomination price must be associated with a rise in the Obama reelection price. For instance, a belief among voters that Obama is more vulnerable would ordinarily result in a decline in his likelihood of reelection, but this could be offset if the same belief also leads to the nomination by the GOP of a more conservative but less electable candidate.
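The point can be made precise with the law of total probability. The numbers below are purely illustrative (not actual Intrade prices): the Gingrich nomination probability rises sharply while the Obama reelection probability stays flat, even though Obama is more likely to beat Gingrich than Romney throughout:

```python
# Decompose the Obama reelection probability by GOP nominee:
#   P(O) = P(G) * P(O | G) + (1 - P(G)) * P(O | R)
# All numbers are illustrative, not market data.

def p_obama(p_gingrich, p_win_vs_gingrich, p_win_vs_romney):
    return (p_gingrich * p_win_vs_gingrich
            + (1 - p_gingrich) * p_win_vs_romney)

# Before the surge: Gingrich is an unlikely nominee.
before = p_obama(0.10, 0.60, 0.489)   # ~0.50

# After: Gingrich is far more likely, but the same news that lifted
# him (perceived Obama vulnerability) lowers both conditionals.
after = p_obama(0.35, 0.56, 0.468)    # ~0.50
```

In both states Obama fares better against Gingrich than against Romney, yet his overall reelection probability is unchanged: the nomination shift and the vulnerability shift offset each other exactly as the argument above requires.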

This reasoning is consistent with the so-called Buckley Rule, which urges a vote for the most conservative candidate who is also electable. As perceptions about the electability of the incumbent shift, so does the perceived viability of more ideologically extreme members of the opposition. These countervailing effects can dampen fluctuations in the electability of the incumbent. Hence the market data alone cannot decisively settle the question of price coherence. 

Friday, October 07, 2011

Notes on a Worldly Philosopher

The very first book on economics that I remember reading was Robert Heilbroner's magisterial history of thought The Worldly Philosophers. I'm sure that I'm not the only person who was drawn to the study of economics by that wonderfully lucid work. Heilbroner managed to convey the complexity of the subject matter, the depth of the great ideas, and the enormous social value that the discipline at its best is capable of generating.

I was reminded of Heilbroner's book by Robert Solow's review of Sylvia Nasar's Grand Pursuit: The Story of Economic Genius. Solow begins by arguing that the book does not quite deliver on the promise of its subtitle, and then goes on to fill the gap by providing his own encapsulated history of ideas. Like Heilbroner before him, he manages to convey with great lucidity the essence of some pathbreaking contributions. I was especially struck by the following passages on Keynes:
He was not without antecedents, of course, but he provided the first workable intellectual apparatus for thinking about what determines the level of “output as a whole.” A generation of economists found his ideas the only available handle with which to grasp the events of the Great Depression of the time... Back then, serious thinking about the general state of the economy was dominated by the notion that prices moved, market by market, to make supply equal to demand. Every act of production, anywhere, generates income and potential demand somewhere, and the price system would sort it all out so that supply and demand for every good would balance. Make no mistake: this is a very deep and valuable idea. Many excellent minds have worked to refine it. Much of the time it gives a good account of economic life. But Keynes saw that there would be occasions, in a complicated industrial capitalist economy, when this account of how things work would break down.

The breakdown might come merely because prices in some important markets are too inflexible to do their job adequately; that thought had already occurred to others. It seemed a little implausible that the Great Depression of the 1930s should be explicable along those lines. Or the reason might be more fundamental, and apparently less fixable. To take the most important example: we all know that families (and other institutions) set aside part of their incomes as saving. They do not buy any currently produced goods or services with that part. Something, then, has to replace that missing demand. There is in fact a natural counterpart: saving today presumably implies some intention to spend in the future, so the “missing” demand should come from real capital investment, the building of new productive capacity to satisfy that future spending. But Keynes pointed out that there is no market or other mechanism to express when that future spending will come or what form it will take... The prospect of uncertain demand at some unknown time may not be an adequately powerful incentive for businesses to make risky investments today. It is asking too much of the skittery capital market. Keynes was quite aware that occasionally a wave of unbridled optimism might actually be too powerful an incentive, but anyone in 1936 would take the opposite case to be more likely.

So a modern economy can find itself in a situation in which it is held back from full employment and prosperity not by its limited capacity to produce, but by a lack of willing buyers for what it could in fact produce. The result is unemployment and idle factories. Falling prices may not help, because falling prices mean falling incomes and still weaker demand, which is not an atmosphere likely to revive private investment. There are some forces tending to push the economy back to full utilization, but they may sometimes be too weak to do the job in a tolerable interval of time. But if the shortfall of aggregate private demand persists, the government can replace it through direct public spending, or can try to stimulate additional private spending through tax reduction or lower interest rates. (The recipe can be reversed if private demand is excessive, as in wartime.) This was Keynes’s case for conscious corrective fiscal and monetary policy. Its relevance for today should be obvious. It is a vulgar error to characterize Keynes as an advocate of “big government” and a chronic budget deficit. His goal was to stabilize the private economy at a generally prosperous level of activity.
This is as clear and concise a description of the fundamental contribution of the General Theory as I have ever read. And it reveals just how far from the original vision of Keynes the so-called Keynesian economics of our textbooks has come. The downward inflexibility of wages and prices is viewed in many quarters today as the hallmark of the Keynesian theory, and yet the opposite is closer to the truth. The key problem for Keynes is the mutual inconsistency of individual plans: the inability of those who defer consumption to communicate their demand for future goods and services to those who would invest in the means to produce them.

The place where this idea gets buried in modern models is in the hypothesis of "rational expectations." A generation of graduate students has come to equate this hypothesis with the much more innocent claim that individual behavior is "forward looking." But the rational expectations hypothesis is considerably more stringent than that: it requires that the subjective probability distributions on the basis of which individual decisions are made correspond to the objective distributions that these decisions then give rise to. It is an equilibrium hypothesis, and not a behavioral one. And it amounts to assuming that the plans made by millions of individuals in a decentralized economy are mutually consistent. As Duncan Foley recognized a long time ago, this is nothing more than "a disguised form of the assumption of the existence of complete futures and contingencies markets."
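The distinction can be seen concretely in a textbook linear cobweb model (my example, chosen for simplicity): rational expectations requires the forecast to be a fixed point of the map from beliefs to outcomes, whereas a forward-looking but naive forecaster is systematically falsified along the path.

```python
# Illustrative linear cobweb model (a simple example, not from the post):
# demand p_t = a - b*q_t, supply q_t = c + d*E[p_t].
a, b, c, d = 10.0, 1.0, 1.0, 0.5

def realized_price(expected_price):
    """Price the market generates when producers act on a given forecast."""
    quantity = c + d * expected_price
    return a - b * quantity

# Rational expectations: the forecast is a fixed point of this map --
# the subjective belief coincides with the outcome it induces.
p_star = (a - b * c) / (1 + b * d)     # 6.0
assert abs(realized_price(p_star) - p_star) < 1e-12

# A naive forecaster (expect last period's price) is forward looking in
# a loose sense, yet each forecast is falsified by the realized outcome:
p_exp = 4.0
for _ in range(5):
    p = realized_price(p_exp)   # outcome differs from the forecast
    p_exp = p                   # revise toward the last realization
# Since b*d < 1 here, the revision process happens to converge to the
# rational expectations fixed point; with b*d > 1 it would diverge.
```

The fixed-point condition is what makes rational expectations an equilibrium hypothesis rather than a behavioral one: it is a restriction on the consistency of beliefs with outcomes, not on how any individual forms a forecast.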

It is gratifying, therefore, to see increasing attention being focused on developing models that take expectation revision and calculation seriously. A conference at Columbia earlier this year was devoted entirely to such lines of work. And here is Mike Woodford on the INET blog, making a case for this research agenda:
This postulate of “rational expectations,” as it is commonly though rather misleadingly known... is often presented as if it were a simple consequence of an aspiration to internal consistency in one’s model and/or explanation of people’s choices in terms of individual rationality, but in fact it is not a necessary implication of these methodological commitments. It does not follow from the fact that one believes in the validity of one’s own model and that one believes that people can be assumed to make rational choices that they must be assumed to make the choices that would be seen to be correct by someone who (like the economist) believes in the validity of the predictions of that model. Still less would it follow, if the economist herself accepts the necessity of entertaining the possibility of a variety of possible models, that the only models that she should consider are ones in each of which everyone in the economy is assumed to understand the correctness of that particular model, rather than entertaining beliefs that might (for example) be consistent with one of the other models in the set that she herself regards as possibly correct...

The macroeconomics of the future, I believe, will still make use of general-equilibrium models in which the behavior of households and firms is derived from considerations of intertemporal optimality, but in which the optimization is relative to the evolving beliefs of those actors about the future, which need not perfectly coincide with the predictions of the economist’s model. It will therefore build upon the modeling advances of the past several decades, rather than declaring them to have been a mistaken detour. But it will have to go beyond conventional late-twentieth-century methodology as well, by making the formation and revision of expectations an object of analysis in its own right, rather than treating this as something that should already be uniquely determined once the other elements of an economic model (specifications of preferences, technology, market structure, and government policies) have been settled.
I think that the vigorous pursuit of this research agenda could lead to a revival of interest in theories of economic fluctuations that have long been neglected because they could not be reformulated in ways that were methodologically acceptable to the professional mainstream. I am thinking, in particular, of nonlinear models of business cycles such as those of Kaldor, Goodwin, Tobin and Foley, which do not depend on exogenous shocks to account for departures from steady growth. This would be an interesting, ironic, and welcome twist in the tangled history of the worldly philosophy.
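For readers unfamiliar with this class of models, Goodwin's growth cycle couples the wage share and the employment rate in a Lotka-Volterra system. The sketch below, with illustrative parameter values of my own choosing, generates persistent fluctuations with no exogenous shocks at all:

```python
# A sketch of Goodwin's growth-cycle model, an example of the endogenous
# nonlinear cycles mentioned above. Parameter values are illustrative.
# State variables: u = wage share of output, v = employment rate.
sigma = 3.0      # capital-output ratio
alpha = 0.02     # labor productivity growth rate
beta = 0.01      # labor force growth rate
gamma, rho = 0.8, 0.9   # linear Phillips curve: wage growth = -gamma + rho*v

def step(u, v, dt=0.001):
    """One Euler step of the coupled dynamics."""
    du = u * (rho * v - gamma - alpha) * dt           # profit-squeeze channel
    dv = v * ((1.0 - u) / sigma - alpha - beta) * dt  # accumulation channel
    return u + du, v + dv

u, v = 0.85, 0.92
path = [(u, v)]
for _ in range(100_000):
    u, v = step(u, v)
    path.append((u, v))
# The orbit cycles persistently around the interior steady state
# (u* = 1 - sigma*(alpha+beta), v* = (gamma+alpha)/rho) with no shocks:
# high employment raises the wage share, which squeezes profits and
# accumulation, which lowers employment, which restores profitability.
```

The departures from steady growth here are generated entirely by the internal structure of the model, which is precisely the feature that resisted reformulation in the shock-driven framework of the professional mainstream.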