Monday, August 13, 2012

Building a Better Dow

The following post, written jointly with Debraj Ray, is based on our recent note proposing a change in the method for computing the Dow.

---

With a market capitalization approaching $600 billion, Apple is currently the largest publicly traded company in the world. The previous title-holder, Exxon Mobil, now stands far behind at $400 billion. But Apple is not a component of the Dow Jones Industrial Average. Nor is Google, with a higher valuation than all but a handful of firms in the index. Meanwhile, firms with less than a tenth of Apple's market capitalization, including Alcoa and Hewlett-Packard, continue to be included.

The exclusion of firms like Apple and Google would appear to undermine the stated purpose of the index, which is "to provide a clear, straightforward view of the stock market and, by extension, the U.S. economy." But there are good reasons for such seemingly arbitrary omissions. The Dow is a price-weighted index, and the average price of its thirty components is currently around $58. Both Apple and Google have share prices in excess of $600, and their inclusion would cause day-to-day changes in the index to be driven largely by the behavior of these two securities. For instance, their combined weight in the Dow would be about 43% if they were to replace Alcoa and Travelers, which are the two current components with the lowest valuations. Furthermore, the index would become considerably more volatile even if the included stocks were individually no more volatile than those they replace. As John Prestbo, chairman of the index oversight committee, has observed, such heavy dependence of the index on one or two stocks would "hamper its ability to accurately reflect the broader market."

Indeed, price-weighting is a decidedly odd methodology. IBM has a smaller market capitalization than Microsoft, but a substantially higher share price. Under current conditions, a 1% change in the price of IBM has an effect on the index that is almost seven times as great as a 1% change in the price of Microsoft. In fact, IBM's weight in the index is above 11%, although its valuation is less than 6% of the total among Dow components.
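The arithmetic behind these figures is easy to set out explicitly. In a price-weighted index, a component's weight is simply its share price divided by the sum of all component prices. The prices below are illustrative round numbers for mid-2012, not exact quotes:

```python
# In a price-weighted index, a component's weight is its share price
# divided by the sum of all component prices. Illustrative round
# numbers for mid-2012, not exact quotes.
total = 30 * 58.0        # thirty components averaging about $58
ibm, msft = 200.0, 30.0  # approximate share prices (assumed)

w_ibm = ibm / total
w_msft = msft / total

# A 1% move in each stock shifts the index in proportion to its price,
# so IBM's impact exceeds Microsoft's by the ratio of their prices.
impact_ratio = ibm / msft
```

With these assumed prices, IBM's weight comes to roughly 11.5% and the impact ratio to about 6.7, consistent with the figures cited above.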

This issue does not arise with value-weighted indexes such as the S&P 500. But as Prestbo and others have pointed out, the Dow provides an uninterrupted picture of stock market movements dating back to 1896. An abrupt switch to value weighting would introduce a methodological discontinuity that would "essentially obliterate this history." Attention has therefore been focused on the desirability of a stock split, which would reduce Apple's share price to a level that could be accommodated by the questionable methodology of the Dow.

But an abrupt switch to value weighting and the flawed artifice of a stock split are not the only available alternatives. In a recent paper we propose a modification that largely preserves the historical integrity of the Dow time series while allowing for the inclusion of securities regardless of their market price. Our modified index also leads, as incumbent stocks are replaced, to a smooth and gradual transition to a fully value-weighted index in the long run.

The proposed index is composed of two subindices, one price-weighted to respect the internal structure of the Dow, and the other value-weighted to apply to new entrants. The index has two parameters, both of which are adjusted whenever a substitution is made. One of these maintains continuity in the value of the index, while the other ensures that the two subindices are weighted in proportion to their respective market capitalizations. Stock splits require a change in parameters (as in the case of the current Dow divisor) but only if the split occurs for a firm in the price-weighted subindex.
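As a rough illustration of how the two parameters might be reset when a substitution occurs, consider the following stylized sketch. This is a toy reconstruction of the two constraints described above (continuity and proportional weighting), not the exact parameterization from the paper; all numbers and function names are invented:

```python
def reset_parameters(index_value, price_sum, cap_pw, cap_vw):
    """Choose a new divisor d and scale s after a substitution so that:
      (1) continuity: price_sum / d + s * cap_vw == index_value
      (2) the two subindices contribute in proportion to the market
          capitalizations of their components, cap_pw : cap_vw.
    A stylized sketch only; the paper's actual formulas may differ."""
    r = cap_pw / cap_vw                  # target contribution ratio
    pw_part = index_value * r / (1 + r)  # price-weighted contribution
    vw_part = index_value / (1 + r)      # value-weighted contribution
    return price_sum / pw_part, vw_part / cap_vw

# Hypothetical numbers, chosen purely for illustration: the index at
# 13000, incumbent prices summing to $1500, incumbents worth $3
# trillion in aggregate, and new entrants worth $1 trillion.
d, s = reset_parameters(index_value=13000.0, price_sum=1500.0,
                        cap_pw=3.0e12, cap_vw=1.0e12)
```

Because two parameters are available to satisfy two constraints, the index value is unchanged at the moment of substitution while each subindex carries weight proportional to the market capitalization of its components.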

Once all incumbent firms are replaced, the result will be a fully value-weighted index. In practice this could take several decades, as some incumbent firms are likely to remain components far into the future. But firms in the price-weighted component of the index that happen to have weights roughly commensurate with their market capitalization can be transferred with no loss of continuity to the value-weighted component. This procedure, which we call bridging, can accelerate the transition to a value-weighted index with minimal short-term disruption. Currently Coca-Cola and Disney are prime candidates for bridging.

Under our proposed index, Apple would enter with a weight of less than 13% if it were to replace Alcoa. This is scarcely more than the weight currently associated with IBM, a substantially smaller company. Adding Google (in place of HP or Travelers) would further lower the weight of Apple since the total market capitalization of Dow components would rise. This is a relatively modest change that, we believe, would simultaneously serve the desirable goals of methodological continuity and market representativeness.

Friday, July 13, 2012

Market Overreaction: A Case Study

At 7:30pm yesterday the Drudge Report breathlessly broadcast the following:
ROMNEY NARROWS VP CHOICES; CONDI EMERGES AS FRONTRUNNER
Thu Jul 12 2012 19:30:01 ET 
**Exclusive** 
Late Thursday evening, Mitt Romney's presidential campaign launched a new fundraising drive, 'Meet The VP' -- just as Romney himself has narrowed the field of candidates to a handful, sources reveal. 
And a surprise name is now near the top of the list: Former Secretary of State Condoleezza Rice! 
The timing of the announcement is now set for 'coming weeks'.
The reaction on Intrade was immediate. The price of a contract that pays $10 if Rice is selected as Romney's running mate (and nothing otherwise) shot up from about 35 cents to $2, with about 2500 contracts changing hands within twenty minutes of the Drudge announcement. By the sleepy standards of the prediction market this constitutes very heavy volume. Nate Silver responded at 7:49 as follows:
The Condi Rice for VP contract at Intrade possibly the most obvious short since Pets.com
Good advice, as it turned out. By 9:45 pm the price had dropped to 90 cents a contract with about 5000 contracts traded in total since the initial announcement. Here's the price and volume chart:


One of the most interesting aspects of markets such as Intrade is that they offer sets of contracts on a list of exhaustive and mutually exclusive events. For instance, the Republican VP Nominee market contains not just the contract for Rice, but also for 56 other potential candidates, as well as a residual contract that pays off if none of the named contracts do. The sum of the bids for all these contracts cannot exceed $10, otherwise someone could sell the entire set of contracts and make an arbitrage profit. In practice, no individual is going to take the trouble to spot and exploit such opportunities, but it's a trivial matter to write a computer program that can do so as soon as they arise.
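The core check that such a program would perform can be sketched in a few lines. The bid values below are invented for illustration; they are not actual Intrade quotes:

```python
# Best available bids, in dollars, for an exhaustive and mutually
# exclusive set of contracts. Numbers are invented for illustration.
bids = {"Rice": 2.00, "Portman": 2.60, "Pawlenty": 2.10,
        "Ryan": 1.80, "Rubio": 1.30, "none of the above": 0.90}

def arbitrage_profit(bids, payoff=10.0):
    # Selling one of each contract collects the sum of the bids and
    # costs exactly the payoff at settlement, since precisely one
    # contract in an exhaustive, mutually exclusive set pays off.
    return max(0.0, sum(bids.values()) - payoff)

profit = arbitrage_profit(bids)
```

Here the bids sum to $10.70, so selling the full bundle locks in 70 cents of riskless profit (before fees), whichever candidate is eventually chosen.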

In fact, such algorithms are in widespread use on Intrade, and easy to spot. The sharp rise in the Rice contract caused the arbitrage condition to be momentarily violated and simultaneous sales of the entire set of contracts began to occur. While the price of one contract rose, the prices of the others (Portman, Pawlenty, and Ryan especially) were knocked back as existing bids started to be filled by algorithmic instruction. But as new bidders appeared for these other contracts the Rice contract itself was pushed back in price, resulting in the reversal seen in the above chart. All this in a matter of two or three hours.

Does any of this have relevance for the far more economically significant markets for equity and debt? There's a fair amount of direct evidence that these markets are also characterized by overreaction to news, and such overreaction is consistent with the excess volatility of stock prices relative to dividend flows. But overreactions in stock and bond markets can take months or years to reverse. Benjamin Graham famously claimed that "the interval required for a substantial undervaluation to correct itself averages approximately 1½ to 2½ years," and De Bondt and Thaler found that "loser" portfolios (composed of stocks that had previously experienced sharp capital losses) continued to outperform "winner" portfolios (composed of those with significant prior capital gains) for up to five years after construction.

One reason why overreaction to news in stock markets takes so long to correct is that there is no arbitrage constraint that forces a decline in other assets when one asset rises sharply in price. 
In prediction markets, such constraints cause immediate reactions in related contracts as soon as one contract makes a major move. Similar effects arise in derivatives markets more generally: options prices respond instantly to changes in the price of the underlying, futures prices move in lock step with spot prices, and exchange-traded funds trade at prices that closely track those of their component securities. Most of this activity is generated by algorithms designed to sniff out and snap up opportunities for riskless profit. But the primitive assets in our economy, stocks and bonds, are constrained only by beliefs about their future values, and can therefore wander far and wide for long periods before being dragged back by their cash flow anchors.

---

Update (7/13). Mark Thoma and Yves Smith have both reposted this, with interesting preludes. Here's Yves:
I’d like to quibble with the notion that there is such a thing as a correct price for as vague a promise as a stock (by contrast, for derivatives, it is possible to determine a theoretical price in relationship to an actively traded underlying instrument, so even though the underlying may be misvalued, the derivative’s proper value given the current price and other parameters can be ascertained).  
Sethi suggests that stocks have “cash flow anchors”. I have trouble with that notion. A bond is a very specific obligation: to pay interest in specified amounts on specified dates, and to repay principal as of a date certain... By contrast, a stock is a very unsuitable instrument to be traded on an arm’s length, anonymous basis. A stock is a promise to pay dividends if the company makes enough money and the board is in the mood to do so. Yes, you have a vote, but your vote can be diluted at any time. There aren’t firm expectations of future cash flows; it’s all guess work and heuristics.
I chose the term "anchor" with some care, because the rode of an anchor is not always taut. I didn't mean to suggest that there is a single proper value for a stock that can be unambiguously deduced from the available information; heterogeneity in the interpretation of information alone is enough to generate a broad range of valuations. This can allow for drift in various directions as long as the price doesn't become too far detached from earnings projections.

Mark argues that the leak to Drudge was an attempt at distraction:
Rajiv Sethi looks at the reaction to the Romney campaign's attempt to change the subject from Romney's role at Bain to potential picks for vice-president (as far as I can tell, Rice has no chance -- she's "mildly pro-choice" for one -- so this was nothing more than an attempt to divert attention from Bain, an attempt that seems to have worked, at least to some extent).
This view, which seems to be held left and right, was brilliantly summed up by Nate Silver as follows:
drudge (v.): To leak news to displace an unfavorable headline; to muddy up the news cycle.
I was tempted to reply to Nate's tweet with:
twartist (n.): One who is able by virtue of imagination and skill to create written works of aesthetic value in 140 characters or less.
But it seems that the term is already in use.

Saturday, June 30, 2012

Fighting over Claims

This brief segment from a recent speech by Joe Stiglitz sums up very neatly the nature of our current economic predicament (emphasis added):
We should realize that the resources in our economy... today are the same as they were five years ago. We have the same human capital, the same physical capital, the same natural capital, the same knowledge... the same creativity... we have all these strengths, they haven't disappeared. What has happened is, we're having a fight over claims, claims to resources. We've created more liabilities... but these are just paper. Liabilities are claims on these resources. But the resources are there. And the fight over the claims is interfering with our use of the resources.
I think this is a very useful way to think about the potential effectiveness under current conditions of various policy proposals, including conventional fiscal and monetary stabilization policies.

Part of the reason for our anemic and fitful recovery is that contested claims, especially in the housing market, continue to be settled in a chaotic and extremely wasteful manner. Recovery from subprime foreclosures is typically a small fraction of outstanding principal, and properly calibrated principal write-downs can often benefit both borrowers and lenders. Modifications that would occur routinely under the traditional bilateral model of lending are much harder to implement when lenders are holders of complex structured claims on the revenues generated by mortgage payments. Direct contact between lenders and borrowers is neither legal nor practicable in this case, and the power to make modifications lies instead with servicers. But servicer incentives are not properly aligned with those of the lenders on whose behalf they collect and process payments. The result is foreclosure even when modification would be much less destructive of resources.

Despite some indications that home values are starting to rise again, the steady flow of defaults and foreclosures shows no sign of abating. Any policy that stands a chance of getting us back to pre-recession levels of resource utilization has to result in the quick and orderly settlement of these claims, with or without modification of the original contractual terms. And it's not clear to me that the blunt instruments of conventional stabilization policy can accomplish this.

Consider monetary policy for instance. The clamor for more aggressive action by the Fed has recently become deafening, with a long and distinguished line of advocates (see, for instance, recent posts by Miles Kimball, Joseph Gagnon, Ryan Avent, Scott Sumner, Paul Krugman, and Tim Duy). While the various proposals differ with respect to details, the idea seems to be the following: (i) the Fed has the capacity to increase inflation and nominal GDP should it choose to do so, (ii) this can be accomplished by asset purchases on a large enough scale, and (iii) doing this would increase not only inflation and nominal GDP but also output and employment.

It's the third part of this argument with which I have some difficulty, because I don't see how it would help resolve the fight over claims that is crippling our recovery. Higher inflation can certainly reduce the real value of outstanding debt in an accounting sense, but this doesn't mean that distressed borrowers will be able to meet their obligations at the originally contracted terms. In order for them to do so, it is necessary that their nominal income rises, not just nominal income in the aggregate. And monetary policy via asset purchases would seem to put money disproportionately in the pockets of existing asset holders, who are more likely to be creditors than debtors. Put differently, while the Fed has the capacity to raise nominal income, it does not have much control over the manner in which this increment is distributed across the population. And the distribution matters.

Similar issues arise with inflation. Inflation is just the growth rate of an index number, a weighted average of prices for a broad range of goods and services. The Fed can certainly raise the growth rate of this average, but has virtually no control over its individual components. That is, it cannot increase the inflation rate without simultaneously affecting relative prices. For instance, purchases of assets that drive down long term interest rates will lead to portfolio shifts and an increase in the price of commodities, which are now an actively traded asset class. This in turn will raise input costs for some firms more than others, and these cost increases will affect wages and prices to varying degrees depending on competitive conditions. As Dan Alpert has argued, expansionary monetary policy under these conditions could even "collapse economic activity, as limited per capita wages are shunted to oil and food, rather than to more expansionary forms of consumption."
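A toy index makes the point concrete: the same headline inflation rate is consistent with very different configurations of relative prices. The weights and price changes below are invented for illustration:

```python
def inflation(weights, price_changes):
    # Headline inflation: expenditure-weighted average of component changes.
    return sum(w * dp for w, dp in zip(weights, price_changes))

weights = [0.15, 0.10, 0.75]   # food, energy, everything else (invented)
balanced = [0.02, 0.02, 0.02]  # every price rises by a uniform 2%

# Food up 8% and energy up 10%, with the change in the residual
# category chosen so that the headline number is still exactly 2%.
rest = (0.02 - (0.15 * 0.08 + 0.10 * 0.10)) / 0.75
skewed = [0.08, 0.10, rest]
```

Both configurations deliver 2% headline inflation, but in the second, spending is shunted toward food and energy while other prices actually fall slightly, which is the scenario Alpert describes.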

I don't mean to suggest that more aggressive action by the Fed is unwarranted or would necessarily be counterproductive, just that it needs to be supplemented by policies designed to secure the rapid and efficient settlement of conflicting claims.

One of the most interesting proposals of this kind was floated back in October 2008 by John Geanakoplos and Susan Koniak, and a second article a few months later expanded on the original. It's worth examining the idea in detail. First, deadweight losses arising from foreclosure are substantial:
For subprime and other non-prime loans, which account for more than half of all foreclosures, the best thing to do for the homeowners and for the bondholders is to write down principal far enough so that each homeowner will have equity in his house and thus an incentive to pay and not default again down the line... there is room to make generous principal reductions, without hurting bondholders and without spending a dime of taxpayer money, because the bond markets expect so little out of foreclosures. Typically, a homeowner fights off eviction for 18 months, making no mortgage or tax payments and no repairs. Abandoned homes are often stripped and vandalized. Foreclosure and reselling expenses are so high the subprime bond market trades now as if it expects only 25 percent back on a loan when there is a foreclosure.
Second, securitization precludes direct contact between borrowers and lenders:
In the old days, a mortgage loan involved only two parties, a borrower and a bank. If the borrower ran into difficulty, it was in the bank’s interest to ease the homeowner’s burden and adjust the terms of the loan. When housing prices fell drastically, bankers renegotiated, helping to stabilize the market. 
The world of securitization changed that, especially for subprime mortgages. There is no longer any equivalent of “the bank” that has an incentive to rework failing loans. The loans are pooled together, and the pooled mortgage payments are divided up among many securities according to complicated rules. A party called a “master servicer” manages the pools of loans. The security holders are effectively the lenders, but legally they are prohibited from contacting the homeowners.
Third, the incentives of servicers are not aligned with those of lenders:
Why are the master servicers not doing what an old-fashioned banker would do? Because a servicer has very different incentives. Most anything a master servicer does to rework a loan will create big winners but also some big losers among the security holders to whom the servicer holds equal duties... By allowing foreclosures to proceed without much intervention, they avoid potentially huge lawsuits by injured security holders. 
On top of the legal risks, reworking loans can be costly for master servicers. They need to document what new monthly payment a homeowner can afford and assess fluctuating property values to determine whether foreclosing would yield more or less than reworking. It’s costly just to track down the distressed homeowners, who are understandably inclined to ignore calls from master servicers that they sense may be all too eager to foreclose.
And finally, the proposed solution:
To solve this problem, we propose legislation that moves the reworking function from the paralyzed master servicers and transfers it to community-based, government-appointed trustees. These trustees would be given no information about which securities are derived from which mortgages, or how those securities would be affected by the reworking and foreclosure decisions they make. 
Instead of worrying about which securities might be harmed, the blind trustees would consider, loan by loan, whether a reworking would bring in more money than a foreclosure... The trustees would be hired from the ranks of community bankers, and thus have the expertise the judiciary lacks...  
Our plan does not require that the loans be reassembled from the securities in which they are now divided, nor does it require the buying up of any loans or securities. It does require the transfer of the servicers’ duty to rework loans to government trustees. It requires that restrictions in some servicing contracts, like those on how many loans can be reworked in each pool, be eliminated when the duty to rework is transferred to the trustees... Once the trustees have examined the loans — leaving some unchanged, reworking others and recommending foreclosure on the rest — they would pass those decisions to the government clearing house for transmittal back to the appropriate servicers... 
Our plan would keep many more Americans in their homes, and put government money into local communities where it would make a difference. By clarifying the true value of each loan, it would also help clarify the value of securities associated with those mortgages, enabling investors to trade them again. Most important, our plan would help stabilize housing prices.
As with any proposal dealing with a problem of such magnitude and complexity, there are downsides to this. Anticipation of modification could induce borrowers who are underwater but current with their payments to default strategically in order to secure reductions in principal. Such policy-induced default could be mitigated by ensuring that only truly distressed households qualify. But since current financial distress is in part a reflection of past decisions regarding consumption and saving, some are sure to find the distributional effects of the policy galling. Nevertheless, it seems that something along these lines needs to be attempted if we are to get back to pre-recession levels of resource utilization anytime soon. And the urgency of action does seem to be getting renewed attention.

The bottom line, I think, is this: too much faith in the traditional tools of macroeconomic stabilization under current conditions is misplaced. One can conceive of dramatically different approaches to monetary policy, such as direct transfers to households, but these would surely face insurmountable legal and political obstacles. It is essential, therefore, that macroeconomic stabilization be supplemented by policies that are microeconomically detailed, fine grained, and directly confront the problem of balance sheet repair. Otherwise this enormously costly fight over claims will continue to impede the use of our resources for many more years to come.

Sunday, June 24, 2012

Reciprocal Fear and the Castle Doctrine Laws

In his timeless classic The Strategy of Conflict, Thomas Schelling began a chapter on the "reciprocal fear of surprise attack" as follows:
If I go downstairs to investigate a noise at night, with a gun in my hand, and find myself face to face with a burglar who has a gun in his hand, there is a danger of an outcome that neither of us desires. Even if he prefers to just leave quietly, and I wish him to, there is danger that he may think I want to shoot, and shoot first. Worse, there is danger that he may think that I think he wants to shoot. Or he may think that I think he thinks I want to shoot. And so on. "Self-Defense" is ambiguous, when one is only trying to preclude being shot in self-defense.
This effect is empirically important, and is part of the reason why homicide rates vary so greatly across otherwise similar locations, and can change so sharply over time at a given location. In our attempt to understand why the Newark homicide rate doubled in just six years from 2000 to 2006 while the national rate remained essentially constant, Dan O'Flaherty and I found a substantial number of homicides to be the outcome of escalating disputes between strangers or acquaintances, often over seemingly trivial matters. High rates of homicide make for a tense and fearful environment within which the preemptive motive for killing starts to loom large, and this itself reinforces the cycle of tension, fear, and continued killing. Incremental reductions in homicide under such circumstances are unlikely to be feasible, but sudden large scale reductions that transform the environment and break the cycle can sometimes be attained. Similar effects arise with international arms races.

In the jargon of economics, homicide is characterized by strategic complementarity: any increase in the willingness of one set of individuals to kill will be amplified by increases in the willingness of others to kill preemptively, and so on, in an expectations driven cascade. Any change in fundamentals can set this process off, such as a breakdown in law enforcement, easier availability of firearms, or increases in the value of a contested resource.
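A minimal fixed-point sketch shows how complementarity amplifies a change in fundamentals. The linear best-response rule here is invented purely for illustration; it is not a model from our Newark paper:

```python
def equilibrium(fundamental, slope=0.6, steps=200):
    # Iterate the best response until the propensity to act preemptively
    # settles down; a positive slope encodes strategic complementarity:
    # each person's propensity rises with the propensity expected in others.
    x = 0.0
    for _ in range(steps):
        x = min(1.0, fundamental + slope * x)
    return x
```

With a slope of 0.6, a shift in fundamentals of 0.1 raises the equilibrium propensity by 0.25: the direct effect is multiplied by 1/(1 - 0.6) = 2.5 through the expectations-driven cascade.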

The logic of strategic complementarity implies that a broadening of the notion of justifiable homicide, in an attempt to benefit potential victims of crime, can have tragic and entirely counterproductive effects. Florida's 2005 stand-your-ground law is an example of this, and more than twenty other states have adopted similar legislation in its wake. These are sometimes called castle doctrine laws, since they extend to other locations the principle that one does not have a duty to retreat in one's own home (or "castle").

Enough time has elapsed since the passage of these laws for an empirical analysis of their effects to be conducted, and a recent paper by Cheng and Hoekstra does exactly this. Determining the causal effects of any change in the legal environment is always a tricky business. The authors tackle the problem by grouping states into those that adopted such laws and those that did not, and comparing within-state changes in outcomes across the two groups of states (the so-called difference-in-differences identification strategy). Their findings are striking:
Results indicate that the prospect of facing additional self-defense does not deter crime. Specifically, we find no evidence of deterrence effects on burglary, robbery, or aggravated assault. Moreover, our estimates are sufficiently precise as to rule out meaningful deterrence effects. 
In contrast, we find significant evidence that the laws increase homicides... the laws increase murder and manslaughter by a statistically significant 7 to 9 percent, which translates into an additional 500 to 700 homicides per year nationally across the states that adopted castle doctrine. Thus, by lowering the expected costs associated with using lethal force, castle doctrine laws induce more of it... murder alone is increased by a statistically significant 6 to 11 percent. This is important because murder excludes non-negligent manslaughter classifications that one might think are used more frequently in self-defense cases. But regardless of how one interprets increases from various classifications, it is clear that the primary effect of strengthening self-defense law is to increase homicide.
These are statistical findings and refer to aggregate effects; no individual homicide can be attributed with certainty to a change in the legal environment, not even the one killing that has brought castle doctrine laws into national focus. Nevertheless, we now have compelling evidence that the adoption of such laws has led directly to several hundred deaths annually nationwide, with negligible deterrence effects on other crimes. While the latter finding may be surprising, the former should have been entirely predictable.
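In its simplest two-group, two-period form, the difference-in-differences comparison underlying such estimates reduces to a single subtraction. The rates below are invented, not taken from the Cheng and Hoekstra paper:

```python
def did(treat_pre, treat_post, control_pre, control_post):
    # Pre-to-post change in adopting states, net of the change in
    # non-adopting states over the same period.
    return (treat_post - treat_pre) - (control_post - control_pre)

# Invented homicide rates per 100,000, before and after adoption:
# adopters rise from 5.0 to 5.5 while non-adopters drift to 5.1,
# implying an estimated effect of 0.4 per 100,000.
effect = did(treat_pre=5.0, treat_post=5.5,
             control_pre=5.0, control_post=5.1)
```

The actual paper estimates a regression version of this comparison with many states, years, and controls, but the identifying logic is the same: common trends in the absence of the law, so that the non-adopting states reveal what would have happened in the adopting ones.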

Tuesday, June 12, 2012

Elinor Ostrom, 1933-2012

The political scientist Elinor Ostrom, co-recipient of the 2009 Nobel Prize in Economics, died this morning at the age of 78. I met her just once, after a talk she gave at Columbia sometime in the 1990s. It was in a very interesting seminar series organized by Dick Nelson if I recall correctly.

I was a great admirer of Ostrom's research on common pool resources, and tried to interpret some of her insights from an evolutionary perspective in some joint work with E. Somanathan a while ago. I've written about her here on a couple of occasions, and once reviewed a book that was largely a celebration of her vision (she had a hand in no less than six chapters).

Here are some extracts from a post written soon after the Nobel announcement:
Ostrom’s extensive research on local governance has shattered the myth of inevitability surrounding the “tragedy of the commons” and curtailed the uncritical application of the free-rider hypothesis to collective action problems. Prior to her work it was widely believed that scarce natural resources such as forests and fisheries would be wastefully used and degraded or exhausted under common ownership, and therefore had to be either state owned or held as private property in order to be efficiently managed. Ostrom demonstrated that self-governance was possible when a group of users had collective rights to the resource, including the right to exclude outsiders, and the capacity to enforce rules and norms through a system of decentralized monitoring and sanctions. This is clearly a finding of considerable practical significance. 
As importantly, the award recognized an approach to research that is practically extinct in contemporary economics. Ostrom developed her ideas by reading and generalizing from a vast number of case studies of forests, fisheries, groundwater basins, irrigation systems, and pastures. Her work is rich in institutional detail and interdisciplinary to the core. She used game theoretic models and laboratory experiments to refine her ideas, but historical and institutional analysis was central to this effort. She deviated from standard economic assumptions about rationality and self-interest when she felt that such assumptions were at variance with observed behavior, and did so long before behavioral economics was in fashion... 
There is no doubt that her research has dramatically transformed our thinking about the feasibility and efficiency of common property regimes. In addition, it serves as a reminder that her eclectic and interdisciplinary approach to social science can be enormously fruitful. In making this selection at this time, it is conceivable that the Nobel Committee is sending a message that methodological pluralism is something our discipline would do well to restore, preserve and foster.
And from the book review:
Although several distinguished scholars have been affiliated with the workshop over the years, Ostrom remains its leading light and creative force. It is fitting, therefore, that the book concludes with her 1988 Presidential Address to the American Political Science Association. In this chapter, she identifies serious shortcomings in prevailing theories of collective action. Approaches based on the hypothesis of unbounded rationality and material self-interest often predict a “tragedy of the commons” and prescribe either privatization of common property or its appropriation by the state. Policies based on such theories, in her view, “have been subject to major failure and have exacerbated the very problems they were intended to ameliorate”. What is required, instead, is an approach to collective action that places reciprocity, reputation and trust at its core. Any such theory must take into account our evolved capacity to learn norms of reciprocity, and must incorporate a theory of boundedly rational and moral behavior. It is only in such terms that the effects of communication on behavior can be understood. Communication is effective in fostering cooperation, in Ostrom’s view, because it allows subjects to build trust, form group identities, reinforce reciprocity norms, and establish mutual commitment. The daunting task of building rigorous models of economic and political choice in which reciprocity and trust play a meaningful role is only just beginning... 
The key conclusions drawn by the contributors are nuanced and carefully qualified, but certain policy implications do emerge from the analysis. The most important of these is that local communities can often find autonomous and effective solutions to collective-action problems when markets and states fail to do so. Such institutions of self-governance are fragile: large-scale interventions, even when well-intentioned, can disrupt and damage local governance structures, often resulting in unanticipated welfare losses. When a history of successful community resource management is in evidence, significant interventions should be made with caution. Once destroyed, evolved institutions are every bit as difficult to reconstruct as natural ecosystems, and a strong case can be made for conserving those that achieve acceptable levels of efficiency and equity. By ignoring the possibility of self-governance, one puts too much faith in the benevolence of a national government that is too large for local problems and too small for global ones. Moreover, as Ostrom points out in the concluding chapter, by teaching successive generations that the solution to collective-action problems lies either in the market or in the state, “we may be creating the very conditions that undermine our democratic way of life”. The stakes could not be higher.
Earlier tributes to Ostrom from Vernon Smith and Paul Romer are well worth revisiting.

Friday, April 27, 2012

On Equilibrium, Disequilibrium, and Rational Expectations

There's been some animated discussion recently on equilibrium analysis in economics, starting with a provocative post by Noah Smith, vigorous responses by Roger Farmer and JW Mason, and some very lively comment threads (see especially the smart and accurate points made by Keshav on the latter posts). This is a topic of particular interest to me, and the debate gives me a welcome opportunity to resume blogging after an unusually lengthy pause.

As Farmer's post makes clear, equilibrium in an intertemporal model requires not only that individuals make plans that are optimal conditional on their beliefs about the future, but also that these plans are mutually consistent. The subjective probability distributions on the basis of which individuals make decisions are presumed to coincide with the objective distribution to which these decisions collectively give rise. This assumption is somewhat obscured by the representative agent construct, which gives macroeconomics the appearance of a decision-theoretic exercise. But the assumption is there nonetheless, hidden in plain sight as it were. Large scale asset revaluations and financial crises, from this perspective, arise only in response to exogenous shocks and not because many individuals come to realize that they have made plans that cannot possibly all be implemented.

Farmer points out, quite correctly, that rational expectations models with multiple equilibrium paths are capable of explaining a much broader range of phenomena than those possessed of a unique equilibrium. His own work demonstrates the truth of this claim: he has managed to develop models of crisis and depression without deviating from the methodology of rational expectations. The equilibrium approach, used flexibly with allowances for indeterminacy of equilibrium paths, is more versatile than many critics imagine.

Nevertheless, there are many routine economic transactions that cannot be reconciled with the hypothesis that individual plans are mutually consistent. For instance, it is commonly argued that hedging by one party usually requires speculation by another, since mutually offsetting exposures are rare. But speculation by one party does not require hedging by another, and an enormous amount of trading activity in markets for currencies, commodities, stock options and credit derivatives involves speculation by both parties to each contract. The same applies on a smaller scale to positions taken in prediction markets such as Intrade. In such transactions, both parties are trading based on a price view, and these views are inconsistent by definition. If one party is buying low and planning to sell high, their counterparty is doing just the opposite. At most one of the parties can have subjective beliefs that are consistent with the objective probability distribution to which their actions (combined with the actions of others) give rise.

If it were not for fundamental belief heterogeneity of this kind, there could be no speculation. This is a consequence of Aumann's agreement theorem, which states that while individuals with different information can disagree, they cannot agree to disagree as long as their beliefs are derived from a common prior. That is, they cannot persist in disagreeing if their posterior beliefs are themselves common knowledge. The intuition for this is quite straightforward: your willingness to trade with me at current prices reveals that you have different information, which should cause me to revise my beliefs and alter my price view, and should cause you to do the same. Our willingness to transact with each other causes us both to shrink from the transaction if our beliefs are derived from a common prior.
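
Under a common prior, disagreement does not merely shrink; it vanishes entirely once posteriors become common knowledge. The sketch below simulates the announcement protocol of Geanakoplos and Polemarchakis, a constructive companion to Aumann's theorem: two agents share a uniform prior but hold different information partitions, take turns announcing their posterior probability of an event, and each announcement publicly rules out states until the posteriors coincide. The state space, partitions, and event are toy assumptions chosen purely for illustration.

```python
from fractions import Fraction

# Toy setup: four equally likely states, an event E, and two agents whose
# private information is a partition of the state space.
STATES = {1, 2, 3, 4}
EVENT = {1, 4}
PARTITIONS = {
    "A": [{1, 2}, {3, 4}],
    "B": [{1, 2, 3}, {4}],
}

def cell_of(partition, state):
    """The partition cell containing the given state."""
    return next(c for c in partition if state in c)

def posterior(cell):
    """P(EVENT | cell) under a uniform common prior."""
    return Fraction(len(cell & EVENT), len(cell))

def dialogue(true_state, rounds=10):
    """Agents alternately announce their posterior of EVENT. Each
    announcement publicly rules out the states at which the speaker
    would have announced something else."""
    public = set(STATES)          # states consistent with all announcements
    last = {}
    history = []
    for i in range(rounds):
        name = "A" if i % 2 == 0 else "B"
        part = PARTITIONS[name]
        q = posterior(cell_of(part, true_state) & public)
        history.append((name, q))
        # Listeners discard states at which the speaker would not have said q.
        public = {s for s in public
                  if posterior(cell_of(part, s) & public) == q}
        last[name] = q
        if len(last) == 2 and last["A"] == last["B"]:
            break                 # posteriors are common knowledge and equal
    return history

for name, q in dialogue(true_state=1):
    print(f"{name} announces P(E) = {q}")
```

In this toy example the agents initially announce 1/2 and 1/3, but after two further rounds of announcements both settle at 1/2: they cannot agree to disagree.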

Hence accounting for speculation requires that one depart, at a minimum, from the common prior assumption. But allowing for heterogeneous priors immediately implies mutual inconsistency of individual plans, and there can be no identification of subjective with objective probability distributions.

The development of models that allow for departures from equilibrium expectations is now an active area of research. A conference at Columbia last year (with Farmer in attendance) was devoted entirely to this issue, and Mike Woodford's reply to John Kay on the INET blog is quite explicit about the need for movement in this direction:
The macroeconomics of the future... will have to go beyond conventional late-twentieth-century methodology... by making the formation and revision of expectations an object of analysis in its own right, rather than treating this as something that should already be uniquely determined once the other elements of an economic model (specifications of preferences, technology, market structure, and government policies) have been settled.
There is a growing literature on heterogeneous priors that I think could serve as a starting point for the development of such an alternative. However, it is not enough to simply allow for belief heterogeneity; one must also confront the question of how the distribution of (mutually inconsistent) beliefs changes over time. To a first approximation, I would argue that the belief distribution evolves based on differential profitability: successful beliefs proliferate, regardless of whether those holding them were broadly correct or just extremely fortunate. This has to be combined with the possibility that some individuals will invest considerable time and effort and bear significant risk to profit from large mismatches between the existing belief distribution and the objective distribution to which it gives rise. Such contrarian actions may be spectacular successes or miserable failures, but must be accounted for in any theory of expectations that is rich enough to be worthy of the name.
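
As a minimal sketch of this selection dynamic (with all parameters invented for illustration, and Kelly betting assumed purely for concreteness), consider two belief types wagering on a repeated binary event, where a belief's weight in the population is the wealth share of its holders:

```python
# Stylized selection among beliefs. Each type bets a Kelly fraction of its
# wealth on "up" every period at even odds; a belief's population weight is
# the wealth share of its holders. All numbers are illustrative.

def kelly_fraction(belief):
    # Kelly bet on an even-odds binary outcome: f* = 2p - 1 (bet only if p > 1/2).
    return max(2.0 * belief - 1.0, 0.0)

def evolve(beliefs, wealth, outcomes):
    """Push each type's wealth through a sequence of up (1) / down (0)
    moves and return the resulting wealth shares."""
    w = list(wealth)
    for up in outcomes:
        for i, p in enumerate(beliefs):
            f = kelly_fraction(p)
            w[i] *= (1.0 + f) if up else (1.0 - f)
    total = sum(w)
    return [x / total for x in w]

# A "realist" who knows the event is 50-50 (and so never bets)
# versus an "optimist" who believes p = 0.7.
beliefs = [0.5, 0.7]
lucky = evolve(beliefs, [1.0, 1.0], outcomes=[1, 1, 1, 1, 1])
print(lucky)   # the optimist's wealth share rises on a lucky run
```

On a lucky run of up-moves the optimist's wealth share climbs above eighty percent, even though the realist's belief is the accurate one; a run of down-moves has the opposite effect. Selection of this kind rewards realized profitability, not correctness.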

---

Some of the issues discussed here are explored at greater length in an essay on market ecology that I presented at a symposium in honor of Duncan Foley last week. Duncan was among the first to see that the rational expectations hypothesis implicitly entailed the assumption of complete futures markets, and would therefore be difficult to "reconcile with the recurring phenomena of financial crisis and asset revaluation that play so large a role in actual capitalist economic life." 

Friday, February 17, 2012

The Countrywide Complaint and the Capitalization of Trust

In December 2011 the Department of Justice filed suit against Countrywide Financial Corporation alleging discrimination on the basis of race and national origin in its mortgage lending operations over the period 2004-2008. The result was a record $335 million settlement with Bank of America, which had acquired Countrywide in 2008.

The complaint was based on a review of "internal company documents and non-public loan-level data" on more than 2.5 million loans and is worth reading in full. In addition to providing evidence of disparate impact, it describes in detail the set of incentive structures under which loan officers and mortgage brokers were operating. These compensation schemes left considerable room for individual discretion in the setting of fees and rates, and for steering borrowers towards particular loan products. The manner in which this discretion was exercised had significant effects on overall levels of compensation, resulting in strong incentives for brokers and loan officers to act against the interests of borrowers.

But these incentives were formally neutral with respect to race and national origin, which raises the question of why they led to such disparate impact. In the standard economic theory of price discrimination, it is the most affluent customers, or the ones who value the product the most, who pay the highest prices. But in the case of mortgage loans it appears that the highest prices were paid by those who could least afford to do so. One possible reason for this is that this set of borrowers was poorly informed about market rates and alternatives. But this alone is not a satisfactory explanation, because such information can be sought if one considers it to be valuable. It may not be sought, however, if a borrower trusts his broker to be providing the best available terms. I argue below, based on a very interesting paper by Carolina Reid of the San Francisco Fed, that variations across communities in the level of such trust were a key factor in explaining why the incentive structures in place gave rise to such disparate impact.

But first, the complaint:
As a result of Countrywide's policies and practices, more than 200,000 Hispanic and African-American borrowers paid Countrywide higher loan fees and costs for their home mortgages than non-Hispanic White borrowers, not based on their creditworthiness or other objective criteria related to borrower risk, but because of their race or national origin. 
Additionally... Hispanic and African-American borrowers were placed into subprime loans when similarly-qualified non-Hispanic White borrowers received prime loans. Between 2004 and 2007, more than 10,000 Hispanic and African-American wholesale borrowers received subprime loans, with adverse terms and conditions such as high interest rates, excessive fees, prepayment penalties, and unavoidable future payment hikes, rather than prime loans... not based on their creditworthiness or other objective criteria related to borrower risk, but because of their race or national origin.
But what, exactly, were these policies and practices, and how did they give rise to the alleged disparate impact? The complaint focuses on the discretion given to loan officers and mortgage brokers, and the manner in which their compensation was determined. The process for retail loans was as follows:
Countrywide utilized a two-tier decision-making process to set the interest rates and other terms and conditions of retail loans it originated. The first step involved setting the credit risk-based prices on a daily basis... including interest rates, loan origination fees, and discount points. In this step, Countrywide accounted for numerous objective credit-related characteristics of applicants by setting a variety of prices for each of the different loan products that reflected its assessment of individual applicant creditworthiness, as well as the current market rate of interest and the price it could obtain from the sale of such a loan to investors. These prices, referred to as par or base prices, were communicated through rate sheets... Individual loan applicants did not have access to these rate sheets. 
As the second step in determining the final price it would charge an applicant for a loan, Countrywide allowed its retail mortgage loan officers... to increase the loan price charged to borrowers over the rate sheet prices set by Countrywide, up to certain caps; this pricing increase was labeled an overage. Countrywide also allowed these same employees to decrease the loan price charged to borrowers below the stated rate sheet prices; this pricing decrease was labeled a shortage. Countrywide further allowed those employees to alter the standard fees it charged in connection with processing a loan application and the standard allocation of closing costs between Countrywide and the borrower. Employees made these pricing adjustments in a subjective manner, unrelated to factors associated with an individual applicant's credit risk... 
During the time period at issue, Countrywide loan officer compensation was affected by the loan officers' decisions with respect to pricing overages and shortages, as well as other factors, such as volume of loans originated. Loan officers could obtain increased compensation for overages and could have their total compensation potentially decreased for shortages. Countrywide's compensation policy thus provided an incentive for its loan officers in making pricing adjustments to maximize overages and, when offering shortages, to minimize their amount.
Very similar incentives were in place for mortgage brokers who brought loan applications to Countrywide for origination and funding through its wholesale channel. As in the case of retail loans, rate sheets were made available to brokers on a daily basis with prices specified for different loan products based on borrower characteristics. Brokers were not required to "inform a prospective borrower of all available loan products for which he or she qualified, of the lowest interest rates and fees for a specific loan product, or of specific loan products best designed to serve the interests expressed by the applicant." In fact, the manner in which broker compensation was determined created incentives to actively conceal such information, since they were paid "based on the extent to which the interest rate charged on a loan exceeded the base, or par, rate for that loan to a borrower with particular credit risk characteristics fixed by Countrywide and listed on its rate sheets."
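
The arithmetic behind these incentives is simple to illustrate. The sketch below uses entirely invented numbers: the par rate, the officer's share of an overage, and the yield-spread-premium schedule are assumptions for exposition, not figures from the complaint.

```python
# Illustrative loan-pricing arithmetic. Every number below (par rate,
# overage-to-points conversion, splits, YSP schedule) is an assumption
# made for exposition, not a figure taken from the complaint.

PAR_RATE = 0.060   # assumed rate-sheet (par) rate for this risk tier

def retail_overage_commission(quoted_rate, loan_amount,
                              points_per_rate_pct=1.0, officer_split=0.3):
    """Loan officer's cut of an overage: the markup over the par rate,
    converted to discount points and split between firm and officer."""
    overage_pct = max(quoted_rate - PAR_RATE, 0.0) * 100
    overage_points = overage_pct * points_per_rate_pct
    return loan_amount * overage_points / 100 * officer_split

def wholesale_ysp(quoted_rate, loan_amount, ysp_points_per_rate_pct=0.5):
    """Broker's yield spread premium: paid by the lender in proportion
    to how far the quoted rate exceeds par."""
    overage_pct = max(quoted_rate - PAR_RATE, 0.0) * 100
    return loan_amount * overage_pct * ysp_points_per_rate_pct / 100

loan = 200_000
for rate in (0.060, 0.065, 0.070):
    print(f"rate {rate:.3f}: officer ${retail_overage_commission(rate, loan):,.0f}, "
          f"broker YSP ${wholesale_ysp(rate, loan):,.0f}")
```

On a $200,000 loan, each percentage point quoted above par is worth $600 to the loan officer or $1,000 to the broker under these assumed schedules, and the cost is borne by the borrower through extra points or a permanently higher rate.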

Aside from variation across borrowers in rates and fees for a given product, there was also variation in the types of products towards which borrowers were steered:
It was Countrywide's business practice to allow its mortgage brokers and employees to place a wholesale loan applicant in a subprime loan even when the applicant qualified for a prime loan according to Countrywide's underwriting practices... These underwriting guidelines were intended to be used to determine whether a loan applicant qualified for a prime loan product, an Alt-A loan product, a subprime loan product, or for no Countrywide loan product at all.  Countrywide's compensation policy and practice created a financial incentive for mortgage brokers to submit subprime loans to Countrywide for origination rather than any other type of residential loan product.
The incentives to increase overages, reduce shortages, and steer borrowers towards subprime products even when qualified for prime loans clearly operated against the interests of borrowers. Coupled with the incentives tied to loan volume, this compensation scheme encouraged brokers and loan officers to set terms that varied systematically across borrowers. Applicants who were more sophisticated and knowledgeable, and would walk away from riskier or more expensive products, received better terms than those who were more naive. And those who were suspicious of their brokers and aware of the incentives under which they were operating secured better terms than those who were more trusting.

Hence the disparities in rates and fees identified in the complaint could, in principle, have arisen from differences across social groups in the degree to which they trusted those with whom they were transacting. Carolina Reid's paper provides some evidence for this interpretation. Reid argues that "while financial services have gone global... obtaining a mortgage is still a very local process, embedded in local context and social relations." In order to better understand this process, she interviewed homeowners in Oakland and Stockton, two areas that experienced very high rates of subprime lending prior to the crisis and correspondingly elevated rates of foreclosure subsequently. These were also areas in which a disproportionately large share of originations were mediated by mortgage brokers. Here is what she found:
One of the strong themes that emerged from the interviews was the extent to which respondents of color expressed their desire to work with a broker from their own community or background... In this sense, the interviews support Granovetter’s hypothesis that individuals are “less interested in general reputations than in whether a particular other may be expected to deal honestly with them—mainly a function of whether they or their own contacts have had satisfactory past dealings with the other.” (Granovetter 1985, p. 491) In numerous interviews, borrowers said that they turned to their social networks and relations in the neighborhood to identify a local mortgage broker who would be willing to “work with someone like me.” Part of this was driven by a lack of trust in traditional lenders, and several respondents in Oakland noted a historical distrust of banks in the community... More frequently, however, respondents noted that they didn’t think they could obtain or qualify for a loan without help from someone who was ‘like them’ but who knew the system... 
Respondents listed a wide array of ways that they received recommendations for both real estate agents and mortgage brokers: family, neighbors on the block, the local church, their jobs, the park, and parents at their kids’ school...  
The desire to be served by someone from the community was not lost on mortgage brokers, who during this time period actively created the impression that they were part of the community to help promote their business. Strategies ranged from relying on customer referrals to generate new business, to frequenting local churches, social gatherings, and businesses and by adopting local social conventions... The interviews pointed to how the respondents felt immediately connected to these brokers, “he understood my situation”, “he told me that he understands how difficult the paperwork is, especially when you have lots of jobs,” “I liked his ideas for how to brighten the kitchen,” “she seemed to understand why we wanted to move from SF, buy a house, provide for a yard for the kids, a good school.” 
In theory, mortgage brokers are well‐placed to serve as a “bridging tie” and “trusted advisor”, since they have both experience with the lending process and access to information about mortgage products and prices. Empirical research studies, however, have revealed that during the subprime boom, yield spread premiums coupled with a push for a greater volume of loan originations provided a financial incentive for brokers to work against the interests of the borrower (e.g. Ernst, Bocian and Li 2008). In addition, since there was no statutory employer‐employee relationship between lending institutions and brokers, there were few legal protections to ensure that brokers provide borrowers with fair and balanced information. This stands in contrast to the “trust” that social relations engender... In both Stockton and Oakland, respondents did not seem to be aware of the potential for perverse incentives on the part of brokers, and instead trusted them fully to act in their best interests. 
It is ironic that distrust of traditional lending institutions such as commercial banks led some borrowers to seek out brokers from their own communities whom they felt they could trust. But these brokers were operating under high-powered incentives to inflate rates and fees and guide borrowers towards subprime products even when they were eligible for cheaper alternatives. The trust that was placed in the brokers allowed them greater flexibility to respond to these incentives and left borrowers worse off than they would have been if they had been more suspicious or better aware of the incentive structures in place.

Viewed in this manner, the subprime saga has some broader implications. From the point of view of a company operating in multiple local markets with a diverse customer base, the strategy of giving local employees or contractors the discretion to adjust prices can be very profitable. This is especially so if these employees appear trustworthy to their customers, but are not in fact deserving of such trust. As Groucho Marx is reputed to have said:
The secret of life is honesty and fair-dealing. If you can fake that, you've got it made.
For products involving frequent repeat purchases by the same customer, reputation effects and competition can limit the degree of price discrimination. But the purchase of a home is an infrequent transaction for most people, and the complexity of the loan product precludes easy comparison with alternatives on offer. Trust then becomes a key determinant of pricing and transaction volume, especially when strong and hidden incentives for the betrayal of trust are in place.

Betrayal also leads to the erosion of trust over time. It could be argued that trust is one of our most valuable public goods, substantially lowering the costs of transacting. In the complete absence of trust, the volume of resources that would need to be devoted to monitoring would be prohibitively large and many organizations and markets would simply not exist. Trust also comes naturally to most of us, based on simple cues such as those revealed in Reid's interviews. High-powered incentives to secure and then betray such trust are therefore costly not just to the immediate victims, but also to the broader community. This may be one of the less visible consequences of the subprime crisis.