Saturday, June 30, 2012

Fighting over Claims

This brief segment from a recent speech by Joe Stiglitz sums up very neatly the nature of our current economic predicament (emphasis added):
We should realize that the resources in our economy... today are the same as they were five years ago. We have the same human capital, the same physical capital, the same natural capital, the same knowledge... the same creativity... we have all these strengths, they haven't disappeared. What has happened is, we're having a fight over claims, claims to resources. We've created more liabilities... but these are just paper. Liabilities are claims on these resources. But the resources are there. And the fight over the claims is interfering with our use of the resources.
I think this is a very useful way to think about the potential effectiveness under current conditions of various policy proposals, including conventional fiscal and monetary stabilization policies.

Part of the reason for our anemic and fitful recovery is that contested claims, especially in the housing market, continue to be settled in a chaotic and extremely wasteful manner. Recovery from subprime foreclosures is typically a small fraction of outstanding principal, and properly calibrated principal write-downs can often benefit both borrowers and lenders. Modifications that would occur routinely under the traditional bilateral model of lending are much harder to implement when lenders are holders of complex structured claims on the revenues generated by mortgage payments. Direct contact between lenders and borrowers is neither legal nor practicable in this case, and the power to make modifications lies instead with servicers. But servicer incentives are not properly aligned with those of the lenders on whose behalf they collect and process payments. The result is foreclosure even when modification would be much less destructive of resources.
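To see the arithmetic behind this claim, here is a minimal sketch with hypothetical numbers; the 25 percent recovery figure echoes the bond-market pricing cited in the Geanakoplos-Koniak proposal discussed below, but the loan amounts are invented for illustration:

```python
# Hypothetical numbers illustrating why a principal write-down can leave
# both borrower and lender better off when foreclosure recovery is low.

principal = 200_000
recovery_rate = 0.25                          # lender's expected recovery in foreclosure
foreclosure_value = recovery_rate * principal # 50,000 to the lender

written_down = 120_000                        # reduced balance the borrower can service

# If the borrower repays the reduced balance, the lender recovers more
# than under foreclosure, and the borrower keeps the home with equity.
assert written_down > foreclosure_value
```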

Despite some indications that home values are starting to rise again, the steady flow of defaults and foreclosures shows no sign of abating. Any policy that stands a chance of getting us back to pre-recession levels of resource utilization has to result in the quick and orderly settlement of these claims, with or without modification of the original contractual terms. And it's not clear to me that the blunt instruments of conventional stabilization policy can accomplish this.

Consider monetary policy, for instance. The clamor for more aggressive action by the Fed has recently become deafening, with a long and distinguished line of advocates (see, for instance, recent posts by Miles Kimball, Joseph Gagnon, Ryan Avent, Scott Sumner, Paul Krugman, and Tim Duy). While the various proposals differ with respect to details, the idea seems to be the following: (i) the Fed has the capacity to increase inflation and nominal GDP should it choose to do so, (ii) this can be accomplished by asset purchases on a large enough scale, and (iii) doing this would increase not only inflation and nominal GDP but also output and employment.

It's the third part of this argument with which I have some difficulty, because I don't see how it would help resolve the fight over claims that is crippling our recovery. Higher inflation can certainly reduce the real value of outstanding debt in an accounting sense, but this doesn't mean that distressed borrowers will be able to meet their obligations at the originally contracted terms. In order for them to do so, it is necessary that their nominal income rises, not just nominal income in the aggregate. And monetary policy via asset purchases would seem to put money disproportionately in the pockets of existing asset holders, who are more likely to be creditors than debtors. Put differently, while the Fed has the capacity to raise nominal income, it does not have much control over the manner in which this increment is distributed across the population. And the distribution matters.
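To make the distributional point concrete, here is a toy calculation; the income levels and the 90/10 split of the increment are invented purely for illustration:

```python
# Hypothetical distributional arithmetic: aggregate nominal income rises,
# but the increment accrues mostly to asset holders, so a distressed
# debtor's capacity to service a fixed nominal debt barely improves.

incomes = {"creditors": 100_000, "debtors": 40_000}
aggregate = sum(incomes.values())            # 140,000

increment = 0.05 * aggregate                 # 5% rise in aggregate nominal income
to_creditors, to_debtors = 0.9, 0.1          # assumed split of the increment

debtor_growth = (to_debtors * increment) / incomes["debtors"]
assert abs(debtor_growth - 0.0175) < 1e-9    # under 2%, far below the aggregate 5%
```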

Similar issues arise with inflation. Inflation is just the growth rate of an index number, a weighted average of prices for a broad range of goods and services. The Fed can certainly raise the growth rate of this average, but has virtually no control over its individual components. That is, it cannot increase the inflation rate without simultaneously affecting relative prices. For instance, purchases of assets that drive down long term interest rates will lead to portfolio shifts and an increase in the price of commodities, which are now an actively traded asset class. This in turn will raise input costs for some firms more than others, and these cost increases will affect wages and prices to varying degrees depending on competitive conditions. As Dan Alpert has argued, expansionary monetary policy under these conditions could even "collapse economic activity, as limited per capita wages are shunted to oil and food, rather than to more expansionary forms of consumption."
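The index-number point can be illustrated with a short calculation; the weights and component inflation rates below are invented for the purpose, not actual CPI figures:

```python
# Illustrative only: headline inflation is a weighted average of component
# price changes, so the same aggregate number can conceal very different
# relative-price movements.

weights = {"food": 0.15, "energy": 0.10, "core": 0.75}

def headline(rates):
    """Weighted-average (index-number) inflation rate."""
    return sum(weights[k] * rates[k] for k in weights)

# Scenario A: uniform 2% inflation across all components.
uniform = {"food": 0.02, "energy": 0.02, "core": 0.02}

# Scenario B: a commodity-driven episode engineered to have the same 2%
# headline rate, which forces core inflation to be negative.
core_b = (0.02 - weights["food"] * 0.08 - weights["energy"] * 0.12) / weights["core"]
commodity = {"food": 0.08, "energy": 0.12, "core": core_b}

assert abs(headline(uniform) - 0.02) < 1e-12
assert abs(headline(commodity) - 0.02) < 1e-12
assert core_b < 0   # relative prices move even though the headline is unchanged
```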

I don't mean to suggest that more aggressive action by the Fed is unwarranted or would necessarily be counterproductive, just that it needs to be supplemented by policies designed to secure the rapid and efficient settlement of conflicting claims.

One of the most interesting proposals of this kind was floated back in October 2008 by John Geanakoplos and Susan Koniak, and a second article a few months later expanded on the original. It's worth examining the idea in detail. First, deadweight losses arising from foreclosure are substantial:
For subprime and other non-prime loans, which account for more than half of all foreclosures, the best thing to do for the homeowners and for the bondholders is to write down principal far enough so that each homeowner will have equity in his house and thus an incentive to pay and not default again down the line... there is room to make generous principal reductions, without hurting bondholders and without spending a dime of taxpayer money, because the bond markets expect so little out of foreclosures. Typically, a homeowner fights off eviction for 18 months, making no mortgage or tax payments and no repairs. Abandoned homes are often stripped and vandalized. Foreclosure and reselling expenses are so high the subprime bond market trades now as if it expects only 25 percent back on a loan when there is a foreclosure.
Second, securitization precludes direct contact between borrowers and lenders:
In the old days, a mortgage loan involved only two parties, a borrower and a bank. If the borrower ran into difficulty, it was in the bank’s interest to ease the homeowner’s burden and adjust the terms of the loan. When housing prices fell drastically, bankers renegotiated, helping to stabilize the market. 
The world of securitization changed that, especially for subprime mortgages. There is no longer any equivalent of “the bank” that has an incentive to rework failing loans. The loans are pooled together, and the pooled mortgage payments are divided up among many securities according to complicated rules. A party called a “master servicer” manages the pools of loans. The security holders are effectively the lenders, but legally they are prohibited from contacting the homeowners.
Third, the incentives of servicers are not aligned with those of lenders:
Why are the master servicers not doing what an old-fashioned banker would do? Because a servicer has very different incentives. Most anything a master servicer does to rework a loan will create big winners but also some big losers among the security holders to whom the servicer holds equal duties... By allowing foreclosures to proceed without much intervention, they avoid potentially huge lawsuits by injured security holders. 
On top of the legal risks, reworking loans can be costly for master servicers. They need to document what new monthly payment a homeowner can afford and assess fluctuating property values to determine whether foreclosing would yield more or less than reworking. It’s costly just to track down the distressed homeowners, who are understandably inclined to ignore calls from master servicers that they sense may be all too eager to foreclose.
And finally, the proposed solution:
To solve this problem, we propose legislation that moves the reworking function from the paralyzed master servicers and transfers it to community-based, government-appointed trustees. These trustees would be given no information about which securities are derived from which mortgages, or how those securities would be affected by the reworking and foreclosure decisions they make. 
Instead of worrying about which securities might be harmed, the blind trustees would consider, loan by loan, whether a reworking would bring in more money than a foreclosure... The trustees would be hired from the ranks of community bankers, and thus have the expertise the judiciary lacks...  
Our plan does not require that the loans be reassembled from the securities in which they are now divided, nor does it require the buying up of any loans or securities. It does require the transfer of the servicers’ duty to rework loans to government trustees. It requires that restrictions in some servicing contracts, like those on how many loans can be reworked in each pool, be eliminated when the duty to rework is transferred to the trustees... Once the trustees have examined the loans — leaving some unchanged, reworking others and recommending foreclosure on the rest — they would pass those decisions to the government clearing house for transmittal back to the appropriate servicers... 
Our plan would keep many more Americans in their homes, and put government money into local communities where it would make a difference. By clarifying the true value of each loan, it would also help clarify the value of securities associated with those mortgages, enabling investors to trade them again. Most important, our plan would help stabilize housing prices.
As with any proposal dealing with a problem of such magnitude and complexity, there are downsides to this. Anticipation of modification could induce borrowers who are underwater but current with their payments to default strategically in order to secure reductions in principal. Such policy-induced default could be mitigated by ensuring that only truly distressed households qualify. But since current financial distress is in part a reflection of past decisions regarding consumption and saving, some are sure to find the distributional effects of the policy galling. Nevertheless, it seems that something along these lines needs to be attempted if we are to get back to pre-recession levels of resource utilization anytime soon. And the urgency of action does seem to be getting renewed attention.

The bottom line, I think, is this: too much faith in the traditional tools of macroeconomic stabilization under current conditions is misplaced. One can conceive of dramatically different approaches to monetary policy, such as direct transfers to households, but these would surely face insurmountable legal and political obstacles. It is essential, therefore, that macroeconomic stabilization be supplemented by policies that are microeconomically detailed and fine-grained, and that directly confront the problem of balance sheet repair. Otherwise this enormously costly fight over claims will continue to impede the use of our resources for many more years to come.

Sunday, June 24, 2012

Reciprocal Fear and the Castle Doctrine Laws

In his timeless classic The Strategy of Conflict, Thomas Schelling began a chapter on the "reciprocal fear of surprise attack" as follows:
If I go downstairs to investigate a noise at night, with a gun in my hand, and find myself face to face with a burglar who has a gun in his hand, there is a danger of an outcome that neither of us desires. Even if he prefers to just leave quietly, and I wish him to, there is danger that he may think I want to shoot, and shoot first. Worse, there is danger that he may think that I think he wants to shoot. Or he may think that I think he thinks I want to shoot. And so on. "Self-Defense" is ambiguous, when one is only trying to preclude being shot in self-defense.
This effect is empirically important, and is part of the reason why homicide rates vary so greatly across otherwise similar locations, and can change so sharply over time at a given location. In our attempt to understand why the Newark homicide rate doubled in just six years from 2000 to 2006 while the national rate remained essentially constant, Dan O'Flaherty and I found a substantial number of homicides to be the outcome of escalating disputes between strangers or acquaintances, often over seemingly trivial matters. High rates of homicide make for a tense and fearful environment within which the preemptive motive for killing starts to loom large, and this itself reinforces the cycle of tension, fear, and continued killing. Incremental reductions in homicide under such circumstances are unlikely to be feasible, but sudden large scale reductions that transform the environment and break the cycle can sometimes be attained. Similar effects arise with international arms races.

In the jargon of economics, homicide is characterized by strategic complementarity: any increase in the willingness of one set of individuals to kill will be amplified by increases in the willingness of others to kill preemptively, and so on, in an expectations driven cascade. Any change in fundamentals can set this process off, such as a breakdown in law enforcement, easier availability of firearms, or increases in the value of a contested resource.
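The amplification mechanism can be sketched in a few lines. In this toy model, which is my own illustration with entirely hypothetical parameters, each individual's propensity for preemptive violence is a linear best response to the average propensity of others:

```python
# Toy model of strategic complementarity: each agent's propensity responds
# to the average propensity of others with slope < 1, giving a stable
# fixed point. All parameters are hypothetical.

def equilibrium(baseline, slope, iterations=200):
    """Iterate the best-response map w -> baseline + slope * w to its fixed point."""
    w = 0.0
    for _ in range(iterations):
        w = baseline + slope * w
    return w

direct = 0.01                                 # a small shift in fundamentals
amplified = equilibrium(direct, slope=0.8)

# The equilibrium response is baseline / (1 - slope), i.e. five times the
# direct effect when slope = 0.8: the expectations-driven cascade.
assert abs(amplified - direct / (1 - 0.8)) < 1e-9
```

The same map shows why small changes in fundamentals (policing, gun availability, contested resources) can produce disproportionate swings in equilibrium violence.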

The logic of strategic complementarity implies that a broadening of the notion of justifiable homicide, in an attempt to benefit potential victims of crime, can have tragic and entirely counterproductive effects. Florida's 2005 stand-your-ground law is an example of this, and more than twenty other states have adopted similar legislation in its wake. These are sometimes called castle doctrine laws, since they extend to other locations the principle that one does not have a duty to retreat in one's own home (or "castle").

Enough time has elapsed since the passage of these laws for an empirical analysis of their effects to be conducted, and a recent paper by Cheng and Hoekstra does exactly this. Determining the causal effects of any change in the legal environment is always a tricky business. The authors tackle the problem by grouping states into those that adopted such laws and those that did not, and comparing within-state changes in outcomes across the two groups of states (the so-called difference-in-differences identification strategy). Their findings are striking:
Results indicate that the prospect of facing additional self-defense does not deter crime. Specifically, we find no evidence of deterrence effects on burglary, robbery, or aggravated assault. Moreover, our estimates are sufficiently precise as to rule out meaningful deterrence effects. 
In contrast, we find significant evidence that the laws increase homicides... the laws increase murder and manslaughter by a statistically significant 7 to 9 percent, which translates into an additional 500 to 700 homicides per year nationally across the states that adopted castle doctrine. Thus, by lowering the expected costs associated with using lethal force, castle doctrine laws induce more of it... murder alone is increased by a statistically significant 6 to 11 percent. This is important because murder excludes non-negligent manslaughter classifications that one might think are used more frequently in self-defense cases. But regardless of how one interprets increases from various classifications, it is clear that the primary effect of strengthening self-defense law is to increase homicide.
These are statistical findings and refer to aggregate effects; no individual homicide can be attributed with certainty to a change in the legal environment, not even the one killing that has brought castle doctrine laws into national focus. Nevertheless, we now have compelling evidence that the adoption of such laws has led directly to several hundred deaths annually nationwide, with negligible deterrence effects on other crimes. While the latter finding may be surprising, the former should have been entirely predictable.
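For readers unfamiliar with the method, the difference-in-differences logic reduces to a simple computation. The numbers below are made up for illustration and are not from the Cheng-Hoekstra paper:

```python
# Minimal difference-in-differences sketch with made-up numbers: compare
# within-state changes in mean log homicide rates across adopting and
# non-adopting states.

adopters     = {"before": 1.60, "after": 1.68}   # hypothetical means
non_adopters = {"before": 1.55, "after": 1.55}   # hypothetical means

did = ((adopters["after"] - adopters["before"])
       - (non_adopters["after"] - non_adopters["before"]))

# A 0.08 log-point difference-in-differences is roughly an 8 percent
# effect, comparable in spirit to the 7-9 percent range quoted above.
assert abs(did - 0.08) < 1e-6
```

The identifying assumption, of course, is that adopting and non-adopting states would have followed parallel trends absent the law change.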

Tuesday, June 12, 2012

Elinor Ostrom, 1933-2012

The political scientist Elinor Ostrom, co-recipient of the 2009 Nobel Prize in Economics, died this morning at the age of 78. I met her just once, after a talk she gave at Columbia sometime in the 1990s. It was in a very interesting seminar series organized by Dick Nelson if I recall correctly.

I was a great admirer of Ostrom's research on common pool resources, and tried to interpret some of her insights from an evolutionary perspective in some joint work with E. Somanathan a while ago. I've written about her here on a couple of occasions, and once reviewed a book that was largely a celebration of her vision (she had a hand in no less than six chapters).

Here are some extracts from a post written soon after the Nobel announcement:
Ostrom’s extensive research on local governance has shattered the myth of inevitability surrounding the “tragedy of the commons” and curtailed the uncritical application of the free-rider hypothesis to collective action problems. Prior to her work it was widely believed that scarce natural resources such as forests and fisheries would be wastefully used and degraded or exhausted under common ownership, and therefore had to be either state owned or held as private property in order to be efficiently managed. Ostrom demonstrated that self-governance was possible when a group of users had collective rights to the resource, including the right to exclude outsiders, and the capacity to enforce rules and norms through a system of decentralized monitoring and sanctions. This is clearly a finding of considerable practical significance. 
As importantly, the award recognized an approach to research that is practically extinct in contemporary economics. Ostrom developed her ideas by reading and generalizing from a vast number of case studies of forests, fisheries, groundwater basins, irrigation systems, and pastures. Her work is rich in institutional detail and interdisciplinary to the core. She used game theoretic models and laboratory experiments to refine her ideas, but historical and institutional analysis was central to this effort. She deviated from standard economic assumptions about rationality and self-interest when she felt that such assumptions were at variance with observed behavior, and did so long before behavioral economics was in fashion... 
There is no doubt that her research has dramatically transformed our thinking about the feasibility and efficiency of common property regimes. In addition, it serves as a reminder that her eclectic and interdisciplinary approach to social science can be enormously fruitful. In making this selection at this time, it is conceivable that the Nobel Committee is sending a message that methodological pluralism is something our discipline would do well to restore, preserve and foster.
And from the book review:
Although several distinguished scholars have been affiliated with the workshop over the years, Ostrom remains its leading light and creative force. It is fitting, therefore, that the book concludes with her 1997 Presidential Address to the American Political Science Association. In this chapter, she identifies serious shortcomings in prevailing theories of collective action. Approaches based on the hypothesis of unbounded rationality and material self-interest often predict a “tragedy of the commons” and prescribe either privatization of common property or its appropriation by the state. Policies based on such theories, in her view, “have been subject to major failure and have exacerbated the very problems they were intended to ameliorate”. What is required, instead, is an approach to collective action that places reciprocity, reputation and trust at its core. Any such theory must take into account our evolved capacity to learn norms of reciprocity, and must incorporate a theory of boundedly rational and moral behavior. It is only in such terms that the effects of communication on behavior can be understood. Communication is effective in fostering cooperation, in Ostrom’s view, because it allows subjects to build trust, form group identities, reinforce reciprocity norms, and establish mutual commitment. The daunting task of building rigorous models of economic and political choice in which reciprocity and trust play a meaningful role is only just beginning... 
The key conclusions drawn by the contributors are nuanced and carefully qualified, but certain policy implications do emerge from the analysis. The most important of these is that local communities can often find autonomous and effective solutions to collective-action problems when markets and states fail to do so. Such institutions of self-governance are fragile: large-scale interventions, even when well-intentioned, can disrupt and damage local governance structures, often resulting in unanticipated welfare losses. When a history of successful community resource management is in evidence, significant interventions should be made with caution. Once destroyed, evolved institutions are every bit as difficult to reconstruct as natural ecosystems, and a strong case can be made for conserving those that achieve acceptable levels of efficiency and equity. By ignoring the possibility of self-governance, one puts too much faith in the benevolence of a national government that is too large for local problems and too small for global ones. Moreover, as Ostrom points out in the concluding chapter, by teaching successive generations that the solution to collective-action problems lie either in the market or in the state, “we may be creating the very conditions that undermine our democratic way of life”. The stakes could not be higher.
Earlier tributes to Ostrom from Vernon Smith and Paul Romer are well worth revisiting.

Friday, April 27, 2012

On Equilibrium, Disequilibrium, and Rational Expectations

There's been some animated discussion recently on equilibrium analysis in economics, starting with a provocative post by Noah Smith, vigorous responses by Roger Farmer and JW Mason, and some very lively comment threads (see especially the smart and accurate points made by Keshav on the latter posts). This is a topic of particular interest to me, and the debate gives me a welcome opportunity to resume blogging after an unusually lengthy pause.

As Farmer's post makes clear, equilibrium in an intertemporal model requires not only that individuals make plans that are optimal conditional on their beliefs about the future, but also that these plans are mutually consistent. The subjective probability distributions on the basis of which individuals make decisions are presumed to coincide with the objective distribution to which these decisions collectively give rise. This assumption is somewhat obscured by the representative agent construct, which gives macroeconomics the appearance of a decision-theoretic exercise. But the assumption is there nonetheless, hidden in plain sight as it were. Large scale asset revaluations and financial crises, from this perspective, arise only in response to exogenous shocks and not because many individuals come to realize that they have made plans that cannot possibly all be implemented.

Farmer points out, quite correctly, that rational expectations models with multiple equilibrium paths are capable of explaining a much broader range of phenomena than those possessed of a unique equilibrium. His own work demonstrates the truth of this claim: he has managed to develop models of crisis and depression without deviating from the methodology of rational expectations. The equilibrium approach, used flexibly with allowances for indeterminacy of equilibrium paths, is more versatile than many critics imagine.

Nevertheless, there are many routine economic transactions that cannot be reconciled with the hypothesis that individual plans are mutually consistent. For instance, it is commonly argued that hedging by one party usually requires speculation by another, since mutually offsetting exposures are rare. But speculation by one party does not require hedging by another, and an enormous amount of trading activity in markets for currencies, commodities, stock options and credit derivatives involves speculation by both parties to each contract. The same applies on a smaller scale to positions taken in prediction markets such as Intrade. In such transactions, both parties are trading based on a price view, and these views are inconsistent by definition. If one party is buying low planning to sell high, their counterparty is doing just the opposite. At most one of the parties can have subjective beliefs that are consistent with the objective probability distribution to which their actions (combined with the actions of others) give rise.

If it were not for fundamental belief heterogeneity of this kind, there could be no speculation. This is a consequence of Aumann's agreement theorem, which states that while individuals with different information can disagree, they cannot agree to disagree as long as their beliefs are derived from a common prior. That is, they cannot persist in disagreeing if their posterior beliefs are themselves common knowledge. The intuition for this is quite straightforward: your willingness to trade with me at current prices reveals that you have different information, which should cause me to revise my beliefs and alter my price view, and should cause you to do the same. Our willingness to transact with each other causes us both to shrink from the transaction if our beliefs are derived from a common prior.

Hence accounting for speculation requires that one depart, at a minimum, from the common prior assumption. But allowing for heterogeneous priors immediately implies mutual inconsistency of individual plans, and there can be no identification of subjective with objective probability distributions.
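A tiny numerical example makes the point. Consider a binary contract that pays 1 if an event occurs and 0 otherwise, trading at 0.50; the subjective probabilities below are hypothetical:

```python
# With heterogeneous priors, both sides of a speculative trade can expect
# to profit; with a common prior, at most one side can.

price = 0.50
p_buyer = 0.65     # buyer's subjective probability of the event
p_seller = 0.35    # seller's subjective probability of the event

buyer_ev = p_buyer - price      # buyer's expected gain per contract
seller_ev = price - p_seller    # seller's expected gain per contract
assert buyer_ev > 0 and seller_ev > 0   # both expect to profit

# Under a common prior p, buyer_ev = p - price and seller_ev = price - p
# sum to zero, so both sides cannot simultaneously expect a strict gain.
```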

The development of models that allow for departures from equilibrium expectations is now an active area of research. A conference at Columbia last year (with Farmer in attendance) was devoted entirely to this issue, and Mike Woodford's reply to John Kay on the INET blog is quite explicit about the need for movement in this direction:
The macroeconomics of the future... will have to go beyond conventional late-twentieth-century methodology... by making the formation and revision of expectations an object of analysis in its own right, rather than treating this as something that should already be uniquely determined once the other elements of an economic model (specifications of preferences, technology, market structure, and government policies) have been settled.
There is a growing literature on heterogeneous priors that I think could serve as a starting point for the development of such an alternative. However, it is not enough to simply allow for belief heterogeneity; one must also confront the question of how the distribution of (mutually inconsistent) beliefs changes over time. To a first approximation, I would argue that the belief distribution evolves based on differential profitability: successful beliefs proliferate, regardless of whether those holding them were broadly correct or just extremely fortunate. This has to be combined with the possibility that some individuals will invest considerable time and effort and bear significant risk to profit from large mismatches between the existing belief distribution and the objective distribution to which it gives rise. Such contrarian actions may be spectacular successes or miserable failures, but must be accounted for in any theory of expectations that is rich enough to be worthy of the name.
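One way to formalize differential profitability is a replicator-style updating rule. The following sketch is my own illustration, with invented payoffs, rather than a model drawn from the literature discussed here:

```python
import random

# Sketch of belief evolution by differential profitability: two belief
# types bet on a binary event, and their population shares grow in
# proportion to realized payoffs. A type can proliferate by being lucky
# rather than by being correct. All payoffs are hypothetical.

random.seed(1)
true_p = 0.5                              # objective probability of the event
shares = {"bulls": 0.5, "bears": 0.5}     # initial belief distribution

def payoffs(event):
    """Bulls gain when the event occurs; bears gain when it does not."""
    return {"bulls": 1.2 if event else 0.9,
            "bears": 0.9 if event else 1.2}

for _ in range(100):
    event = random.random() < true_p
    pay = payoffs(event)
    total = sum(shares[t] * pay[t] for t in shares)
    shares = {t: shares[t] * pay[t] / total for t in shares}

# Neither type is more nearly correct (true_p = 0.5), yet the realized
# sequence of events tilts the distribution toward the luckier type.
assert abs(sum(shares.values()) - 1.0) < 1e-9
```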

 --- 

Some of the issues discussed here are explored at greater length in an essay on market ecology that I presented at a symposium in honor of Duncan Foley last week. Duncan was among the first to see that the rational expectations hypothesis implicitly entailed the assumption of complete futures markets, and would therefore be difficult to "reconcile with the recurring phenomena of financial crisis and asset revaluation that play so large a role in actual capitalist economic life." 

Friday, February 17, 2012

The Countrywide Complaint and the Capitalization of Trust

In December 2011 the Department of Justice filed suit against Countrywide Financial Corporation alleging discrimination on the basis of race and national origin in its mortgage lending operations over the period 2004-2008. The result was a record settlement for $335 million with Bank of America, which had acquired Countrywide in 2008.

The complaint was based on a review of "internal company documents and non-public loan-level data" on more than 2.5 million loans and is worth reading in full. In addition to providing evidence of disparate impact, it describes in detail the set of incentive structures under which loan officers and mortgage brokers were operating. These compensation schemes left considerable room for individual discretion in the setting of fees and rates, and for steering borrowers towards particular loan products. The manner in which this discretion was exercised had significant effects on overall levels of compensation, resulting in strong incentives for brokers and loan officers to act against the interests of borrowers.

But these incentives were formally neutral with respect to race and national origin, which raises the question of why they led to such disparate impact. In the standard economic theory of price discrimination, it is the most affluent customers, or the ones who value the product the most, who pay the highest prices. But in the case of mortgage loans it appears that the highest prices were paid by those who could least afford to do so. One possible reason for this is that this set of borrowers was poorly informed about market rates and alternatives. But this alone is not a satisfactory explanation, because such information can be sought if one considers it to be valuable. It may not be sought, however, if a borrower trusts his broker to be providing the best available terms. I argue below, based on a very interesting paper by Carolina Reid of the San Francisco Fed, that variations across communities in the level of such trust were a key factor in explaining why the incentive structures in place gave rise to such disparate impact.

But first, the complaint:
As a result of Countrywide's policies and practices, more than 200,000 Hispanic and African-American borrowers paid Countrywide higher loan fees and costs for their home mortgages than non-Hispanic White borrowers, not based on their creditworthiness or other objective criteria related to borrower risk, but because of their race or national origin. 
Additionally... Hispanic and African-American borrowers were placed into subprime loans when similarly-qualified non-Hispanic White borrowers received prime loans. Between 2004 and 2007, more than 10,000 Hispanic and African-American wholesale borrowers received subprime loans, with adverse terms and conditions such as high interest rates, excessive fees, prepayment penalties, and unavoidable future payment hikes, rather than prime loans... not based on their creditworthiness or other objective criteria related to borrower risk, but because of their race or national origin.
But what, exactly, were these policies and practices, and how did they give rise to the alleged disparate impact? The complaint focuses on the discretion given to loan officers and mortgage brokers, and the manner in which their compensation was determined. The process for retail loans was as follows:
Countrywide utilized a two-tier decision-making process to set the interest rates and other terms and conditions of retail loans it originated. The first step involved setting the credit risk-based prices on a daily basis... including interest rates, loan origination fees, and discount points. In this step, Countrywide accounted for numerous objective credit-related characteristics of applicants by setting a variety of prices for each of the different loan products that reflected its assessment of individual applicant creditworthiness, as well as the current market rate of interest and the price it could obtain from the sale of such a loan to investors. These prices, referred to as par or base prices, were communicated through rate sheets... Individual loan applicants did not have access to these rate sheets. 
As the second step in determining the final price it would charge an applicant for a loan, Countrywide allowed its retail mortgage loan officers... to increase the loan price charged to borrowers over the rate sheet prices set by Countrywide, up to certain caps; this pricing increase was labeled an overage. Countrywide also allowed these same employees to decrease the loan price charged to borrowers below the stated rate sheet prices; this pricing decrease was labeled a shortage. Countrywide further allowed those employees to alter the standard fees it charged in connection with processing a loan application and the standard allocation of closing costs between Countrywide and the borrower. Employees made these pricing adjustments in a subjective manner, unrelated to factors associated with an individual applicant's credit risk... 
During the time period at issue, Countrywide loan officer compensation was affected by the loan officers' decisions with respect to pricing overages and shortages, as well as other factors, such as volume of loans originated. Loan officers could obtain increased compensation for overages and could have their total compensation potentially decreased for shortages. Countrywide's compensation policy thus provided an incentive for its loan officers in making pricing adjustments to maximize overages and, when offering shortages, to minimize their amount.
Very similar incentives were in place for mortgage brokers who brought loan applications to Countrywide for origination and funding through its wholesale channel. As in the case of retail loans, rate sheets were made available to brokers on a daily basis with prices specified for different loan products based on borrower characteristics. Brokers were not required to "inform a prospective borrower of all available loan products for which he or she qualified, of the lowest interest rates and fees for a specific loan product, or of specific loan products best designed to serve the interests expressed by the applicant." In fact, the manner in which broker compensation was determined created incentives to actively conceal such information, since they were paid "based on the extent to which the interest rate charged on a loan exceeded the base, or par, rate for that loan to a borrower with particular credit risk characteristics fixed by Countrywide and listed on its rate sheets."

Aside from variation across borrowers in rates and fees for a given product, there was also variation in the types of products towards which borrowers were steered:
It was Countrywide's business practice to allow its mortgage brokers and employees to place a wholesale loan applicant in a subprime loan even when the applicant qualified for a prime loan according to Countrywide's underwriting practices... These underwriting guidelines were intended to be used to determine whether a loan applicant qualified for a prime loan product, an Alt-A loan product, a subprime loan product, or for no Countrywide loan product at all.  Countrywide's compensation policy and practice created a financial incentive for mortgage brokers to submit subprime loans to Countrywide for origination rather than any other type of residential loan product.
The incentives to increase overages, reduce shortages, and steer borrowers towards subprime products even when they qualified for prime loans clearly operated against the interests of borrowers. Coupled with the incentives tied to loan volume, this compensation scheme encouraged brokers and loan officers to set terms that varied systematically across borrowers. Applicants who were more sophisticated and knowledgeable, and would walk away from riskier or more expensive products, received better terms than those who were more naive. And those who were suspicious of their brokers and aware of the incentives under which they were operating secured better terms than those who were more trusting.

Hence the disparities in rates and fees identified in the complaint could, in principle, have arisen from differences across social groups in the degree to which they trusted those with whom they were transacting. Carolina Reid's paper provides some evidence for this interpretation. Reid argues that "while financial services have gone global... obtaining a mortgage is still a very local process, embedded in local context and social relations." In order to better understand this process, she interviewed homeowners in Oakland and Stockton, two areas that experienced very high rates of subprime lending prior to the crisis and correspondingly elevated rates of foreclosure subsequently. These were also areas in which a disproportionately large share of originations were mediated by mortgage brokers. Here is what she found:
One of the strong themes that emerged from the interviews was the extent to which respondents of color expressed their desire to work with a broker from their own community or background... In this sense, the interviews support Granovetter’s hypothesis that individuals are “less interested in general reputations than in whether a particular other may be expected to deal honestly with them—mainly a function of whether they or their own contacts have had satisfactory past dealings with the other.” (Granovetter 1985, p. 491) In numerous interviews, borrowers said that they turned to their social networks and relations in the neighborhood to identify a local mortgage broker who would be willing to “work with someone like me.” Part of this was driven by a lack of trust in traditional lenders, and several respondents in Oakland noted a historical distrust of banks in the community... More frequently, however, respondents noted that they didn’t think they could obtain or qualify for a loan without help from someone who was ‘like them’ but who knew the system... 
Respondents listed a wide array of ways that they received recommendations for both real estate agents and mortgage brokers: family, neighbors on the block, the local church, their jobs, the park, and parents at their kids’ school...  
The desire to be served by someone from the community was not lost on mortgage brokers, who during this time period actively created the impression that they were part of the community to help promote their business. Strategies ranged from relying on customer referrals to generate new business, to frequenting local churches, social gatherings, and businesses and by adopting local social conventions... The interviews pointed to how the respondents felt immediately connected to these brokers, “he understood my situation”, “he told me that he understands how difficult the paperwork is, especially when you have lots of jobs,” “I liked his ideas for how to brighten the kitchen,” “she seemed to understand why we wanted to move from SF, buy a house, provide for a yard for the kids, a good school.” 
In theory, mortgage brokers are well‐placed to serve as a “bridging tie” and “trusted advisor”, since they have both experience with the lending process and access to information about mortgage products and prices. Empirical research studies, however, have revealed that during the subprime boom, yield spread premiums coupled with a push for a greater volume of loan originations provided a financial incentive for brokers to work against the interests of the borrower (e.g. Ernst, Bocian and Li 2008). In addition, since there was no statutory employer‐employee relationship between lending institutions and brokers, there were few legal protections to ensure that brokers provide borrowers with fair and balanced information. This aligns with the “trust” that social relations engender... In both Stockton and Oakland, respondents did not seem to be aware of the potential for perverse incentives on the part of brokers, and instead trusted them fully to act in their best interests. 
It is ironic that distrust of traditional lending institutions such as commercial banks led some borrowers to seek out brokers from their own communities whom they felt they could trust. But these brokers were operating under high-powered incentives to inflate rates and fees and guide borrowers towards subprime products even when they were eligible for cheaper alternatives. The trust that was placed in the brokers allowed them greater flexibility to respond to these incentives and left borrowers worse off than they would have been if they had been more suspicious or better aware of the incentive structures in place.

Viewed in this manner, the subprime saga has some broader implications. From the point of view of a company operating in multiple local markets with a diverse customer base, the strategy of giving local employees or contractors the discretion to adjust prices can be very profitable. This is especially so if these employees appear trustworthy to their customers, but are not in fact deserving of such trust. As Groucho Marx is reputed to have said:
The secret of life is honesty and fair dealing. If you can fake that, you've got it made. 
For products involving frequent repeat purchases by the same customer, reputation effects and competition can limit the degree of price discrimination. But the purchase of a home is an infrequent transaction for most people, and the complexity of the loan product precludes easy comparison with alternatives on offer. Trust then becomes a key determinant of pricing and transaction volume, especially when strong and hidden incentives for the betrayal of trust are in place.

Betrayal also leads to the erosion of trust over time. It could be argued that trust is one of our most valuable public goods, substantially lowering the costs of transacting. In the complete absence of trust, the volume of resources that would need to be devoted to monitoring would be prohibitively large and many organizations and markets would simply not exist. Trust also comes naturally to most of us, based on simple cues such as those revealed in Reid's interviews. High-powered incentives to secure and then betray such trust are therefore costly not just to the immediate victims, but also to the broader community. This may be one of the less visible consequences of the subprime crisis. 

Sunday, January 29, 2012

Returns to Information and Returns to Capital

One of the benefits of maintaining this blog is that it gives me the opportunity to think aloud, expressing half-formed ideas in the hope that the feedback will help me sort through some interesting questions. My last post on double taxation attracted a number of thoughtful (and in some cases skeptical) comments for which I am grateful.

What I was trying to do in that post was to evaluate two incompatible statements: Warren Buffett's declaration that he pays a substantially lower tax rate at 18% than any of his office staff, and Mitt Romney's conflicting claim that his effective tax rate is close to 50%, the sum of the corporate tax rate and the rate on long-term capital gains. I argued that since the corporate tax is capitalized into prices at both the time of purchase and the time of sale, it ought not to be simply added to the capital gains tax to determine an effective rate.

The point may be expressed as follows. Over the past couple of years Romney seems to have paid about 3 million dollars in taxes on income of about 20 million annually, a rate of about 15%. If his effective tax rate were 50%, then his "effective" gross income would be twice his current after-tax income of 17 million, or approximately 34 million. What he is claiming, in effect, is that in the absence of the corporate tax, and with no change in the nature of his economic activities, he would have been able to secure a capital gain of 34 million annually. This does not seem plausible to me. Elimination of the corporate tax would certainly result in a one-time gain to any currently held long positions, but I don't see how it could allow him to generate an extra 14 million, equal to 70% of his current gross income, on an ongoing basis every year.
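For concreteness, the arithmetic behind this comparison can be laid out in a few lines of Python. The dollar figures are the approximate ones quoted above, and the 50% rate is the claimed effective rate under examination, not an established number:

```python
# Back-of-the-envelope check of the effective-rate claim,
# using the approximate figures quoted above (in millions of dollars).
gross_income = 20.0  # reported annual income
taxes_paid = 3.0     # taxes actually paid

after_tax = gross_income - taxes_paid    # 17.0
stated_rate = taxes_paid / gross_income  # 0.15

# If the true effective rate were 50%, an after-tax income of 17
# would imply an "effective" gross income of 17 / (1 - 0.5) = 34.
claimed_rate = 0.50
effective_gross = after_tax / (1 - claimed_rate)  # 34.0

extra = effective_gross - gross_income  # 14.0, i.e. 70% of current gross
print(stated_rate, effective_gross, extra)
```

The claim thus amounts to asserting that, absent the corporate tax, the same activities would generate an additional 14 million in capital gains every year.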

Whatever the merits of this argument, I think that most commenters on my earlier post agree with me on two things:
  1. The adding-up approach to effective tax rates does not work for short sales and related derivative positions, since it would lead to the absurd conclusion that short sellers were paying a negative effective tax rate on capital gains.
  2. Elimination of the corporate tax would result in a sharp rise in equity prices and a windfall gain to current long investors, but would have more modest and uncertain effects on the returns to future investors who enter positions after the lower rate has been capitalized into prices. 
In particular, the following comment from Richard Serlin got me thinking about the nature of capital gains:
With regard to short selling, when the corporate tax first hits (or becomes known to hit), they'll get a windfall, but then their expected returns (of the short sales people actually choose to take) will adjust to the new norm for their risk. It's not like short selling opportunities that pay a fair market risk adjusted return always exist, anyway. When they do, it's largely not a reward for the capital, but for the information that the stock is an overpriced bad deal.
It is certainly true, as Richard points out, that profits to short positions are rewards for information, broadly interpreted to include the processing and analysis of information. They are not returns to capital in any meaningful sense, although one requires capital to enter a short position. But the same is true for at least some portion of the profits to long positions. In fact, the essence of Buffett's investment strategy is to identify underpriced companies in which to take long (and long-term) positions on which capital gains are then realized.

If capital gains are viewed largely as a return to capital, then the double taxation argument makes some sense. But viewed as a return to information and analysis, it is not clear why capital gains should be given preferential tax treatment relative to the income generated, for instance, by doctors or teachers.

I suspect that Warren Buffett views his income as being generated largely by information and judgment, and does not believe that his opportunities for ongoing capital gain would be substantially increased if the corporate tax were eliminated. He does not therefore see the tax as a significant burden, and does not consider his effective gross income to be substantially greater than that which he declares on his tax returns. Whether Romney himself feels the same way is impossible to know, since political expediency currently compels him to take a very different position. 

Saturday, January 28, 2012

Double Taxation

The release of Mitt Romney's tax returns has drawn attention yet again to the disparity between the rates paid on ordinary income and those paid on capital gains. It is being argued in some quarters that the 15% rate on capital gains vastly underestimates the effective tax rate paid by those whose income comes largely from financial investments, on the grounds that corporations pay a rate of 35% on profits. Were it not for this tax, it is argued, dividends and capital gains would be higher, and so would the after-tax receipts of those who derive the bulk of their income from such sources.

Romney himself has made this argument recently, claiming that his effective tax rate is closer to 50%:
One of the reasons why we have a lower tax rate on capital gains is because capital gains are also being taxed at the corporate level. So as businesses earn profits, that's taxed at 35 percent. Then as they distribute those profits in dividends, that's taxed at 15 percent more. So all total, the tax rate is really closer to 45 or 50 percent.
The absurdity of this claim is clearly revealed if one considers capital gains that accrue to short sellers, who pay rather than receive dividends while their positions are open. Following the logic of the argument, one would be forced to conclude that short sellers are taxed at an effective rate of negative 20%, thereby receiving a significant subsidy due to the existence of the corporate tax. The flaw in this reasoning is apparent when one recognizes that asset prices are lower (relative to the zero corporate tax benchmark) not only when a short position is covered, but also when it is entered.

There is no doubt that the presence of the corporate tax depresses the price of equities, but it does so both at the time of purchase and at the time of sale. If there were no corporate tax, dividends and capital gains per share would certainly be higher, but an investor would have paid substantially more per share to acquire his assets in the first place. As a result he would be holding fewer shares for any given initial outlay, and his after-tax income (holding constant the rate paid on capital gains) would not be substantially different.

To see why, it is useful to think about what determines the price of equities. Three factors are especially important: the current earnings of a firm (after payment of interest and taxes), the rate at which these earnings are expected to grow, and the riskiness of the security, which itself is linked to the degree to which the firm's earnings are correlated with broader market movements. Securities that are riskier in this latter sense tend to appreciate faster on average because investors would otherwise avoid them, depressing their prices and raising their expected returns until such returns are viewed as adequate compensation for the greater risk of holding them. This risk is routinely expressed as a market capitalization rate, interpreted as the expected return that investors require in order to hold the security. Airline and automobile stocks, for instance, have higher market capitalization rates than do shares in utilities.

The manner in which these factors interact to influence prices may be illustrated by considering the simplest possible case of a firm with constant expected earnings growth and a fixed dividend payout ratio. In this case, for reasons discussed in any introductory finance textbook, the fundamental value of the security is given by the simple formula D/(k-g), where D is the current dividend forecast (a constant share of the earnings forecast), g is its expected rate of growth, and k is the market capitalization rate. Shares in a debt-free firm that pays 20% of its earnings as dividends, is currently earning $10 per share annually, is expected to grow at 10%, and has a market capitalization rate of 12% would then have a share price of $100. After a year (assuming no change in these parameters) the share price would be $110 and the dividend payout $2. An investor would have made $12 on a $100 investment, a percentage return precisely equal to the market capitalization rate. All this is with no corporate tax.

Now suppose that a 35% corporate tax is in place, so after-tax earnings per share are $6.50 instead, with no change in other specifications. Dividends are then $1.30 per share and the initial share price is $65. After a year this rises to $71.50. Adding dividends and capital gains, an investor makes $7.80 for each share purchased at $65, again earning precisely 12%. Each share results in lower revenues to the investor, but since more shares can be purchased at the outset, aggregate income is no different.
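The arithmetic in these two paragraphs can be verified with a short Python sketch of the constant-growth formula D/(k-g), using the parameter values from the example:

```python
# Constant-growth valuation: price = D / (k - g), where D is the current
# dividend forecast, k the market capitalization rate, g the growth rate.

def price(dividend, k, g):
    return dividend / (k - g)

def annual_return(earnings, payout, k, g, tax_rate=0.0):
    """One-year return (capital gain plus dividend), optionally
    applying a corporate tax to pre-tax earnings per share."""
    d = earnings * (1 - tax_rate) * payout  # dividend per share
    p0 = price(d, k, g)                     # price today
    p1 = p0 * (1 + g)                       # price one year later
    return (p1 - p0 + d) / p0

k, g = 0.12, 0.10

# No corporate tax: $10 earnings, 20% payout -> $2 dividend, $100 price.
print(price(10 * 0.20, k, g))            # roughly 100.0
print(annual_return(10, 0.20, k, g))     # roughly 0.12

# 35% corporate tax: $6.50 after-tax earnings, $1.30 dividend, $65 price.
print(price(10 * 0.65 * 0.20, k, g))                 # roughly 65.0
print(annual_return(10, 0.20, k, g, tax_rate=0.35))  # roughly 0.12
```

Because the tax scales the price per share in exactly the same proportion as it scales the dividend and the capital gain, the one-year percentage return works out to the capitalization rate k in both cases.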

None of this should be in the least bit surprising. Note, however, that if the corporate tax were to be eliminated today, there would be a sharp rise in the price of equities and current asset holders would enjoy a windfall gain. Similar issues arise with respect to the mortgage interest deduction: eliminating this would result in an immediate decline in home values, severely punishing those who purchased recently at prices that reflected the anticipated tax savings over the duration of the mortgage.

This does not mean that eliminating the corporate tax while simultaneously raising the rate on capital gains is necessarily a bad idea, or that elimination of the mortgage interest deduction is necessarily bad policy. A case could be made for both initiatives. The corporate tax is not uniformly applied, due to the broad range of loopholes and exemptions, and the mortgage deduction is regressive and inhibits both neighborhood integration and labor mobility. But any such changes would have major distributional effects that must be taken into account in any comprehensive evaluation of the policy. Doing so properly requires a clear distinction between stocks and flows, and an analysis that goes a little deeper than simple arithmetic.

---

Update: Follow-up post here.