Sunday, February 28, 2010

Is Over-the-Air Television Broadcasting Really Obsolete?

Writing in the New York Times today, Richard Thaler had this to say:
Here's a list of national domestic priorities, in no particular order: Stimulate the economy, improve health care, offer fast Internet connections to all of our schools, foster development of advanced technology. Oh, and let’s not forget, we’d better do something about the budget deficit.

Now, suppose that there were a way to deal effectively with all of those things at once, without hurting anyone... I know that this sounds like the second coming of voodoo economics, but bear with me. This proposal involves no magical thinking, just good common sense: By simply reallocating the way we use the radio spectrum now devoted to over-the-air television broadcasting, we can create a bonanza for the government, stimulate the economy and advance all of the other goals listed above. Really.
What Thaler means by "reallocating the way we use the radio spectrum" is to take frequencies currently in use for over-the-air television broadcasting and auction them off for other uses:
Because we can’t create additional spectrum, we must make better use of the existing space. And the target that looks most promising in this regard is the spectrum used for over-the-air television broadcasts... over-the-air broadcasts are becoming a nearly obsolete technology. Already, 91 percent of American households get their television via cable or satellite. So we are using all of this beachfront property to serve a small and shrinking segment of the population.
Alex Tabarrok (linking to the Thaler piece) concurs:
Despite the fact that 91 percent of American households get their television via cable or satellite huge chunks of radio-spectrum are locked up in the dead technology of over-the-air television.
Maybe so. But the transition to digital over-the-air broadcasting has dramatically improved the picture quality that one can obtain with an amplified indoor antenna (even in Manhattan), and it has caused many people (myself included) to switch to over-the-air broadcasts for the first time. The first thing I noticed when I did so was a significant improvement in picture quality relative to high-definition cable. Randy Hoffner of TV Technology explains why:
Broadcast HDTV delivers by far the best-quality HD pictures, because cable and satellite bit-starve the digital pictures in order to decrease the bandwidth they occupy.
And people are beginning to notice. Here's a Los Angeles Times report from a couple of months ago: 
In Los Angeles, more than 30 over-the-air channels are available in English, including stations featuring movies, dramas and children's programs. Major networks including ABC, CBS and NBC beam out daytime and prime-time shows -- and professional sports -- in resolution with clarity that may shock viewers expecting the hazy broadcast signals they remember from childhood.

"Everyone who does it says the picture quality is actually better than what you're getting through cable," said Patricia McDonough, a senior vice president at Nielsen.

As more viewers tune in to the newly reenergized possibilities of broadcast television, manufacturers say they can't make antennas fast enough.

"Our sales are going through the roof," said Richard Schneider, president of Antennas Direct, a St. Louis manufacturer of the devices.

Schneider said that sales had nearly tripled since the switch-over, and that he had to add a new assembly line in his factory to meet the demand. The company produces nearly 100,000 antennas every month, thousands of which are sold in the Los Angeles area, he said.

Viewers are also finding they can combine broadcast television with the growing array of movie and TV programming available online.
Of course, it may still be the case that the most efficient use of scarce radio frequencies lies elsewhere, as Thaler contends, though it's not obvious to me that installing and maintaining a network of cables is the most cost-effective way to deliver television programming to households. In any case, until there is a change in FCC policy, the "small and shrinking segment of the population" that relies on over-the-air broadcasts is unlikely to continue shrinking for much longer.

---

Update (3/2). Robin Hanson and I go back and forth on the issue in the comments section of this post. He makes the point that assigning frequencies to the highest bidders would allow this scarce resource to be put to its highest value uses. This would be true if we had complete markets, but since broadcasters cannot contract with individual recipients of over-the-air television signals, we have a missing market. The absence of a property right in the signal prevents broadcasters from capturing any of the consumers’ surplus, and makes their auction bids uninformative with respect to overall efficiency.

We could just ignore this problem and assign frequencies to those who do have the ability to contract individually with their customers. But it’s not obvious to me that this is a better outcome than trying to complete the missing market – for instance by taxing receivers and allowing broadcasters some use of frequencies at a price that is below the market clearing bid.

The main point of my post was simply that the over-the-air product is now very good -- potentially much better than cable -- and possibly even delivered more cost-effectively. When economists with the professional stature of Richard Thaler make claims about trillion dollar free lunches, it's tempting to jump instantly on the bandwagon. But his basic premise -- that over-the-air broadcasting is "nearly obsolete" -- is not supported by the facts; the technology is alive, improved, and gaining in popularity. That alone doesn't mean it's worth preserving, but the choice is not as obvious as his article would lead one to believe.

---

Update (3/7). Cable television obsolescence watch:
ABC's parent company switched off its signal to Cablevision's 3.1 million customers in New York at midnight Saturday in a dispute over payments that escalated just hours before the start of the Academy Awards.
Further down in the same article:
The signal can still be pulled from the air for free with an antenna and a new TV or digital converter box.
Some of those who do this will notice an improvement in picture quality. They may not give up on cable just yet because the range of over-the-air programming is still quite limited, but they might start to wonder why they are paying so much for an inferior product.

Wednesday, February 24, 2010

On Intellectual Property and Guard Labor

For several years now Michele Boldrin and David Levine have been making a vigorous case for the outright elimination of most copyright and patent protection. A very accessible (and entertaining) overview of their arguments may be found in Against Intellectual Monopoly, a version of which can (appropriately enough) be downloaded without charge. In it, the authors claim that patent protections stifle rather than stimulate technological innovation, and that copyright has the same chilling effect on artistic creativity. They point to examples of flourishing and innovative industries that thrive without such protections, and others in which the expansion of legal coverage has resulted in stagnation or decline. In doing so, they strike at the heart of the usual argument in favor of intellectual property rights, namely that such rights are necessary for sustaining economic vitality and variety.
Boldrin and Levine's use of the term "monopoly" rather than "property" to characterize patents and copyrights clearly adds rhetorical force to their arguments, but it also has an interesting precedent in English law:
It was the English Parliament that, in 1623, pioneered patent law with the aptly named Statute of Monopolies. At the time the euphemism of intellectual “property” had not yet been adopted – that a monopoly right and not a property right was being granted to innovators no one questioned. Moreover... the Statute did not create a new monopoly. It took the monopoly away from the monarchy (represented at the time by King James I) and lodged it instead with the inventor. It therefore replaced the super-monopolistic power of expropriation the Crown had enjoyed till then, with a milder monopoly by the inventor... The historical facts are worth keeping in mind vis-à-vis the frequent claims that it was the introduction of patent privileges in the seventeenth century England that spurred the subsequent industrial revolution. 
In fact, the authors argue that patent protection delayed the industrial revolution by a generation as James Watt used the "full force of the legal system" to inhibit the spread of superior variants of his invention: 
After the expiration of Watt’s patents, not only was there an explosion in the production and efficiency of engines, but steam power came into its own as the driving force of the industrial revolution. Over a thirty year period steam engines were modified and improved as crucial innovations such as the steam train, the steamboat and the steam jenny came into wide usage. The key innovation was the high-pressure steam engine – development of which had been blocked by Watt’s strategic use of his patent. Many new improvements to the steam engine, such as those of William Bull, Richard Trevithick, and Arthur Woolf, became available by 1804: although developed earlier these innovations were kept idle until the Boulton and Watt patent expired.
Nor was this an isolated case. In 1902, the Wright brothers "managed to obtain a patent covering (in their view) virtually anything resembling an airplane." They subsequently invested little effort in developing and marketing aircraft, but did spend "an enormous amount of effort in legal actions" to prevent other innovators such as Glenn Curtiss from doing so. At around the same time in England, the Baadische Chemical Company used a broad patent covering textile coloring to prevent a competitor, Levinstein, from using a "superior process to deliver the same product." The former company was unable to understand and exploit the new technology, however, and was put out of business when the latter began production in the Netherlands. Other examples of patent holders preventing the spread of innovations that they could not themselves use or profit from are scattered throughout the book.
In contrast, there are numerous cases of rapid innovation in industries with no recognized intellectual property rights. Software could not be patented before 1981, nor could financial securities prior to 1998; yet the pace of innovation was frantic in both sectors. Consider software:
What about the graphical user interfaces, the widgets such as buttons and icons, the compilers, assemblers, linked lists, object oriented programs, databases, search algorithms, font displays, word processing, computer languages – all the vast array of algorithms and methods that go into even the simplest modern program? ... Each and every one of these key innovations occurred prior to 1981 and so occurred without the benefit of patent protection. Not only that, had all these bits and pieces of computer programs been patented, as they certainly would have in the current regime, far from being enhanced, progress in the software industry would never have taken place. According to Bill Gates – hardly your radical communist or utopist – “If people had understood how patents would be granted when most of today's ideas were invented, and had taken out patents, the industry would be at a complete standstill today.”
Even today, the rate of innovation in open-source software remains vigorous without the benefit of protection:
Whatever you are viewing on the web – we hesitate to ask what it might be – is served up by a webserver. Netcraft regularly surveys websites to see what webserver they are using. In December 2004 they polled all of the 58,194,836 web sites they could find on the Internet, and found that the open source webserver Apache had a 68.43% of the market, Microsoft had 20.86% and Sun only 3.14%. Apache’s share is increasing; all other’s market shares are decreasing. So again – if you used the web today, you almost certainly used open source software.
Another industry that remains largely unprotected by intellectual property law is fashion design. Again one finds evidence of creativity and innovation alongside extensive and rapid replication by lower cost imitators:
Even the most casual of observers can scarcely be unaware of the enormous innovation that occurs in the clothing and accessories industry every three-six months, with a few top designers racing to set the standards that will be adopted by the wealthy first, and widely imitated by the mass producers of clothing for the not so wealthy shortly after. And “shortly after”, here, means really shortly after. The now world-wide phenomenon of the Spanish clothing company Zara (and of its many imitators) shows that one can bring to the mass market the designs introduced for the very top clientele with a delay that varies between three and six months. Still, the original innovators keep innovating, and keep becoming richer.
The invention of new techniques in professional sports seems completely unhindered by the fact that the innovators have no power to prevent their creations from being copied:
Innovation is also important in sports, with such innovations as the Fosbury Flop in high jumping, the triangle offense in basketball, and of course the many new American football plays that are introduced every year, serving to improve performance and provide greater consumer satisfaction. Indeed, the position of the sports leagues with respect to innovation in their own sport is not appreciably different from that of the benevolent social planner invoked by economists in assessing alternative economic institutions.

Given that sports leagues are in the position of wishing to encourage all innovations for which the benefits exceed the cost, they are also in the position to implement a private system of intellectual property, should they find it advantageous. That is, there is nothing to prevent, say, the National Football League from awarding exclusive rights to a new football play for a period of time to the coach or inventor of the new play. Strikingly, we know of no sports league that has ever done this. Apparently, in sports the competitive provision of innovation serves the social purpose, and additional incentive in the form of awards of monopoly power do not serve a useful purpose.
There is no doubt that the world would look very different in the absence of patents and copyrights, and this includes the nature of contracts written between creators and distributors of content. To get a glimpse of the kinds of contracts that are likely to become widespread if copyright were to be eliminated, one can look back at the case of publishing in the 19th century, when English authors had no protection with respect to sales in the United States. Yet they often managed to secure lucrative deals with American publishers:
How did it work? Then, as now, there is a great deal of impatience in the demand for books, especially good books. English authors would sell American publishers the manuscripts of their new books before their publication in Britain. The American publisher who bought the manuscript had every incentive to saturate the market for that particular novel as soon as possible, to avoid cheap imitators to come in soon after. This led to mass publication at fairly low prices. The amount of revenues British authors received up front from American publishers often exceeded the amount they were able to collect over a number of years from royalties in the UK.
Now one might argue that with dramatically lower costs of copying and electronic distribution, such a system would not be viable today. Boldrin and Levine provide a truly fascinating rebuttal to this argument:
What would happen to an author today without copyright?

This question is not easy to answer – since today virtually everything written is copyrighted, whether or not intended by the author. There is, however, one important exception – documents produced by the U.S. government. Not, you might think, the stuff of best sellers – and hopefully not fiction. But it does turn out that some government documents have been best sellers. This makes it possible to ask in a straightforward way – how much can be earned in the absence of copyright? The answer may surprise you as much as it surprised us.

The most significant government best seller of recent years has the rather off-putting title of The Final Report of the National Commission on Terrorist Attacks Upon the United States, but it is better known simply as the 9/11 Commission Report. The report was released to the public at noon on Thursday July 22, 2004. At that time, it was freely available for downloading from a government website. A printed version of the report published by W.W. Norton simultaneously went on sale in bookstores...

Because it is a U.S. government document, the moment it was released, other individuals, and more important, publishing houses, had the right to buy or download copies and to make and resell additional copies – electronically or in print, at a price of their choosing, in direct competition with Norton... And the right to compete with Norton was not a purely hypothetical one. Another publisher, St. Martin’s, in collaboration with the New York Times, released their own version of the report in early August, about two weeks after Norton, and this version contained not only the entire government report – but additional articles and analysis by New York Times reporters. Like the Norton version, this version was also a best seller. In addition it is estimated that 6.9 million copies of the report were (legally) downloaded over the Internet. Competition, in short, was pretty fierce.

Despite this fierce competition, the evidence suggests that Norton was able to turn a profit... we know that they sold about 1.1 million copies, and that they charged between a dollar and a dollar fifty more than St. Martin’s did. Other publishers also estimated Norton made on the order of a dollar of profit on each copy. Assuming that St. Martin’s has some idea of how to price a book to avoid losing money, this suggests Norton made at the very least on the order of a million dollars...

What, then, do these facts mean for fiction without copyright? By way of contrast to the 9/11 commission report, which was in paperback and, including free downloads, seems to have about 8 millions copies in circulation, the initial print run for Harry Potter and the Half-Blood Prince was reported to be 10.8 million hardcover copies. So we can realistically conclude that if J.K. Rowling were forced to publish her book without the benefit of copyright, she might reasonably expect to sell the book to a publishing house for several million dollars – or more. This is certainly quite a bit less money than she earns under the current copyright regime. But it seems likely... that it would still give her adequate incentive to produce her great works of literature.
While Boldrin and Levine focus largely on the effects of intellectual property rights on innovation and creativity, they also recognize the enormous waste of resources that arises in order to secure, protect and exploit these rights. There are four types of inefficiencies that can result. The most obvious is the fact that monopoly pricing excludes from the market many who would be willing and able to pay well above the current costs of production for a product; this can be particularly tragic in the case of life-saving pharmaceuticals. Second, there are the productive inefficiencies that tend to arise when firms are sheltered from competition. Third, there are investments in lobbying that serve no productive purpose but are designed to alter the legislative landscape in one's favor. And fourth, there are the costs of legal action and deliberate manipulation of product design to prevent copying and competition.
In fact, the widespread adoption of patents and copyrights has given rise to a peculiarly modern and highly skilled form of what Arjun Jayadev and Sam Bowles refer to as guard labor (see, for instance, Mark Thoma's recent post on their work.)  In the context of intellectual property rights, guard labor includes not just legal teams but also individuals with considerable technical expertise who can alter product characteristics in a manner that prevents resale across segmented markets. As Boldrin and Levine note:
For example, music producers love Digital Rights Management (DRM) because it enables them to price discriminate. The reason that DVDs have country codes, for example, is to prevent cheap DVDs sold in one country from being resold in another country where they have a higher price. Yet the effect of DRM is to reduce the usefulness of the product. One of the reasons the black market in MP3s is not threatened by legal electronic sales is that the unprotected MP3 is a superior product to the DRM protected legal product. Similarly, producers of computer software sell crippled products to consumers in an effort to price discriminate and preserve their more lucrative corporate market. One consequence of price discrimination by monopolists, especially intellectual monopolists, is that they artificially degrade their products in certain markets so as not to compete with other more lucrative markets.
Technically skilled labor is required not only to alter product characteristics, but also to identify products and processes that could be patented for entirely defensive purposes:
The following statement is from Jerry Baker, Senior Vice President of Oracle Corporation:
Our engineers and patent counsel have advised me that it may be virtually impossible to develop a complicated software product today without infringing numerous broad existing patents. … As a defensive strategy, Oracle has expended substantial money and effort to protect itself by selectively applying for patents which will present the best opportunities for cross-licensing between Oracle and other companies who may allege patent infringement. If such a claimant is also a software developer and marketer, we would hope to be able to use our pending patent applications to cross-license and leave our business unchanged.
Pundits and lawyers call this “navigating the patent thickets” and a whole literature, not to speak of a lucrative new profession, has sprung up around it in the last fifteen years. The underlying idea is simple, and frightening at the same time. Thanks to the US Patent Office policy of awarding a patent to anyone with a halfway competent lawyer – and, as noted a moment ago, IP lawyers have quadrupled – thousands of individuals and firms hold patents on the most disparate kinds of software writing techniques and lines of code. As a consequence, it has become almost impossible to develop new software without infringing some patent held by someone else. A software innovator must, therefore, be ready to face legal actions by firms or individuals holding patents on some software components. A way of handling such threats is the credible counter-threat of bringing the suitor to court, in turn, for the infringement of some other patent the innovative firm holds.
The idea that certain categories of labor inhibit rather than promote economic growth dates back at least to Adam Smith. The third chapter in Book II of the Wealth of Nations bears the title: "Of the Accumulation of Capital, or of Productive and Unproductive Labour." In it, Smith states in no uncertain terms that the manner in which labor is divided among productive and unproductive occupations affects the rate of economic growth:
Both productive and unproductive labourers, and those who do not labour at all, are all equally maintained by the annual produce of the land and labour of the country. This produce, how great soever, can never be infinite, but must have certain limits. According, therefore, as a smaller or greater proportion of it is in any one year employed in maintaining unproductive hands, the more in the one case and the less in the other will remain for the productive, and the next year’s produce will be greater or smaller accordingly; the whole annual produce, if we except the spontaneous productions of the earth, being the effect of productive labour.
Smith's argument has even greater force in an economy where some of the most highly skilled individuals are assigned to unproductive tasks. These are precisely the individuals who are best equipped to push against the technological frontier. Their loss in the productive sector therefore lowers the rate of growth for two reasons: they are unavailable to produce goods and services under current technologies, and the rate of technological progress is itself retarded.
There is much more in the book than I have been able to survey here. I have not discussed the theoretical models that provide the analytical foundation for the authors' recommendations. Nor have I mentioned the frivolous patents for methods of putting in golf or swinging a swing, or the submarine patents that seek to anticipate innovations by others. The book expands on all these issues and more, and ends up making as convincing a case for the abolition of intellectual property rights as you are likely to find anywhere. The authors do concede that there may be industries in which the absence of protection results in suboptimal levels of innovative activity, but argue that even in such cases, direct subsidies rather than monopoly rights would be a superior policy response. Regardless of whether or not you are eventually sold on the main idea, you cannot fail to be impressed by the originality, breadth and detail of the argument. Such books are now a rarity in economics, but Against Intellectual Monopoly is proof that they are not yet extinct.

---

Update (2/25). Boldrin and Levine provide a concise overview of their arguments (along with a response to some of their critics) in a recent article in the Review of Law and Economics. And this post contains a good discussion of the Jayadev and Bowles paper on guard labor, with applications to intellectual property rights. The author (mtraven) argues that guard labor is in its "purest and most apparently wasteful form when it is guarding digital content," and has this to say about open source software:
In fact, the whole free/open source movement in software and elsewhere may be seen as a response to the unpleasantness of guard labor. Proprietary software requires licensing schemes... that cause new bugs, interfere with legitimate uses, and more generally cause friction. More broadly, locking software behind a pay wall reduces the amount of sharing and requires frequent reinvention of the wheel. It's inefficient, and this drives engineers crazy. Most of the time they don't get to vote, but the FOSS movement arose as a direct response to some of the unpleasantness surrounding proprietary software and has in its way been amazingly successful...
I was around for the birth of the open source movement and efficiency really had nothing to do with it -- it was a moral struggle, based on the anguish of the excluded when a once open resource suddenly being subject to enclosure and guarding. But its ongoing success happened because of efficiency and the self-interest of software producers and companies.
The entire post is worth reading. Lots more on this topic (including occasional posts by Boldrin and Levine) may be found on the blog Against Monopoly.

Monday, February 22, 2010

Some Readings on Liquidity, Leverage and Crisis

In an earlier post I mentioned an interview with Eric Maskin in which he claimed that "most of the pieces for understanding the current financial mess were in place well before the crisis occurred," and identified five contributions that in his view were particularly insightful. 
Along similar lines, Yeon-Koo Che has assembled a weekly reading group consisting of faculty and graduate students in the Columbia community to discuss articles that might be helpful in shedding light on recent events. Included among these is a paper by John Geanakoplos that I have surveyed previously on this blog, and several that I hope to discuss in the future. Ten of the contributions we hope to tackle over the coming weeks are the following:
  1. Financial Intermediation, Loanable Funds and the Real Sector by Holmstrom and Tirole
  2. The Limits of Arbitrage by Shleifer and Vishny
  3. Understanding Financial Crises by Allen and Gale
  4. Credit-Worthiness Tests and Interbank Competition by Broecker
  5. Credit Cycles by Kiyotaki and Moore
  6. The Leverage Cycle by Geanakoplos
  7. Collective Moral Hazard, Maturity Mismatch and Systemic Bailouts by Farhi and Tirole
  8. Liquidity and Leverage by Adrian and Shin
  9. Market Liquidity and Funding Liquidity by Brunnermeier and Pedersen
  10. Outside and Inside Liquidity by Bolton, Santos and Scheinkman
I would welcome any comments on these, or suggestions for others that we may have overlooked.

Sunday, February 14, 2010

The Invincible Markets Hypothesis

There has been a lot of impassioned debate over the efficient markets hypothesis recently, but some of the disagreement has been semantic rather than substantive, based on a failure to distinguish clearly between informational efficiency and allocative efficiency. Roughly speaking, informational efficiency states that active management strategies that seek to identify mispriced securities cannot succeed systematically, and that individuals should therefore adopt passive strategies such as investments in index funds. Allocative efficiency requires more than this, and is satisfied when the price of an asset accurately reflects the (appropriately discounted) stream of earnings that it is expected to yield over the course of its existence. If markets fail to satisfy this latter condition, then resource allocation decisions (such as residential construction or even career choices) that are based on price signals can result in significant economic inefficiencies.
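To state the distinction in symbols (the notation here is my own, chosen for concreteness rather than taken from any of the papers discussed below):

```latex
% Notation chosen for illustration: p_t is the asset price at date t,
% d_{t+k} is the cash flow k periods ahead, and r is a constant discount rate.

% Allocative efficiency: the price equals expected discounted cash flows.
\[
  p_t = \mathbb{E}_t\!\left[\sum_{k=1}^{\infty}\frac{d_{t+k}}{(1+r)^k}\right]
\]

% Informational efficiency is weaker: it only requires that one-period returns
% be unpredictable given current information,
\[
  \mathbb{E}_t\!\left[\frac{p_{t+1}+d_{t+1}}{p_t}\right] = 1 + \rho_t ,
\]
% where the required return \rho_t may itself move with sentiment, so the
% second condition does not by itself deliver the first.
```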
Some of the earliest and most influential work on market efficiency was based on the (often implicit) assumption that informational efficiency implied allocative efficiency. Consider, for instance, the following passage from Eugene Fama's 1965 paper on random walks in stock market prices (emphasis added):
The assumption of the fundamental analysis approach is that at any point in time an individual security has an intrinsic value... which depends on the earning potential of the security. The earning potential of the security depends in turn on such fundamental factors as quality of management, outlook for the industry and the economy, etc...

In an efficient market, competition among the many intelligent participants leads to a situation where, at any point in time, actual prices of individual securities already reflect the effects of information based both on events that have already occurred and on events which, as of now, the market expects to take place in the future. In other words, in an efficient market at any point in time the actual price of a security will be a good estimate of its intrinsic value.
Or consider the opening paragraph of his enormously influential 1970 review of the theory and evidence for market efficiency:
The primary role of the capital market is allocation of ownership of the economy's capital stock. In general terms, the ideal is a market in which prices provide accurate signals for resource allocation: that is, a market in which firms can make production-investment decisions, and investors can choose among the securities that represent ownership of firms' activities under the assumption that security prices at any time “fully reflect” all available information. A market in which prices always “fully reflect” available information is called “efficient.”
The above passage is quoted by Justin Fox, who argues that proponents of the hypothesis have recently been defining efficiency down:
That leaves us with an efficient market hypothesis that merely claims, as John Cochrane puts it, that "nobody can tell where markets are going." This is an okay theory, and one that has held up reasonably well—although there are well-documented exceptions such as the value and momentum effects.
The most effective recent criticisms of the efficient markets hypothesis have not focused on these exceptions or anomalies, which for the most part are quite minor and impermanent. The critics concede that informational efficiency is a reasonable approximation, at least with respect to short-term price forecasts, but deny that prices consistently provide "accurate signals for resource allocation." This is the position taken by Richard Thaler in his recent interview with John Cassidy (h/t Mark Thoma):
I always stress that there are two components to the theory. One, the market price is always right. Two, there is no free lunch: you can’t beat the market without taking on more risk. The no-free-lunch component is still sturdy, and it was in no way shaken by recent events: in fact, it may have been strengthened. Some people thought that they could make a lot of money without taking more risk, and actually they couldn’t. So either you can’t beat the market, or beating the market is very difficult—everybody agrees with that...
The question of whether asset prices get things right is where there is a lot of dispute. Gene [Fama] doesn’t like to talk about that much, but it’s crucial from a policy point of view. We had two enormous bubbles in the last decade, with massive consequences for the allocation of resources.
The same point is made somewhat more tersely by The Economist:
Markets are efficient in the sense that it's hard to make an easy buck off of them, particularly when they're rushing maniacally up the skin of an inflating bubble. But are they efficient in the sense that prices are right? Tens of thousands of empty homes say no.
And again, by Jason Zweig, building on the ideas of Benjamin Graham:
Mr. Graham proposed that the price of every stock consists of two elements. One, "investment value," measures the worth of all the cash a company will generate now and in the future. The other, the "speculative element," is driven by sentiment and emotion: hope and greed and thrill-seeking in bull markets, fear and regret and revulsion in bear markets.

The market is quite efficient at processing the information that determines investment value. But predicting the shifting emotions of tens of millions of people is no easy task. So the speculative element in pricing is prone to huge and rapid swings that can swamp investment value.

Thus, it's important not to draw the wrong conclusions from the market's inefficiency... even after the crazy swings of the past decade, index funds still make the most sense for most investors. The market may be inefficient, but it remains close to invincible.
This passage illustrates very clearly the limited value of informational efficiency when allocative efficiency fails to hold. Prices may indeed contain "all relevant information" but this includes not just beliefs about earnings and discount rates, but also beliefs about "sentiment and emotion." These latter beliefs can change capriciously, and are notoriously difficult to track and predict. Prices therefore send messages that can be terribly garbled, and resource allocation decisions based on these prices can give rise to enormous (and avoidable) waste. Provided that major departures of prices from intrinsic values can be reliably identified, a case could be made for government intervention in affecting either the prices themselves, or at least the responses to the signals that they are sending.
Under these conditions it makes little sense to say that markets are efficient, even if they are essentially unpredictable in the short run. Lorenzo at Thinking Out Aloud suggests a different name:
...like other things in economics, such as rational expectations, EMH needs a better name. It is really something like the "all-information-is-incorporated hypothesis" just as rational expectations is really consistent expectations. If they had more descriptive names, people would not misconstrue them so easily and there would be less argument about them.
But a name that emphasizes informational efficiency is also misleading, because it does not adequately capture the range of non-fundamental information on market psychology that prices reflect. My own preference (following Jason Zweig) would be to simply call it the invincible markets hypothesis.

---

Update (2/16). Mark Thoma has more on the subject, as does Cyril Hédoin. Brad DeLong and Robert Waldmann have also linked here, which gives me an excuse to mention two papers of theirs (both written with Shleifer and Summers, and both published in 1990) that were among the first to try and grapple with the question of how rational arbitrageurs would adjust their behavior in response to the presence of noise traders. In a related article that I have discussed previously on this blog (here and here, for instance), Abreu and Brunnermeier have shown how the difficulty of coordinated attack can result in prolonged departures of prices from fundamentals.

---

Update (2/17). My purpose here was to characterize a hypothesis and not to endorse it. In a comment on the post (and also here), Rob Bennett makes the claim that market timing based on aggregate P/E ratios can be a far more effective strategy than passive investing over long horizons (ten years or more.) I am not in a position to evaluate this claim empirically but it is consistent with Shiller's analysis and I can see how it could be true. Over short horizons, however, attempts at market timing can be utterly disastrous, as I have discussed previously. This is what makes bubbles possible. In fact, I believe that market timing over short horizons is much riskier than it would be if markets satisfied allocative efficiency and the only risk came from changes in fundamentals and one's own valuation errors.

---

Update (2/20). Scott Sumner jumps into the fray, but grossly mischaracterizes my position:
Then there is talk (here and here) of a new type of inefficient markets; Rajiv Sethi calls it the invincible market hypothesis.  I don’t buy it, nor do I think the more famous anti-EMH types would either. The claim is that markets are efficient, but they are also so irrational that there is no way for investors to take advantage of that fact.  This implies that the gap between actual price and fundamental value doesn’t tend to close over time, but rather follows a sort of random walk, drifting off toward infinity.
This is obviously not what I claimed. There is absolutely no chance of the gap between prices and fundamentals "drifting off toward infinity." All bubbles are followed by crashes or bear markets, and prices do track fundamentals pretty well over long horizons. The problem lies in the fact that attempts to time the market over short horizons can be utterly disastrous, as I have discussed at length in a previous post; this is what makes asset bubbles possible in the first place. I used the term "invincible markets hypothesis" not as a "new type of inefficient markets hypothesis" but rather as a description of the claim that markets satisfy informational (but not allocative) efficiency. And I did not endorse this claim, except as a "reasonable approximation... with respect to short-term price forecasts."

Sumner continues as follows:
Sethi argues Shiller might be right in the long run, but may be wrong in the short run. I don’t buy that distinction. If Shiller’s right then the anti-EMH position has useful investment implications, even for short term investors...
This too is false. I believe that Shiller is right in the short run (since he argues that prices can depart significantly from fundamental values) and also right in the long run (since he believes that in the long run prices track fundamentals quite well). This does not have useful investment implications for short term investors because short run price movements are so unpredictable and taking short positions during a bubble is so risky. Over long horizons, however, Shiller's analysis does suggest that risk-adjusted returns will be greater if the P/E ratio is lower at the time of the initial investment. One could rationalize this with suitable assumptions about time-varying discount rates, but as Thaler points out, such rational choice models are incredibly flexible and lacking in discipline. Everyone acknowledges, however, that for most investors passive investing is far superior to short-term attempts at market timing. The disagreement is about whether prices can deviate significantly from fundamentals from time to time, resulting in severe economic dislocations and inefficiencies.
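For anyone who wants to examine this long-horizon claim directly, the following is a minimal sketch (the column names and the data source are hypothetical; Shiller's own spreadsheet or something similar would have to be loaded first) that sorts months by the starting price-earnings ratio and compares average raw, not risk-adjusted, returns over the subsequent ten years:

```python
import pandas as pd

# df is assumed (hypothetically) to contain one row per month with columns:
#   'price'    : real index level
#   'earnings' : trailing real earnings (e.g., a ten-year average, as in Shiller's CAPE)
# Dividends are ignored for brevity, so these are price returns only.
def returns_by_pe_quartile(df: pd.DataFrame, horizon_months: int = 120) -> pd.Series:
    """Average annualized price return over the next `horizon_months`,
    grouped by quartile of the starting price-earnings ratio."""
    pe = df['price'] / df['earnings']
    future_return = (df['price'].shift(-horizon_months) / df['price']) ** (12 / horizon_months) - 1
    panel = pd.DataFrame({'pe': pe, 'future_return': future_return}).dropna()
    panel['pe_quartile'] = pd.qcut(panel['pe'], 4,
                                   labels=['low', 'mid-low', 'mid-high', 'high'])
    return panel.groupby('pe_quartile', observed=True)['future_return'].mean()
```

If Shiller's reading of the evidence is right, the 'low' quartile should show noticeably higher average subsequent returns than the 'high' quartile, even while month-to-month returns remain essentially unpredictable.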

---

Update (2/24). For a sober assessment of why passive investing remains the best strategy for most investors despite modest violations of informational efficiency, see this post at Pop Economics.

Robert Waldmann's comments at Angry Bear are excellent, but need to be read with some care. His main point is this: there exist certain (standard but restrictive) general equilibrium models in which informational efficiency does imply allocative efficiency. But minor deviations from informational efficiency do not imply that deviations from allocative efficiency will also be minor.
Anomalies in risk adjusted returns on the order of 1% per year can't be detected. We can't be sure of exactly how to adjust for risk. However, they can make the difference between allocative efficiency and gross inefficiency.

For policy makers there is a huge huge difference between "markets are approximately informational efficient" and "markets are informational efficient." The second claim (plus standard false assumptions) implies that markets are allocatively efficient. The [first] implies nothing about allocative efficiency.
In other words, the link between informational efficiency and allocative efficiency is not robust. This is why the market can be hard to beat, and yet generate significant departures of prices from fundamentals from time to time. 

Saturday, February 06, 2010

A Case for Agent-Based Models in Economics

In a recent essay in Nature, Doyne Farmer and Duncan Foley have made a strong case for the use of agent-based models in economics. These are computational models in which a large number of interacting agents (individuals, households, firms, and regulators, for example) are endowed with behavioral rules that map environmental cues onto actions. Such models are capable of generating complex dynamics even with simple behavioral rules because the interaction structure can give rise to emergent properties that could not possibly be deduced by examining the rules themselves. As such, they are capable of providing microfoundations for macroeconomics in a manner that is both more plausible and more authentic than is the case with highly aggregative representative agent models.

Among the most famous (and spectacular) agent-based models is John Conway's Game of Life (if you've never seen a simulation of this you really must). In economics, the earliest such models were developed by Thomas Schelling in the 1960s, and included his celebrated checkerboard model of residential segregation. But with the exception of a few individuals (some of whom are mentioned below) there has been limited interest among economists in the further development of such approaches.
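For readers who have never experimented with such a model, here is a minimal sketch in the spirit of Schelling's checkerboard (the grid size, tolerance threshold, and random-relocation rule are illustrative choices of mine, not Schelling's original specification):

```python
import random

# A minimal Schelling-style segregation model on a wrap-around grid.
SIZE, EMPTY_FRAC, TOLERANCE, STEPS = 20, 0.1, 0.3, 100_000

def neighbors(grid, r, c):
    """The up-to-eight occupied cells surrounding (r, c)."""
    cells = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
             for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    return [x for x in cells if x is not None]

def unhappy(grid, r, c):
    """An agent is unhappy if too few of its neighbors share its type."""
    me, nbrs = grid[r][c], neighbors(grid, r, c)
    return bool(nbrs) and sum(n == me for n in nbrs) / len(nbrs) < TOLERANCE

# Populate the grid with two types of agents and some vacant cells.
weights = [(1 - EMPTY_FRAC) / 2, (1 - EMPTY_FRAC) / 2, EMPTY_FRAC]
grid = [[random.choices(['A', 'B', None], weights)[0] for _ in range(SIZE)]
        for _ in range(SIZE)]

# Repeatedly pick a random agent; if it is unhappy, move it to a random vacancy.
for _ in range(STEPS):
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    if grid[r][c] is not None and unhappy(grid, r, c):
        vr, vc = random.choice([(i, j) for i in range(SIZE)
                                for j in range(SIZE) if grid[i][j] is None])
        grid[vr][vc], grid[r][c] = grid[r][c], None
```

Even though each agent is content as long as just thirty percent of its neighbors are of its own type, grids like this one typically end up sharply segregated, which is exactly the sort of emergent property that cannot be read off the individual rules.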

Farmer and Foley hope to change this. They begin their piece with a critical look at contemporary modeling practices:
In today's high-tech age, one naturally assumes that US President Barack Obama's economic team and its international counterparts are using sophisticated quantitative computer models to guide us out of the current economic crisis. They are not. 
The best models they have are of two types, both with fatal flaws. Type one is econometric: empirical statistical models that are fitted to past data. These successfully forecast a few quarters ahead as long as things stay more or less the same, but fail in the face of great change. Type two goes by the name of 'dynamic stochastic general equilibrium'. These models... by their very nature rule out crises of the type we are experiencing now. 
As a result, economic policy-makers are basing their decisions on common sense, and on anecdotal analogies to previous crises such as Japan's 'lost decade' or the Great Depression... The leaders of the world are flying the economy by the seat of their pants.
This is hard for most non-economists to believe. Aren't people on Wall Street using fancy mathematical models? Yes, but for a completely different purpose: modelling the potential profit and risk of individual trades. There is no attempt to assemble the pieces and understand the behaviour of the whole economic system.
The authors suggest a shift in orientation:
There is a better way: agent-based models. An agent-based model is a computerized simulation of a number of decision-makers (agents) and institutions, which interact through prescribed rules. The agents can be as diverse as needed — from consumers to policy-makers and Wall Street professionals — and the institutional structure can include everything from banks to the government. Such models do not rely on the assumption that the economy will move towards a predetermined equilibrium state, as other models do. Instead, at any given time, each agent acts according to its current situation, the state of the world around it and the rules governing its behaviour. An individual consumer, for example, might decide whether to save or spend based on the rate of inflation, his or her current optimism about the future, and behavioural rules deduced from psychology experiments. The computer keeps track of the many agent interactions, to see what happens over time. Agent-based simulations can handle a far wider range of nonlinear behaviour than conventional equilibrium models. Policy-makers can thus simulate an artificial economy under different policy scenarios and quantitatively explore their consequences.
Such methods are unfamiliar (or unappealing) to most theorists in the leading research departments and rarely published in the top professional journals. Farmer and Foley attribute this in part to the failure of a particular set of macroeconomic policies, and the resulting ascendancy of the rational expectations hypothesis:
Why is this type of modelling not well-developed in economics? Because of historical choices made to address the complexity of the economy and the importance of human reasoning and adaptability.
The notion that financial economies are complex systems can be traced at least as far back as Adam Smith in the late 1700s. More recently John Maynard Keynes and his followers attempted to describe and quantify this complexity based on historical patterns. Keynesian economics enjoyed a heyday in the decades after the Second World War, but was forced out of the mainstream after failing a crucial test during the mid-seventies. The Keynesian predictions suggested that inflation could pull society out of a recession; that, as rising prices had historically stimulated supply, producers would respond to the rising prices seen under inflation by increasing production and hiring more workers. But when US policy-makers increased the money supply in an attempt to stimulate employment, it didn't work — they ended up with both high inflation and high unemployment, a miserable state called 'stagflation'. Robert Lucas and others argued in 1976 that Keynesian models had failed because they neglected the power of human learning and adaptation. Firms and workers learned that inflation is just inflation, and is not the same as a real rise in prices relative to wages...

The cure for macroeconomic theory, however, may have been worse than the disease. During the last quarter of the twentieth century, 'rational expectations' emerged as the dominant paradigm in economics... Even if rational expectations are a reasonable model of human behaviour, the mathematical machinery is cumbersome and requires drastic simplifications to get tractable results. The equilibrium models that were developed, such as those used by the US Federal Reserve, by necessity stripped away most of the structure of a real economy. There are no banks or derivatives, much less sub-prime mortgages or credit default swaps — these introduce too much nonlinearity and complexity for equilibrium methods to handle...
Agent-based models potentially present a way to model the financial economy as a complex system, as Keynes attempted to do, while taking human adaptation and learning into account, as Lucas advocated. Such models allow for the creation of a kind of virtual universe, in which many players can act in complex — and realistic — ways. In some other areas of science, such as epidemiology or traffic control, agent-based models already help policy-making.
One problem that must be addressed if agent-based models are to gain widespread acceptance in economics is that of quality control. For methodologies that are currently in common use, there exist well-understood (though imperfect) standards for assessing the value of any given contribution. Empirical researchers are concerned with identification and external validity, for instance, and theorists with robustness. But how is one to judge the robustness of a set of simulation results?
The major challenge lies in specifying how the agents behave and, in particular, in choosing the rules they use to make decisions. In many cases this is still done by common sense and guesswork, which is only sometimes sufficient to mimic real behaviour. An attempt to model all the details of a realistic problem can rapidly lead to a complicated simulation where it is difficult to determine what causes what. To make agent-based modelling useful we must proceed systematically, avoiding arbitrary assumptions, carefully grounding and testing each piece of the model against reality and introducing additional complexity only when it is needed. Done right, the agent-based method can provide an unprecedented understanding of the emergent properties of interacting parts in complex circumstances where intuition fails.
This recognizes the problem of quality control, but does not offer much in the way of guidance for editors or referees in evaluating submissions. Presumably such standards will emerge over time, perhaps through the development of a few contributions that are commonly agreed to be outstanding and can serve as templates for future work.

There do exist a number of researchers using agent-based methodologies in economics, and Farmer and Foley specifically mention Blake LeBaron, Rob Axtell, Mauro Gallegati, Robert Clower and Peter Howitt. To this list I would add Joshua Epstein, Marco Janssen, Peter Albin, and especially Leigh Tesfatsion, whose ACE (agent-based computational economics) website provides a wonderful overview of what such methods are designed to achieve. (Tesfatsion also mentions not just Smith but also Hayek as a key figure in exploring the "self-organizing capabilities of decentralized market economies.")

A recent example of an agent-based model that deals specifically with the financial crisis may be found in a paper by Thurner, Farmer, and Geanakoplos. Farmer and Foley provide an overview:
Leverage, the investment of borrowed funds, is measured as the ratio of total assets owned to the wealth of the borrower; if a house is bought with a 20% down-payment the leverage is five. There are four types of agents in this model. 'Noise traders', who trade more or less at random, but are slightly biased toward driving prices towards a fundamental value; hedge funds, which hold a stock when it is under-priced and otherwise hold cash; investors who decide whether to invest in a hedge fund; and a bank that can lend money to the hedge funds, allowing them to buy more stock. Normally, the presence of the hedge funds damps volatility, pushing the stock price towards its fundamental value. But, to contain their risk, the banks cap leverage at a predetermined maximum value. If the price of the stock drops while a fund is fully leveraged, the fund's wealth plummets and its leverage increases; thus the fund has to sell stock to pay off part of its loan and keep within its leverage limit, selling into a falling market.
This agent-based model shows how the behaviour of the hedge funds amplifies price fluctuations, and in extreme cases causes crashes. The price statistics from this model look very much like reality. It shows that the standard ways banks attempt to reduce their own risk can create more risk for the whole system.
Previous models of leverage based on equilibrium theory showed qualitatively how leverage can lead to crashes, but they gave no quantitative information about how this affects the statistical properties of prices. The agent approach simulates complex and nonlinear behaviour that is so far intractable in equilibrium models. It could be made more realistic by adding more detailed information about the behaviour of real banks and funds, and this could shed light on many important questions. For example, does spreading risk across many financial institutions stabilize the financial system, or does it increase financial fragility? Better data on lending between banks and hedge funds would make it possible to model this accurately. What if the banks themselves borrow money and use leverage too, a process that played a key role in the current crisis? The model could be used to see how these banks might behave in an alternative regulatory environment.
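As a rough illustration of the forced-selling mechanism described in the quoted passage (this is not a reimplementation of the Thurner, Farmer and Geanakoplos model; the parameters, the price-impact rule, and the fund's demand function are all arbitrary choices of mine), consider the following sketch:

```python
import random

# A stripped-down illustration of leverage-driven forced selling.
FUNDAMENTAL = 100.0   # perceived fundamental value of the stock
MAX_LEVERAGE = 5.0    # bank-imposed cap on (value of assets held / fund wealth)
AGGRESSION = 2.0      # how strongly the fund responds to underpricing
IMPACT = 0.01         # price impact per share of the fund's net trades
STEPS = 2_000

price, cash, shares = 100.0, 1_000.0, 0.0

for t in range(STEPS):
    # Noise traders: a random shock plus mild reversion toward fundamental value.
    price += random.gauss(0.0, 1.0) + 0.05 * (FUNDAMENTAL - price)

    wealth = cash + shares * price            # fund equity, marked to market
    if wealth <= 0:                           # the fund has been wiped out
        break

    # The fund buys when the stock is underpriced, subject to the leverage cap.
    underpricing = max(FUNDAMENTAL - price, 0.0)
    target_value = min(AGGRESSION * underpricing, MAX_LEVERAGE) * wealth
    trade = target_value / price - shares     # shares to buy (sell if negative)

    # A fully leveraged fund that suffers a price drop must sell into the
    # falling market to get back under the cap -- the amplification mechanism.
    shares += trade
    cash -= trade * price
    price += IMPACT * trade                   # the fund's own trades move the price
```

In runs of this sort the sharpest price moves tend to occur when the fund is at or near its leverage cap, which is the amplification the authors describe; adding many heterogeneous funds, investor flows, and an explicit bank, as the actual model does, is what produces the realistic price statistics they report.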
I have discussed Geanakoplos' more methodologically orthodox papers on leverage cycles in an earlier post. That work uses standard methods in general equilibrium theory to address related questions, suggesting that the two approaches are potentially quite complementary. In fact, the nature of agent-based modeling is such that it is best conducted in interdisciplinary teams, and is therefore unlikely to ever become the dominant methodology in use:
Creating a carefully crafted agent-based model of the whole economy is, like climate modelling, a huge undertaking. It requires close feedback between simulation, testing, data collection and the development of theory. This demands serious computing power and multi-disciplinary collaboration among economists, computer scientists, psychologists, biologists and physical scientists with experience in large-scale modelling. A few million dollars — much less than 0.001% of the US financial stimulus package against the recession — would allow a serious start on such an effort.
Given the enormity of the stakes, such an approach is well worth trying.
I agree. This kind of effort is currently being undertaken at the Santa Fe Institute, where Farmer and Foley are both on the faculty. And for graduate students interested in exploring these ideas and methods, John Miller and Scott Page hold a Workshop on Computational Economic Modeling in Santa Fe each summer (the program announcement for the 2010 workshop is here.) Their book on Complex Adaptive Systems provides a nice introduction to the subject, as does Epstein and Axtell's Growing Artificial Societies. But it is Thomas Schelling's Micromotives and Macrobehavior, first published in 1978, that in my view reveals most clearly the logic and potential of the agent-based approach.

---

Update (2/7). Cyril Hédoin at Rationalité Limitée points to a paper by Axtell, Axelrod, Epstein and Cohen that explicitly discusses the important issues of replication and comparative model evaluation for agent-based simulations. (He also mentions a nineteenth century debate on research methodology between Carl Menger and Gustav von Schmoller that seems relevant; I'd like to take a closer look at this if I can ever find the time.)

Also, in a pair of comments on this post, Barkley Rosser recommends a 2008 book on Emergent Macroeconomics by Delli Gatti, Gaffeo, Gallegati, Giulioni, and Palestrini, and provides an extensive review of agent-based computational models in regional science and urban economics.

Wednesday, February 03, 2010

Two Blog Birthdays and the Democratization of Discourse

Two notable economics blogs - Cheap Talk and The Money Illusion - celebrated their first birthdays yesterday. Each marked the occasion with highly readable (but very different) posts that got me thinking about the origins and purpose of my own blog, and the extraordinary democratization of economic discourse that the technology of blogging has set in motion.
Jeff Ely's birthday post at Cheap Talk describes how his collaboration with Sandeep Baliga finally got off the ground after a sequence of mislaid and misinterpreted emails, and how they eventually settled on a name:
And so we started thinking of a name.  Sandeep had a lot of bad ideas for names
  1. hodgepodge hedgehog
  2. platypus
  3. bacon is a vegetable
  4. release the gecko
  5. coordination failure
  6. reaction function
and he is too much of a philistine to appreciate my ideas for names:
  1. banana seeds
  2. vapor mill
  3. el emenopi’
so we were at an impasse.  Somehow we hit upon the name Cheap Talk. Sandeep ran it by some folks at a party and it seemed like a hit.  (That name was taken by a then-defunct blog and wordpress does not recycle url’s so we had to morph it into cheeptalk.wordpress.com.)
I'm surprised they didn't consider babbling equilibrium. Here's the birthday message I left for them:
Jeff and Sandeep, congratulations! Reaction function would have been a good name but too modest for what you guys are doing. You have a mix of analytical clarity and offbeat humor that really appeals to my taste. I have to say, though, that your rational choice approach to torture made me a bit uneasy (not to mention queasy).

I know a bit about blogging stamina (or lack thereof). I started my blog in 2002 and had a total of 13 posts over the first seven years. Then I wrote a piece on the Gates arrest that the New York Times declined to publish, so I decided to bring the blog back to life. Most of us have more ideas than we could possibly turn into research papers – might as well make them available to everyone else.
It's not easy to find first-rate economic theorists with an abundance of style and wit, but the creators of Cheap Talk both qualify. I'm very glad they got this project going.

Meanwhile over at The Money Illusion, Scott Sumner's birthday post (which I reached via Tyler Cowen) was very different in content and tone. In part it was an attempt to justify his reasoning and policy recommendations over the past year, but it was much more than that: a serious and moving reflection on the blogging experience, the state of macroeconomic methodology, and the role of the public intellectual. Here are a few extracts from a long post that is worth reading in its entirety:
Be careful what you wish for.  Last February 2nd I started this blog with very low expectations... I knew I wasn’t a good writer, years ago I got a referee report back from an anonymous referee (named McCloskey) who said “if the author had used no commas at all, his use of commas would have been more nearly correct.”  Ouch!  But it was true, others said similar things.  And I was also pretty sure that the content was not of much interest to anyone.

Now my biggest problem is time—I spend 6 to 10 hours a day on the blog, seven days a week.  Several hours are spent responding to reader comments and the rest is spent writing long-winded posts and checking other economics blogs.  And I still miss many blogs that I feel I should be reading [...]

As you may know, I don’t think much of the official methodology in macroeconomics.  Many of my fellow economists seem to have a Popperian view of the social sciences.  You develop a model.  You go out and get some data.  And then you try to refute the model with some sort of regression analysis.  If you can’t refute it, then the model is assumed to be supported by the data, although papers usually end by noting “further research is necessary,” as models can never really be proved, only refuted.

My problem with this view is that it doesn’t reflect the way macro and finance actually work.  Instead the models are often data-driven.  Journals want to publish positive results, not negative.  So thousands of macroeconomists keep running tests until they find a “statistically significant” VAR model, or a statistically significant “anomaly” in the EMH.  Unfortunately, because the statistical testing is often used to generate the models, and determine which get published, the tests of statistical significance are meaningless.

I’m not trying to be a nihilist here, or a Luddite who wants to go back to the era before computers.  I do regressions in my research, and find them very useful.  But I don’t consider the results of a statistical regression to be a test of a model, rather they represent a piece of descriptive statistics, like a graph, which may or may not usefully supplement a more complex argument that relies on many different methods, not a single “Official Method.” [...]

I like Rorty’s pragmatism; his view that scientific models don’t literally correspond to reality, or mirror reality.  Rorty says that one should look for models that are “coherent,” that help us to make sense of a wide variety of facts.  I want people who read my blog to be saying to themselves “aha, now I understand why the economy continues to drag along despite low interest rates,” as they recall that low rates are not an indication of monetary stimulus... It’s all about persuasion.  And people are persuaded by coherent models [...] 
So that’s the goal of my blog, to constantly use theoretical arguments, empirical data, clever metaphors, and historical analogies that make people see the current situation in a new way.  Whatever works, as long as it is not dishonest [...]   
Regrets?  I’m pretty fatalistic about things.  I suppose it wasn’t a smart career move to spend so much time on the blog.  If I had ignored my commenters I could have had my manuscript revised by now.  But I think everything happens for a reason...The commenters played an important role in the blog.  By constantly having to defend myself against their criticism, I further refined my arguments.  In addition, I got a better idea of how other people look at monetary economics.  I don’t have any major regrets...
Happiness isn’t based on anything you achieve, but rather the anticipation of future happiness.  As sports fans know the most fun position to be in is the underdog challenging the evil empire... whether I in some sense “win” in the long run isn’t really that important to me.  I’ve already got most of what I wanted, which is for people I respect to find my arguments intriguing...
I used to think I had just a few ideas, and once I used those up I’d have nothing more to say.  As you’ve noticed (sometimes painfully) that is not my problem.  I suppose it came from being a loner for several decades... If you’d told me last year “write 1000 pages on monetary policy,” I would have recoiled in horror.  I figured I’d do a couple dozen posts, run out of ideas, and then merely comment on current events.  I had no idea that writing is thinking.  But now here I am a year later, and my blog is 1000 pages of sprawling essays.  Yes, there’s plenty of repetition, but even if you sliced out all the filler, I bet you could find a 200 page book in there somewhere.

Still, at the current pace my blog is gradually swallowing my life.  Soon I won’t be able to get anything else done.  And I really don’t get any support from Bentley, as far as I know the higher ups don’t even know I have a blog.  So I just did 2500 hours of uncompensated labor.  I hope someone got some value out of it.  Right now I just want my life back.

But I suppose I could do one more post.

And after that, maybe one more final post wouldn’t seem so difficult.

But please don’t ask me to become a blogger.  It’d be like asking me whether I ever considered becoming a heroin addict.  Just one more post.  One day at a time. . . .
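Sumner's point about specification search has a simple statistical core: when the same data are used both to discover a model and to test it, the usual significance thresholds lose their meaning. A small simulation (my own illustration, not anything in his post) makes this concrete: regress a pure-noise "return" series on many pure-noise candidate predictors, keep the best-looking one, and see how often it clears the conventional 5 percent bar. The sample size, the number of candidates, and the use of simple pairwise regressions are all illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

n_obs = 200          # observations per sample
n_predictors = 50    # candidate predictors searched over
n_trials = 500       # repetitions of the whole exercise
false_hits = 0

for _ in range(n_trials):
    y = rng.standard_normal(n_obs)                    # "returns": pure noise
    X = rng.standard_normal((n_obs, n_predictors))    # candidate predictors: pure noise

    # Regress y on each predictor separately and keep the smallest p-value,
    # mimicking a search for a publishable "anomaly".
    best_p = min(stats.pearsonr(X[:, k], y)[1] for k in range(n_predictors))
    if best_p < 0.05:
        false_hits += 1

print(f"share of searches yielding a 'significant' predictor: {false_hits / n_trials:.2f}")
# With 50 candidates, that share is roughly 1 - 0.95**50, about 0.92,
# even though every predictor is unrelated to y by construction.
```

The point is not that regressions are useless, a position Sumner explicitly disavows, but that a significance test applied after the search cannot be read at face value.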
This entry (not surprisingly) attracted a number of supportive and encouraging comments, among which was my own:
This is a wonderful, heartfelt post. You’re a far better writer than you give yourself credit for. Congratulations on the first birthday of your blog; I hope that there will be many more to come.
I really meant that. There was much in Sumner's post that struck a chord with me. I also see my own blog as a sequence of short interlocking essays that present what I hope is a coherent vision. And I too have been fortunate to be visited by a number of thoughtful readers with whom I have had long, wide-ranging, and generally civil exchanges.
The community of academic economists is increasingly coming to be judged not simply by peer reviewers at journals or by carefully selected cohorts of students, but by a global audience of curious individuals spanning multiple disciplines and specializations. Voices that have long been silenced in mainstream journals now insist on being heard on an equal footing. Arguments on blogs seem to be judged largely on their merits, independently of the professional stature of those making them. This has allowed economists in far-flung places with heavy teaching loads, or those who pursued non-academic career paths, to join debates. Even anonymous writers and autodidacts can wield considerable influence in this environment, and a number of genuinely interdisciplinary blogs have emerged (see, for instance, this fascinating post from one of my favorites).
This has got to be a healthy development. One might persuade a referee or seminar audience that a particular assumption is justified simply because there is a large literature that builds on it, or that tractability concerns preclude reasonable alternatives. But this broader audience is not so easy to convince. Persuading a multitude of informed, thoughtful, intelligent readers of the relevance and validity of one's arguments using words rather than formal models is a far more challenging task than persuading one's own students or peers. If one can separate the wheat from the chaff, the reasoned argument from the noise, this process should result in a more dynamic and robust discipline in the long run.