Wednesday, July 28, 2010

Equilibrium Analysis

In a recent post on his (consistently interesting) blog, David Murphy questions the value of equilibrium analysis in economics and finance, and points to two earlier posts of his in which the same point is made. Here he is in July 2007:
An interesting post on the Street Light Blog, on currency misalignments, suggests an interesting question: is economics an equilibrium discipline? The very idea of a misaligned FX rate suggests that the natural state is an aligned one: perhaps the fundamentals move faster than the markets adjust, so FX is never in equilibrium. Perhaps (in the language of statistical mechanics) the relaxation time is much longer than the average time between forcings. 
And here, in August 2008:
My own view is that finance is not an equilibrium discipline, mostly, so while classical economics might work well in explaining the price of coffee... it does rather less well in asset allocation or explaining the return distribution of financial assets. Rather, new news arrives faster than the market can restore equilibrium after the last perturbation, meaning that most of the time equilibrium is not a useful concept.
In a 1975 paper that remains worth reading to this day, James Tobin was explicit about the limitations of equilibrium analysis in understanding large scale economic fluctuations:
Keynes's General Theory attempted to prove the existence of equilibrium with involuntary unemployment, and this pretension touched off a long theoretical controversy. A. C. Pigou, in particular, argued effectively that there could not be a long-run equilibrium with excess supply of labor. The predominant verdict of history is that, as a matter of pure theory, Keynes failed to prove his case.

Very likely Keynes chose the wrong battleground. Equilibrium analysis and comparative statics were the tools to which he naturally turned to express his ideas, but they were probably not the best tools for his purpose... The real issue is not the existence of a long-run static equilibrium with unemployment, but the possibility of protracted unemployment which the natural adjustments of a market economy remedy very slowly if at all. So what if, within the recherché rules of the contest, Keynes failed to establish an "underemployment equilibrium"? The phenomena he described are better regarded as disequilibrium dynamics.
Tobin then goes on to develop a dynamic disequilibrium model of the macroeconomy (discussed at length here) which has a unique equilibrium characterized by full employment, steady inflation, and correct expectations. He shows that even if this equilibrium is locally stable, so that small perturbations are self-correcting, it need not be globally stable: sufficiently large shocks to the economy can result in cumulative divergence away from equilibrium unless arrested by a significant policy response. This seems to describe what we have experienced over the past couple of years better than any equilibrium model of which I am aware.
Note that Tobin's model is deterministic. The problem here is not that the economy is being buffeted by frequent shocks that arrive before a transition to equilibrium can occur; it is that the internal dynamics of adjustment simply do not approach the equilibrium from certain (large) sets of initial states even in the absence of shocks. The idea that the instability of steady growth with respect to disequilibrium dynamics is an important feature of modern market economies, one that cannot be neglected in a comprehensive theory of economic fluctuations, was forcefully advanced by Richard Goodwin as far back as 1951, and Paul Samuelson had explored the possibility even earlier. As Willem Buiter has recently lamented, this line of research in macroeconomics simply dried up about a generation ago.
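The distinction between local and global stability can be seen in a minimal sketch. This is not Tobin's actual model; the cubic adjustment rule below is purely illustrative, chosen because its equilibrium at zero is locally stable while shocks beyond a corridor of stability diverge cumulatively, with no further shocks required:

```python
# Illustrative deterministic adjustment process dx/dt = -x + x^3.
# The equilibrium x = 0 is locally stable; initial states with |x| > 1
# diverge under the system's own internal dynamics.

def simulate(x0, dt=0.01, steps=2000):
    """Euler-integrate the adjustment process from initial state x0."""
    x = x0
    for _ in range(steps):
        x += dt * (-x + x**3)
        if abs(x) > 1e6:          # cumulative divergence from equilibrium
            return float('inf')
    return abs(x)

small_shock = simulate(0.5)   # self-correcting: decays back toward zero
large_shock = simulate(1.5)   # diverges, absent any policy response
print(small_shock < 1e-3, large_shock == float('inf'))
```

The point of the exercise is that both trajectories are generated by exactly the same deterministic law; only the size of the initial displacement differs.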
Another area in which equilibrium analysis is likely to be inadequate is in the study of asset markets with significant speculative activity. Price and volume dynamics in such markets depend not just on changes in fundamentals but also on the distribution of trading strategies, and this in turn adjusts under pressure of differential performance. The idea of an equilibrium composition of trading strategies is a contradiction in terms: if there were any such thing there would be a new strategy that could enter to exploit the resulting regularity. It is the complexity of this disequilibrium process that allows information arbitrage efficiency to be approximately satisfied, while allowing for significant departures from fundamental valuation efficiency (the distinction, naturally, is also due to Tobin).
Finally consider Hyman Minsky's financial instability hypothesis, built on the paradoxical idea that stability itself can be destabilizing. In Minsky's framework stable expansions give rise to increasingly aggressive financial practices as those firms having the greatest maturity mismatch between assets and liabilities profit relative to their closest competitors. The resulting erosion in margins of safety increases financial fragility, interpreted as the likelihood that a major default will trigger a crisis of liquidity. Such a crisis eventually materializes, devastating precisely those firms whose actions gave rise to greater fragility. The balance of financial practices is then shifted in favor of increased prudence, and the stage is set for another period of stability. Trying to give this analysis an equilibrium interpretation is a futile exercise; expectations of financial market tranquility are self-falsifying, and no fixed distribution of financial practices can be stable. 
Given the potential of disequilibrium dynamic models to illuminate our understanding of the economy, why are they generally neglected in contemporary economics? In part it is because the quality of a disequilibrium model is hard to evaluate and the dynamics are necessarily arbitrary to a degree. There is a professional consensus on how equilibrium analysis should be done, but none (so far) when it comes to disequilibrium analysis. Furthermore, equilibrium models can be enormously insightful, even in applications to macroeconomics and finance. The work of John Geanakoplos on the leverage cycle is a case in point, and Abreu and Brunnermeier's paper on bubbles and crashes is another. I have used equilibrium methods frequently and will continue to do so. But it seems that there ought to be greater space in the profession for serious work on the dynamics of disequilibrium.


Update (7/31). In an email (posted with permission) David Murphy adds:
One of the main reasons people study equilibrium models is that they are an order of magnitude easier, mathematically, than non-equilibrium. If you consider a simple problem like cooling, for instance, the equilibrium version is high school physics, and the non-equilibrium version is still a research problem (with useful application to the improvement of annealing methods). Economists in my experience are very comfortable with the maths they know (a bit - stress a bit - of stochastic calculus), but they are not willing to venture much further because it gets really, really hard quite quickly.  Hence the 'we have a hammer, everything looks like a nail' problem.
I think this is correct as far as analytical results are concerned; proving theorems (aside from convergence-to-equilibrium results) with the standard toolbox is not easy in disequilibrium models.  But if one adopts a computational approach the reverse may be true. I, for one, find it easier to write an algorithm to simulate a recursive system than one that requires the computation of fixed points in high dimensional spaces. But (as noted above) there does not exist anything close to a professional consensus on how the quality of such models should be evaluated, and so they are usually rejected out of hand at most mainstream journals.
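The computational asymmetry is easy to see in a toy example (linear demand and supply are assumed purely for simplicity). Generating a disequilibrium path is just forward iteration of a recursion, while the equilibrium benchmark requires solving a fixed-point condition — trivial in one dimension, but not in the high dimensional spaces that realistic models inhabit:

```python
# Toy tatonnement example: the disequilibrium path is a simple recursion,
# while the equilibrium is defined implicitly by a fixed-point condition.

def demand(p):
    return 10.0 - p

def supply(p):
    return 2.0 + p

def simulate_path(p0=1.0, speed=0.3, steps=50):
    """Adjustment recursion: raise price under excess demand, lower it otherwise."""
    p, path = p0, []
    for _ in range(steps):
        p += speed * (demand(p) - supply(p))
        path.append(p)
    return path

# The fixed point solves demand(p) = supply(p): 10 - p = 2 + p, so p* = 4.
print(abs(simulate_path()[-1] - 4.0) < 1e-6)
```

Here the recursion happens to converge to the fixed point, but the algorithm would run just as easily if it did not, which is precisely the appeal of the computational approach.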

Sunday, July 25, 2010

East Asian Tigers and African Lions

William Easterly has recently argued that contemporary poverty in African nations may largely be accounted for by technological differences that date back for centuries, if not millennia:
1500 AD technology is a particularly powerful predictor of per capita income today. 78 percent of the difference in income today between sub-Saharan Africa and Western Europe is explained by technology differences that already existed in 1500 AD – even before the slave trade and colonialism. Moreover, these technological differences had already appeared by 1000 BC. The state of technology in 1000 BC has a strong correlation with technology 2500 years later, in 1500 AD...
A large role for history is still likely to sit uncomfortably with modern development practitioners, because you can’t change your history. But we have to face the world as it is, not as we would like it to be...
In a recent speech in Kampala, Gordon Brown offered a prognosis (coupled with a long list of policy recommendations) that was decidedly less gloomy about the future of Africa. Building on the observation that the continent is "full of more untapped potential and unrealised talent than any other," Brown continued as follows:
Twenty years ago nobody would have predicted that China and India would be the big drivers of growth and political superpowers they have become. And there is no reason to believe the countries of Africa cannot make similar leaps in the decades to come.... just as people have spoken of an American century and an Asian century, I believe we can now speak of an African century...
I believe the new African growth will come from five sources;
  • a faster pace of economic integration in Africa's internal market, and between your market and those of other continents, facilitated by investment in infrastructure
  • a broader based export-led growth, founded on new products and services
  • investment in the private sector from African and foreign sources in firms that create jobs and wealth
  • the up-skilling of the workforce, including through the acceleration of education provision, IT infrastructure and uptake and finally through
  • more effective governance to ensure that effective states can discharge their task of creating growth and reducing poverty
Each of these five priorities will be difficult to achieve. But we should remember the value of the prize. Because if we can agree a new model of post-crisis growth then Africa - already a 1.6 trillion economy - will continue to grow even faster than the rest of the world. This is not my assessment, but that of the world's leading companies and analysts. For example a report just published by the McKinsey Global Institute claims that Africa's consumer spending could reach 1.4 trillion dollars by 2020 - a 60% increase on 2008. In other words in ten years African consumer spending will be as big as the whole African economy is today.

It is those sorts of projections which mean people are now rightly talking not just of East Asian tigers, but of African lions.
Brown is careful to note that this rosy scenario is a "possibility rather than a probability" and that "it will happen through choice not chance." But, as Shanta Devarajan has recently observed, the choices necessary to make it happen are already being made:
In recent years, a broad swath of African countries has begun to show a remarkable dynamism.  From Mozambique’s impressive growth rate (averaging 8% p.a. for more than a decade) to Kenya’s emergence as a major global supplier of cut flowers, from M-pesa’s mobile phone-based cash transfers to KickStart’s low-cost irrigation technology for small-holder farmers, and from Rwanda’s gorilla tourism to Lagos City’s Bus Rapid Transit system, Africa is seeing a dramatic transformation.  This favorable trend is spurred by, among other things, stronger leadership, better governance, an improving business climate, innovation, market-based solutions, a more involved citizenry, and an increasing reliance on home-grown solutions.  More and more, Africans are driving African development.
Shanta links to a long list of emerging African success stories.  
While the economic consequences of an African resurgence will be major, the social implications could be even more profound. I believe that the rise of the African lions will do more to shatter racial stereotypes in the United States and elsewhere than any government policy or electoral outcome. But that is a topic for another post.


Update (7/25). I do not dispute the empirical claims made by Comin, Easterly and Gong, nor do I mean to suggest that Brown's speech and Devarajan's post have any bearing on these claims. But I have serious doubts about the relevance of their findings for identifying future centers of economic dynamism or for shaping development policy. History can matter for long periods of time (for instance in occupational inheritance or the patrilineal descent of surnames) and then cease to constrain our choices in any significant way. Once reliable correlations can break down suddenly and completely; history is full of such twists and turns. As far as African prosperity is concerned, I believe that a discontinuity of this kind is inevitable if not imminent.

From an overview of the McKinsey report referenced by Brown:
While Africa's increased economic momentum is widely recognized, less known are its sources and likely staying power... Africa's growth acceleration was widespread, with 27 of its 30 largest economies expanding more rapidly after 2000. All sectors contributed, including resources, finance, retail, agriculture, transportation and telecommunications. Natural resources directly accounted for just 24 percent of the continent's GDP growth from 2000 through 2008. Key to Africa's growth surge were improved political and macroeconomic stability and microeconomic reforms... total foreign capital flows into Africa rose from $15 billion in 2000 to a peak of $87 billion in 2007... Today the rate of return on foreign investment in Africa is higher than in any other developing region.

Sunday, July 18, 2010

David Blackwell, 1919-2010

The renowned mathematician David Blackwell died on July 8 at the age of 91.
I first came across Blackwell's name in a widely cited paper by Kalai and Lehrer on learning in repeated games. Kalai and Lehrer identified conditions under which players with different initial subjective beliefs about each other's strategies will nevertheless converge to behavior that approximates a Nash equilibrium of the repeated game. In establishing this, the authors relied heavily on the Blackwell-Dubins Theorem:
Our proof of the convergence to playing an ε-Nash equilibrium is divided into three steps. The first establishes a general self-correcting property of Bayesian updating. This is a modified version of the seminal Blackwell and Dubins' (1962) result about merging of opinions... When applied to our model, the self-correcting property shows that the probability distributions describing the players' beliefs about the future play of the game must converge to the true distribution. In other words, the beliefs and the real play become realization equivalent.
While Blackwell's work is familiar in economics largely through this result, he is also known for the Rao-Blackwell Theorem and his book (with Meyer Girshick) on the Theory of Games and Statistical Decisions. [Update: A much fuller discussion of his influence and contributions may be found here.]
Blackwell earned his doctorate in mathematics at the age of 22 from the University of Illinois, where his thesis adviser was Joseph Doob. He then spent a year at the Institute for Advanced Study in Princeton, where his cohort included Shizuo Kakutani, Paul Halmos, Leonard Savage, and Alfred Tarski. He was elected to the National Academy of Sciences in 1965 and was the sole recipient of the John von Neumann Theory Prize in 1979 (sandwiched between Nash and Lemke in 1978 and Gale, Kuhn and Tucker in 1980).
While his accomplishments are stellar and many, it is also worth contemplating the many slights that Blackwell had to endure over the course of his career:
Blackwell was appointed a Postdoctoral Fellow at the Institute for Advanced Study from 1941 for a year. At that time, members of the Institute were automatically officially made visiting fellows of Princeton University, and thus Blackwell was listed in its bulletin as such. This caused considerable ruckus as there had never been a black student, much less faculty fellow, at the University... The president of Princeton wrote the director of the Institute that the Institute was abusing the University's hospitality by admitting a black... Colleagues in Princeton wished to extend Blackwell's appointment at the institute. However, the president of Princeton organized a great protestation... When it was time to leave the institute, Blackwell knew no white schools would hire him, and he applied to all 105 Black schools in the country. After instructorships at Southern University and Clark College, Dr. Blackwell joined the faculty of Howard University from 1944 as an instructor... In three years, Blackwell had risen to the rank of Full Professor and Chairman.
Blackwell eventually moved to Berkeley in 1954 (after having previously been denied a position there due to "racial objections"). He became the first black professor to be tenured there, chaired the department of statistics from 1957 to 1961, and remained at the University until his retirement in 1988.
It takes a particular kind of strength to manage such a productive research career while tolerating the stresses and strains of personal insult, and carrying the aspirations of so many on one's shoulders. Blackwell was more than a brilliant mathematician; he was also a human being of extraordinary personal fortitude.

I am currently in Bogotá co-teaching a course with Glenn Loury at the (very impressive) Universidad de Los Andes. I am grateful to Glenn for bringing to my attention the news that Blackwell had recently passed away.


Update (7/19). Jeff Ely has linked to two other posts in appreciation of Blackwell: Eran Shmaya focuses on his work while Jesús Fernández-Villaverde writes (in Spanish) about his life. Both are well worth reading. Here's an extract from Eran's wonderful post:
We game theorists know Blackwell for several seminal contributions. Blackwell’s approachability theorem is at the heart of Aumann and Maschler’s result about repeated games with incomplete information... Blackwell’s theory of comparison of experiments has been influential in the game-theoretic study of value of information... Another seminal contribution of Blackwell, together with Lester Dubins, is the theorem about merging of opinions, which is the major tool in the Ehuds’ theory of Bayesian learning in repeated games. And then there are his contributions to the theory of infinite games with Borel payoffs (now known as Blackwell games) and Blackwell and Ferguson’s solution to the Big Match game.

One conspicuous aspect of many of Blackwell’s awesome papers is that they are extremely short — often a couple of pages long. He had an amazing ability to prove theorems in the right way, and he wrote with eloquence and clarity. He is the only writer I know who uses the 'as the reader can verify' trick productively, exactly at those occasions when the reader will indeed find it easier to convince himself in the validity of an assertion than to read a formal proof of it. It is very rare that I succeed in reading proofs in papers that were written dozens of years ago: Notations and perspectives change, and important results are usually reproduced in clearer way over the years. But Blackwell’s papers are still the best place to read the proofs of his theorems...

I hope I am not forcing my own agenda on Blackwell’s research when I say that for him game and decision theory were a tool to study conceptual questions about the meaning of probability and information. At any rate, he was clearly interested in these questions... I hope the game theory society will find a way to celebrate Blackwell’s contribution to our community.
Kevin Bryan has also put up a nice post on Blackwell.


Update (7/20). Stergios (in a comment on this post) observes that "Blackwell also made fundamental contributions to the theory of stochastic processes" and that his "renewal theorem is taught in any doctoral level course on stochastic models." And Glenn Loury has emailed me a link to a paper by Jacques Crémer "nicely expositing one of Blackwell's more influential results in statistical decision theory."

Andrew Gelman (and his commenters) have more. And Anandaswarup Gadde links to a wonderful profile from about a year ago:
Doob’s foundational work would help broaden the field of mathematics to a dizzying array of uses in science, economics and technology. So it came as no surprise when in 1942, Jerzy Neyman of the University of California at Berkeley asked if Doob were interested in going West.
“No, I cannot come, but I have some good students, and Blackwell is the best,” he replied.

“But of course he’s black,” Doob continued, “and in spite of the fact that we are in a war that’s advancing the cause of democracy, it may not have spread throughout our own land.”

The quote, repeated in the book “Mathematical People,” says a lot about the times and even more about David H. Blackwell... who started as an Illinois undergraduate in 1935 and finished with a doctoral degree six years later, all accomplished at a time when residence halls were whites-only, and approximately 100 blacks were included in the student body of nearly 12,000.

What would be the odds of the son of a railroad worker from Centralia – whose parents did not complete high school and whose Depression-era teaching prospects were limited to segregated schools – becoming one of the top theoretical mathematicians (black or white) in the world?

Almost too hard to compute...

After earning a UI doctoral degree in mathematics in 1941 at the age of 22, Blackwell completed a year at the Institute for Advanced Study in Princeton, N.J., where he worked with, among others, John von Neumann, father of modern game theory.
Berkeley’s Jerzy Neyman – who had been unable to persuade Doob to join his department – wanted to offer Blackwell a position but appeared to have come up against a deal-breaker.

In an oral history interview at Berkeley, Blackwell, now 90 years old and in “fair” health, recalled what he learned years later – that the Texan wife of the department head told her husband she “was not going to have that darky in her house.”

The job offer never came.

Blackwell focused his efforts instead on realistic career aspirations for a person of color at the time. In 1942 he applied to 105 historically black colleges, received three offers and eventually landed at Howard University in Washington, D.C., in 1944, where he remained for 10 years...

Back at Berkeley, Neyman had never forgotten Blackwell and finally hired him in 1954, where he would stay for the remainder of his career.
Read the whole thing.

Thursday, July 15, 2010

To a Man with a Hammer

In an article published about a month ago, Richard Thaler argued that a behavioral propensity to accept "risks that are erroneously thought to be vanishingly small" was responsible for both the devastating oil spill in the Gulf of Mexico as well as the global financial crisis. This prompted James Kwak to respond as follows:
Don’t get me wrong: I like behavioral economics as much as the next guy. It’s quite clear that people are irrational in ways that the neoclassical model assumes away, and you can’t see human nature quite the same way after hearing Dan Ariely talk about his experiments on cheating. But I don’t think cognitive fallacies are the answer to everything, and I don’t think you can explain away the myriad crises of our time as the result of them.
Dan Ariely is among the best of the behavioral economists and a wonderful communicator but, like Thaler, seems to have fallen victim to a different kind of behavioral propensity: to a man with a hammer everything looks like a nail. Consider, for instance, his recent comments on the subprime mortgage crisis:
Behavioral economics argues that... people will often make the same mistake, and the individual mistakes can aggregate in the market. Let’s take the subprime mortgage crisis, which I think is a great example (but a very sad reality) of the market working to make the aggregation of mistakes worse. It is not as if some people made one kind of mistake and others made another kind. It was the fact that so many people made the same mistakes, and the market for these mistakes is what got us to where we are now...
Imagine that we understood how difficult it is for people to calculate the correct amount of mortgage that they should take, and instead of creating a calculator that told us the maximum that we can borrow, it helped us figure out what we should be borrowing. I suspect that if we had this type of calculator (and if people used it) much of the sub-prime mortgage catastrophe could have been avoided.
There's no doubt that mistakes were made in the sense that borrowers, lenders and purchasers of mortgage backed securities entered positions that they later came to regret. But they did so because such behavior had been profitable in the recent past, not because they were expressing cognitive lapses in the manner of subjects in controlled experiments. More generally, behavior in financial markets is subject to strong selection pressures based on performance, and if deviating from psychologically typical behavior pays off consistently, then such deviations will proliferate. Laboratory experiments are therefore a poor guide to financial practices, the distribution of which can fluctuate significantly over time.
This kind of promiscuous application of behavioral economics to everything under the sun has become extremely widespread. And now two prominent behavioral economists, George Loewenstein and Peter Ubel, have taken notice:
It seems that every week a new book or major newspaper article appears showing that irrational decision-making helped cause the housing bubble or the rise in health care costs... behavioral economics has spawned a number of creative interventions [but] the field has its limits. As policymakers use it to devise programs, it’s becoming clear that behavioral economics is being asked to solve problems it wasn’t meant to address.
This is a point that I have made on several occasions to little effect. But the stature of Loewenstein and Ubel within the behavioral economics community might cause their reflections to be taken more seriously. And the choice is not simply one between behavioral economics and rational choice: agent-based computational models (among the earliest of which were developed by Thomas Schelling) constitute a promising alternative for the study of adaptive behavior in complex systems.

Sunday, July 11, 2010

Rationality and Fragility in Financial Markets

In a recent paper on financial innovation and fragility, Gennaioli, Shleifer and Vishny argue that investors (and often also financial intermediaries) are hobbled by certain systematic cognitive biases that cause them to neglect unlikely events when assessing asset values. They argue that such "local thinking" results in the creation and excessive issuance of engineered securities that are widely believed to be close substitutes for more traditional safe assets, but turn out to be much riskier than initially anticipated. This psychological regularity, they believe, accounts for a number of historical episodes of financial instability:
Many recent episodes of financial innovation share a common narrative. It begins with a strong demand from investors for a particular, often safe, pattern of cash flows. Some traditional securities available in the market offer this pattern, but investors demand more (so prices are high). In response to demand, financial intermediaries create new securities offering the sought after pattern of cash flows, usually by carving them out of existing projects or other securities that are more risky. By virtue of diversification, tranching, insurance, and other forms of financial engineering, the new securities are believed by the investors, and often by the intermediaries themselves, to be good substitutes for the traditional ones, and are consequently issued and bought in great volumes. At some point, news reveals that new securities are vulnerable to some unattended risks, and in particular are not good substitutes for the traditional securities. Both investors and intermediaries are surprised by the news, and investors sell these “false substitutes,” moving back to the traditional securities with the cash flows they seek. As investors fly for safety, financial institutions are stuck holding the supply of the new securities (or worse yet, having to dump them as well in a fire sale because they are leveraged). The prices of traditional securities rise while those of the new ones fall sharply.
The authors claim that this sequence of events describes not only the recent experience with collateralized debt obligations and money market funds, but also earlier episodes of financial innovation, including prepayment tranching of collateralized mortgage obligations in the 1980s.
In order to explore precisely the implications of local thinking in the context of financial innovation, the authors construct a model based on a number of stark, simplifying assumptions. There are two assets: a traditional safe security and a risky asset that has three possible terminal payoffs. The worst case outcome for the risky asset is also the least likely to occur (this is a crucial assumption). Investors are homogeneous and highly risk averse. Financial innovation takes the form of separating the cash flows from the risky asset into two components: a "safe" security that earns the worst case payoff regardless of the actual outcome, and a risky residual claim. Under rational expectations this innovation is welfare improving, and the quantity of the substitute issued is precisely such that all such claims would be covered even if the worst case loss were to materialize. That is, the substitute security really is safe.
Under local thinking, the least likely event (which is also the worst case outcome) is simply neglected, and beliefs about the other two outcomes are correspondingly inflated. The intermediate outcome is now (mistakenly) perceived to be the worst, and a greater quantity of the substitute security is issued than could be honored if the actual worst case outcome were to be realized. Now suppose that some bad news arrives, conditional on which the objective probabilities of the three outcomes are altered in such a manner as to make the intermediate outcome the least likely. Local thinking then causes investors to become excessively pessimistic: the worst case outcome not only becomes suddenly salient, but the less disastrous intermediate outcome is neglected and the decline in the price of the asset previously thought to be safe is greater than it would be under rational expectations.
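The mechanics of over-issuance are easy to see in a small worked example. The payoffs and probabilities below are illustrative placeholders of my own, not numbers from the paper; what matters is only that the worst payoff is attached to the least likely state:

```python
# Illustrative numbers only (not from Gennaioli, Shleifer and Vishny).
payoffs = {'good': 1.2, 'middling': 1.0, 'bad': 0.6}
probs   = {'good': 0.80, 'middling': 0.15, 'bad': 0.05}  # worst state is least likely

def safe_claim(considered_states):
    """A 'safe' security promises the worst payoff among the states considered."""
    return min(payoffs[s] for s in considered_states)

# Rational expectations: every state is considered, so the claim of 0.6
# can be honored even when the bad state materializes.
rational = safe_claim(probs)

# Local thinking: the least likely state is neglected outright, so the
# claim is mistakenly sized to the middling payoff of 1.0 -- a promise
# that cannot be kept if the bad state occurs.
neglected = min(probs, key=probs.get)
local = safe_claim(s for s in probs if s != neglected)

print(rational, local)
```

When bad news then makes the middling state the least likely one, the local thinker neglects it in turn, and values the asset as if only the good and bad states were possible, which is the source of the excess pessimism described above.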
The development of a theoretical framework within which common elements of various historical episodes can be examined is clearly a worthwhile exercise. But what troubles me about this paper (and much of the behavioral finance literature) is that the rational expectations hypothesis of identical, accurate forecasts is replaced by an equally implausible hypothesis of identical, inaccurate forecasts. The underlying assumption is that financial market participants operating under competitive conditions will reliably express cognitive biases identified in controlled laboratory environments. And the implication is that financial instability could be avoided if only we were less cognitively constrained, or constrained in different ways -- endowed with a propensity to overestimate rather than discount the likelihood of unlikely events for example. 
This narrowly psychological approach to financial fragility neglects two of the most analytically interesting aspects of market dynamics: belief heterogeneity and evolutionary selection. Even behavioral propensities that are psychologically rare in the general population can become widespread in financial markets if they result in the adoption of successful strategies. As a result, asset prices disproportionately reflect the beliefs of investors who have been most successful in the recent past. There is no reason why these beliefs should consistently conform to those in the general population.
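The selection mechanism can be illustrated with made-up numbers: suppose only five percent of investors initially follow a given strategy, but that strategy happens to earn higher returns over a sustained period. Their share of aggregate wealth, and hence their weight in price formation, grows steadily regardless of how rare the underlying propensity is in the population.

```python
# Hypothetical figures for illustration only: a "rare" strategy held by 5%
# of initial wealth that earns 12% per year versus 5% for everyone else.
wealth = {"rare": 5.0, "common": 95.0}    # initial wealth shares (percent)
returns = {"rare": 0.12, "common": 0.05}  # assumed annual returns

for year in range(30):
    for strategy in wealth:
        wealth[strategy] *= 1 + returns[strategy]

rare_share = wealth["rare"] / sum(wealth.values())
# Compounding alone takes the rare type from 5% of wealth to over a quarter.
print(rare_share)
```

The point of the sketch is that market-level beliefs are wealth-weighted, not head-counted, so laboratory frequencies of a bias tell us little about its market prevalence.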
I have argued previously for the further development of this ecological perspective on financial instability, and similar themes have been explored elsewhere; see especially Macroeconomic Resilience and David Murphy. As I said in an earlier post, a bit too much is being asked of behavioral economics at this time, more than it has the capacity to deliver.


Update (7/11). David Murphy follows up with characteristic clarity:
I would even go further, because this argument neglects the explicitly reflexive nature of market participants' thinking. (Call it social metacognition if you really want some high end jargon.) Traders can both absolutely understand that a behavioral propensity is rare and likely to lead to catastrophe and behave that way: they do this because they believe that other market participants will too, and behaving that way if others do will make money in the short term. Even if you think that it is crazy for (pick your favourite bubblicious asset) to trade that high, providing you also believe others will buy it, then it makes sense for you to buy it along with the crowd. Moreover, worse, you may well believe that they too think it is crazy: but all of you are in a self-sustaining system and the first one to get off looks the most foolish (for a while). Most people are capable of spotting a bubble if it lasts long enough: the hard part is timing your exit to account for the behaviour of all the other smart people trying to time their exit too.
I agree completely. There are many examples of prominent fund managers trying to grapple with this problem during the bubble in technology stocks a decade ago. This is why markets can (approximately) satisfy what James Tobin called information arbitrage efficiency while failing to satisfy fundamental valuation efficiency.

Saturday, July 03, 2010

Innovation, Scaling, and the Industrial Commons

When Yves Smith makes a strong reading recommendation, I usually take notice. Today she directed her readers to an article by Andy Grove calling for drastic changes in American policy towards innovation, scaling, and job creation in manufacturing. The piece is long, detailed and worth reading in full, but the central point is this: an economy that innovates prolifically but consistently exports its jobs to lower cost overseas locations will eventually lose not only its capacity for mass production, but eventually also its capacity for innovation:
Bay Area unemployment is even higher than the... national average. Clearly, the great Silicon Valley innovation machine hasn’t been creating many jobs of late -- unless you are counting Asia, where American technology companies have been adding jobs like mad for years.

The underlying problem isn’t simply lower Asian costs. It’s our own misplaced faith in the power of startups to create U.S. jobs... Startups are a wonderful thing, but they cannot by themselves increase tech employment. Equally important is what comes after that mythical moment of creation in the garage, as technology goes from prototype to mass production. This is the phase where companies scale up. They work out design details, figure out how to make things affordably, build factories, and hire people by the thousands. Scaling is hard work but necessary to make innovation matter.
The scaling process is no longer happening in the U.S. And as long as that’s the case, plowing capital into young companies that build their factories elsewhere will continue to yield a bad return in terms of American jobs...

There’s more at stake than exported jobs... A new industry needs an effective ecosystem in which technology knowhow accumulates, experience builds on experience, and close relationships develop between supplier and customer. The U.S. lost its lead in batteries 30 years ago when it stopped making consumer-electronics devices. Whoever made batteries then gained the exposure and relationships needed to learn to supply batteries for the more demanding laptop PC market, and after that, for the even more demanding automobile market. U.S. companies didn’t participate in the first phase and consequently weren’t in the running for all that followed...

How could the U.S. have forgotten [that scaling was crucial to its economic future]? I believe the answer has to do with a general undervaluing of manufacturing -- the idea that as long as “knowledge work” stays in the U.S., it doesn’t matter what happens to factory jobs... I disagree. Not only did we lose an untold number of jobs, we broke the chain of experience that is so important in technological evolution... our pursuit of our individual businesses, which often involves transferring manufacturing and a great deal of engineering out of the country, has hindered our ability to bring innovations to scale at home. Without scaling, we don’t just lose jobs -- we lose our hold on new technologies. Losing the ability to scale will ultimately damage our capacity to innovate.
Grove recognizes, of course, that companies will not unilaterally change course unless they face a different set of incentives, and that this will require a vigorous industrial policy:
The first task is to rebuild our industrial commons. We should develop a system of financial incentives: Levy an extra tax on the product of offshored labor. (If the result is a trade war, treat it like other wars -- fight to win.) Keep that money separate. Deposit it in the coffers of what we might call the Scaling Bank of the U.S. and make these sums available to companies that will scale their American operations. Such a system would be a daily reminder that while pursuing our company goals, all of us in business have a responsibility to maintain the industrial base on which we depend and the society whose adaptability -- and stability -- we may have taken for granted... Unemployment is corrosive. If what I’m suggesting sounds protectionist, so be it... If we want to remain a leading economy, we change on our own, or change will continue to be forced upon us.
Neither Grove's diagnosis nor his proposed solutions will persuade those who are convinced that protectionism of any kind is folly. I am not entirely convinced myself, and suspect that he may be underestimating the likelihood (and consequences) of cascading retaliatory actions and a collapse in international trade. But the argument must be taken seriously, and anyone opposed to his proposals really ought to come up with some alternatives of their own.


Update (7/4). In an email (posted with permission) Yves adds:
On the one hand, you are right, any move towards protectionism (or even permitted-within-WTO pushback against mercantilist trade partners) could very quickly get ugly. But the flip side is I wonder if we have a level of global integration that is inherently unstable (both for Rodrik trilemma reasons, international economic integration with insufficient government oversight creates political problems, plus the Reinhart/Rogoff finding that high levels of international capital flows are associated with financial crises). If so, we may have a short run (messiness of reconfiguration) v. long term (costs of really big financial crises) tradeoff.
This is a good point. The purpose of my post was to highlight Grove's analysis of the symbiotic relationship between innovation and scaling (which I think is both interesting and valid), and to challenge those who are opposed to his reform proposals to explain how they would deal with the situation in which we find ourselves. Passive tolerance of mass unemployment, widening income inequality, and withering innovative capacity is not an option.


Update (7/4). Tyler Cowen is predictably dismissive of Grove's article, but (less predictably) seems not to have read it very closely. What Grove means by scaling is the process by which "technology goes from prototype to mass production" as companies "work out design details, figure out how to make things affordably, build factories, and hire people by the thousands." This is not about increasing returns to scale as economists normally use the term (declining average costs as a function of output). So Tyler's claim that "at best, given the logic of [Grove's] argument, this would imply a tax only on the increasing returns industries" is not correct. And I cannot imagine what he means when he says that the "big exporting success these days is Germany, which has less "scale" than does the United States." Less scale in what sense? Population or per-capita income differences between the two countries are entirely irrelevant here. Is he trying to say that Germany engages in less scaling (and hence more offshoring) than does the United States? This would be relevant, but is empirically dubious.

Like Tyler, I am not convinced that Grove's policy proposals are wise. But his analysis of the relationship between innovation and scaling and the need for a policy response really does deserve to be read with more care.


Update (7/6). Tim Duy follows up with a characteristically detailed and thoughtful post. His bottom line:
Something more than cyclical forces is weighing on the American jobs machine. Here I have tried to extend the Grove/Smith/Sethi discourse with additional focus on absolute declines in manufacturing jobs and distressing declines in capacity growth rates. These trends may be critically important in understanding the dismal performance of US labor markets. If they are in fact critical, they raise serious questions about US trade policy – questions that few in Washington want to address. Given the extent to which manufacturing capacity has already been offshored, those questions go far beyond the recently announced tiny shift in Chinese currency policy. Simply put, accepting the importance of manufacturing capacity and the possibility that offshoring has had a much more deleterious impact on the US economy than commonly accepted would require a significant paradigm shift in the thinking of US policymakers. If you scream “protectionist fool” in response, then you need to have a viable policy alternative that goes beyond the empty rhetoric of “we need to teach better creative thinking skills in schools.” That answer is simply too little too late.
It's worth reading the entire post to see the data and reasoning that drives him to this conclusion.


I'll be away at a (very interesting) conference for the next couple of days and will be slow to respond to comments and emails.

Friday, July 02, 2010

Market Microstructure and Capital Formation

In an earlier post I argued that recent changes in technology have altered the distribution of trading strategies in asset markets, with information-extracting strategies becoming more prevalent at the expense of information-augmenting strategies. Specifically, there has been a dramatic increase in the market share of strategies based on rapid responses to market data using algorithms and co-location facilities. One consequence is that the data itself becomes less reliable over time, resulting in greater price volatility and occasional severe disruptions. The flash crash of May 6 was a striking example.
While my focus has been on market stability, this kind of transformation in microstructure probably has a number of other important effects. In recent testimony before the joint CFTC-SEC committee on emerging regulatory issues, David Weild has argued that one such consequence concerns the size distribution of publicly traded companies, and capital formation more generally:
There has been a computer arms race unleashed on Wall Street by changes in regulation and technology... [This] is displacing fundamental investing with computer‐trading based strategies and has created new forms of systemic risk, a loss of investor confidence, and a disastrous decline in primary (IPO) capital formation and the number of publicly listed companies in the United States.

From 1997 to Year End 2009 there has been a 40% decline in the number of publicly listed (i.e., NYSE, AMEX and NASDAQ) companies in the United States. On a GDP weighted basis, we have seen a more than 55% decline in the number of publicly listed companies. Today’s market structure has lost the ability to support small capitalization companies and initial public offerings (IPOs) on the scale necessary to help drive the US economy. The U.S. now annually delists twice as many companies as it lists and this trend has been going on since the advent of electronic trading... the unemployment crisis in the United States has been partly caused by changes to debt and equity capital market structure and the events of May 6 may give us an opportunity to come to grips with the notion that we have entered into an era where trading interests are eclipsing fundamental investment and economic interests.

Fundamental investing, or so‐called “information increasing” activities, are being displaced by trading, or so‐called “information mining” activities. The growth in indexing and ETFs may be exacerbating this problem.

In addition, stock market structure today is geared for large‐capitalization stocks with typically symmetrical order books but disastrous for the vast majority of small‐capitalization stocks with asymmetrical order books (where there is not naturally an offsetting buy order to match against a sell order and vice versa)... The “Flash Crash” was an example of where even normally liquid securities went to a state of “asymmetry” and price discovery broke down...
[Until] all trades, quotes and other messages in all interrelated markets are tagged and traceable to the trading venue, broker and ultimate investor, and disclosed to the market, markets will not be perceived as fair... With full tagging, tracking and reporting and the application of posttrade analysis and test bed techniques such as Agent‐Based Models, regulators and market participants will... once and for all be in a position to judge the impact of other participants and to regulate and plan accordingly...
It may be time to admit that what works for large, naturally visible companies, is the antithesis of what is needed by small companies and it is these small companies that are essential to grow our markets, reduce unemployment, restore US competitiveness and drive the US economy.
I am not aware of any academic research that links market microstructure to the size distribution of publicly listed companies in the manner suggested here, and I am grateful to David for bringing his testimony and supporting documents to my attention. The issue is clearly of considerable importance and deserving of greater scrutiny.


Update (7/2). In an email (posted with permission) David adds:
I did a presentation to the ISEEE (International Stock Exchange Executives Emeriti) at the end of April.  The audience consisted of about 25 mostly former senior stock exchange executives... I was taken aback by the reaction of people from places like the Zurich Stock Exchange, Australian, New Zealand, Bovespa and others who were of the opinion that these electronic market structures (specifically, compressed spread-trading centric electronic continuous auction markets) are hurting primary capital formation in many of their countries as well.

For me, having run strategy for investment banking, research, institutional sales and trading at a major Wall Street firm, it is pretty simple - If one can't make money supporting small cap stocks, one won't support small cap stocks...
This has had two effects:
  1. The investment banks tell issuers that they have to do a much larger ($75 million) IPO; minimum IPO sizes have increased much faster than the rate of inflation.
  2. Aftermarket support for IPOs has withered because issuers lose money providing it (unless the companies are much larger).
It is commonly argued that the rise of algorithmic trading has resulted in increased liquidity, although this claim is by no means universally accepted. David (if I understand him correctly) is arguing that even if liquidity has increased for some classes of securities, it has declined for others, with detrimental net effects on capital formation.

Happiness and the World Cup

Tyler Cowen considers the question of which team's victory in the World Cup would result in the greatest overall happiness, and concludes (based on the number and intensity of fans) that it would be Brazil. As far as the immediate effects of a victory are concerned, this is probably about right. But could there not also be consequences for global economic growth and financial stability? 
Hein Schotsman of ABN AMRO has looked at these broader economic effects and concludes that a victory by a large economy currently running a significant trade surplus would be best. This leads him to the one obvious candidate:
According to a detailed analysis of the 32 countries in this year’s tournament, Mr. Schotsman is convinced that a win by the Germans would boost the global economy. Here’s how: Germany is among the world’s biggest economies and has a large trade surplus. A win by the Germans would boost domestic confidence and spending, thus increasing imports from other countries.

“A German victory will result in a relatively big dent in the German trade surplus, which is best for the stability of the world economy. This is just what is badly needed after the credit crisis,” Mr. Schotsman said in a report released Tuesday called Soccernomics 2010.
Maybe so. But as far as my own happiness is concerned, I would like to see Argentina prevail against Germany tomorrow. Lionel Messi has been the player of the tournament so far and I would hate to see his team eliminated.
I thank Ingela Alger for alerting me to this story and sending me references. For those not fully fluent in Dutch, Schotsman's paper may be uploaded to Google Translate for a reasonably comprehensible rendering.