Saturday, February 06, 2010

A Case for Agent-Based Models in Economics

In a recent essay in Nature, Doyne Farmer and Duncan Foley have made a strong case for the use of agent-based models in economics. These are computational models in which a large number of interacting agents (individuals, households, firms, and regulators, for example) are endowed with behavioral rules that map environmental cues onto actions. Such models are capable of generating complex dynamics even with simple behavioral rules because the interaction structure can give rise to emergent properties that could not possibly be deduced by examining the rules themselves. As such, they are capable of providing microfoundations for macroeconomics in a manner that is both more plausible and more authentic than is the case with highly aggregative representative agent models.

Among the most famous (and spectacular) agent-based models is John Conway's Game of Life (if you've never seen a simulation of this you really must). In economics, the earliest such models were developed by Thomas Schelling in the 1960s, and included his celebrated checkerboard model of residential segregation. But with the exception of a few individuals (some of whom are mentioned below) there has been limited interest among economists in the further development of such approaches.
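To see how little machinery is needed to generate emergent structure, consider a minimal sketch of a Schelling-style checkerboard model. The grid size, the vacancy rate, the 30% tolerance threshold, and the random-relocation rule below are illustrative choices of my own rather than Schelling's exact specification, but the qualitative outcome is the same: even mildly intolerant agents sort themselves into sharply segregated neighborhoods that no individual rule demands.

```python
import random

# A minimal Schelling-style segregation model on a wrapped grid.
# All parameters are illustrative, not Schelling's original values.
SIZE = 20          # 20 x 20 grid of cells
EMPTY = 0.1        # fraction of vacant cells
THRESHOLD = 0.3    # an agent is content if at least 30% of its occupied
                   # neighbors are of its own type
STEPS = 50

def make_grid():
    cells = []
    for _ in range(SIZE * SIZE):
        r = random.random()
        cells.append(None if r < EMPTY else (1 if r < (1 + EMPTY) / 2 else 2))
    return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def neighbors(grid, i, j):
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if (di, dj) != (0, 0):
                yield grid[(i + di) % SIZE][(j + dj) % SIZE]

def unhappy(grid, i, j):
    occupied = [n for n in neighbors(grid, i, j) if n is not None]
    if not occupied:
        return False
    same = sum(1 for n in occupied if n == grid[i][j])
    return same / len(occupied) < THRESHOLD

def step(grid):
    movers = [(i, j) for i in range(SIZE) for j in range(SIZE)
              if grid[i][j] is not None and unhappy(grid, i, j)]
    vacancies = [(i, j) for i in range(SIZE) for j in range(SIZE)
                 if grid[i][j] is None]
    random.shuffle(movers)
    for (i, j) in movers:
        if not vacancies:
            break
        vi, vj = vacancies.pop(random.randrange(len(vacancies)))
        grid[vi][vj], grid[i][j] = grid[i][j], None
        vacancies.append((i, j))

grid = make_grid()
for _ in range(STEPS):
    step(grid)
# Inspecting the final grid typically reveals large single-type clusters:
# an aggregate pattern not written into any agent's behavioral rule.
```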

Farmer and Foley hope to change this. They begin their piece with a critical look at contemporary modeling practices:
In today's high-tech age, one naturally assumes that US President Barack Obama's economic team and its international counterparts are using sophisticated quantitative computer models to guide us out of the current economic crisis. They are not. 
The best models they have are of two types, both with fatal flaws. Type one is econometric: empirical statistical models that are fitted to past data. These successfully forecast a few quarters ahead as long as things stay more or less the same, but fail in the face of great change. Type two goes by the name of 'dynamic stochastic general equilibrium'. These models... by their very nature rule out crises of the type we are experiencing now. 
As a result, economic policy-makers are basing their decisions on common sense, and on anecdotal analogies to previous crises such as Japan's 'lost decade' or the Great Depression... The leaders of the world are flying the economy by the seat of their pants.
This is hard for most non-economists to believe. Aren't people on Wall Street using fancy mathematical models? Yes, but for a completely different purpose: modelling the potential profit and risk of individual trades. There is no attempt to assemble the pieces and understand the behaviour of the whole economic system.
The authors suggest a shift in orientation:
There is a better way: agent-based models. An agent-based model is a computerized simulation of a number of decision-makers (agents) and institutions, which interact through prescribed rules. The agents can be as diverse as needed — from consumers to policy-makers and Wall Street professionals — and the institutional structure can include everything from banks to the government. Such models do not rely on the assumption that the economy will move towards a predetermined equilibrium state, as other models do. Instead, at any given time, each agent acts according to its current situation, the state of the world around it and the rules governing its behaviour. An individual consumer, for example, might decide whether to save or spend based on the rate of inflation, his or her current optimism about the future, and behavioural rules deduced from psychology experiments. The computer keeps track of the many agent interactions, to see what happens over time. Agent-based simulations can handle a far wider range of nonlinear behaviour than conventional equilibrium models. Policy-makers can thus simulate an artificial economy under different policy scenarios and quantitatively explore their consequences.
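To make the mechanics of that simulation loop concrete, here is a deliberately toy sketch of the consumer example in the excerpt: each agent's spending rule depends on inflation and on its own optimism, and aggregate spending feeds back into inflation. Every number in it (the propensities, the feedback coefficient, the initial inflation rate) is an assumption invented for illustration, not an estimate from data or from any published model.

```python
import random

# Toy agent-based loop: heterogeneous consumers decide how much of their
# income to spend each period, based on inflation and their own optimism.
# All coefficients below are invented for illustration only.
class Consumer:
    def __init__(self):
        self.optimism = random.uniform(0.0, 1.0)   # persistent trait
        self.income = 1.0

    def spend_share(self, inflation):
        # Behavioral rule of thumb: optimists spend more, and higher
        # inflation nudges everyone to spend sooner rather than later.
        return min(1.0, 0.5 + 0.4 * self.optimism + 2.0 * max(inflation, 0.0))

def simulate(n_agents=1000, periods=40):
    agents = [Consumer() for _ in range(n_agents)]
    inflation = 0.02
    history = []
    for t in range(periods):
        spending = sum(a.income * a.spend_share(inflation) for a in agents)
        avg_share = spending / n_agents
        # Aggregate feedback: demand pressure moves inflation, which in
        # turn changes every agent's decision next period.
        inflation += 0.05 * (avg_share - 0.75)
        history.append((t, avg_share, inflation))
    return history

for t, share, inf in simulate()[-5:]:
    print(f"t={t:2d}  average spending share={share:.3f}  inflation={inf:.3f}")
```

The point of even so crude a loop is that the aggregate path is computed by tracking individual decisions and their interactions over time, rather than by solving for a predetermined equilibrium.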
Such methods are unfamiliar (or unappealing) to most theorists in the leading research departments, and work based on them is rarely published in the top professional journals. Farmer and Foley attribute this in part to the failure of a particular set of macroeconomic policies, and the resulting ascendancy of the rational expectations hypothesis:
Why is this type of modelling not well-developed in economics? Because of historical choices made to address the complexity of the economy and the importance of human reasoning and adaptability.
The notion that financial economies are complex systems can be traced at least as far back as Adam Smith in the late 1700s. More recently John Maynard Keynes and his followers attempted to describe and quantify this complexity based on historical patterns. Keynesian economics enjoyed a heyday in the decades after the Second World War, but was forced out of the mainstream after failing a crucial test during the mid-seventies. The Keynesian predictions suggested that inflation could pull society out of a recession; that, as rising prices had historically stimulated supply, producers would respond to the rising prices seen under inflation by increasing production and hiring more workers. But when US policy-makers increased the money supply in an attempt to stimulate employment, it didn't work — they ended up with both high inflation and high unemployment, a miserable state called 'stagflation'. Robert Lucas and others argued in 1976 that Keynesian models had failed because they neglected the power of human learning and adaptation. Firms and workers learned that inflation is just inflation, and is not the same as a real rise in prices relative to wages...

The cure for macroeconomic theory, however, may have been worse than the disease. During the last quarter of the twentieth century, 'rational expectations' emerged as the dominant paradigm in economics... Even if rational expectations are a reasonable model of human behaviour, the mathematical machinery is cumbersome and requires drastic simplifications to get tractable results. The equilibrium models that were developed, such as those used by the US Federal Reserve, by necessity stripped away most of the structure of a real economy. There are no banks or derivatives, much less sub-prime mortgages or credit default swaps — these introduce too much nonlinearity and complexity for equilibrium methods to handle...
Agent-based models potentially present a way to model the financial economy as a complex system, as Keynes attempted to do, while taking human adaptation and learning into account, as Lucas advocated. Such models allow for the creation of a kind of virtual universe, in which many players can act in complex — and realistic — ways. In some other areas of science, such as epidemiology or traffic control, agent-based models already help policy-making.
One problem that must be addressed if agent-based models are to gain widespread acceptance in economics is that of quality control. For methodologies that are currently in common use, there exist well-understood (though imperfect) standards for assessing the value of any given contribution. Empirical researchers are concerned with identification and external validity, for instance, and theorists with robustness. But how is one to judge the robustness of a set of simulation results?
The major challenge lies in specifying how the agents behave and, in particular, in choosing the rules they use to make decisions. In many cases this is still done by common sense and guesswork, which is only sometimes sufficient to mimic real behaviour. An attempt to model all the details of a realistic problem can rapidly lead to a complicated simulation where it is difficult to determine what causes what. To make agent-based modelling useful we must proceed systematically, avoiding arbitrary assumptions, carefully grounding and testing each piece of the model against reality and introducing additional complexity only when it is needed. Done right, the agent-based method can provide an unprecedented understanding of the emergent properties of interacting parts in complex circumstances where intuition fails.
This recognizes the problem of quality control, but does not offer much in the way of guidance for editors or referees in evaluating submissions. Presumably such standards will emerge over time, perhaps through the development of a few contributions that are commonly agreed to be outstanding and can serve as templates for future work.

There do exist a number of researchers using agent-based methodologies in economics, and Farmer and Foley specifically mention Blake LeBaron, Rob Axtell, Mauro Gallegati, Robert Clower and Peter Howitt. To this list I would add Joshua Epstein, Marco Janssen, Peter Albin, and especially Leigh Tesfatsion, whose ACE (agent-based computational economics) website provides a wonderful overview of what such methods are designed to achieve. (Tesfatsion also mentions not just Smith but also Hayek as a key figure in exploring the "self-organizing capabilities of decentralized market economies.")

A recent example of an agent-based model that deals specifically with the financial crisis may be found in a paper by Thurner, Farmer, and Geanakoplos. Farmer and Foley provide an overview:
Leverage, the investment of borrowed funds, is measured as the ratio of total assets owned to the wealth of the borrower; if a house is bought with a 20% down-payment the leverage is five. There are four types of agents in this model. 'Noise traders', who trade more or less at random, but are slightly biased toward driving prices towards a fundamental value; hedge funds, which hold a stock when it is under-priced and otherwise hold cash; investors who decide whether to invest in a hedge fund; and a bank that can lend money to the hedge funds, allowing them to buy more stock. Normally, the presence of the hedge funds damps volatility, pushing the stock price towards its fundamental value. But, to contain their risk, the banks cap leverage at a predetermined maximum value. If the price of the stock drops while a fund is fully leveraged, the fund's wealth plummets and its leverage increases; thus the fund has to sell stock to pay off part of its loan and keep within its leverage limit, selling into a falling market.
This agent-based model shows how the behaviour of the hedge funds amplifies price fluctuations, and in extreme cases causes crashes. The price statistics from this model look very much like reality. It shows that the standard ways banks attempt to reduce their own risk can create more risk for the whole system.
Previous models of leverage based on equilibrium theory showed qualitatively how leverage can lead to crashes, but they gave no quantitative information about how this affects the statistical properties of prices. The agent approach simulates complex and nonlinear behaviour that is so far intractable in equilibrium models. It could be made more realistic by adding more detailed information about the behaviour of real banks and funds, and this could shed light on many important questions. For example, does spreading risk across many financial institutions stabilize the financial system, or does it increase financial fragility? Better data on lending between banks and hedge funds would make it possible to model this accurately. What if the banks themselves borrow money and use leverage too, a process that played a key role in the current crisis? The model could be used to see how these banks might behave in an alternative regulatory environment.
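The leverage arithmetic in the excerpt is easy to make concrete: leverage is total assets divided by the owner's wealth, so a purchase with a 20% down-payment gives 1/0.2 = 5. The sketch below strips the Thurner-Farmer-Geanakoplos setup down to a single leveraged fund facing a cap imposed by its lender; the price process, the cap, and the price-impact rule are invented for illustration, and the actual model (with noise traders, multiple funds, investors and a bank) is far richer. Even so, the basic amplification shows up: a price drop pushes the fund over its leverage limit, the forced sale depresses the price further, and the constraint binds again the next period.

```python
import random

# Stripped-down illustration of the forced-selling feedback described above.
# leverage = total assets / wealth; a 20% down-payment means 1 / 0.2 = 5.
MAX_LEVERAGE = 5.0   # illustrative cap imposed by the lending bank
IMPACT = 0.05        # illustrative price impact per 100 shares sold

def run(periods=100, seed=1):
    random.seed(seed)
    price, shares, debt = 1.0, 100.0, 80.0   # fund starts exactly at the cap
    for t in range(periods):
        price *= 1.0 + random.gauss(0.0, 0.02)   # stand-in for noise traders
        wealth = shares * price - debt
        if wealth <= 0:
            print(f"t={t}: fund wiped out")
            break
        if shares * price / wealth > MAX_LEVERAGE:
            # Forced deleveraging: sell just enough stock to return to the
            # cap, and the sale itself pushes the price down further.
            sale_value = shares * price - MAX_LEVERAGE * wealth
            sold = sale_value / price
            shares -= sold
            debt -= sale_value
            price *= 1.0 - IMPACT * sold / 100.0
            print(f"t={t}: forced sale of {sold:.1f} shares, price now {price:.3f}")
    return price

run()
```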
I have discussed Geanakoplos' more methodologically orthodox papers on leverage cycles in an earlier post. That work uses standard methods in general equilibrium theory to address related questions, suggesting that the two approaches are potentially quite complementary. In fact, the nature of agent-based modeling is such that it is best conducted in interdisciplinary teams, and is therefore unlikely to ever become the dominant methodology in use:
Creating a carefully crafted agent-based model of the whole economy is, like climate modelling, a huge undertaking. It requires close feedback between simulation, testing, data collection and the development of theory. This demands serious computing power and multi-disciplinary collaboration among economists, computer scientists, psychologists, biologists and physical scientists with experience in large-scale modelling. A few million dollars — much less than 0.001% of the US financial stimulus package against the recession — would allow a serious start on such an effort.
Given the enormity of the stakes, such an approach is well worth trying.
I agree. This kind of effort is currently being undertaken at the Santa Fe Institute, where Farmer and Foley are both on the faculty. And for graduate students interested in exploring these ideas and methods, John Miller and Scott Page hold a Workshop on Computational Economic Modeling in Santa Fe each summer (the program announcement for the 2010 workshop is here.) Their book on Complex Adaptive Systems provides a nice introduction to the subject, as does Epstein and Axtell's Growing Artificial Societies. But it is Thomas Schelling's Micromotives and Macrobehavior, first published in 1978, that in my view reveals most clearly the logic and potential of the agent-based approach.

---

Update (2/7). Cyril Hedoin at Rationalité Limitée points to a paper by Axtell, Axelrod, Epstein and Cohen that explicitly discusses the important issues of replication and comparative model evaluation for agent-based simulations. (He also mentions a nineteenth century debate on research methodology between Carl Menger and Gustav von Schmoller that seems relevant; I'd like to take a closer look at this if I can ever find the time.)

Also, in a pair of comments on this post, Barkley Rosser recommends a 2008 book on Emergent Macroeconomics by Delli Gatti, Gaffeo, Gallegati, Giulioni, and Palestrini, and provides an extensive review of agent-based computational models in regional science and urban economics.

18 comments:

  1. Hi Rajiv,

    There is already a framework for modeling the economy as a whole. I would not call it an agent-based approach, but a balance-sheet-based approach. For example, check this Google Books link: Features Of A Realistic Banking System Within A Stock-Flow Consistent Model by Wynne Godley and Marc Lavoie. Their full work is in the textbook Monetary Economics: An Integrated Approach to Credit, Money, Income, Production and Wealth. It is the best work I have seen in economics and is actually a description of how an economy works.

    The most important thing, in my view, is that economists should learn accounting. Mathematics is a rich subject, and one can end up with all kinds of conclusions simply because there are varieties of behaviour in mathematics. Mathematics is a necessary but not a sufficient condition for economic modeling. The paradigm that economists, and the whole of Wall Street, live and work with is an incorrect paradigm; unless one gets rid of this paradigm, no mathematics can ever add grace to the subject of macroeconomics. Quoting the first paragraph from Basil Moore's 1988 book Horizontalists and Verticalists: The Macroeconomics of Credit Money (with the permission of the author):

    The central message of this book is that members of the economics profession, all the way from professors to students, are currently operating with a basically incorrect paradigm of the way modern banking systems operate and of the causal connection between wages, prices, and monetary developments, on the other hand. Currently, the standard paradigm, especially among economists in the United States, treats the central bank as determining the money base and thence the money stock. The growth of the money supply is held to be the main force determining the rate of growth of money income, wages, and prices.

    This paradigm may once have been relevant to a commodity or fiat money world, but is not applicable to the current world of credit money. This book argues that the above order of causation should be reversed. Changes in wages and employment largely determine the demand for bank loans, which in turn determine the rate of growth of the money stock. Central banks have no alternative but to accept this course of events. Their only option is to vary the short-term rate of interest at which they supply liquidity to the banking system on demand. Commercial banks are now in a position to supply whatever volume of credit to the economy their borrowers demand.

  2. Now, in the real world, money is both fiat and credit, and the current institutional arrangements make them equivalent. Also, any textbook one picks up or any journal article treats money as M and M only. Money is an asset with a liability in the mainstream treatment. This is a huge accounting error! From a personal perspective, I did theoretical physics in grad school (string theory) and deeply love mathematics, and there is one thing I will never unlearn: one needs to get the physics correct right away. Similarly, if one works in a false paradigm, one can never achieve anything great.

    Having said this, I liked a few things about the Nature article. The interaction of one sector of an economy (such as households) with things happening around it was researched a bit by James Tobin; see, for example, his Nobel lecture. The work was picked up by Wynne Godley and made into a sophisticated framework. I am not sure Tobin knew about "horizontalism"; his work does not really have endogenous money even if he thought it did, and he makes various neoclassical assumptions. There are also things, such as inflation accounting, in the work of G&L which I believe Tobin didn't have. The Nature article talks of finding out the effect of government policies quantitatively; G&L already have it! One of the authors seems to be from the New School, so they may know of the work I talked about in this comment.

  3. They're still trying to simulate with three patsies because even supercomputers can't calculate more. Instead, all they'd need is sound money; then people would behave soundly ...

  4. Another good example is the model by Domenico Delli Gatti and Mauro Gallegati and a large set of (mostly) Italian coauthors, which is presented in the 2008 volume from Springer, _Emergent Macroeconomics_.

  5. Attempted to post a rather long comment here, but failed. Will do a shorter version.

    So, I think that in regard to the modeling of segregation and agent-based modeling of urban and regional dynamics, the SFI-centric folks ignore a lot of what has been done at places like Brussels (Prigogine group) and Stuttgart (Weidlich group), although I would agree that Schelling's 1971 paper was truly seminal. Many of these other papers either had the local interactions or the lattice kind of setup, but rarely both, and most of their work came after his, later in the 70s and into the 80s.

    Out of the Brussels group the most important figure was Prigogine's student, Peter Allen. He did complicated models with lattices, nonlinearities, multiple sectors and stochasticity, but his work tended to downplay local interactions. Nobody at SFI has ever cited his stuff that I am aware of, though it appeared mostly in geography journals and elsewhere rather than in economics journals.

    Out of Stuttgart came sociodynamics and sociophysics, with Weidlich using master equations to model migration and various forms of regional dynamics, along with his students Gunter Haag and later Dirk Helbing, who is now a leading agent-based modeler of congestion in transportation systems. Again, little of this work gets cited by economists or at SFI. A paper by Weidlich and Haag in 1987 in the Journal of Regional Science prefigured work Krugman did at SFI in the early 90s, but has never been cited by him or anyone at SFI.

    Regarding segregation models, Schelling's model shows how segregation arose from local interactions out of an integrated beginning. However, his model does not provide much help on some issues, such as why ghettos have certain shapes (many are quite elongated, running along wedges, or "finger ghettos" as Homer Hoyt described the one on the south side of Chicago in 1939).

    None of the alternatives used lattice setups, but some did involve local interactions. This started in 1959 with Martin J. Bailey's paper in Land Economics, which argued how local negative externalities could depress land values across use boundaries, including when racial prejudice exists. Later papers pursued these matters in the Journal of Urban Economics, including by Susan Rose-Ackerman in 1975, John Yinger in 1976, and Glenn Loury in 1978.

    A paper I published in 1980 in Urban Studies, "The dynamics of ghetto boundary shape and boundary movement," dynamized this using local interaction effects operating through expectations near a moving ghetto boundary, using the math of Stefan moving boundary problems (used to model glacier dynamics). Essentially it is a self-fulfilling prophecy model. When a portion of the boundary starts to move out more rapidly, prejudiced whites in its path see it coming and devalue their homes, making it easier for the boundary to keep moving in that direction. An elongated ghetto is the result, with only local interaction effects operating, though not on a lattice.

  6. Barkley, thanks for your comments, I have appended an update to the post to reflect them. My purpose here was not to review the literature, but rather to make a case for the methodology, building on the essay by Farmer and Foley.

  7. The problem is that agent-based models tend to arrive at non-intuitive answers and don't show a direct causal relationship between an initial condition and a generated effect. While I am most certainly not saying these conclusions are incorrect (quite the opposite), I am saying that the answers aren't palatable to those to whom economists are selling their goods.

    Santa Fe is one of the few centres that are actively looking into complexity science, really something that has only recently received study even though Whitehead laid the foundations some time ago. It is still an emerging field, and therefore isn't accepted in the mainstream.

    From what I can tell, economists need a better understanding of mathematics, specifically Gödel, Cantor and Whitehead. From what I can tell, they are relying mostly on rules of thumb and relationships that work only temporarily until the initial conditions of the system change, at which time they must reformulate. Such an approach is only useful up to a point.

  8. Having poked a bit at SFI, for the record I note that a great deal of worthwhile work has been done there and continues to be done there.

  9. Hi Rajiv, thanks for a very enjoyable post. It will be interesting to see if large-scale optimization methodologies catch on. I think going this computational route would be a worthwhile endeavor for the field. Coupled with the agent based approach, I would also advocate for the use of sophisticated optimization tools such as CPLEX.

    I believe this group at the University of Amsterdam does work that is relevant to your post.

    http://www1.fee.uva.nl/cendef/

    You and your readers may be interested in the following reference

    http://ideas.repec.org/p/ams/ndfwpp/08-05.html

  10. Stergios, thanks, these are useful references. I might discuss the Hommes/Wagener paper in a subsequent post, it's related to some other literature that I've been meaning to discuss.

  11. One of the better books on ABM in ecology is 'Individual-based Modeling and Ecology' by Volker Grimm and Steven Railsback. In fact they are putting up their new book on ABM online here, which is more of an intro, DIY book on ABM not restricted to ecology. The website also has the NetLogo models available to download for those who want to tinker with them (I think you have to register first before downloading).

    Grimm and Railsback also offer a framework for ABM, discussed in more detail here, which they call Pattern-Oriented Modeling (POM). The essential principle of POM is to construct the simplest, most parsimonious model possible that can recreate the multiple patterns we observe in the complex system. The "multiple" is important here: multiple weak patterns eliminate more candidate models than one strong pattern, such as cycles, which can be replicated by numerous bottom-up approaches. Obviously, this approach is fuzzier than most economists would prefer, but I think it's a valid approach.

    The Menger-Schmoller debate you refer to is the Methodenstreit with Menger arguing for a micro-founded, theoretical approach to economics and Schmoller for a historical approach. Menger's contribution can be read in his book 'Investigations into the Method of the Social Sciences' which was directed at the German Historical School. I think Hayek drew a lot from Menger's work and especially from this book.

  12. Macro, thanks, this is helpful. The basic idea behind POM is something that is already appreciated in Economics: a theory that can simultaneously explain multiple seemingly unrelated stylized facts is accorded much greater credibility and value. The problem is that theories generally have to be based on equilibrium models, which ends up being very restrictive. Of course some would argue that this restrictiveness is a good thing - it exerts discipline on the model builder. But I think that the current balance between discipline and flexibility is too strongly tilted towards the former.

  13. OBAMA and Bernanke are featured in a movie-- about greedy hedge funds called "Stock Shock." Even though the movie mostly focuses on Sirius XM stock being naked short sold nearly into bankruptcy (5 cents/share), I liked it because it exposes the dark side of Wall Street and revealed some of their secrets. DVD is everywhere but cheaper at www.stockshockmovie.com

  14. Rajiv - Agreed. The real challenge is to replicate the dynamic patterns we observe in a macroeconomy or market through time, just like ecology is concerned with spatiotemporal patterns. And to do this, we most certainly have to drop the insistence on equilibrium models!

  15. We use agent-based artificial market models for forecasting stock prices. In our models, thousands of heterogeneous agents with different trading rules trade on a virtual market, while analyzing and forecasting prices from real markets. They adapt using evolutionary computing. The results on the S&P 500 index were especially good during the 2008 collapse. More information at Altreva.

  16. Why not work towards modeling the aggregates within a fractal framework instead of one that assumes normally distributed outcomes? Instead of having to build complicated agent-based models with assumed rules that may or may not be correct, it seems simpler (but still complicated) to do the former.

  17. The more I read about ABMs the more I believe this way must be followed in order to replace fallacious General Equilibrium approaches.
    But there need not be a single approach in economics.

    The "Methodenstreit" seems to be gaining a new relevance in this context. On one hand agent modelling and individual coordination within markets can inform us about the emergence of macro patterns, but depending on calibration we might end up with counter-intuitive outcomes. On the other hand "history" tells us something about real macro patterns, which makes other ways such as the balance sheet approach or some other non-micro-founded approach necessary (could include new Keynesians or fractal economics(?)). The combination of both approaches (micro and macro-level based) represent a true chance to make economics get out of its current trap.

  18. Hi. My company is looking for experts to collaborate with in this area. Would you be interested or be able to refer anyone else in the field of ABM? Thanks
