Sunday, April 06, 2014

Superfluous Financial Intermediation

I'm only about halfway through Flash Boys but have already come across a couple of striking examples of what might charitably be called superfluous financial intermediation.

This is the practice of inserting oneself between a buyer and a seller of an asset, when both parties have already communicated to the market a willingness to trade at a mutually acceptable price. If the intermediary were simply absent from the marketplace, a trade would occur between the parties virtually instantaneously at a single price that is acceptable to both. Instead, both parties trade against the intermediary, at different prices. The intermediary captures the spread at the expense of the parties who wish to transact, adds nothing to liquidity in the market for the asset, and doubles the notional volume of trade.

The first example may be summarized as follows. A hundred thousand shares in a company have been offered for sale at a specified price across multiple exchanges. A single buyer wishes to purchase the whole lot and is willing to pay the asked price. He places a single buy order to this effect. The order first reaches BATS, where it is partially filled for ten thousand shares; it is then routed to the other exchanges for completion. An intermediary, having seen the original buy order on arrival at BATS, places orders to buy the remaining ninety thousand shares on the other exchanges. This latter order travels faster and trades first, so the original buyer receives only partial fulfillment. The intermediary immediately posts offers to sell ninety thousand shares at a slightly higher price, which the original buyer is likely to accept. All this in a matter of milliseconds.

The intermediary here is serving no useful economic function. Volume is significantly higher than it otherwise would have been, but there has been no increase in market liquidity. Had there been no intermediary present, the buyer and sellers would have transacted without any discernible delay, at a price that would have been better for the buyer and no worse for the sellers. Furthermore, an order is allowed to trade ahead of one that made its first contact with the market at an earlier point in time.
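To make the mechanics of this first example concrete, here is a toy numerical sketch. The venue list (beyond BATS, which appears in the example), the prices, and the one-cent markup are all hypothetical; the point is only to show where the intermediary's profit and the extra notional volume come from.

```python
# Toy model of the latency-arbitrage example above (all numbers hypothetical).
# 100,000 shares offered at $20.00 across four venues; the buyer's order
# reaches BATS first, and a faster intermediary races the remainder.

offers = {"BATS": 10_000, "NYSE": 30_000, "NASDAQ": 30_000, "EDGX": 30_000}
ask = 20.00          # original offer price at every venue
markup = 0.01        # intermediary's resale premium per share

# Without the intermediary: the whole order fills at the ask everywhere.
cost_without = sum(offers.values()) * ask

# With the intermediary: only the BATS slice fills at the ask; the rest is
# bought up by the intermediary first and resold to the buyer at ask + markup.
filled_direct = offers["BATS"]
raced = sum(q for venue, q in offers.items() if venue != "BATS")
cost_with = filled_direct * ask + raced * (ask + markup)

print(f"buyer's extra cost: ${cost_with - cost_without:,.2f}")   # $900.00
print(f"shares traded without intermediary: {sum(offers.values()):,}")
print(f"shares traded with intermediary:    {sum(offers.values()) + raced:,}")
```

The 90,000 raced shares trade twice (sellers to intermediary, intermediary to buyer), which is why measured volume rises sharply even though no additional liquidity has been supplied.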

The second example involves interactions between a dark pool and the public markets. Suppose that the highest bid price for a stock in the public exchanges is $100.00, and the lowest ask is $100.10. An individual submits a bid for a thousand shares at $100.05 to a dark pool, where it remains invisible and awaits a matching order. Shortly thereafter, a sell order for a thousand shares at $100.01 is placed at a public exchange. These orders are compatible and should trade against each other at a single price. Instead, both trade against an intermediary, which buys at the lower price, sells at the higher price, and captures the spread.
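The arithmetic of the dark pool example is worth spelling out, since the captured spread per share is small but pure profit. This sketch uses integer cents to avoid floating-point noise; the order sizes and prices are those given above.

```python
# Toy arithmetic for the dark-pool example above, in integer cents.
shares = 1_000
hidden_bid = 10_005    # $100.05, resting invisibly in the dark pool
public_ask = 10_001    # $100.01, arriving at a public exchange

# The intermediary buys from the public seller at $100.01 and sells to
# the hidden bidder at $100.05, capturing the spread on every share.
profit_dollars = (hidden_bid - public_ask) * shares / 100
print(f"intermediary's profit: ${profit_dollars:.2f}")  # $40.00
```

Had the two resting orders met directly, the same thousand shares would have changed hands once, at a single price between $100.01 and $100.05, with the $40 split between the transacting parties instead.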

As in the first example, the intermediary is not providing any benefit to either transacting party, and is not adding liquidity to the market for the asset. Volume is doubled but no economic purpose is served. Transactions that were about to occur anyway are preempted by a fraction of a second, and a net transfer of resources from investors to intermediaries is the only lasting consequence.

Michael Lewis has focused on practices such as these because their social wastefulness and fundamental unfairness are so transparent. But it's important to recognize that most of the strategies implemented by high frequency trading firms may not be quite so easy to classify or condemn. For instance, how is one to evaluate trading based on short-term price forecasts derived from genuinely public information? I have tried to argue in earlier posts that the proliferation of such information-extracting strategies can give rise to greater price volatility. Furthermore, an arms race among intermediaries willing to sink significant resources into securing the slightest of speed advantages must ultimately be paid for by investors. This is an immediate consequence of what I like to call Bogle's Law:
It is the iron law of the markets, the undefiable rules of arithmetic: Gross return in the market, less the costs of financial intermediation, equals the net return actually delivered to market participants.
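The quoted identity is pure arithmetic, but compounding makes its consequences dramatic over long horizons. The return and cost figures below are illustrative only, not estimates of actual intermediation costs.

```python
# Bogle's Law: net return = gross return - cost of intermediation.
# Illustrative figures only.
gross = 0.07          # gross annual market return
costs = 0.02          # annual intermediation drag (fees, spreads, turnover)
net = gross - costs

years = 30
wealth_gross = (1 + gross) ** years   # growth of $1 with zero costs
wealth_net = (1 + net) ** years       # growth of $1 after costs

print(f"$1 at the gross return: ${wealth_gross:.2f}")  # ~ $7.61
print(f"$1 at the net return:   ${wealth_net:.2f}")    # ~ $4.32
```

A two-point annual drag consumes well over a third of terminal wealth after three decades, which is why the resources sunk into the speed race cannot be a matter of indifference to investors.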
I hope that the minor factual errors in Flash Boys won't detract from the book's main message, or derail the important and overdue debate that it has predictably stirred. By focusing on the most egregious practices Lewis has already picked the low-hanging fruit. What remains to be figured out is how typical such practices really are. Taking full account of the range of strategies used by high frequency traders, to what extent are our asset markets characterized by superfluous financial intermediation?


Update (4/11). It took me a while to get through it but I’ve now finished the book. It’s well worth reading. Although the public discussion of Flash Boys has been largely focused on high frequency trading, the two most damning claims in the book concern broker-dealers and the SEC.

Lewis provides evidence to suggest that some broker-dealers direct trades to their own dark pools at the expense of their customers. Brokers with less than a ten percent market share in equities trading mysteriously manage to execute more than half of their customers’ orders in their own dark pools rather than in the wider market. This is peculiar because for any given order, the likelihood that the best matching bid or offer is found in a broker’s internal dark pool should roughly match the broker’s market share in equities trading. Instead, a small portion of the order is traded at external venues in a manner that allows the information content of the order to leak out. This results in a price response on other exchanges, allowing the internal dark pool to then provide the best match.

There’s also an account of a meeting between Brad Katsuyama, the book’s main protagonist, and the SEC’s Division of Trading and Markets that is just jaw-dropping. Katsuyama had discovered the reason why his large orders were only partially filled even though there seemed to be enough offers available across all exchanges for complete fulfillment (the first example above). In order to prevent their orders from being front-run after their first contact with the market, Katsuyama and his team developed a simple but ingenious defense. They split each order into components that matched the offers available at the various exchanges, and then submitted the components at carefully calibrated intervals (separated by microseconds) so that they would arrive at their respective exchanges simultaneously. The program written to accomplish this was subsequently called Thor. Katsuyama met with the SEC to explain how Thor worked, and was astonished to find that some of the younger staffers thought that the program, designed to protect fundamental traders from being front-run, was unfair to the high-frequency outfits whose strategies were being rendered ineffective.
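The timing idea behind Thor is simple enough to sketch in a few lines. The latencies below are invented for illustration; the principle, as described above, is to hold each component back just long enough that all of them arrive at their exchanges at the same instant.

```python
# A minimal sketch of Thor's timing idea (hypothetical latencies):
# delay each child order so that all components arrive simultaneously,
# leaving no venue with an early look at the order.

latencies_us = {   # one-way latency to each exchange, in microseconds
    "BATS": 2,
    "NYSE": 4,
    "NASDAQ": 7,
    "EDGX": 3,
}

slowest = max(latencies_us.values())
delays_us = {venue: slowest - lat for venue, lat in latencies_us.items()}

# Every component now arrives at time `slowest`; the fastest venue is
# simply held back the longest.
for venue in sorted(delays_us):
    arrival = delays_us[venue] + latencies_us[venue]
    print(f"{venue}: hold {delays_us[venue]}us, arrives at t={arrival}us")
```

With simultaneous arrival there is no window in which a fill at one exchange can be observed and raced to the others, which is precisely what defeated the strategy in the first example.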

This account, if accurate, reveals a truly astonishing failure within the SEC to understand the agency’s primary mandate. If this is the state of our regulatory infrastructure then there really is little hope for reform. 

Wednesday, November 20, 2013

The Payments System and Monetary Transmission

About forty minutes into the final session of a recent research conference at the IMF, Ken Rogoff made the following remarks:
We have regulation about the government having monopoly over currency, but we allow these very close substitutes, we think it's good, but maybe... it's not so good, maybe we want to have a future where we all have an ATM at the Fed instead of intermediated through a bank... and if you want a better deal, you want more interest on your money, then you can buy what is basically a bond fund that may be very liquid, but you are not guaranteed that you're going to get paid back in full. 
This is an idea that's long overdue. Allowing individuals to hold accounts at the Fed would result in a payments system that is insulated from banking crises. It would make deposit insurance completely unnecessary, thus removing a key subsidy that makes debt financing of asset positions so appealing to banks. There would be no need to impose higher capital requirements, since a fragile capital structure would result in a deposit drain. And there would be no need to require banks to offer cash mutual funds, since the accounts at the Fed would serve precisely this purpose.

But the greatest benefit of such a policy would lie elsewhere, in providing the Fed with a vastly superior monetary transmission mechanism. In a brief comment on Macroeconomic Resilience a few months ago, I proposed that an account be created at the Fed for every individual with a social security number, including minors. Any profits accruing to the Fed as a result of its open market operations could then be used to credit these accounts instead of being transferred to the Treasury. But these credits should not be immediately available for withdrawal: they should be released in increments if and when monetary easing is called for.

The main advantage of such an approach is that it directly eases debtor balance sheets when a recession hits. It can provide a buffer to those facing financial distress, allowing payments to be made on mortgages or auto loans in the face of an unexpected loss of income. And as children transition into adulthood, they will find themselves with accumulated deposits that could be used to finance educational expenditures or a down payment on a home.

In contrast, monetary policy as currently practiced targets creditor balance sheets. Asset prices rise as interest rates are driven down. The goal is to stimulate expenditure by lowering borrowing costs, but by definition this requires individuals to take on more debt. In an over-leveraged economy struggling through a balance sheet recession, such policies can only provide temporary relief. 

No matter how monetary policy is implemented, it has distributional effects. As a result, the impact on real income growth of a given nominal target is sensitive to the monetary transmission mechanism in place. One of the things I find most puzzling and frustrating about current debates concerning monetary policy is the focus on targets rather than mechanisms. To my mind, the choice of target---whether the inflation rate or nominal income growth or something entirely different---is of secondary importance compared to the mechanism used to attain it.

Rogoff was followed at the podium by Larry Summers, who voiced fears that we face a long period of secular stagnation. Paul Krugman has endorsed this view. I think that this fate can be avoided, but not by fiddling with inflation or nominal growth targets. The Fed is currently hobbled not by the choice of an inappropriate goal, but by the limited menu of transmission mechanisms at its disposal. If all you can do in the face of excessive indebtedness is to encourage more borrowing, swapping one target for another is not going to solve the problem. Thinking more imaginatively about mechanisms is absolutely essential, otherwise we may well be facing a lost decade of our own.

Thursday, September 26, 2013

The Romney Whale

In my last post I referenced a paper with David Rothschild that we posted earlier this month. The main goal of that work was to try to examine the manner in which new information is transmitted to asset prices, and to distinguish empirically between two influential theories of trading. To accomplish this we examined in close detail every transaction on Intrade over the two week period immediately preceding the 2012 presidential election. We looked at about 84,000 transactions involving 3.5 million contracts and over 3,200 unique accounts, and in the process of doing so quickly realized that a single trader was responsible for about a third of all bets on Romney to win, and had wagered and lost close to 4 million dollars in just this one fortnight.  

While this discovery was (and remains) incidental to the main message of the paper, it has attracted a great deal of media attention over the past couple of days. (About a dozen articles are linked here, and there have been a couple more published since.) Most of these reports state the basic facts and make some conjectures about motivation. The purpose of this post is to describe and interpret what we found in a bit more detail. Much of what is said here can also be found in Section 5 of the paper.

To begin with, the discovery of a large trader with significant exposure to a Romney loss was not a surprise. There was discussion of a possible "Romney Whale" in the Intrade chat rooms and elsewhere leading up to the election, as well as open recognition of the possibility of arbitrage with Betfair. On the afternoon of election day I noticed that the order book for Romney contracts was unusually asymmetric, with the number of bids far exceeding the number of offers, and posted this:

This was circulated quite widely thanks to the following response:

In a post on the following day I explained why I thought that it was an attempt at manipulation:
Could this not have been just a big bet, placed by someone optimistic about Romney's chances? I don't think so, for two reasons. First, if one wanted to bet on Romney rather than Obama, much better odds were available elsewhere, for instance on Betfair. More importantly, one would not want to leave such large orders standing at a time when new information was emerging rapidly; the risk of having the orders met by someone with superior information would be too great. Yet these orders stood for hours, and effectively placed a floor on the Romney price and a ceiling on the price for Obama.
Ron Bernstein at Intrade has explained why the disparity with Betfair is not surprising given the severe constraints faced by US residents in opening and operating accounts at the latter exchange, and the differential fee structure. Nevertheless, I still find the second reason compelling.

The strategic manner in which these orders were placed, with large bids at steadily declining intervals, suggested to me that this was an experienced trader making efficient use of the available funds in order to have the maximum price impact. The orders lower down on the bid side of the book served as deterrents, revealing to counterparties that a sale at the best bid would not result in a price collapse. This is why I described the trader as sophisticated in my conversations with reporters at the WSJ and Politico. Characterizing his behavior as stupid presumes that this was a series of bets made in the conviction that Romney would prevail, which I doubt.

But if this was an attempt to manipulate prices, what was its purpose? We consider a couple of different possibilities in the paper. The one that has been most widely reported is that it was an attempt to boost campaign contributions, morale, and turnout. But there's another very different possibility that's worth considering.

On the afternoon of the 2004 presidential election, exit polls were leaked that suggested a surprise victory by John Kerry, and the result was a sharp rise in the price of his Tradesports contract. (Tradesports was a precursor to Intrade.) This was sustained for several hours until election returns began to come in that were inconsistent with the reported polls. In an interesting study published in 2007, Snowberg, Wolfers and Zitzewitz used this event to examine the effects on the S&P futures market of beliefs about the electoral outcome. They found that perceptions of a Kerry victory resulted in a decline in the price of the index, an effect they interpreted as causal. The following chart from their paper makes the point quite vividly:

Motivated in part by this finding, we wondered whether the manipulation of beliefs about the electoral outcome could have been motivated by financial gain. If Intrade could be used to manipulate beliefs in a manner that affected the value of a stock price index, or specific securities such as health or oil and gas stocks, then a four million dollar loss could be easily compensated by a much larger gain elsewhere. Could this have provided motivation for the behavior of our large trader?

We decided that this was extremely unlikely, for two reasons. First, the 2004 analysis showed only that changes in beliefs affected the index, not that the Intrade price caused the change in beliefs. In fact, it was the leaked exit polls that affected both the Intrade price and the S&P 500 futures market. This does not mean that a change in the Intrade price, absent confirming evidence from elsewhere, could not have a causal impact on other asset prices. It's possible that it could, but not plausible in our estimation.

Furthermore, the partisan effects identified by Snowberg et al. were completely absent in 2012. In fact, if anything, the effects were reversed. S&P 500 futures reacted negatively to an increase in perceptions of a Romney victory during the first debate, and positively to the announcement of the result on election day. One possible reason is that monetary policy was expected to be tighter under a Romney administration. Here is the chart for the first debate:

The index falls as Romney's prospects appear to be rising, but the effect is clearly minor. The main point is that any attempt to use 2004 correlations to influence the S&P via changes in Intrade prices, even if they were successful in altering beliefs about the election, would have been futile or counterproductive from the perspective of financial gain.

This is why we ultimately decided that this trader's activity was simply a form of support to the campaign. While four million dollars is a fortune for most of us, it is less than the cost of making and airing a primetime commercial in a major media market. In the age of multi-billion dollar political campaigns, it really is a drop in the bucket. Even if the impact on perceptions was small, it's not clear to me that there was an alternative use of this money at this late stage that would have had a greater impact. Certainly television commercials had been largely tuned out by this point.

It's important to keep in mind that attempts at manipulation notwithstanding, real money peer-to-peer prediction markets have been very effective in forecasting outcomes over the two decades in which they have been in use. Furthermore, as I hope my paper with David demonstrates, the simplicity of the binary option contract makes the data from such markets valuable for academic research. It is true that participation in these markets is a form of gambling, but that is also the case for many short-horizon traders in markets for more traditional assets, especially options, futures and swaps. The volume of speculation in such markets exceeds the demands for hedging by an order of magnitude. From a regulatory standpoint, there is really no rational basis for treating prediction markets differently.

Sunday, September 22, 2013

Information, Beliefs, and Trading

Even the most casual observer of financial markets cannot fail to be impressed by the speed with which prices respond to new information. Markets may overreact at times but they seldom fail to react at all, and the time lag between the emergence of information and an adjustment in price is extremely short in the case of liquid securities such as common stock.

Since all price movements arise from orders placed and executed, prices can respond to news only if there exist individuals in the economy who are alert to the arrival of new information and are willing to adjust positions on this basis. But this raises the question of how such "information traders" are able to find willing counterparties. After all, who in their right mind wants to trade with an individual having superior information?

This kind of reasoning, when pushed to its logical limits, leads to some paradoxical conclusions. As shown by Aumann, two individuals who are commonly known to be rational, and who share a common prior belief about the likelihood of an event, cannot agree to disagree no matter how different their private information might be. That is, they can disagree only if this disagreement is itself not common knowledge. But the willingness of two risk-averse parties to enter opposite sides of a bet requires them to agree to disagree, and hence trade between risk-averse individuals with common priors is impossible if they are commonly known to be rational.

This may sound like an obscure and irrelevant result, since we see an enormous amount of trading in asset markets, but I find it immensely clarifying. It means that in thinking about trading we have to allow for either departures from (common knowledge of) rationality, or we have to drop the common prior hypothesis. And these two directions lead to different models of trading, with different and testable empirical predictions.

The first approach, which maintains the common prior assumption but allows for traders with information-insensitive asset demands, was developed in a hugely influential paper by Albert Kyle. Such "noise traders" need not be viewed as entirely irrational; they may simply have urgent liquidity needs that require them to enter or exit positions regardless of price. Kyle showed that the presence of such traders induces market makers operating under competitive conditions to post bid and ask prices that could be accepted by any counterparty, including information traders. From this perspective, prices come to reflect information because informed parties trade with uninformed market makers, who compensate for losses on these trades with profits made in transactions with noise traders.
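A minimal simulation in the spirit of Kyle's single-auction model makes this division of profits concrete. The parameter values and normalizations below are mine, not from the paper: both standard deviations are set to one, the mean asset value is normalized to zero, and the informed demand intensity and price-impact coefficient take their equilibrium values for that case.

```python
import random

# Sketch of Kyle's single-auction setting: a market maker prices the net
# order flow, an informed trader profits, and noise traders fund those
# profits, leaving the market maker with zero expected profit.

random.seed(0)
sigma_v, sigma_u = 1.0, 1.0       # std. dev. of asset value and noise flow
beta = sigma_u / sigma_v          # informed trader's demand intensity
lam = sigma_v / (2 * sigma_u)     # market maker's price-impact coefficient

informed_pnl, noise_pnl = 0.0, 0.0
n = 200_000
for _ in range(n):
    v = random.gauss(0, sigma_v)  # liquidation value (mean normalized to 0)
    u = random.gauss(0, sigma_u)  # noise-trader order flow
    x = beta * v                  # informed order
    p = lam * (x + u)             # price set from observed total order flow
    informed_pnl += x * (v - p)
    noise_pnl += u * (v - p)

print(f"avg informed profit per auction:   {informed_pnl / n:+.3f}")  # ~ +0.5
print(f"avg noise-trader loss per auction: {noise_pnl / n:+.3f}")     # ~ -0.5
```

The two averages are (approximately) equal and opposite, which is the competitive market maker's break-even condition described above.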

An alternative approach, which does not require the presence of noise traders at all but drops the common prior assumption, can be traced to a wonderful (and even earlier) paper by Harrison and Kreps. Here all traders have the same information at each point in time, but disagree about its implications for the value of securities. Trade occurs as new information arrives because individuals interpret this information differently. (Formally, they have heterogeneous priors and can therefore disagree even if their posterior beliefs are commonly known.) From this perspective prices respond to news because of heterogeneous interpretations of public information.

Since these two approaches imply very different distributions of trading strategies, they are empirically distinguishable in principle. But identifying strategies from a sequence of trades is not an easy task. At a minimum, one needs transaction level data in which each trade is linked to a buyer and seller account, so that the evolution of individual portfolios can be tracked over time. From these portfolio adjustments one might hope to deduce the distribution of strategies in the trading population.

In a paper that I have discussed previously on this blog, Kirilenko, Kyle, Samadi and Tuzun have used transaction level data from the S&P 500 E-Mini futures market to partition accounts into a small set of groups, thus mapping out an "ecosystem" in which different categories of traders "occupy quite distinct, albeit overlapping, positions." Their concern was primarily with the behavior of high frequency traders both before and during the flash crash of May 6, 2010, especially in relation to liquidity provision. They do not explore the question of how prices come to reflect information, but in principle their data would allow them to do so.

I have recently posted the first draft of a paper, written jointly with David Rothschild, that looks at transaction level data from a very different source: Intrade's prediction market for the 2012 US presidential election. Anyone who followed this market over the course of the election cycle will know that prices were highly responsive to information, adjusting almost instantaneously to news. Our main goal in the paper was to map out an ecology of trading strategies and thereby gain some understanding of the process by means of which information comes to be reflected in prices. (We also wanted to evaluate claims made at the time of the election that a large trader was attempting to manipulate prices, but that's a topic for another post.)

The data are extremely rich: for each transaction over the two week period immediately preceding the election, we know the price, quantity, time of trade, and aggressor side. Most importantly, we have unique identifiers for the buyer and seller accounts, which allows us to trace the evolution of trader portfolios and profits. No identities can be deduced from this data, but it is possible to make inferences about strategies from the pattern of trades.

We focus on contracts referencing the two major party candidates, Obama and Romney. These contracts are structured as binary options, paying $10 if the referenced candidate wins the election and nothing otherwise. The data allow us to compute volume, transactions, aggression, holding duration, directional exposure, margin, and profit for each account. Using this, we are able to group traders into five categories, each associated with a distinct trading strategy.
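As a sketch of what one such account-level statistic looks like, the following hypothetical function flags whether a trader's net directional exposure ever changes sign over a sequence of trades. (The paper's actual categorization rests on several metrics, not this one alone, and this function is my illustration rather than its code.)

```python
# Classify an account by whether its net directional exposure ever flips.
# Trades are signed contract quantities: positive = long the candidate.

def ever_switches_direction(trades):
    """Return True if the running net position ever changes sign."""
    position = 0
    seen_sign = 0           # sign of the first nonzero net position
    for q in trades:
        position += q
        sign = (position > 0) - (position < 0)
        if sign and seen_sign and sign != seen_sign:
            return True     # exposure flipped from long to short (or back)
        if sign:
            seen_sign = sign
    return False

print(ever_switches_direction([+500, -200, +100]))   # False: always net long
print(ever_switches_direction([+500, -800, +100]))   # True: flips to net short
```

Applied across all accounts, a statistic like this separates traders with a persistent directional bias from the small unbiased minority discussed below.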

During our observational window there were about 84,000 separate transactions involving 3.5 million contracts and over 3,200 unique accounts. The single largest trader accumulated a net long Romney position of 1.2 million contracts (in part by shorting Obama contracts) and did this by engaging in about 13,000 distinct trades for a total loss in two weeks of about 4 million dollars. But this was not the most frequent trader: a different account was responsible for almost 34,000 transactions, which were clearly implemented algorithmically.

One of our most striking findings is that 86% of traders, accounting for 52% of volume, never change the direction of their exposure even once. A further 25% of volume comes from 8% of traders who are strongly biased in one direction or the other. A handful of arbitrageurs account for another 14% of volume, leaving just 6% of accounts and 8% of volume associated with individuals who are unbiased in the sense that they are willing to take directional positions on either side of the market. This suggests to us that information finds its way into prices largely through the activities of traders who are biased in one direction or another, and differ with respect to their interpretations of public information rather than their differential access to private information.

Prediction markets have historically generated forecasts that compete very effectively with those of the best pollsters.  But if most traders never change the direction of their exposure, how does information come to be reflected in prices? We argue that this occurs through something resembling the following process. Imagine a population of traders partitioned into two groups, one of which is predisposed to believe in an Obama victory while the other is predisposed to believe the opposite. Suppose that the first group has a net long position in the Obama contract while the second is short, and news arrives that suggests a decline in Obama's odds of victory (think of the first debate). Both groups revise their beliefs in response to the new information, but to different degrees. The latter group considers the news to be seriously damaging while the former thinks it isn't quite so bad. Initially both groups wish to sell, so the price drops quickly with very little trade since there are few buyers. But once the price falls far enough, the former group is now willing to buy, thus expanding their long position, while the latter group increases their short exposure. The result is that one group of traders ends up as net buyers of the Obama contract even when the news is bad for the incumbent, while the other ends up increasing short exposure even when the news is good. Prices respond to information, and move in the manner that one would predict, without any individual trader switching direction.
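This belief-revision process can be made concrete with a stylized two-group example. All the numbers below are hypothetical, and the "clearing price" is a deliberately crude midpoint rather than a model of actual price formation; the sketch only shows that prices can respond to news in the right direction without any trader switching sides.

```python
# Two groups with fixed directional biases; news arrives; the price
# adjusts; neither group ever reverses its exposure. Numbers hypothetical.

p = 0.75                      # market probability of an Obama win
longs_belief = 0.78           # group holding the Obama contract long
shorts_belief = 0.72          # group holding it short

# Bad news for Obama: both groups revise down, but to different degrees.
longs_belief -= 0.04          # "not so bad": small revision
shorts_belief -= 0.10         # "seriously damaging": large revision

# With both beliefs now below the old price, both groups want to sell, so
# the price falls quickly with little trade until it lands between them.
p_new = (longs_belief + shorts_belief) / 2   # stylized clearing price

assert shorts_belief < p_new < longs_belief
print(f"price moves {p:.2f} -> {p_new:.2f}")
# The longs (belief 0.74) now see the contract as cheap at 0.68 and add to
# their long position; the shorts (belief 0.62) still see it as rich and
# add to their short. The price responded to news; no one switched sides.
```

The aggregate price move looks exactly like information being impounded, even though every account's trades, viewed in isolation, show an unwavering directional bias.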

This is a very special market, to be sure, more closely related to sports betting than to stock trading. But it does not seem implausible to us that similar patterns of directional exposure may also be found in more traditional and economically important asset markets. Especially in the case of consumer durables, attachment to products and the companies that make them is widespread. It would not be surprising if one were to find Apple or Samsung partisans among investors, just as one finds them among consumers. In this case one would expect to find a set of traders who increase their long positions in Apple even in the face of bad news for the company because they believe that the price has declined more than is warranted by the news. Whether or not such patterns exist is an empirical question that can only be settled with a transaction level analysis of trading data.

If there's a message in all this, it is that markets aggregate not just information, but also fundamentally irreconcilable perspectives. Prices, as John Kay puts it, "are the product of a clash between competing narratives about the world." Some of the volatility that one observes in asset markets arises from changes in perspectives, which can happen independently of the arrival of information. This is why substantial "corrections" can occur even in the absence of significant news, and why stock prices appear to "move too much to be justified by subsequent changes in dividends." What makes markets appear invincible is not the perfect aggregation of information that is sometimes attributed to them, but the sheer unpredictability of persuasion, exhortation, and social influence that can give rise to major shifts in the distribution of narratives. 

Saturday, August 03, 2013

The Spider and the Fly

Michael Lewis has written a riveting report on the trial, incarceration, release, and re-arrest of Sergey Aleynikov, once a star programmer at Goldman Sachs. It's a tale of a corporation coming down with all its might on a former employee who, when all is said and done, damaged the company only by deciding to take his prodigious talents elsewhere.

As is always the case with Lewis, the narrative is brightly lit while the economic insights lie half-concealed in the penumbra of his prose. In this case he manages to shed light on the enormous divergence between the private and social costs of high frequency trading, as well as the madness of an intellectual property regime in which open-source code routinely finds its way into products that are then walled off from the public domain, violating the spirit if not the letter of the original open licenses.

Aleynikov was hired by Goldman to help improve its relatively weak position in what is rather euphemistically called the market-making business. In principle, this is the business of offering quotes on both sides of an asset market in order that investors wishing to buy or sell will find willing counterparties. It was once a protected oligopoly in which specialists and dealers made money on substantial spreads between bid and ask prices, in return for which they provided some measure of price continuity.

But these spreads have vanished over the past decade or so as the original market makers have been displaced by firms using algorithms to implement trading strategies that rely on rapid responses to incoming market data. The strategies are characterized by extremely short holding periods, limited intraday directional exposure, and very high volume. A key point in the transition was the adoption in 2007 of Regulation NMS (National Market System), which required that orders be routed to the exchange offering the best available price. This led to a proliferation of trading venues, since order flow could be attracted by price alone. Lewis describes the transition thus:
For reasons not entirely obvious... the new rule stimulated a huge amount of stock-market trading. Much of the new volume was generated not by old-fashioned investors but by extremely fast computers controlled by high-frequency-trading firms... Essentially, the more places there were to trade stocks, the greater the opportunity there was for high-frequency traders to interpose themselves between buyers on one exchange and sellers on another. This was perverse. The initial promise of computer technology was to remove the intermediary from the financial market, or at least reduce the amount he could scalp from that market. The reality has turned out to be a boom in financial intermediation and an estimated take for Wall Street of somewhere between $10 and $20 billion a year, depending on whose estimates you wish to believe. As high-frequency-trading firms aren’t required to disclose their profits... no one really knows just how much money is being made. But when a single high-frequency trader is paid $75 million in cash for a single year of trading (as was Misha Malyshev in 2008, when he worked at Citadel) and then quits because he is “dissatisfied,” a new beast is afoot. 
The combination of new market rules and new technology was turning the stock market into, in effect, a war of robots. The robots were absurdly fast: they could execute tens of thousands of stock-market transactions in the time it took a human trader to blink his eye. The games they played were often complicated, but one aspect of them was simple and clear: the faster the robot, the more likely it was to make money at the expense of the relative sloth of others in the market.
This last point is not quite right: speed alone can't get you very far unless you have an effective trading strategy. Knight Capital managed to lose almost a half billion dollars in less than an hour not because their algorithms were slow but because they did not faithfully execute the intended strategy. But what makes a strategy effective? The key, as Andrei Kirilenko and his co-authors discovered in their study of transaction-level data from the S&P E-mini futures market, is predictive power:
High Frequency Traders effectively predict and react to price changes... [they] are consistently profitable although they never accumulate a large net position... HFTs appear to trade in the same direction as the contemporaneous price and prices of the past five seconds. In other words, they buy... if the immediate prices are rising. However, after about ten seconds, they appear to reverse the direction of their trading... possibly due to their speed advantage or superior ability to predict price changes, HFTs are able to buy right as the prices are about to increase... They do not hold positions over long periods of time and revert to their target inventory level quickly... HFTs very quickly reduce their inventories by submitting marketable orders. They also aggressively trade when prices are about to change. 
Aleynikov was hired to speed up Goldman's systems, but he was largely unaware of (and seemed genuinely uninterested in) the details of their trading strategies. Here's Lewis again:
Oddly, he found his job more interesting than the stock-market trading he was enabling. “I think the engineering problems are much more interesting than the business problems,” he says... He understood that Goldman’s quants were forever dreaming up new trading strategies, in the form of algorithms, for the robots to execute, and that these traders were meant to be extremely shrewd. He grasped further that “all their algorithms are premised on some sort of prediction—predicting something one second into the future.”
Effective prediction of price movements, even over such very short horizons, is not an easy task. It is essentially a problem of information extraction, based on rapid processing of incoming market data. The important point is that this information would have found its way into prices sooner or later in any case. By anticipating the process by a fraction of a second, the new market makers are able to generate a great deal of private value. But they are not responsible for the informational content of prices, and their profits, as well as the substantial cost of their operations, therefore must come at the expense of those investors who are actually trading on fundamental information.

It is commonly argued that high frequency trading benefits institutional and retail investors because it has resulted in a sharp decline in bid-ask spreads. But this spread is a highly imperfect measure of the value to investors of the change in regime. What matters, especially for institutional investors placing large orders based on fundamental research, is not the marginal price at which the first few shares trade but the average price over the entire transaction. And if their private information is effectively extracted early in this process, the price impact of their activity will be greater, and price volatility will be higher in general.
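The distinction between the marginal and the average execution price is easy to make concrete with a toy order book. The numbers below are my own hypothetical illustration (they echo the share quantities in the example at the start of this post), not figures from the book:

```python
# A toy illustration (hypothetical numbers): why the quoted bid-ask
# spread understates trading costs for a large order whose intent
# leaks out before it is fully executed.

def average_price(levels, qty):
    """Walk a list of (price, size) offers and return the average fill price."""
    filled, cost = 0, 0.0
    for price, size in levels:
        take = min(size, qty - filled)
        cost += take * price
        filled += take
        if filled == qty:
            break
    return cost / filled

order = 100_000

# Without leakage: the whole lot is available at the quoted offer.
quiet_book = [(100.00, 100_000)]

# With leakage: only the first slice fills at the quote; the remainder
# is repriced five cents higher once the order's intent is detected.
leaky_book = [(100.00, 10_000), (100.05, 90_000)]

print(round(average_price(quiet_book, order), 3))  # 100.0
print(round(average_price(leaky_book, order), 3))  # 100.045
```

The quoted spread at the top of the book is identical in both cases, yet the large buyer pays 4.5 cents per share more in the second: the cost shows up in the average, not the margin.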

After all, it was a large sell order from an institutional investor in the S&P futures market that triggered the flash crash of May 2010, briefly sending indexes plummeting and individual securities trading at absurd prices. Accenture traded for a penny on the way down, and Sotheby's for a hundred thousand dollars a share on the bounce back.

In evaluating the impact on investors of the change in market microstructure, it is worth keeping in mind Bogle's Law:
It is the iron law of the markets, the undefiable rules of arithmetic: Gross return in the market, less the costs of financial intermediation, equals the net return actually delivered to market participants.
This is just basic accounting, but often overlooked. If one wants to argue that the new organization of markets has been beneficial to investors, one needs to make the case that the costs of financial intermediation in the aggregate have gone down. Smaller bid-ask spreads have to be balanced against the massive increase in volume, the profits of the new market makers, and most importantly, the costs of high-frequency trading. These include nontrivial payments to highly skilled programmers and quants, as well as the costs of infrastructure, equipment, and energy. Lewis notes that the "top high-frequency-trading firms chuck out their old gear and buy new stuff every few months," but these costs probably pale in comparison with those of cables facilitating rapid transmission across large distances and the more mundane costs of cooling systems. All told, it is far from clear that the costs of financial intermediation have fallen in the aggregate.
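Bogle's Law is just an identity, but writing it out with numbers makes the trade-off plain. All of the figures below are illustrative assumptions of mine, not estimates:

```python
# Bogle's Law with hypothetical numbers: aggregate net return falls
# one-for-one with aggregate intermediation costs, however those costs
# are divided among spreads, trading profits, and infrastructure.

gross_return = 0.070     # market return before any costs (assumed)
costs = {
    "bid-ask spreads": 0.001,   # smaller than in the specialist era
    "HFT profits":     0.002,   # these components are illustrative
    "infrastructure":  0.002,   # guesses, not estimates
}
net_return = gross_return - sum(costs.values())
print(round(net_return, 3))  # 0.065
```

The claim that investors are better off under the new regime is a claim that the *sum* of these components has fallen, not that any single component (such as spreads) has.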

This post is already too long, but I'd like to briefly mention a quite different point that emerges from the Lewis article since it relates to a theme previously explored on this blog. Aleynikov relied routinely on open-source code, which he modified and improved to meet the needs of the company. It is customary, if not mandatory, for these improvements to be released back into the public domain for use by others. But his attempts to do so were blocked:
Serge quickly discovered, to his surprise, that Goldman had a one-way relationship with open source. They took huge amounts of free software off the Web, but they did not return it after he had modified it, even when his modifications were very slight and of general rather than financial use. “Once I took some open-source components, repackaged them to come up with a component that was not even used at Goldman Sachs,” he says. “It was basically a way to make two computers look like one, so if one went down the other could jump in and perform the task.” He described the pleasure of his innovation this way: “It created something out of chaos. When you create something out of chaos, essentially, you reduce the entropy in the world.” He went to his boss, a fellow named Adam Schlesinger, and asked if he could release it back into open source, as was his inclination. “He said it was now Goldman’s property,” recalls Serge. “He was quite tense. When I mentioned it, it was very close to bonus time. And he didn’t want any disturbances.” 
Open source was an idea that depended on collaboration and sharing, and Serge had a long history of contributing to it. He didn’t fully understand how Goldman could think it was O.K. to benefit so greatly from the work of others and then behave so selfishly toward them... But from then on, on instructions from Schlesinger, he treated everything on Goldman Sachs’s servers, even if it had just been transferred there from open source, as Goldman Sachs’s property. (At Serge’s trial Kevin Marino, his lawyer, flashed two pages of computer code: the original, with its open-source license on top, and a replica, with the open-source license stripped off and replaced by the Goldman Sachs license.)
This unwillingness to refresh the reservoir of ideas from which one drinks may be good for the firm but is clearly bad for the economy. As Michele Boldrin and David Levine have strenuously argued, the rate of innovation in the software industry was dramatic prior to 1981 (before which software could not be patented):
What about the graphical user interfaces, the widgets such as buttons and icons, the compilers, assemblers, linked lists, object oriented programs, databases, search algorithms, font displays, word processing, computer languages – all the vast array of algorithms and methods that go into even the simplest modern program? ... Each and every one of these key innovations occurred prior to 1981 and so occurred without the benefit of patent protection. Not only that, had all these bits and pieces of computer programs been patented, as they certainly would have in the current regime, far from being enhanced, progress in the software industry would never have taken place. According to Bill Gates – hardly your radical communist or utopist – “If people had understood how patents would be granted when most of today's ideas were invented, and had taken out patents, the industry would be at a complete standstill today.”
Vigorous innovation in open source development continues under the current system, but relies on a willingness to give back on the part of those who benefit from it, even if they are not legally mandated to do so. Aleynikov's natural instincts to reciprocate were blocked by his employer for reasons that are easy to understand but very difficult to sympathize with.

Lewis concludes his piece by reflecting on Goldman's motives:
The real mystery, to the insiders, wasn’t why Serge had done what he had done. It was why Goldman Sachs had done what it had done. Why on earth call the F.B.I.? Why coach your employees to say what they need to say on a witness stand to maximize the possibility of sending him to prison? Why exploit the ignorance of both the general public and the legal system about complex financial matters to punish this one little guy? Why must the spider always eat the fly?
The answer to this, I think, is contained in the company's response to Lewis, which is now appended to the article. The statement is impersonal, stern, vague and legalistic. It quotes an appeals court that overturned the verdict in a manner that suggests support for Goldman's position. Like the actions of the proverbial spider, it's a reflex, unconstrained by reflection or self-examination. Even if the management's primary fiduciary duty is to protect the interests of shareholders, this really does seem like a very shortsighted way to proceed.


Update (August 6). RT Leuchtkafer, whose writing has been featured in several earlier posts, sends in the following by email (posted with permission):
I'd add the task for HFT shops is more than information extraction in short timeframes - they've expanded that task to be one of coaxing information leakage from the exchanges, for which they pay the exchanges handsomely.

On intermediation and its costs, intermediary participation in the equities markets has easily tripled since the HFT innovation (and the deregulation of intermediation), and so on net I've argued that aggregate position intermediation costs have gone up even as per share costs have gone down.  Intermediaries make much less on a share than they used to but thanks to deregulation they interpose themselves between natural buyers and sellers much more often than they did, with the result that even though portfolio implementation costs have gone down the portion of those costs captured by intermediaries has greatly increased. 
In addition, Steve Waldman has pointed out that the costs of defensive expenditures to counter HFT strategies are also subject to Bogle's Law and need to be accounted for. For some vivid examples see this post by Jason Voss (via Themis Trading).

Responses to this post on Economist's View and Naked Capitalism are also worth a look; I especially recommend the discussion of open source following this comment by Brooklin Bridge.

Thursday, April 25, 2013

Macon Money

Among the many fascinating people currently affiliated with the Microsoft Research New York lab is Kati London, judged by MIT's Technology Review Magazine (2010) to be among the “Top 35 Innovators Under 35.” Through her involvement with the start-up area/code, Kati has developed games that transform the individuals who play them and the communities in which they reside.

One such project is Macon Money, an initiative involving the Knight Foundation and the College Hill Alliance in Macon, Georgia. This simple experiment, amazingly enough, sheds light on some fundamental questions in monetary economics, helps explain why conventional monetary policy via asset purchases has recently been so ineffective in stimulating the economy, suggests alternative approaches that might be substantially more effective, and speaks to the feasibility of the Chicago Plan (originally advanced by Henry Simons and Irving Fisher, and recently endorsed by a couple of IMF economists) to abolish privately issued money.

So what exactly was the Macon Money project? It began with a grant of $65,000 by the Knight Foundation, which was used to back the issue of bonds. These bonds were (literally) sliced in two and the halves were given away through various channels to residents of Macon. If a pair of individuals holding halves of the same bond could find each other, they were able to exchange the (now complete) bond for Macon Money, which could then be used to make expenditures at a variety of local businesses. These businesses were happy to accept Macon Money because it could be redeemed at par for US currency.

The demographics of the participant population, the distribution of expenditures, and the strategies used by players to find their "other halves" are all described in an evaluation summary. The project had the twin goals of building social capital and stimulating economic development. Although few enduring ties were created among the players, participation did create a sense of excitement about Macon and greater optimism about its future. And participating businesses managed to find a new pool of repeat customers.

Macon money was a fiscal intervention (an injection of funds into the locality) accomplished using the device of privately issued money convertible at par. There was a temporary increase in the local money supply which was extinguished when businesses redeemed their notes. An interesting thought experiment is to imagine what would have happened if, instead of being convertible at par, businesses could only convert Macon Money into currency at a small discount.

Businesses that accept credit card payments are exactly in this situation, facing a haircut of 1-3 percent when they convert credit card payments into cash. Most businesses that participated in the original experiment would therefore likely continue to participate in the modified one. After all, businesses involved in Groupon campaigns accept a 75% haircut once Groupon takes its share of the discounted price.

But there is one critically important difference between Macon Money and a credit card payment: the former is negotiable while the latter is not. That is, instead of being redeemed at a small discount, Macon Money could be spent at par. If enough businesses were participating, it would make sense for each one to spend rather than redeem its receipts. The privately issued money would therefore remain in circulation.

What about a business that had no interest in spending its receipts on locally provided goods and services? Even in this case, there would be better alternatives to redeeming at a discount. For instance, if the discount were 3%, there would be room for the emergence of a local intermediary who offered cash at a more attractive 2% discount to the business, and then sold Macon Money at 1% below par to those who did wish to spend locally. Again, the privately issued money would remain in circulation.
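The arbitrage described above is simple enough to write out explicitly. The discount figures are the hypothetical ones from this thought experiment, not anything from the actual project:

```python
# The hypothetical arbitrage from the thought experiment above: with the
# issuer redeeming at a 3% discount, an intermediary can offer businesses
# a better price and resell to local spenders below par. All figures are
# illustrative.

FACE = 100.00            # face value of notes a business holds
ISSUER_DISCOUNT = 0.03   # issuer redeems at 97 cents on the dollar
BROKER_BID = 0.02        # intermediary pays the business 98
BROKER_ASK = 0.01        # intermediary resells at 99

via_issuer = FACE * (1 - ISSUER_DISCOUNT)   # business gets about 97.00
via_broker = FACE * (1 - BROKER_BID)        # business gets about 98.00
spender_pays = FACE * (1 - BROKER_ASK)      # ~99 buys 100 of local spending
broker_margin = spender_pays - via_broker   # ~1.00 per 100 of face value

assert via_broker > via_issuer              # the business prefers the broker
```

Every party does better than under direct redemption: the business recovers more, the local spender buys purchasing power below par, and the intermediary pockets the difference. That is precisely why the notes would tend to remain in circulation rather than being redeemed.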

As a result, the local money supply would have grown not just for a brief period, but indefinitely. The discount itself would allow for more money to be injected for any given amount of backing funds. And as long as convertibility was never in doubt, substantially more money could be issued than the funds earmarked to back it.
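How much further a fixed backing fund could stretch is a back-of-the-envelope calculation. The discount and redemption parameters below are my assumptions for illustration; only the $65,000 figure comes from the project:

```python
# Back-of-the-envelope sketch (assumed parameters, not the project's):
# how a redemption discount and continued circulation stretch a fixed
# backing fund.

def max_issue(backing, discount, redeemed_fraction):
    """Largest issue the fund can cover if only `redeemed_fraction` of the
    notes ever come back, each paid out at (1 - discount) per dollar."""
    return backing / (redeemed_fraction * (1 - discount))

BACKING = 65_000  # the Knight Foundation grant

print(round(max_issue(BACKING, 0.00, 1.0)))   # 65000: par, full redemption
print(round(max_issue(BACKING, 0.03, 1.0)))   # 67010: 3% discount
print(round(max_issue(BACKING, 0.03, 0.5)))   # 134021: half keeps circulating
```

The discount alone buys only a little headroom; the real leverage comes from circulation, since every note that is spent rather than redeemed never draws on the backing fund at all.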

This simple thought experiment tells us something about policy. Macon Money provided an injection of liquidity that improved the balance sheets of those who managed to secure bonds. This allowed for an increase in aggregate expenditure, and given the slack in local productive capacity, also an increase in production.

It was expansionary monetary policy, but quite different from the kind of policy pursued by the Federal Reserve. The Fed expands the money supply by buying securities, which leads to a change in the composition of the asset side of individual balance sheets. Higher asset prices (and correspondingly lower interest rates) are supposed to stimulate demand through increased borrowing at more attractive rates. But in a balance sheet recession, distressed borrowers are unwilling to take on more debt and the stimulative effects of such a policy are accordingly muted. This is why calls for an alternative approach to monetary policy make analytical sense.

Furthermore, the fact that Macon Money was accepted only locally meant that it could not be used for imports from other locations. The monetary stimulus was therefore not subject to the kinds of demand leakages that would arise from the issue of generalized fiat money.

Finally, the project provided a very clear illustration of the difficulty of abolishing privately issued money. Unless one were to prevent all creditworthy institutions from issuing convertible liabilities, it would be virtually impossible to halt the use of such liabilities as media of exchange. Put differently, we are always going to have a shadow banking system. But what the Macon Money initiative shows is that the creative and judicious use of private money, backed by creditworthy foundations, can revitalize communities currently operating well below their productive potential. Whether this can be done in a scalable way, with some government involvement and oversight to prevent abuse, remains unclear. But surely the idea deserves a closer look?


Update. Another important feature of Macon Money is the fact that it cannot be used to pay down debt unless the creditors are themselves local. This means that even highly indebted households will either spend it, or pay down debt by selling their notes to someone who will. If increasing economic activity is the goal, this is vastly superior to disbursements of cash.

Joseph Cotterill asks (rhetorically) whether Macon Money is the anti-Bitcoin. Exactly right, and very well put.


Ashwin Parameswaran has sent in the following via email (posted with permission):
Just read your post on Macon money - fascinating experiment and your thought experiment on how it could stay in circulation was equally interesting.  
On the thought experiment, there's also a possibility that if Macon money can only be converted into currency at a discount then Macon money itself would be valued at a discount. Coming back to your example on credit cards, this often happens in countries where retailers can get away with it. Lots of small retailers in India offer cash discounts even when they give you a receipt, i.e. it's not just a tax dodge. Many retailers in the UK simply don't accept Amex cards because of the size of the haircut they impose.  
On the broader subject of imagining various types of money, this is probably closest to private banking money whereas Bitcoin is by design closest to gold. Another experiment is the idea of pure local credit money without even the intermediation of a private bank-like entity which seems to be the idea behind Ripple although the current implementation seems to be a little different. Over the last year I've done a lot of reading on 14th-17th century English history of credit/money and it's almost universally accepted that most of the local money worked largely with such peer-to-peer credit systems with gold perennially being in short supply. The section of this post titled 'Interest-Bearing Money: Debt as Money' summarises some of my reading. The first half of Carl Wennerlind’s book ‘Casualties of Credit’ is excellent and has some great references in this area. 
You could see the entire arc of the last 400 years as an exercise in making these private webs of credit more stable. So peer-to-peer credit became private banking. Then comes the lender of last resort and fiat money so that the LOLR is not constrained. At the same time we make the collateral safer and safer - govt bonds during the English Financial revolution, now MBS, bank debt etc. The irony is that now banks finance everything except what they started out financing which is SME bills of exchange/invoices. Partly the reasons are regulatory but fundamentally the risk is too idiosyncratic and "micro" in an environment where macro risks are backstopped. 
In fact here in the UK there's a lot of non-bank and even peer to peer interest in some of these spaces. See this one for invoice financing (the interest is partly because peer-to-peer lending in the UK has almost no regulatory burden, not regulated by the FSA at all). In a way this is just a modern-day reconstruction of the same system that existed in 16th-17th century England - peer-to-peer webs of credit. But with the critical difference that the system is not as elastic and doesn't really need to be. There are enough individuals, insurers etc who are more than capable of taking on the real risk and giving up their own purchasing power in the interim period for an adequate return. 
Lots to think about here. Briefly, on the issue of Macon Money being valued at a discount, this seems unlikely to me except in a secondary market for conversion into cash. Unlike credit card receipts, Macon Money is negotiable, and as long as it can be converted into goods and services at par it will be valued at par by those who plan to spend it. Of course there may be an equilibrium in which vendors themselves only accept it at a discount, which then becomes a self-sustaining practice. This would be equivalent to a selective increase in price, possible only if there is insufficient competition.

Here's more from Ashwin:
Another tangential point on the peer-to-peer credit networks in 16th century England was that although they had the downside of being perpetually fragile (there are accounts of middle-class traders feeling permanently insecure because they were always entrenched in long webs of credit), this credit money could not be hoarded by anyone. In this sense it really was the anti-gold/bitcoin. I wonder what you would need to do to Macon Money to protect against the potential leakage of being just hoarded as a store of value. This is of course what people like Silvio Gesell were concerned with (there are some excellent comments by anonymous commenter 'K' on this Nick Rowe post on the paradox of hoarding). I think there's merit to a modern money that could be a medium of exchange but could not serve as a store of wealth. I often think about what such a money could look like but at the end of the day we really need experiments and trials to figure out what could work. 

Saturday, April 06, 2013

Haircuts on Intrade

When Intrade halted trading abruptly on March 10, my initial reaction was that the company had commingled member funds with its own, MF Global style, in violation of its Trust and Security Statement. I suspected that these funds were then dissipated (or embezzled), leaving the firm unable to honor requests for redemption.

The latest announcement from the company confirms that something along these lines did, in fact, occur:
We have now concluded the initial stages of our investigations about the financial status of the Company, and it appears that the Company is in a cash “shortfall” position of approximately US $700,000 when comparing all cash on hand in Company and Member bank accounts with Member account balances on the Exchange system.
A shortfall of this kind could not have emerged if member funds had been kept separate from company funds. As it stands, the exchange is technically insolvent and faces imminent liquidation.

But the company is looking for a way to "rectify this cash shortfall position" in hopes of resuming operations and returning to viability. It has requested members with large accounts to formally agree to allow the exchange to hold on to some portion of their funds indefinitely:
The Company has now contacted all members with account balances greater than $1000, and proposed a “forbearance” arrangement between these members and the Company, which if sufficient members agree, would allow the Company to remain solvent... 
By Tuesday, April 16, 2013, we expect to be able to inform our members if sufficient forbearance has been achieved. If so, we will then resume limited operations of the Company and we will be able to process requests for withdrawals as agreed. If sufficient forbearance has not been achieved, it seems extremely likely that the Company will be forced into liquidation.
So traders find themselves in a strategic situation similar to that faced by holders of Greek sovereign debt a couple of years ago. If enough members accept the proposed haircut, then the remaining members (who do not accept) will be able to withdraw their funds. The company might then be able to resume operations and eventually allow unrestricted withdrawals. But if enough forbearance is not forthcoming, the company will be forced into immediate liquidation.

What should one do under such circumstances? As Jeff Ely might say, consider the equilibrium.  The best case outcome from the perspective of any one member would be immediate reimbursement in full. But this can only happen if the member in question denies the company's request, while enough other members agree to it. As long as members can't coordinate their actions, and each believes that his own choice is unlikely to be decisive, it makes no sense for any of them to accept the haircut. Liquidation under these circumstances seems inevitable.

On the other hand, what choice do members really have? Although their funds are senior to all other claims on the firm's assets, the cash shortfall will prevent such claims from being honored in full. And since members are scattered across multiple jurisdictions and lack the power to coordinate their response, even partial recovery through litigation seems improbable. Facing little or no prospect of getting anything back anytime soon, some might choose to roll the dice one last time.

The obvious lesson in all this is that in the absence of vigorous oversight, "trust and security" statements can't really be trusted to provide security.